Affiliation(s): Wireless Signal Processing and Network Laboratory, Beijing University of Posts and Telecommunications, Beijing 100876, China; Research Institute of China Telecom Co., Ltd., Beijing 102209, China; Beijing Telecom, No. 21 Chaoyangmen North Street, Dongcheng District, Beijing, China
Yi-zhuo Cai, Bo Lei, Qian-ying Zhao, Jing Peng, Min Wei, Yu-shun Zhang, Xing Zhang. Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2300122
@article{Cai_FITEE_2300122,
  title     = "Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks",
  author    = "Yi-zhuo Cai and Bo Lei and Qian-ying Zhao and Jing Peng and Min Wei and Yu-shun Zhang and Xing Zhang",
  journal   = "Frontiers of Information Technology \& Electronic Engineering",
  year      = "in press",
  publisher = "Zhejiang University Press \& Springer",
  doi       = "10.1631/FITEE.2300122"
}
%0 Journal Article
%T Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks
%A Yi-zhuo Cai
%A Bo Lei
%A Qian-ying Zhao
%A Jing Peng
%A Min Wei
%A Yu-shun Zhang
%A Xing Zhang
%J Frontiers of Information Technology & Electronic Engineering
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
%R https://doi.org/10.1631/FITEE.2300122
TY - JOUR
T1 - Communication Efficiency Optimization of Federated Learning for Computing and Network Convergence of 6G Networks
A1 - Yi-zhuo Cai
A1 - Bo Lei
A1 - Qian-ying Zhao
A1 - Jing Peng
A1 - Min Wei
A1 - Yu-shun Zhang
A1 - Xing Zhang
JO - Frontiers of Information Technology & Electronic Engineering
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300122
ER -
Abstract: Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, in complex network environments, factors such as network topology and device computing power can affect its training or communication process. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC can reach this goal by guiding the participating devices' training in federated learning based on business requirements, resource load, network conditions, and the computing power of devices. In this article, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization of federated learning for the CNC of 6G networks, a method that makes decisions on the training process according to the network conditions and computing power of the participating devices. The experiments address two architectures that exist for devices in federated learning and arrange devices to participate in training based on computing power while optimizing communication efficiency during the transfer of model parameters. The results show that the proposed method can (1) cope well with complex network situations, (2) effectively balance the delay distribution of participating devices for local training, (3) improve communication efficiency during the transfer of model parameters, and (4) improve resource utilization in the network.
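To make the idea of delay-balanced device participation concrete, the sketch below shows a generic federated-averaging round in which devices are selected by computing power (modeled here as a per-device training delay), so the per-round delay is set by the slowest selected device. This is an illustrative toy, not the paper's actual algorithm; the device model, `select_devices` policy, and the simplified "local training" step are all assumptions for demonstration.

```python
# Illustrative sketch (assumed, not the paper's method): one round of
# federated averaging with delay-aware device selection.

def local_update(weights, data, lr=0.1):
    """Simulated local training: nudge each weight toward the local data mean
    (a stand-in for local SGD on the device)."""
    target = sum(data) / len(data)
    return [w + lr * (target - w) for w in weights]

def select_devices(devices, budget):
    """Pick the `budget` fastest devices; the round delay is then bounded by
    the slowest device actually selected, balancing the delay distribution."""
    return sorted(devices, key=lambda d: d["delay"])[:budget]

def fed_avg_round(global_w, devices, data_by_id, budget):
    chosen = select_devices(devices, budget)
    updates = [local_update(global_w, data_by_id[d["id"]]) for d in chosen]
    # Aggregate local models by simple unweighted averaging.
    new_w = [sum(ws) / len(ws) for ws in zip(*updates)]
    round_delay = max(d["delay"] for d in chosen)
    return new_w, round_delay

# Hypothetical devices: "delay" proxies inverse computing power.
devices = [{"id": 0, "delay": 5.0}, {"id": 1, "delay": 1.0}, {"id": 2, "delay": 2.0}]
data_by_id = {0: [1.0, 3.0], 1: [2.0, 2.0], 2: [0.0, 4.0]}
w, delay = fed_avg_round([0.0], devices, data_by_id, budget=2)
# The slow device (id 0, delay 5.0) is skipped, capping the round delay at 2.0.
```

In a CNC setting, the selection policy would additionally weigh business requirements, resource load, and network conditions rather than delay alone.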