CLC number: TP393
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-10-17
Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG. Communication efficiency optimization of federated learning for computing and network convergence of 6G networks[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 713-727.
@article{cai2024communication,
title="Communication efficiency optimization of federated learning for computing and network convergence of 6G networks",
author="Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="5",
pages="713-727",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300122"
}
%0 Journal Article
%T Communication efficiency optimization of federated learning for computing and network convergence of 6G networks
%A Yizhuo CAI
%A Bo LEI
%A Qianying ZHAO
%A Jing PENG
%A Min WEI
%A Yushun ZHANG
%A Xing ZHANG
%J Frontiers of Information Technology & Electronic Engineering
%V 25
%N 5
%P 713-727
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2300122
TY - JOUR
T1 - Communication efficiency optimization of federated learning for computing and network convergence of 6G networks
A1 - Yizhuo CAI
A1 - Bo LEI
A1 - Qianying ZHAO
A1 - Jing PENG
A1 - Min WEI
A1 - Yushun ZHANG
A1 - Xing ZHANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 25
IS - 5
SP - 713
EP - 727
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300122
ER -
Abstract: Federated learning effectively addresses issues such as data privacy by collaboratively training global models across participating devices. However, factors such as network topology and device computing power can affect its training or communication process in complex network environments. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC achieves this by steering the training of participating devices according to business requirements, resource load, network conditions, and device computing power. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods for federated learning in the CNC of 6G networks, which make decisions on the training process according to the network conditions and computing power of the participating devices. The simulations address the two architectures that devices in federated learning may adopt, and arrange devices to participate in training based on computing power while optimizing communication efficiency during the transfer of model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the delay distribution of participating devices during local training, improve communication efficiency during the transfer of model parameters, and improve resource utilization in the network.
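As a minimal illustration only (not the paper's actual algorithm), the idea of arranging participants by computing power on top of standard weighted federated averaging (FedAvg, McMahan et al., 2017) can be sketched as follows; the `power` and `bandwidth` fields and the product weighting used for selection are assumptions made for this sketch:

```python
import random

def select_clients(clients, k):
    """Pick k participants, weighting by reported computing power and
    bandwidth (with replacement, for simplicity).

    `clients` maps a client id to a dict with illustrative 'power'
    (relative compute) and 'bandwidth' fields -- placeholder metrics,
    not the metrics defined in the paper.
    """
    ids = list(clients)
    weights = [clients[c]["power"] * clients[c]["bandwidth"] for c in ids]
    return random.choices(ids, weights=weights, k=k)

def fedavg(updates):
    """Aggregate local model parameters weighted by local sample counts,
    as in standard FedAvg. `updates` is a list of (params, n_samples).
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    agg = [0.0] * dim
    for params, n in updates:
        for i, p in enumerate(params):
            agg[i] += p * n / total
    return agg

# Example round: favor the stronger client, then aggregate its updates.
clients = {"edge-a": {"power": 1.0, "bandwidth": 1.0},
           "edge-b": {"power": 4.0, "bandwidth": 2.0}}
chosen = select_clients(clients, k=2)
global_params = fedavg([([1.0, 2.0], 1), ([3.0, 4.0], 3)])
```

In a CNC setting, the selection weights would instead come from the network's measured view of each device's computing power and link conditions, which is exactly the scheduling information the paper argues 6G CNC can expose.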