ZTE Communications ›› 2023, Vol. 21 ›› Issue (2): 25-33. DOI: 10.12142/ZTECOM.202302005
• Special Topic •
ZHAO Moke, HUANG Yansong, LI Xuan
Received: 2023-03-15
Online: 2023-06-13
Published: 2023-06-13
About the authors:
ZHAO Moke received her BE degree in electronic engineering from Beijing University of Posts and Telecommunications (BUPT), China in 2022. She is pursuing her master's degree in electronic engineering at BUPT. Her research interests include edge computing and wireless communications in 6G.
HUANG Yansong received his BE degree in electronic engineering from Beijing University of Posts and Telecommunications (BUPT), China in 2022. He is working toward his master's degree in electronic engineering at BUPT. His research interests include integrated sensing, communication and computation in 6G.
LI Xuan

Cite this article as: ZHAO Moke, HUANG Yansong, LI Xuan. Federated Learning for 6G: A Survey From Perspective of Integrated Sensing, Communication and Computation [J]. ZTE Communications, 2023, 21(2): 25-33.
URL: https://zte.magtechjournal.com/EN/10.12142/ZTECOM.202302005
Challenge | Specific Method | Advantages and Disadvantages |
---|---|---|
Participant selection | Participating clients are selected based on the heterogeneous nature of the data, the quality of participants and training, and resource constraints. | Careful participant selection makes full use of available resources and supports continual training. However, when the data scale becomes very large, overall performance cannot be guaranteed in edge-intelligence scenarios, and the training process needs further optimization. |
Adaptive aggregation | The best tradeoff between local updates and global parameter aggregation is found under a given resource budget to speed up the local training process. | Adapting the frequency of global aggregation improves both model performance and the utilization of available resources. However, existing convergence analyses of adaptive aggregation schemes only cover convex loss functions. |
Incentive mechanism | FL requires an effective incentive mechanism that encourages participation and balances rewards against limited communication and computing resources to improve data quality. | By quantifying data quality, the overall benefit of FL is generally improved. However, owing to the heterogeneity of the environment, the incentives received by different edge devices are mismatched, making it difficult to balance game-theoretic rewards against resource consumption. |
Model compression | The transmitted model is compressed to improve communication efficiency between the server and clients. Knowledge distillation instead exchanges model outputs, allowing edge devices to adopt larger local models. | Client-to-server parameter compression may cause convergence problems, increase computational complexity, and reduce training accuracy. Knowledge distillation alleviates the problem of non-independent and identically distributed (non-IID) data to some extent, but wireless channel quality affects the accuracy of model training. |
Privacy protection | Privacy may be protected by defending against inference attacks, encrypting data and models, and improving privacy-protection performance with blockchain technology. | These methods mitigate the privacy leakage caused by FL's model-parameter sharing and its multi-party communication and cooperation mechanisms. However, further research is needed on security problems caused by data poisoning and on removing the traces that participants' data leave in the local model. |
Table 1 Challenges in federated learning (FL) and their state-of-the-art solutions
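The participant-selection and aggregation rows of Table 1 can be sketched as one round of a FedAvg-style protocol. This is a minimal illustration under assumed simplifications, not any specific scheme from the survey: the one-parameter linear model, the synthetic client data, and the sampling fraction `frac` are all hypothetical stand-ins for a real local trainer and a real selection policy.

```python
import random
import numpy as np

def local_update(weights, data_x, data_y, lr=0.1, epochs=1):
    # Toy stand-in for a client's local training: gradient descent
    # on a one-parameter linear least-squares model.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * data_x.T @ (data_x @ w - data_y) / len(data_y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients, frac=0.5, rng=random):
    # Participant selection: sample a fraction of clients each round
    # (a real policy would weight by data quality, channel state, etc.).
    k = max(1, int(frac * len(clients)))
    selected = rng.sample(clients, k)
    sizes = [len(y) for _, y in selected]
    updates = [local_update(global_w, x, y) for x, y in selected]
    # Aggregation: average the local models, weighted by local dataset size.
    total = sum(sizes)
    return sum(n / total * w for n, w in zip(sizes, updates))
```

Even with only a fraction of clients participating per round, the weighted average drives the global model toward the data-generating parameter, which is the behavior the participant-selection literature tries to preserve under resource constraints.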
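The model-compression row can be illustrated with top-k sparsification of a model update, one simple member of the compression family the table refers to. The index/value pair as the transmitted format is an assumption for the sketch; real schemes add quantization, error feedback, or sketching on top.

```python
import numpy as np

def sparsify_topk(update, k):
    # Keep only the k largest-magnitude entries of the update; the
    # client would transmit (indices, values) instead of the dense vector.
    idx = np.argsort(np.abs(update))[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return idx, update[idx], sparse

def densify(idx, vals, dim):
    # Server-side reconstruction of the sparse update.
    out = np.zeros(dim)
    out[idx] = vals
    return out
```

Transmitting k indices and values instead of the full vector cuts uplink traffic roughly by a factor of dim/k, at the cost of the convergence and accuracy issues the table notes for client-to-server compression.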