ZTE Communications ›› 2022, Vol. 20 ›› Issue (3): 35-42.DOI: 10.12142/ZTECOM.202203005
MSRA-Fed: A Communication-Efficient Federated Learning Method Based on Model Split and Representation Aggregate
LIU Qinbo1,2, JIN Zhihao1, WANG Jiabo1, LIU Yang1,3, LUO Wenjian1,3
Received: 2022-06-20
Online: 2022-09-13
Published: 2022-09-14
Contact: LIU Yang
About the authors:

LIU Qinbo received his BS degree in mathematics and physics basic science from the School of Mathematical Sciences, University of Electronic Science and Technology of China in 2021. He is currently pursuing his ME degree with the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China. His research interests include federated learning and GNNs.

JIN Zhihao received his BE degree in computer science and technology from the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China in 2022. His research interests include federated learning.

WANG Jiabo received his BE degree in software engineering from the School of Information Science and Technology, Dalian Maritime University, China in 2021. He is currently pursuing his ME degree with the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China. His research interests include federated learning.

LUO Wenjian received his BS and PhD degrees from the Department of Computer Science and Technology, University of Science and Technology of China, in 1998 and 2003, respectively. He is currently a professor with the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China. His current research interests include computational intelligence and applications, network security and data privacy, machine learning, and data mining. Dr. LUO is also a senior member of the Association for Computing Machinery (ACM) and the China Computer Federation (CCF). He has been a member of the organizational teams of more than ten academic conferences, in various functions such as program chair, symposium chair, and publicity chair. He also serves as the chair of the IEEE CIS ECTC Task Force on Artificial Immune Systems, and as an associate editor or editorial board member for several journals, including Information Sciences, Swarm and Evolutionary Computation, Journal of Information Security and Applications, Applied Soft Computing, and Complex & Intelligent Systems.
LIU Qinbo, JIN Zhihao, WANG Jiabo, LIU Yang, LUO Wenjian. MSRA-Fed: A Communication-Efficient Federated Learning Method Based on Model Split and Representation Aggregate[J]. ZTE Communications, 2022, 20(3): 35-42.
URL: http://zte.magtechjournal.com/EN/10.12142/ZTECOM.202203005
Table 1 Variance of the outputs of each neuron in the last hidden layer

Label | 1st Neuron | 2nd Neuron | 3rd Neuron | 4th Neuron | 5th Neuron | 6th Neuron | 7th Neuron | 8th Neuron | 9th Neuron | 10th Neuron | 11th Neuron | 12th Neuron
---|---|---|---|---|---|---|---|---|---|---|---|---
Label-3 | 1.00 | 2.94 | 1.98 | 11.82 | 1.80 | 2.16 | 0.01 | 9.60 | 0.28 | 4.29 | 3.84 | 11.03
Label-6 | 4.82 | 0.00 | 3.98 | 1.88 | 0.00 | 3.65 | 1.17 | 0.03 | 36.54 | 8.05 | 3.65 | 0.72
Label-7 | 0.00 | 0.01 | 1.37 | 13.00 | 4.92 | 5.20 | 3.33 | 0.10 | 17.91 | 12.54 | 3.82 | 0.08
Blended | 52.11 | 34.38 | 18.51 | 24.65 | 16.18 | 11.99 | 3.99 | 35.46 | 17.24 | 18.57 | 27.22 | 23.68
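The per-neuron variances reported in Table 1 can in principle be reproduced by collecting the last-hidden-layer activations for one label's samples and taking the variance across samples for each neuron. A minimal NumPy sketch; the activation matrix here is synthetic stand-in data, not the paper's:

```python
import numpy as np

# Synthetic stand-in for last-hidden-layer activations:
# rows = samples belonging to one label, columns = the 12 hidden neurons.
rng = np.random.default_rng(seed=0)
activations = rng.normal(loc=2.0, scale=1.5, size=(100, 12))

# Per-neuron variance across samples: the quantity tabulated in Table 1.
per_neuron_variance = activations.var(axis=0)

print(per_neuron_variance.shape)  # (12,)
```

Under this reading, the "Blended" row would be computed the same way over a mixed-label sample set, where the markedly larger variances reflect inter-label differences in the representations.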
Table 2 Communication efficiency comparison after five rounds of training on the MNIST dataset

Method | Initial Accuracy/% | Accuracy After Five Rounds of Training/% | Communication Load/B | Communication Load per 1% Accuracy Improvement/B
---|---|---|---|---
FedAvg | 71.62 | 86.10 | 6 352 000 | 54 834.25
MSRA-Fed | 59.97 | 80.11 | 123 280 | 765.14
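The last column appears to normalize communication load by the accuracy gained; the published FedAvg figures are consistent with dividing the total load over 8 clients before normalizing (an inference from the numbers in this excerpt, not something stated here). A hedged sketch of that computation:

```python
def load_per_percent(total_load_bytes, acc_before, acc_after, n_clients=8):
    """Communication load spent per 1% of accuracy improvement.

    n_clients=8 is an assumption: the published FedAvg figures match the
    total load divided by 8 and then by the accuracy gain in percentage points.
    """
    return total_load_bytes / n_clients / (acc_after - acc_before)

# FedAvg, five rounds (Table 2): 6 352 000 B, 71.62% -> 86.10%
print(round(load_per_percent(6_352_000, 71.62, 86.10), 2))  # 54834.25

# MSRA-Fed, five rounds: 123 280 B, 59.97% -> 80.11%
print(round(load_per_percent(123_280, 59.97, 80.11), 2))    # 765.14
```

The same formula applied to the ten-round FedAvg row (12 704 000 B, 71.62% to 86.15%) reproduces the 109 291.12 figure in Table 3.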
Table 3 Communication efficiency comparison after ten rounds of training on the MNIST dataset

Method | Initial Accuracy/% | Accuracy After Ten Rounds of Training/% | Communication Load/B | Communication Load per 1% Accuracy Improvement/B
---|---|---|---|---
FedAvg | 71.62 | 86.15 | 12 704 000 | 109 291.12
MSRA-Fed | 59.97 | 80.48 | 226 080 | 1 337.86