ZTE Communications ›› 2023, Vol. 21 ›› Issue (1): 46-54.DOI: 10.12142/ZTECOM.202301006
• Special Topic •
DING Yahao1, SHIKH‑BAHAEI Mohammad1, YANG Zhaohui2, HUANG Chongwen2, YUAN Weijie3
Received: 2023-02-11
Online: 2023-03-25
Published: 2024-03-15
About the author:
DING Yahao received her master's degree in communications and signal processing from Imperial College London, UK, in 2020. She is currently pursuing a PhD degree in information and communication engineering at King's College London, UK. Her current research interests include federated learning, security, and UAV swarms.

Cite this article:
DING Yahao, SHIKH‑BAHAEI Mohammad, YANG Zhaohui, HUANG Chongwen, YUAN Weijie. Secure Federated Learning over Wireless Communication Networks with Model Compression [J]. ZTE Communications, 2023, 21(1): 46-54.
URL: http://zte.magtechjournal.com/EN/10.12142/ZTECOM.202301006
Table 1 Simulation parameters

| Description | Parameter | Value |
|---|---|---|
| Total uplink bandwidth | B | 20 MHz |
| Bandwidth of each RB | B0 | 3.33 MHz |
| Noise power spectral density | N0 | -174 dBm/MHz |
| Number of training samples per user | Ki | [10, 20, 15, 25, 10] |
| Gradient compression ratio of each user | αi | |
| Number of gradients for each user | M | 9 |
| Uplink delay requirement | τ | 2 s |
| Distance between user and BS | d | 30 m |
| Number of RBs | Q | 6 |
| Transmit power of each user | Pi | 0.001–0.012 W |
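The article's exact rate and delay expressions are not reproduced on this page. As a rough illustration of how the Table 1 parameters interact, the sketch below estimates one user's uplink transmission delay over a single resource block using a generic Shannon-capacity model. The d⁻² path loss, the -174 dBm/Hz noise density, the compression ratio αi = 0.5, and the 32-bit gradient quantization are all illustrative assumptions, not values taken from the paper.

```python
import math

# Parameters from Table 1 (channel and payload model are illustrative).
B0 = 3.33e6          # bandwidth of one resource block (Hz)
P = 0.012            # user transmit power (W), upper end of 0.001-0.012 W
N0_dbm_hz = -174     # noise PSD, assumed here in dBm/Hz
d = 30.0             # distance between user and BS (m)
tau = 2.0            # uplink delay requirement (s)
alpha = 0.5          # hypothetical gradient compression ratio
M = 9                # gradients per user
bits_per_grad = 32   # hypothetical float32 quantization

# Illustrative free-space-style channel gain h = d^-2.
h = d ** -2
# Noise power over one RB, converting dBm/Hz to watts.
noise_w = 10 ** (N0_dbm_hz / 10) / 1000 * B0
# Achievable uplink rate via the Shannon formula (bit/s).
rate = B0 * math.log2(1 + P * h / noise_w)

# Compressed gradient payload and the resulting transmission delay.
payload_bits = alpha * M * bits_per_grad
delay = payload_bits / rate
meets_deadline = delay <= tau
print(f"rate = {rate / 1e6:.2f} Mbit/s, delay = {delay:.2e} s, "
      f"meets 2 s deadline: {meets_deadline}")
```

Under these assumptions the compressed payload is tiny relative to the achievable rate, so the 2 s deadline is met with large margin; in the paper's setting, the interesting regime arises when αi, Pi, and RB assignment are jointly optimized across users competing for Q = 6 RBs.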