ZTE Communications, 2020, 18(2): 11-19. DOI: 10.12142/ZTECOM.202002003
SHI Wenqi, SUN Yuxuan, HUANG Xiufeng, ZHOU Sheng, NIU Zhisheng
Received:
2020-02-10
Online:
2020-06-25
Published:
2020-08-07
About the authors:
SHI Wenqi received his B.S. degree in electronic engineering from Tsinghua University, China, in 2017. He is pursuing his Ph.D. degree in electronic engineering with Tsinghua University. His research interests include edge computing, machine learning and machine learning applications in wireless communications.
SUN Yuxuan received her B.S. degree in telecommunications engineering from Tianjin University, China, in 2015. She is currently working toward the Ph.D. degree in electronic engineering with Tsinghua University. Her research interests include mobile edge computing, vehicular cloud computing and distributed machine learning.
HUANG Xiufeng received his B.S. degree in electronic engineering from Tsinghua University, China, in 2018. He is currently a Ph.D. student in electronic engineering with Tsinghua University. His research interests include machine learning, edge computing and performance optimization for machine learning applications in wireless networks.
ZHOU Sheng (
Supported by:
SHI Wenqi, SUN Yuxuan, HUANG Xiufeng, ZHOU Sheng, NIU Zhisheng. Scheduling Policies for Federated Learning in Wireless Networks: An Overview[J]. ZTE Communications, 2020, 18(2): 11-19.
Table 1 Summary of recent papers on analog aggregation
| Technology | Highlights | Related Works |
|---|---|---|
| Power alignment | · Fundamental tradeoffs under Rayleigh fading channel | Ref. [ ] |
| | · Online energy-aware dynamic device scheduling policy | Ref. [ ] |
| | · Device scheduling for multi-antenna analog aggregation | Ref. [ ] |
| Sparsification and error accumulation | · Gradient sparsification and error accumulation (see the sketch after this table) · Device scheduling policy under average power constraint | Refs. [26–27] |
| Data redundancy | · Introducing data redundancy to deal with non-independent and identically distributed (non-i.i.d.) data | Ref. [ ] |
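To make the sparsification-and-error-accumulation row concrete: each device transmits only the largest entries of its update and carries the discarded entries over to the next round. The snippet below is a minimal illustrative sketch of that idea, not the schemes of Refs. [26–27]; the function name, the top-k budget `k` and the random stand-in gradients are assumptions for illustration.

```python
import numpy as np

def sparsify_with_error_feedback(gradient, residual, k):
    """Top-k sparsification with error accumulation: entries dropped in this
    round are stored in `residual` and added back before the next round."""
    corrected = gradient + residual                      # re-inject previously dropped mass
    sparse = np.zeros_like(corrected)
    top_k = np.argpartition(np.abs(corrected), -k)[-k:]  # indices of the k largest magnitudes
    sparse[top_k] = corrected[top_k]                     # only these entries are transmitted
    new_residual = corrected - sparse                    # accumulate what was dropped
    return sparse, new_residual

# Per-device usage across training rounds (each device keeps its own residual):
residual = np.zeros(10_000)
for _ in range(5):
    grad = np.random.randn(10_000)                       # stand-in for a local stochastic gradient
    update, residual = sparsify_with_error_feedback(grad, residual, k=100)
    # `update` is what the device would send for aggregation (e.g. over the air)
```

Because the residual is re-injected every round, no gradient information is permanently lost; it only arrives with a delay, which is what keeps the compressed training convergent.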
Figure 3 Training accuracy of dynamic device scheduling policy in Ref. [24] under independent and identically distributed (i.i.d.) and non-i.i.d. data.
Table 2 Summary of recent papers on digital aggregation
| Technology | Highlights | Related Works |
|---|---|---|
| Aggregation frequency adaptation | · Global aggregation frequency adaptation under given resource constraints (see the sketch after this table) | Ref. [ ] |
| | · Extending Ref. [ ] | Ref. [ ] |
| Local accuracy tuning | · Tuning local model accuracy to balance the tradeoff between local update and global aggregation · Energy- and convergence-aware resource allocation | Refs. [31–32] |
| Device scheduling | · Energy- and convergence-aware joint scheduling and resource allocation | Ref. [ ] |
| | · Considering unreliable wireless transmissions | Refs. [35–36] |
| | · Maximizing the convergence rate with respect to time | Refs. [ ] |
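The aggregation-frequency row concerns how many local updates each device performs between two global aggregations in digital federated learning. Below is a minimal sketch of that mechanism under assumed quadratic local losses; the interface (a list of gradient callables), the step size and the value of `tau` are illustrative assumptions, not the adaptive policies of the cited works.

```python
import numpy as np

def local_sgd_round(w_global, device_grads, tau, lr=0.01):
    """One communication round: every device runs `tau` local SGD steps from
    the current global model, then the server averages the local models."""
    local_models = []
    for grad_fn in device_grads:
        w = w_global.copy()
        for _ in range(tau):                 # tau local updates between global aggregations
            w -= lr * grad_fn(w)
        local_models.append(w)
    return np.mean(local_models, axis=0)     # global aggregation (simple average)

# Example: quadratic local losses f_k(w) = 0.5 * ||w - c_k||^2, so grad_k(w) = w - c_k
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
grads = [lambda w, c=c: w - c for c in centers]
w = np.zeros(2)
for _ in range(50):
    w = local_sgd_round(w, grads, tau=5)     # larger tau -> fewer aggregations per local step
```

Increasing `tau` reduces communication per local step but lets local models drift apart, especially under non-i.i.d. data; adapting `tau` to the available resource budget is exactly the tradeoff the table summarizes.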
[1] CHIANG M, ZHANG T. Fog and IoT: an overview of research opportunities [J]. IEEE internet of things journal, 2016, 3(6): 854–864. DOI: 10.1109/JIOT.2016.2584538
[2] ZHU G, LIU D, DU Y, et al. Towards an intelligent edge: wireless communication meets machine learning [EB/OL]. (2018-09-02)[2020-01-31]
[3] PARK J, SAMARAKOON S, BENNIS M, et al. Wireless network intelligence at the edge [J]. Proceedings of the IEEE, 2019, 107(11): 2204–2239. DOI: 10.1109/JPROC.2019.2941458
[4] LIM W Y, LUONG N C, HOANG D T, et al. Federated learning in mobile edge networks: a comprehensive survey [EB/OL]. (2019-09-26)[2020-01-31]
[5] BRISIMI T S, CHEN R, MELA T, et al. Federated learning of predictive models from federated electronic health records [J]. International journal of medical informatics, 2018, 112: 59–67. DOI: 10.1016/j.ijmedinf.2018.01.007
[6] MELIS L, SONG C, CRISTOFARO E DE, et al. Exploiting unintended feature leakage in collaborative learning [EB/OL]. (2018-05-10)[2019-11-01]
[7] ABADI M, CHU A, GOODFELLOW I, et al. Deep learning with differential privacy [C]//Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM, 2016: 308–318. DOI: 10.1145/2976749.2978318
[8] GEYER R C, KLEIN T, NABI M. Differentially private federated learning: a client level perspective [EB/OL]. (2018-03-01)[2020-01-31]
[9] SHOKRI R, SHMATIKOV V. Privacy-preserving deep learning [C]//Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. New York, USA: ACM, 2015: 1310–1321. DOI: 10.1145/2810103.2813687
[10] LIU Y, MA Z, MA S, et al. Boosting privately: privacy-preserving federated extreme boosting for mobile crowdsensing [EB/OL]. (2019-07-24)[2020-01-31]
[11] AONO Y, HAYASHI T, WANG L, et al. Privacy-preserving deep learning via additively homomorphic encryption [J]. IEEE transactions on information forensics and security, 2017, 13(5): 1333–1345. DOI: 10.1109/TIFS.2017.2787987
[12] HAO M, LI H W, XU G W, et al. Towards efficient and privacy-preserving federated deep learning [C]//IEEE International Conference on Communications (ICC). Shanghai, China, 2019: 1–6. DOI: 10.1109/icc.2019.8761267
[13] KONEČNÝ J, MCMAHAN B, RAMAGE D. Federated optimization: distributed optimization beyond the datacenter [EB/OL]. (2015-11-11)[2020-01-31]
[14] AJI A F, HEAFIELD K. Sparse communication for distributed gradient descent [C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, USA: Association for Computational Linguistics, 2017. DOI: 10.18653/v1/d17-1045
[15] CHEN T Y, GIANNAKIS G B, SUN T, et al. LAG: lazily aggregated gradient for communication-efficient distributed learning [C]//Advances in Neural Information Processing Systems 31 (NeurIPS 2018). Montreal, Canada, 2018: 5055–5065
[16] SHI W, ZHOU S, NIU Z. Device scheduling with fast convergence for wireless federated learning [EB/OL]. (2019-11-03)[2020-01-31]
[17] ZENG Q, DU Y, LEUNG K K, et al. Energy-efficient radio resource allocation for federated edge learning [EB/OL]. (2019-07-13)[2020-01-31]
[18] LI T, SAHU A K, TALWALKAR A, et al. Federated learning: challenges, methods, and future directions [EB/OL]. (2019-08-21)[2020-01-31]
[19] WANG X, HAN Y, LEUNG V C M, et al. Convergence of edge computing and deep learning: a comprehensive survey [J]. IEEE communications surveys & tutorials, 2020. DOI: 10.1109/COMST.2020.2970550
[20] SHI Y, YANG K, JIANG T, et al. Communication-efficient edge AI: algorithms and systems [EB/OL]. (2020-02-22)
[21] LIM W Y B, LUONG N C, HOANG D T, et al. Federated learning in mobile edge networks: a comprehensive survey [EB/OL]. (2020-02-28)
[22] GUNDUZ D, DE KERRET P, SIDIROPOULOS N D, et al. Machine learning in the air [J]. IEEE journal on selected areas in communications, 2019, 37(10): 2184–2199. DOI: 10.1109/JSAC.2019.2933969
[23] ZHU G X, WANG Y, HUANG K B. Broadband analog aggregation for low-latency federated edge learning [J]. IEEE transactions on wireless communications, 2020, 19(1): 491–506. DOI: 10.1109/TWC.2019.2946245
[24] SUN Y, ZHOU S, GUNDUZ D. Energy-aware analog aggregation for federated learning with redundant data [EB/OL]. (2019-11-01)[2020-01-31]
[25] YANG K, JIANG T, SHI Y, et al. Federated learning via over-the-air computation [EB/OL]. (2019-02-17)[2020-01-31]
[26] AMIRI M M, GUNDUZ D. Machine learning at the wireless edge: distributed stochastic gradient descent over-the-air [C]//IEEE International Symposium on Information Theory (ISIT). Paris, France, 2019: 1432–1436. DOI: 10.1109/isit.2019.8849334
[27] AMIRI M M, GUNDUZ D. Federated learning over wireless fading channels [EB/OL]. (2019-07-23)[2020-01-31]
[28] ZHAO Y, LI M, LAI L, et al. Federated learning with non-iid data [EB/OL]. (2018-06-02)[2020-01-31]
[29] WANG S Q, TUOR T, SALONIDIS T, et al. Adaptive federated learning in resource constrained edge computing systems [J]. IEEE journal on selected areas in communications, 2019, 37(6): 1205–1221. DOI: 10.1109/jsac.2019.2904348
[30] LIU L, ZHANG J, SONG S H, et al. Edge-assisted hierarchical federated learning with non-iid data [EB/OL]. (2019-10-31)[2020-01-31]
[31] TRAN N H, BAO W, ZOMAYA A, et al. Federated learning over wireless networks: optimization model design and analysis [C]//IEEE Conference on Computer Communications. Paris, France, 2019: 1387–1395. DOI: 10.1109/INFOCOM.2019.8737464
[32] YANG Z, CHEN M, SAAD W, et al. Energy efficient federated learning over wireless communication networks [EB/OL]. (2019-11-06)[2020-01-31]
[33] BONAWITZ K, EICHNER H, GRIESKAMP W, et al. Towards federated learning at scale: system design [EB/OL]. (2019-03-22)[2020-01-31]
[34] STICH S U. Local SGD converges fast and communicates little [EB/OL]. (2019-05-03)[2020-01-31]
[35] YANG H H, LIU Z Z, QUEK T Q S, et al. Scheduling policies for federated learning in wireless networks [J]. IEEE transactions on communications, 2020, 68(1): 317–333. DOI: 10.1109/tcomm.2019.2944169
[36] CHEN M, YANG Z, SAAD W, et al. A joint learning and communications framework for federated learning over wireless networks [EB/OL]. (2019-09-17)[2020-01-31]
[37] NISHIO T, YONETANI R. Client selection for federated learning with heterogeneous resources in mobile edge [C]//2019 IEEE International Conference on Communications (ICC). Shanghai, China, 2019: 1–7. DOI: 10.1109/ICC.2019.8761315
[38] NISHIO T, YONETANI R. Client selection for federated learning with heterogeneous resources in mobile edge [C]//IEEE International Conference on Communications (ICC). Shanghai, China, 2019: 1–7. DOI: 10.1109/icc.2019.8761315