As a popular distributed machine learning framework, wireless federated edge learning (FEEL) keeps raw data local and uploads only model training updates, thereby protecting privacy and preventing data silos. However, since wireless channels are usually unreliable, there is no guarantee that the model updates uploaded by local devices are received correctly, which greatly degrades the performance of wireless FEEL. Conventional retransmission schemes designed for wireless systems generally aim to maximize system throughput or minimize the packet error rate, objectives that are ill-suited to the FEEL system. We propose a novel retransmission scheme for the FEEL system that trades off model training accuracy against retransmission latency. In the proposed scheme, a retransmission device selection criterion is first designed based on the channel condition, the amount of local data, and the importance of the model update. We then design the air interface signaling for this retransmission scheme to facilitate its implementation in practical scenarios. Finally, the effectiveness of the proposed retransmission scheme is validated through simulation experiments.
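The selection criterion above combines three factors: channel condition, amount of local data, and update importance. A minimal sketch of how such a criterion might rank failed devices is given below; the scoring function, its weights, and the device tuples are illustrative assumptions, not the paper's actual formula.

```python
import math

def retransmission_score(snr_db, num_samples, update_norm,
                         w_ch=1.0, w_data=1.0, w_imp=1.0):
    """Toy score: higher means a failed device's update is more worth
    retransmitting. The weighted-sum form is a placeholder assumption."""
    # Better channel -> retransmission costs little extra latency.
    ch = 1.0 / (1.0 + math.exp(-snr_db / 10.0))  # squash SNR to (0, 1)
    # More local data -> the update carries more aggregation weight.
    data = math.log1p(num_samples)
    # Larger update norm -> the update matters more to the global model.
    imp = update_norm
    return w_ch * ch + w_data * data + w_imp * imp

def select_for_retransmission(devices, k):
    """Pick the k failed devices with the highest retransmission score.
    Each device is (device_id, snr_db, num_samples, update_norm)."""
    ranked = sorted(devices, key=lambda d: retransmission_score(*d[1:]),
                    reverse=True)
    return [d[0] for d in ranked[:k]]

# Example: device "c" wins on data volume, "a" on channel and importance.
devices = [("a", 10, 100, 2.0), ("b", -5, 10, 0.1), ("c", 8, 500, 1.5)]
chosen = select_for_retransmission(devices, 2)
```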
By periodically aggregating local learning updates from edge users, federated edge learning (FEEL) is envisioned as a promising means to reap the benefit of rich local data while protecting users' privacy. However, scarce wireless communication resources greatly limit the number of participating users and are regarded as the main bottleneck hindering the development of FEEL. To tackle this issue, we propose a user selection policy based on data importance for the FEEL system. To quantify the data importance of each user, we first analyze the relationship between the loss decay and the squared norm of the gradient. We then formulate a combinatorial optimization problem that maximizes learning efficiency by jointly considering user selection and communication resource allocation. Through problem transformation and relaxation, the optimal user selection policy and resource allocation are derived, and a polynomial-time optimal algorithm is developed. Finally, we deploy two commonly used deep neural network (DNN) models for simulation. The results validate that the proposed algorithm has strong generalization ability and attains higher learning efficiency than traditional algorithms.
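The importance measure above rests on a first-order observation: a gradient step of size eta decreases the loss by roughly eta times the squared gradient norm, so users with larger squared gradient norms are more valuable to schedule. The sketch below illustrates this idea with a simple greedy importance-per-cost rule; the greedy rule, the per-user costs, and the budget are stand-in assumptions, not the optimal polynomial-time algorithm derived in the paper.

```python
import numpy as np

def data_importance(gradients):
    """Importance of each user = squared L2 norm of its local gradient,
    since the first-order loss decrease scales with ||g||^2."""
    return {u: float(np.sum(g ** 2)) for u, g in gradients.items()}

def select_users(gradients, budget, cost):
    """Greedy sketch: admit users in decreasing importance-per-unit-cost
    order until the communication budget is exhausted."""
    imp = data_importance(gradients)
    order = sorted(imp, key=lambda u: imp[u] / cost[u], reverse=True)
    chosen, spent = [], 0.0
    for u in order:
        if spent + cost[u] <= budget:
            chosen.append(u)
            spent += cost[u]
    return chosen

# Example: u1 has the largest gradient (importance 25), u3 is cheap
# and moderately important (8), u2 contributes little (1).
grads = {"u1": np.array([3.0, 4.0]),
         "u2": np.array([1.0, 0.0]),
         "u3": np.array([2.0, 2.0])}
costs = {"u1": 2.0, "u2": 1.0, "u3": 1.0}
picked = select_users(grads, budget=3.0, cost=costs)
```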