
Table of Contents

    22 March 2023, Volume 21 Issue 1
    Download the whole issue (PDF)
    The whole issue of ZTE Communications March 2023, Vol. 21 No. 1
    2023, 21(1):  0. 
    Special Topic
    Special Topic on Federated Learning over Wireless Networks
    CUI Shuguang, YIN Changchuan, ZHU Guangxu
    2023, 21(1):  1-2.  doi:10.12142/ZTECOM.202301001
    Adaptive Retransmission Design for Wireless Federated Edge Learning
    XU Xinyi, LIU Shengli, YU Guanding
    2023, 21(1):  3-14.  doi:10.12142/ZTECOM.202301002

    As a popular distributed machine learning framework, wireless federated edge learning (FEEL) keeps the original data local while uploading only model training updates, protecting privacy and preventing data silos. However, since wireless channels are usually unreliable, there is no guarantee that the model updates uploaded by local devices are received correctly, which greatly degrades the performance of wireless FEEL. Conventional retransmission schemes designed for wireless systems generally aim to maximize system throughput or minimize the packet error rate, which is not suitable for the FEEL system. A novel retransmission scheme is proposed for the FEEL system to strike a tradeoff between model training accuracy and retransmission latency. In the proposed scheme, a retransmission device selection criterion is first designed based on the channel condition, the amount of local data, and the importance of model updates. In addition, we design the air interface signaling under this retransmission scheme to facilitate its implementation in practical scenarios. Finally, the effectiveness of the proposed retransmission scheme is validated through simulation experiments.
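The selection criterion described above could be sketched as a simple scoring rule. The score, the weights, and all function names below are illustrative assumptions, not the paper's actual formula:

```python
import math

def retransmission_priority(channel_gain, num_samples, update_norm,
                            w_ch=1.0, w_data=1.0, w_imp=1.0):
    """Score a device for retransmission: a poor channel, more local
    data, and a larger (more important) update all raise priority."""
    return (w_data * math.log(1 + num_samples)
            + w_imp * update_norm
            + w_ch / (1e-9 + channel_gain))

def select_for_retransmission(devices, budget):
    """devices: list of (device_id, channel_gain, num_samples, update_norm).
    Return the ids of the `budget` highest-priority devices."""
    ranked = sorted(devices, key=lambda d: retransmission_priority(*d[1:]),
                    reverse=True)
    return [d[0] for d in ranked[:budget]]
```

A device with a weak channel but a large, important update (e.g. `("b", 0.01, 500, 2.0)`) would then be scheduled ahead of devices whose updates arrived over strong channels.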

    Reliable and Privacy-Preserving Federated Learning with Anomalous Users
    ZHANG Weiting, LIANG Haotian, XU Yuhua, ZHANG Chuan
    2023, 21(1):  15-24.  doi:10.12142/ZTECOM.202301003

    Recently, various privacy-preserving schemes have been proposed to resolve privacy issues in federated learning (FL). However, most of them ignore the fact that anomalous users holding low-quality data may reduce the accuracy of trained models. Although some existing works manage to solve this problem, they either lack privacy protection for users’ sensitive information or introduce a two-cloud model that is difficult to realize in practice. A reliable and privacy-preserving FL scheme named RPPFL, based on a single-cloud model, is proposed. Specifically, inspired by the truth discovery technique, we design an approach to assess each user’s reliability and thereby decrease the impact of anomalous users. In addition, an additively homomorphic cryptosystem is utilized to provide comprehensive privacy preservation, covering both the user’s local gradient and reliability. We give a rigorous theoretical analysis to show the security of RPPFL. Based on open datasets, we conduct extensive experiments to demonstrate that RPPFL compares favorably with existing works in terms of efficiency and accuracy.
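The truth-discovery idea can be illustrated on plain scalars, without the paper's encryption layer: a toy loop alternates between estimating the aggregate ("truth") and re-weighting each user by its distance to that estimate. The log-based weighting and all names are illustrative assumptions:

```python
import math

def truth_discovery(updates, iters=10):
    """Alternate between (1) estimating the 'truth' as a weighted
    average of user updates and (2) re-weighting each user by how
    close its update is to that estimate, so anomalous users holding
    low-quality data are gradually down-weighted."""
    n = len(updates)
    weights = [1.0 / n] * n
    for _ in range(iters):
        truth = sum(w * u for w, u in zip(weights, updates))
        dists = [abs(u - truth) + 1e-9 for u in updates]
        total = sum(dists)
        raw = [math.log(total / d) for d in dists]  # far from truth -> small weight
        s = sum(raw)
        weights = [r / s for r in raw]
    return truth, weights
```

With one outlier among several consistent users, the outlier's weight shrinks each round and the truth estimate settles near the consistent cluster.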

    RIS-Assisted Federated Learning in Multi-Cell Wireless Networks
    WANG Yiji, WEN Dingzhu, MAO Yijie, SHI Yuanming
    2023, 21(1):  25-37.  doi:10.12142/ZTECOM.202301004

    Over-the-air computation (AirComp)-based federated learning (FL) is a promising technique for distilling artificial intelligence (AI) at the network edge. However, the performance of AirComp-based FL is determined by the device with the lowest channel gain due to the signal alignment property. More importantly, most existing work focuses on a single-cell scenario, where inter-cell interference is ignored. To overcome these shortcomings, a reconfigurable intelligent surface (RIS)-assisted AirComp-based FL system is proposed for multi-cell networks, where a RIS is used to enhance weak user signals caused by channel fading, especially for devices at the cell edge, and to reduce inter-cell interference. The convergence of FL in the proposed system is first analyzed and the optimality gap for FL is derived. To minimize the optimality gap, we formulate a joint uplink and downlink optimization problem, which is then divided into two separable nonconvex subproblems. Following the successive convex approximation (SCA) method, we first approximate the nonconvex term with a linear form, and then alternately optimize the beamforming vector and phase-shift matrix for each cell. Simulation results demonstrate the advantages of deploying a RIS in multi-cell networks and show that the proposed system significantly improves the performance of FL.

    Hierarchical Federated Learning: Architecture, Challenges, and Its Implementation in Vehicular Networks
    YAN Jintao, CHEN Tan, XIE Bowen, SUN Yuxuan, ZHOU Sheng, NIU Zhisheng
    2023, 21(1):  38-45.  doi:10.12142/ZTECOM.202301005

    Federated learning (FL) is a distributed machine learning (ML) framework where several clients cooperatively train an ML model by exchanging model parameters without directly sharing their local data. In FL, the limited number of participants for model aggregation and the communication latency are two major bottlenecks. Hierarchical federated learning (HFL), with a cloud-edge-client hierarchy, can leverage the large coverage of cloud servers and the low transmission latency of edge servers. There is growing research interest in implementing FL in vehicular networks due to the requirement of timely ML training for intelligent vehicles. However, the limited number of participants in vehicular networks and vehicle mobility degrade the performance of FL training. In this context, HFL, which stands out for lower latency, wider coverage and more participants, is promising in vehicular networks. In this paper, we begin with the background and motivation of HFL and the feasibility of implementing HFL in vehicular networks. Then, the architecture of HFL is illustrated. Next, we clarify new issues in HFL and review several existing solutions. Furthermore, we introduce some typical use cases in vehicular networks as well as our initial efforts on implementing HFL in vehicular networks. Finally, we conclude with future research directions.
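The cloud-edge-client aggregation can be sketched as two nested sample-weighted averages. The helper names are hypothetical and models are plain parameter lists:

```python
def fedavg(models, sizes):
    """Sample-weighted average of parameter vectors."""
    total = sum(sizes)
    dim = len(models[0])
    return [sum(m[i] * s for m, s in zip(models, sizes)) / total
            for i in range(dim)]

def hierarchical_round(edge_groups):
    """One HFL round: each edge server aggregates its own clients,
    then the cloud aggregates the edge-level models.
    edge_groups: list of edges; each edge is a list of (params, n_samples)."""
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        edge_models.append(fedavg([c[0] for c in clients],
                                  [c[1] for c in clients]))
        edge_sizes.append(sum(c[1] for c in clients))
    return fedavg(edge_models, edge_sizes)
```

When every client participates, the two-level average reproduces the flat sample-weighted mean; the benefit of the hierarchy lies in communication (clients only reach their nearby edge server), not in the arithmetic.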

    Secure Federated Learning over Wireless Communication Networks with Model Compression
    DING Yahao, SHIKH‑BAHAEI Mohammad, YANG Zhaohui, HUANG Chongwen, YUAN Weijie
    2023, 21(1):  46-54.  doi:10.12142/ZTECOM.202301006

    Although federated learning (FL) has become very popular recently, it is vulnerable to gradient leakage attacks: recent studies have shown that attackers can reconstruct clients’ private data from shared models or gradients. Many existing works focus on adding privacy protection mechanisms, such as differential privacy (DP) and homomorphic encryption, to prevent user privacy leakage. These defenses may increase computation and communication costs or degrade the performance of FL, and they do not consider the impact of wireless network resources on the FL training process. Herein, we propose weight compression, a defense method against gradient leakage attacks for FL over wireless networks. The gradient compression matrix is determined by the user’s location and channel conditions. We also add Gaussian noise to the compressed gradients to strengthen the defense. The joint design of wireless resource allocation and the weight compression matrix is formulated as an optimization problem whose objective is to minimize the FL loss function. To find the solution, we first analyze the convergence rate of FL and quantify the effect of the weight matrix on FL convergence. Then, we seek the optimal resource block (RB) allocation by exhaustive search or ant colony optimization (ACO), and use the CVX toolbox to obtain the optimal weight matrix that minimizes the objective function. Simulation results show that the optimized RB allocation can accelerate the convergence of FL.
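A minimal sketch of the compress-then-perturb step, ignoring how the mask is derived from the user's location and channel conditions (which is the paper's core contribution); the function name and mask representation are illustrative:

```python
import random

def compress_and_perturb(grad, keep_idx, sigma, rng):
    """Zero every gradient entry outside the compression mask
    `keep_idx`, then add Gaussian noise (std `sigma`) to the
    surviving entries to further blur what an attacker can recover."""
    out = [0.0] * len(grad)
    for i in keep_idx:
        out[i] = grad[i] + rng.gauss(0.0, sigma)
    return out
```

The attacker now sees only a sparse, noisy projection of the true gradient, which is what makes reconstruction of the client's data harder.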

    Research Paper
    Efficient Bandwidth Allocation and Computation Configuration in Industrial IoT
    HUANG Rui, LI Huilin, ZHANG Yongmin
    2023, 21(1):  55-63.  doi:10.12142/ZTECOM.202301007

    With the advancement of the Industrial Internet of Things (IoT), the rapidly growing demand for data collection and processing poses a huge challenge to the design of data transmission and computation resources in industrial scenarios. Leveraging the model-accuracy gains offered by machine learning algorithms, we investigate the relationship between system performance and the available transmission and computation resources, and then analyze the impact of bandwidth allocation and computation resources on the accuracy of the system model. A joint bandwidth allocation and computation resource configuration scheme is proposed, and the Karush-Kuhn-Tucker (KKT) conditions are used to obtain an optimal decision that minimizes the total computation resource requirement while ensuring the system accuracy meets the industrial requirements. Simulation results show that the proposed scheme can reduce computing resource usage by 10% compared with the average allocation strategy.
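The KKT machinery can be illustrated on a toy convex surrogate (not the paper's actual formulation): minimize the total cost sum of w_i/b_i subject to a bandwidth budget sum of b_i = B, whose stationarity condition w_i/b_i^2 = lambda yields b_i proportional to sqrt(w_i):

```python
import math

def kkt_bandwidth_split(weights, total_bw):
    """Closed-form KKT solution of: minimize sum_i w_i / b_i
    subject to sum_i b_i = total_bw, b_i > 0.
    Stationarity gives w_i / b_i**2 = lambda, i.e. b_i ∝ sqrt(w_i)."""
    roots = [math.sqrt(w) for w in weights]
    s = sum(roots)
    return [total_bw * r / s for r in roots]
```

Any feasible perturbation of the returned split raises the objective, which is what the KKT conditions certify for a convex problem like this.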

    Ultra-Lightweight Face Animation Method for Ultra-Low Bitrate Video Conferencing
    LU Jianguo, ZHENG Qingfang
    2023, 21(1):  64-71.  doi:10.12142/ZTECOM.202301008

    Video conferencing systems face a dilemma between smooth streaming and decent visual quality, because traditional video compression algorithms fail to produce bitstreams low enough for bandwidth-constrained networks. An ultra-lightweight face-animation-based method that enables a better video conferencing experience is proposed in this paper. The proposed method compresses high-quality upper-body videos at ultra-low bitrates and runs efficiently on mobile devices without high-end graphics processing units (GPUs). Moreover, a visual quality evaluation algorithm is used to avoid image degradation caused by extreme face poses and/or expressions, and a full-resolution image composition algorithm is employed to reduce unnaturalness, which guarantees the user experience. Experiments show that the proposed method is efficient and can generate high-quality videos at ultra-low bitrates.

    Adaptive Load Balancing for Parameter Servers in Distributed Machine Learning over Heterogeneous Networks
    CAI Weibo, YANG Shulin, SUN Gang, ZHANG Qiming, YU Hongfang
    2023, 21(1):  72-80.  doi:10.12142/ZTECOM.202301009

    In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs leads to a significant slowdown of model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and the results show that our scheme achieves up to a 2.68-fold speed-up of model training in dynamic and heterogeneous network environments.
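The core idea of adjusting PS load to network state can be sketched as a bandwidth-proportional parameter split; the granularity and the function name are illustrative assumptions, not the scheme's actual mechanism:

```python
def assign_ps_load(total_params, ps_bandwidths):
    """Split model parameters across parameter servers in proportion
    to each server's measured bandwidth, so every PS finishes its
    push/pull phase in roughly the same time."""
    total_bw = sum(ps_bandwidths)
    shares = [total_params * bw // total_bw for bw in ps_bandwidths]
    shares[-1] += total_params - sum(shares)  # hand the rounding remainder to the last PS
    return shares
```

Under an equal split, the slowest PS would dominate synchronization time; weighting by bandwidth equalizes the per-PS transfer time instead.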

    Scene Visual Perception and AR Navigation Applications
    LU Ping, SHENG Bin, SHI Wenzhe
    2023, 21(1):  81-88.  doi:10.12142/ZTECOM.202301010

    With the rapid popularization of mobile devices and the wide application of various sensors, scene perception methods on mobile devices occupy an important position in location-based services such as navigation and augmented reality (AR). The development of deep learning technologies has greatly improved machines’ visual perception of scenes. The basic framework of scene visual perception, related technologies and the specific process applied to AR navigation are introduced, and future technology developments are proposed. An application (APP) is designed to improve the effectiveness of AR navigation. The APP includes three modules: navigation map generation, a cloud navigation algorithm, and client design. The navigation map generation tool works offline; the cloud stores the navigation map and provides navigation algorithms for the terminal; and the terminal performs local real-time positioning and AR path rendering.

    RCache: A Read-Intensive Workload-Aware Page Cache for NVM Filesystem
    TU Yaofeng, ZHU Bohong, YANG Hongzhang, HAN Yinjun, SHU Jiwu
    2023, 21(1):  89-94.  doi:10.12142/ZTECOM.202301011

    Byte-addressable non-volatile memory (NVM), as a new participant in the storage hierarchy, delivers extremely high storage performance, which forces changes in current filesystem designs. The page cache, once a significant mechanism for filling the performance gap between dynamic random access memory (DRAM) and block devices, is now a liability that heavily hinders the write performance of NVM filesystems. Therefore, state-of-the-art NVM filesystems leverage direct access (DAX) technology to bypass the page cache entirely. However, DRAM still provides higher bandwidth than NVM, so skewed read workloads cannot benefit from DRAM’s higher bandwidth, leading to sub-optimal system performance. In this paper, we propose RCache, a read-intensive workload-aware page cache for NVM filesystems. Different from traditional caching mechanisms where all reads go through DRAM, RCache uses a tiered page cache design that assigns DRAM to hot data and NVM to cold data and reads from both tiers. To avoid copying data to DRAM on the critical path, RCache migrates data from NVM to DRAM in a background thread. Additionally, RCache manages data in DRAM in a lock-free manner for better latency and scalability. Evaluations on Intel Optane Data Center (DC) Persistent Memory Modules show that, compared with NOVA, RCache achieves 3 times higher bandwidth for read-intensive workloads while introducing little performance loss for write operations.
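The tiered read path can be sketched with a small two-tier cache. The promotion threshold, the names, and the synchronous promotion below are illustrative simplifications (RCache itself promotes in a background thread and uses lock-free structures):

```python
from collections import OrderedDict

class TieredCache:
    """Toy RCache-style tiered read cache: cold pages are served
    directly from the slow tier (NVM); a page whose read count
    reaches `hot_after` is promoted into a small LRU DRAM tier."""
    def __init__(self, dram_capacity, hot_after=2):
        self.dram = OrderedDict()   # hot tier (DRAM), LRU order
        self.reads = {}             # per-page read counters
        self.capacity = dram_capacity
        self.hot_after = hot_after

    def read(self, page, nvm):
        if page in self.dram:                  # DRAM hit
            self.dram.move_to_end(page)
            return self.dram[page], "dram"
        self.reads[page] = self.reads.get(page, 0) + 1
        data = nvm[page]                       # serve directly from NVM
        if self.reads[page] >= self.hot_after:
            self.dram[page] = data             # promote the hot page
            if len(self.dram) > self.capacity:
                self.dram.popitem(last=False)  # evict the LRU page
        return data, "nvm"
```

Only repeatedly-read (hot) pages pay the one-time copy into DRAM; cold pages never pollute the DRAM tier, which is the intuition behind reading from both tiers.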