
Table of Contents

    25 June 2020, Volume 18 Issue 2
    Special Topic
    Editorial: Special Topic on Machine Learning at Network Edges
    TAO Meixia, HUANG Kaibin
    2020, 18(2):  1-1.  doi:10.12142/ZTECOM.202002001
    Enabling Intelligence at Network Edge: An Overview of Federated Learning
    YANG Howard H., ZHAO Zhongyuan, QUEK Tony Q. S.
    2020, 18(2):  2-10.  doi:10.12142/ZTECOM.202002002

    The burgeoning advances in machine learning and wireless technologies are forging a new paradigm for future networks, which are expected to possess higher degrees of intelligence by drawing inference from vast datasets and to respond to local events in a timely manner. Due to the sheer volume of data generated by end-user devices, as well as increasing concerns about sharing private information, a new branch of machine learning, namely federated learning, has emerged from the intersection of artificial intelligence and edge computing. In contrast to conventional machine learning methods, federated learning brings the model directly to each device for training, and only the resultant parameters are sent to the edge server. The local copies of the model on the devices offer the key advantages of eliminating network latency and preserving data privacy. Nevertheless, to make federated learning possible, one needs to tackle new challenges that require a fundamental departure from standard methods designed for distributed optimization. In this paper, we aim to deliver a comprehensive introduction to federated learning. Specifically, we first survey the basics of federated learning, including its learning structure and the features that distinguish it from conventional machine learning models. We then enumerate several critical issues associated with deploying federated learning in a wireless network, and show why and how technologies should be jointly integrated to facilitate its full implementation from different perspectives, ranging from algorithmic design and on-device training to communication resource management. Finally, we conclude by shedding light on some potential applications and future trends.
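
    As an illustration of the workflow this abstract describes (local training on each device, with only model parameters uploaded to the edge server for aggregation), a minimal federated-averaging sketch is given below. This is not the authors' implementation; the model, data, and update rule are toy placeholders.

```python
import numpy as np

def local_update(w, X, y, lr=0.01, epochs=5):
    """One device: a few epochs of gradient descent on a local
    least-squares objective, starting from the global model w."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_averaging(devices, w, rounds=20):
    """Edge server: broadcast w, collect locally trained models, and
    aggregate them weighted by local dataset size. Raw data never leaves a device."""
    for _ in range(rounds):
        local_models = [local_update(w, X, y) for X, y in devices]
        sizes = np.array([len(y) for _, y in devices], dtype=float)
        w = np.average(local_models, axis=0, weights=sizes)
    return w

# Toy example: 4 devices, each holding a private linear-regression dataset.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

w_global = federated_averaging(devices, w=np.zeros(3))
print(w_global)  # close to true_w; only parameters were exchanged
```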

    Scheduling Policies for Federated Learning in Wireless Networks: An Overview
    SHI Wenqi, SUN Yuxuan, HUANG Xiufeng, ZHOU Sheng, NIU Zhisheng
    2020, 18(2):  11-19.  doi:10.12142/ZTECOM.202002003

    Due to the increasing need for massive data analysis and machine learning model training at the network edge, as well as rising concerns about data privacy, a new distributed training framework called federated learning (FL) has emerged and attracted much attention from both academia and industry. In FL, participating devices iteratively update local models based on their own data and contribute to the global training by uploading model updates until the training converges. Thus, the computation capabilities of mobile devices can be utilized and data privacy can be preserved. However, deploying FL in resource-constrained wireless networks encounters several challenges, including the limited energy of mobile devices, weak onboard computing capability, and scarce wireless bandwidth. To address these challenges, recent solutions have been proposed to maximize the convergence rate or minimize energy consumption under heterogeneous constraints. In this overview, we first introduce the background and fundamentals of FL. We then discuss the key challenges in deploying FL in wireless networks and review several existing solutions. Finally, we highlight open issues and future research directions in FL scheduling.
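
    To make the notion of an FL scheduling policy concrete, the sketch below contrasts two simple ways of choosing which devices upload their updates in a round when bandwidth only supports k of N uploads. The policies and channel model are illustrative examples, not the specific schemes reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_scheduling(num_devices, k):
    """Pick k of the N devices uniformly at random for this round."""
    return rng.choice(num_devices, size=k, replace=False)

def channel_aware_scheduling(channel_gains, k):
    """Pick the k devices with the best instantaneous channels,
    so their model updates can be uploaded faster."""
    return np.argsort(channel_gains)[-k:]

# One round with N = 10 devices and bandwidth for only k = 3 uploads.
gains = rng.exponential(scale=1.0, size=10)   # illustrative fading power gains
print(random_scheduling(10, 3))
print(channel_aware_scheduling(gains, 3))
```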

    Joint User Selection and Resource Allocation for Fast Federated Edge Learning
    JIANG Zhihui, HE Yinghui, YU Guanding
    2020, 18(2):  20-30.  doi:10.12142/ZTECOM.202002004

    By periodically aggregating local learning updates from edge users, federated edge learning (FEEL) is envisioned as a promising means to reap the benefit of rich local data while protecting users’ privacy. However, scarce wireless communication resources greatly limit the number of participating users and are regarded as the main bottleneck hindering the development of FEEL. To tackle this issue, we propose a user selection policy based on data importance for the FEEL system. To quantify the data importance of each user, we first analyze the relationship between the loss decay and the squared norm of the gradient. Then, we formulate a combinatorial optimization problem to maximize the learning efficiency by jointly considering user selection and communication resource allocation. By problem transformation and relaxation, the optimal user selection policy and resource allocation are derived, and a polynomial-time optimal algorithm is developed. Finally, we deploy two commonly used deep neural network (DNN) models for simulation. The results validate that our proposed algorithm has strong generalization ability and can attain higher learning efficiency than other traditional algorithms.
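
    The core idea of data-importance-based selection, ranking users by the squared norm of their local gradients, can be sketched as follows. This is a simplified illustration only; the paper's joint user-selection and resource-allocation algorithm, and its loss-decay analysis, are more involved.

```python
import numpy as np

def squared_grad_norm(w, X, y):
    """Data-importance proxy for one user: the squared norm of the local
    gradient of a least-squares loss at the current global model w."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return float(grad @ grad)

def select_users(users, w, budget):
    """Pick the users whose local data currently yields the largest gradient
    norm, up to the number of uploads the channel can support this round."""
    scores = [squared_grad_norm(w, X, y) for X, y in users]
    return np.argsort(scores)[::-1][:budget]
```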

    Communication-Efficient Edge AI Inference over Wireless Networks
    YANG Kai, ZHOU Yong, YANG Zhanpeng, SHI Yuanming
    2020, 18(2):  31-39.  doi:10.12142/ZTECOM.202002005

    Given the fast growth of intelligent devices, a large number of high-stakes artificial intelligence (AI) applications, e.g., drones, autonomous cars, and tactile robots, are expected to be deployed at the edge of wireless networks in the near future. Intelligent communication networks will therefore be designed to leverage advanced wireless techniques and edge computing technologies to support AI-enabled applications on end devices with limited communication, computation, hardware, and energy resources. In this article, we present the principles of efficiently deploying model inference at the network edge to provide low-latency and energy-efficient AI services. These include a wireless distributed computing framework for low-latency, device-distributed model inference and a wireless cooperative transmission strategy for energy-efficient edge cooperative model inference. The communication efficiency of edge inference systems is further improved by building a smart radio propagation environment via intelligent reflecting surfaces.
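
    A toy sketch of device-distributed inference is given below: the computation of one model layer is partitioned across several devices and the partial results are recombined. It ignores the wireless transmission, cooperative aggregation, and intelligent-reflecting-surface aspects that the article actually addresses.

```python
import numpy as np

def split_linear_inference(x, W, num_devices):
    """Partition the rows of a weight matrix across devices; each device
    computes a slice of the layer output, and the slices are concatenated
    (in a real system, collected over the wireless channel)."""
    partial_outputs = []
    for W_part in np.array_split(W, num_devices, axis=0):
        partial_outputs.append(W_part @ x)      # computed on one device
    return np.concatenate(partial_outputs)

rng = np.random.default_rng(2)
W = rng.normal(size=(8, 16))   # one layer of a model
x = rng.normal(size=16)        # input feature vector
assert np.allclose(split_linear_inference(x, W, num_devices=4), W @ x)
```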

    Knowledge Distillation for Mobile Edge Computation Offloading
    CHEN Haowei, ZENG Liekang, YU Shuai, CHEN Xu
    2020, 18(2):  40-48.  doi:10.12142/ZTECOM.202002006

    Edge computation offloading allows mobile end devices to execute compute-intensive tasks on edge servers. End devices can decide, in an online manner, whether a task is offloaded to an edge server, offloaded to a cloud server, or executed locally according to the current network condition and device profiles. In this paper, we propose an edge computation offloading framework based on deep imitation learning (DIL) and knowledge distillation (KD), which helps end devices quickly make fine-grained decisions that optimize the delay of computation tasks online. We formulate the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated in an offline manner. After the model is trained, we leverage KD to obtain a lightweight DIL model, which further reduces the model’s inference delay. Numerical experiments show that the offloading decisions made by our model not only outperform those of other related policies in terms of latency, but also have the shortest inference delay among all policies.
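
    For reference, the sketch below shows a standard Hinton-style distillation loss, where a lightweight student is trained on a blend of hard labels and the teacher's temperature-softened outputs. It is only an illustration of KD in general; the paper's DIL model is a multi-label classifier, so its exact loss may differ.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, hard_labels, T=4.0, alpha=0.5):
    """Blend of cross-entropy on the hard labels and cross-entropy against the
    teacher's softened outputs (scaled by T^2, as is conventional)."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    soft_term = -(p_teacher * log_p_student_T).sum(axis=-1).mean() * T * T
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard_term = -log_p_student[np.arange(len(hard_labels)), hard_labels].mean()
    return alpha * hard_term + (1 - alpha) * soft_term

# Toy batch: 8 samples, 4 hypothetical offloading choices.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(8, 4))
student_logits = rng.normal(size=(8, 4))
labels = rng.integers(0, 4, size=8)
print(kd_loss(student_logits, teacher_logits, labels))
```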

    Joint Placement and Resource Allocation for UAV-Assisted Mobile Edge Computing Networks with URLLC
    ZHANG Pengyu, XIE Lifeng, XU Jie
    2020, 18(2):  49-56.  doi:10.12142/ZTECOM.202002007

    This paper investigates an unmanned aerial vehicle (UAV) assisted mobile edge computing (MEC) network with ultra-reliable and low-latency communications (URLLC), in which a UAV acts as an aerial edge server to collect information from a set of sensors and send the processed data (e.g., command signals) to the corresponding actuators. In particular, we focus on round-trip URLLC from the sensors to the UAV and on to the actuators. Considering finite block-length codes, our objective is to minimize the maximum end-to-end packet error rate (PER) over the sensor-actuator pairs by jointly optimizing the UAV’s placement location and transmit power allocation, as well as the users’ block-length allocation, subject to the UAV’s sum transmit power constraint and the total block-length constraint. Although the maximum-PER minimization problem is non-convex and difficult to solve optimally, we obtain a high-quality solution by using the technique of alternating optimization. Numerical results show that our proposed design achieves significant performance gains over benchmark schemes without the joint optimization.
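
    The structure of alternating optimization, optimizing one block of variables while the others are held fixed and iterating until the min-max objective stops improving, is sketched below on a toy stand-in objective. The two quadratic "error" terms are placeholders; the actual finite block-length PER expressions and the subproblem solutions derived in the paper are much more involved.

```python
import numpy as np

# Toy stand-in objective: the larger of two error terms, depending on a
# placement variable q and a power-split variable p (both in [0, 1]).
def max_per(q, p):
    per1 = (q - 0.2) ** 2 + (1 - p)        # "error" of link 1
    per2 = (q - 0.8) ** 2 + p              # "error" of link 2
    return max(per1, per2)

grid = np.linspace(0, 1, 201)
q, p = 0.5, 0.5
for _ in range(10):
    # Step 1: optimize the placement q with the power split p fixed.
    q = grid[np.argmin([max_per(x, p) for x in grid])]
    # Step 2: optimize the power split p with the placement q fixed.
    p = grid[np.argmin([max_per(q, x) for x in grid])]
print(q, p, max_per(q, p))
```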

    Review
    Adaptive and Intelligent Digital Signal Processing for Improved Optical Interconnection
    SUN Lin, DU Jiangbing, HUA Feng, TANG Ningfeng, HE Zuyuan
    2020, 18(2):  57-73.  doi:10.12142/ZTECOM.202002008

    In recent years, explosively increasing data traffic has been driving continuous demand for high-speed optical interconnection within and among data centers, high-performance computers, and even consumer electronics. To improve interconnection performance in terms of capacity, energy efficiency, and simplicity, effective approaches have been demonstrated, notably advanced digital signal processing (DSP) methods. In this paper, we review the enabling adaptive DSP methods for optical interconnection applications and give a detailed summary of our recent and ongoing work in this field. In brief, our work focuses on the specific issues of short-reach interconnection scenarios with adaptive operation, including the signal-to-noise ratio (SNR) limitation, level nonlinearity distortion, energy efficiency, and decision precision.
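
    As a generic example of the kind of adaptive DSP used in short-reach links, the sketch below shows a least-mean-squares (LMS) adaptive FIR equalizer trained on a known sequence. It is illustrative only and does not represent the specific methods surveyed in this review.

```python
import numpy as np

def lms_equalizer(received, desired, num_taps=11, mu=0.01):
    """LMS adaptive FIR equalizer: tap weights are updated from the error
    between the equalizer output and a known training sequence."""
    w = np.zeros(num_taps)
    out = np.zeros(len(received))
    for n in range(num_taps, len(received)):
        x = received[n - num_taps:n][::-1]   # most recent samples first
        out[n] = w @ x
        e = desired[n] - out[n]
        w = w + mu * e * x                   # stochastic-gradient weight update
    return out, w
```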

    Research Paper
    Crowd Counting for Real Monitoring Scene
    LI Yiming, LI Weihua, SHEN Zan, NI Bingbing
    2020, 18(2):  74-82.  doi:10.12142/ZTECOM.202002009

    Crowd counting is a challenging task in computer vision, as realistic scenes are filled with unfavourable factors such as severe occlusions, perspective distortions, and diverse distributions. Recent state-of-the-art methods based on convolutional neural networks (CNNs) mitigate these factors via multi-scale feature fusion or optimal feature selection through a front switch-net. However, they rely on L2 regression of the crowd density map, which is known to produce averaged, blurry results and degrades the accuracy of both the crowd count and the position distribution. To tackle these problems, we take full advantage of generative adversarial networks (GANs) in image generation and propose a novel crowd counting model based on conditional GANs to predict high-quality density maps from crowd images. Furthermore, we put forward a new regularizer to help boost accuracy in extremely crowded scenes. Extensive experiments on four major crowd counting datasets demonstrate the better performance of the proposed approach compared with recent state-of-the-art methods.
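
    The density-map representation that both the L2-regression baseline and the GAN-based model predict can be illustrated with a minimal sketch: a Gaussian is placed at each annotated head so the map integrates to the crowd count. The kernel width and map size below are illustrative, not the paper's settings.

```python
import numpy as np

def make_density_map(head_points, shape, sigma=4.0):
    """Ground-truth density map: place a normalized Gaussian at each annotated
    head so that the map sums to the number of people in the image."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape)
    for (px, py) in head_points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        density += g / g.sum()
    return density

heads = [(30, 40), (31, 42), (100, 20)]      # annotated head positions (x, y)
d = make_density_map(heads, shape=(128, 128))
print(round(d.sum()))                        # 3: the count is the integral of the map
```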

    Download the whole issue (PDF)
    The whole issue of ZTE Communications June 2020, Vol. 18 No. 2
    2020, 18(2):  0. 