
Table of Contents

    11 June 2019, Volume 17 Issue 2
    Special Topic
    Editorial: Special Topic on Machine Learning for Wireless Networks
    WANG Zhengdao
    2019, 17(2):  1-1.  doi:10.12142/ZTECOM.201902001
    A Framework for Active Learning of Beam Alignment in Vehicular Millimeter Wave Communications by Onboard Sensors
    Zöchmann Erich
    2019, 17(2):  2-9.  doi:10.12142/ZTECOM.201902002

    Estimating time-selective millimeter wave wireless channels and then deriving the optimum beam alignment for directional antennas is a challenging task. To solve this problem, one can focus on tracking the strongest multipath components (MPCs). Aligning antenna beams with the tracked MPCs increases the channel coherence time by several orders of magnitude. This contribution suggests tracking the MPCs geometrically. The derived geometric tracker is based on algorithms known as Doppler-bearing tracking. A recent work on geometric-polar tracking is reformulated into an efficient recursive version. Once the relative positions of the MPCs are known, the other sensors on board a vehicle, e.g., lidar, radar, and camera, can perform active learning based on their own observed data. By learning the relationship between sensor data and MPCs, onboard sensors can participate in channel tracking. Joint tracking by many integrated sensors will increase the reliability of MPC tracking.
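    The paper's recursive geometric tracker is not reproduced here. As a minimal illustration of the alignment step that follows tracking, the sketch below points a uniform linear array toward the estimated angle of the strongest tracked MPC; the array geometry, element count, and function names are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def ula_steering_vector(angle_rad, n_antennas, spacing_wavelengths=0.5):
        """Steering vector of a uniform linear array toward a given azimuth."""
        k = np.arange(n_antennas)
        return np.exp(1j * 2 * np.pi * spacing_wavelengths * k * np.sin(angle_rad))

    def beam_weights(tracked_mpc_angle_rad, n_antennas=64):
        """Matched-filter beamforming weights aligned with the tracked MPC."""
        a = ula_steering_vector(tracked_mpc_angle_rad, n_antennas)
        return np.conj(a) / np.sqrt(n_antennas)  # unit-norm weights

    # Example: steer a 64-element array toward an MPC tracked at 20 degrees.
    w = beam_weights(np.deg2rad(20.0))
    ```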

    Novel Real-Time System for Traffic Flow Classification and Prediction
    YE Dezhong, LV Haibing, GAO Yun, BAO Qiuxia, CHEN Mingzi
    2019, 17(2):  10-18.  doi:10.12142/ZTECOM.201902003

    Traffic flow prediction has been applied in many wireless communication applications (e.g., smart city and Internet of Things). With the development of wireless communication technologies and artificial intelligence, how to design a system that predicts traffic flow in real time and with high accuracy is an urgent problem for both researchers and equipment suppliers. This paper presents a novel real-time system for traffic flow prediction. Different from approaches that rely on a single algorithm, our system first uses dynamic time warping to judge whether traffic flow data show regularity, thereby classifying the data. After classification, we use XGBoost and a wavelet transform-echo state network, respectively, to predict the traffic flow data according to their regularity. Moreover, to achieve real-time classification and prediction, we use the Spark/Hadoop computing platform to process large amounts of traffic data. Numerical results show that the proposed system achieves better performance and higher accuracy than other schemes.
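    The abstract does not spell out the classification rule, so the following is only a rough sketch of how dynamic time warping (DTW) could be used to decide whether a new traffic series is "regular", i.e., warps closely onto historical series; the threshold and function names are illustrative assumptions.

    ```python
    import numpy as np

    def dtw_distance(x, y):
        """Classic dynamic-time-warping distance between two 1-D series."""
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def is_regular(today, historical_days, threshold):
        """Label today's flow as regular if it is close to any past day under DTW."""
        return min(dtw_distance(today, past) for past in historical_days) <= threshold
    ```

    Regular series would then be routed to the XGBoost predictor and irregular ones to the wavelet transform-echo state network, mirroring the split described above.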

    A Network Traffic Prediction Method Based on LSTM
    WANG Shihao, ZHUO Qinzheng, YAN Han, LI Qianmu, QI Yong
    2019, 17(2):  19-25.  doi:10.12142/ZTECOM.201902004

    As network sizes continue to increase, network traffic grows exponentially. In this situation, how to accurately predict network traffic in order to serve customers better has become one of the issues that Internet service providers care most about. Traditional network models cannot predict network traffic that behaves as a nonlinear system. In this paper, a long short-term memory (LSTM) neural network model is proposed to predict such nonlinear network traffic. Based on the autocorrelation characteristics of the traffic, an autocorrelation coefficient is added to the model to improve prediction accuracy. Several experiments were conducted on real-world data, showing the effectiveness of the LSTM model and the further improvement in accuracy when autocorrelation is considered. The experimental results show that the proposed model is efficient and suitable for real-world network traffic prediction.
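    The abstract does not detail how the autocorrelation coefficient enters the model; one plausible reading is that lags with strong autocorrelation are selected as additional LSTM input features. The sketch below shows only that selection step; the function names and the selection strategy are assumptions.

    ```python
    import numpy as np

    def autocorrelation(series, lag):
        """Sample autocorrelation coefficient of a traffic series at a given lag."""
        x = np.asarray(series, dtype=float) - np.mean(series)
        return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x)) if lag > 0 else 1.0

    def strongest_lags(series, max_lag, top_k=3):
        """Pick the lags with the highest autocorrelation (e.g., daily periods)."""
        lags = range(1, max_lag + 1)
        return sorted(lags, key=lambda k: autocorrelation(series, k), reverse=True)[:top_k]
    ```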

    Potential Off-Grid User Prediction System Based on Spark
    LI Xuebing, SUN Ying, ZHUANG Fuzhen, HE Jia, ZHANG Zhao, ZHU Shijun, HE Qing
    2019, 17(2):  26-37.  doi:10.12142/ZTECOM.201902005

    With the increasingly fierce competition among communication operators, accurately predicting potential off-grid users is becoming more and more important. Solving this problem requires considering the effectiveness of the learning algorithms, the efficiency of data processing, and other factors. Therefore, in this paper we propose, from a practical application point of view, a potential off-grid user prediction system based on Spark, covering data pre-processing, feature selection, model building, and effective presentation of the results. Furthermore, within this system we use the Spark parallel framework to improve the gcForest algorithm, a novel decision tree ensemble approach. The resulting parallel gcForest algorithm can be applied to practical problems such as off-grid prediction. Experiments on two real-world datasets demonstrate that the proposed system can handle large-scale data for the off-grid user prediction problem and that the parallel gcForest achieves satisfactory performance.
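    The authors' Spark-based parallel implementation is not reproduced here. As a single-machine sketch of the cascade idea behind gcForest, each level below trains two forests, obtains out-of-fold class-probability vectors, and appends them to the original features for the next level; the level count, forest types, and scikit-learn usage are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_predict

    def cascade_forest_predict(X_train, y_train, X_test, n_levels=3):
        """Simplified gcForest-style cascade of forests (not the paper's code)."""
        train_aug, test_aug = X_train, X_test
        for _ in range(n_levels):
            train_probas, test_probas = [], []
            for Forest in (RandomForestClassifier, ExtraTreesClassifier):
                clf = Forest(n_estimators=100, n_jobs=-1)
                # Out-of-fold probabilities avoid leaking the training labels.
                train_probas.append(cross_val_predict(
                    clf, train_aug, y_train, cv=3, method="predict_proba"))
                clf.fit(train_aug, y_train)
                test_probas.append(clf.predict_proba(test_aug))
            train_aug = np.hstack([X_train] + train_probas)
            test_aug = np.hstack([X_test] + test_probas)
        # Final prediction: average the last level's probability vectors.
        return np.mean(test_probas, axis=0).argmax(axis=1)
    ```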

    Detecting Abnormal Start-Ups, Unusual Resource Consumptions of the Smart Phone: A Deep Learning Approach
    ZHENG Xiaoqing, LU Yaping, PENG Haoyuan, FENG Jiangtao, ZHOU Yi, JIANG Min, MA Li, ZHANG Ji, JI Jie
    2019, 17(2):  38-43.  doi:10.12142/ZTECOM.201902006

    The temporal distance between events conveys information essential for many time series tasks such as speech recognition and rhythm detection. While traditional models such as hidden Markov models (HMMs) and discrete symbolic grammars tend to discard such information, recurrent neural networks (RNNs) can in principle learn to make use of it. As an advanced variant of RNNs, long short-term memory (LSTM) has an alternative (arguably better) mechanism for bridging long time lags. We propose several deep neural network-based models to detect abnormal start-ups and unusual CPU and memory consumption of the application processes running on smart phones. Experimental results show that the proposed neural networks achieve remarkable performance at reasonable computational cost. The speed advantage of neural networks makes them even more competitive for applications requiring real-time response, giving the proposed models the potential to be used in practical systems.
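    The abstract does not give the exact architectures, so the following is only a minimal PyTorch sketch of the kind of LSTM classifier described: it consumes a sequence of per-process resource samples and outputs an abnormality score. The framework, feature count, and hidden size are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class ProcessBehaviorLSTM(nn.Module):
        """Scores a sequence of per-process samples (e.g., CPU and memory usage)."""
        def __init__(self, n_features=2, hidden_size=32):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, x):                       # x: (batch, time, n_features)
            out, _ = self.lstm(x)                   # hidden state at every step
            return torch.sigmoid(self.head(out[:, -1]))  # abnormality probability

    # Example: 8 processes, 60 samples each, 2 features (CPU share, memory).
    scores = ProcessBehaviorLSTM()(torch.randn(8, 60, 2))  # shape (8, 1)
    ```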

    Review
    Cooperative Intelligence for Autonomous Driving
    CHENG Xiang, DUAN Dongliang, YANG Liuqing, ZHENG Nanning
    2019, 17(2):  44-50.  doi:10.12142/ZTECOM.201902007

    Autonomous driving is an emerging technology that has attracted interest from various sectors in recent years. Most existing work treats autonomous vehicles as isolated individuals and focuses on developing separate intelligent modules. In this paper, we attempt to exploit the connectivity among vehicles and propose a systematic framework for developing autonomous driving techniques. We first introduce a general hierarchical information fusion framework for cooperative sensing to obtain global situational awareness for vehicles. On this basis, a cooperative intelligence framework is proposed for autonomous driving systems. This general framework can guide the development of data collection, sharing, and processing strategies to realize different intelligent functions in autonomous driving.

    Standardization of Fieldbus and Industrial Ethernet
    CHEN Jinghe, ZHANG Hesheng
    2019, 17(2):  51-58.  doi:10.12142/ZTECOM.201902008

    Fieldbus and industrial Ethernet standards guide product specification and coordinate bus optimization; they are the basis for the development of fieldbus and industrial Ethernet. In this paper, we review the complex standard systems around the world. We discuss 18 fieldbus standards, including the International Electrotechnical Commission (IEC) 61158 standard, the IEC 61784 standard matched with IEC 61158, the controller and device interface standard IEC 62026 for low-voltage distribution and control devices, and the International Organization for Standardization (ISO) 11898 and ISO 11519 standards related to the controller area network (CAN) bus. We also introduce the standards of China, Europe, Japan, and the United States. This paper provides a reference for Chinese enterprises developing fieldbus and industrial Ethernet products.

    Research Paper
    SRSC: Improving Restore Performance for Deduplication-Based Storage Systems
    ZUO Chunxue, WANG Fang, TANG Xiaolan, ZHANG Yucheng, FENG Dan
    2019, 17(2):  59-66.  doi:10.12142/ZTECOM.201902009

    Modern backup systems exploit data deduplication technology to save storage space but suffer from the fragmentation problem caused by deduplication. Fragmentation degrades restore performance because the chunks of a backup must be read back from containers scattered across the storage system. To improve restore performance, the state-of-the-art History Aware Rewriting Algorithm (HAR) collects fragmented chunks found in the last backup and rewrites them in the next backup. However, because it only rewrites fragmented chunks in the next backup, HAR fails to eliminate the internal fragmentation caused by self-referenced chunks (chunks that appear more than two times in a backup) in the current backup, which degrades restore performance. In this paper, we propose Selectively Rewriting Self-Referenced Chunks (SRSC), a scheme that uses a buffer to simulate the restore cache, identifies internal fragmentation in the cache, and selectively rewrites the corresponding chunks. Experimental results on two real-world datasets show that SRSC improves restore performance by 45% with an acceptable sacrifice of the deduplication ratio.
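    SRSC's exact policy is given in the paper; the sketch below only illustrates the buffer idea the abstract describes: replay the backup's chunk sequence through a small cache of container IDs and flag self-referenced chunks whose containers would already have been evicted at restore time. The LRU policy and the (fingerprint, container) layout are assumptions.

    ```python
    from collections import OrderedDict

    def select_rewrites(backup_chunks, cache_size):
        """Flag self-referenced chunks that would miss a simulated restore cache."""
        cache = OrderedDict()   # container_id -> None, ordered by recency of use
        seen = set()            # fingerprints already encountered in this backup
        rewrites = []
        for fingerprint, container_id in backup_chunks:
            if fingerprint in seen and container_id not in cache:
                rewrites.append(fingerprint)  # would trigger an extra container read
            seen.add(fingerprint)
            cache[container_id] = None
            cache.move_to_end(container_id)   # mark container as most recently used
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict the least recently used container
        return rewrites
    ```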

    Download the whole issue (PDF)
    The whole issue of ZTE Communications June 2019, Vol. 17 No. 2
    2019, 17(2):  0. 