
Table of Contents

    25 December 2014, Volume 12 Issue 4
    Download the whole issue (PDF)
    The whole issue of ZTE Communications December 2014, Vol. 12 No. 4
    2014, 12(4):  0. 
    Special Topic
    Guest Editorial: Improving Performance of Cloud Computing and Big Data Technologies and Applications
    Zhenjiang Dong
    2014, 12(4):  1-2. 
    Cloud computing technology is changing the development and usage patterns of IT infrastructure and applications. Virtualized and distributed systems, as well as unified management and scheduling, have greatly improved computing and storage. Management has become easier, and OAM costs have been significantly reduced. Cloud desktop technology is developing rapidly. With this technology, users can flexibly and dynamically use virtual machine resources, companies’ efficiency of using and allocating resources is greatly improved, and information security is ensured. In most existing virtual cloud desktop solutions, computing and storage are bound together, and data is stored as image files. This limits the flexibility and expandability of systems and is insufficient for meeting customers’ requirements in different scenarios.
    In this era of big data, the annual growth rate of the data in social networks, mobile communication, e-commerce, and the Internet of Things is more than 50%. More than 80% of this data is unstructured. Therefore, it is imperative to develop an effective method for storing and managing big data and for querying and analyzing it in real time or quasi-real time. HBase is a distributed data storage system operating in the Hadoop environment. HBase provides a highly expandable method and platform for big data storage and management. However, it supports only primary-key indexing, not non-primary-key indexing. As a result, the data query efficiency of HBase is low, and data cannot be queried in real time or quasi-real time. For HBase operating in Hadoop, the capability of querying data according to non-primary keys is the most important and urgent requirement.
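The indexing limitation described above can be sketched in a few lines (an illustrative model, not the actual HBase API): rows are sorted by a single row key, so a primary-key lookup is direct, while filtering on any other attribute forces a scan of every row unless a secondary index maps that attribute back to row keys.

```python
# Toy model of a key-sorted store: primary-key lookup vs. non-primary-key
# query. Table names and columns below are illustrative.

table = {
    "row001": {"name": "alice", "city": "Nanjing"},
    "row002": {"name": "bob",   "city": "Shenzhen"},
    "row003": {"name": "carol", "city": "Nanjing"},
}

def get_by_row_key(key):
    # Direct lookup; in a real key-sorted store this is a cheap region seek
    return table.get(key)

def scan_by_city(city):
    # No index on "city": every row must be examined (full table scan)
    return sorted(k for k, row in table.items() if row["city"] == city)

def build_city_index():
    # A secondary index maps the non-primary attribute back to row keys,
    # turning the full scan into a direct lookup
    index = {}
    for key, row in table.items():
        index.setdefault(row["city"], []).append(key)
    return index
```

This is exactly the gap that the non-primary-key indexing work in this issue (HMIBase) addresses at scale.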
    The graph data structure is suitable for most big data created in social networks. Graph data is more complex and difficult to understand than traditional linked-list data or tree data, so quick and easy processing and understanding of graph data is of great significance and has become a hot topic in the industry.
    Big data has a high proportion of video and image data, but most of this data is not utilized. Creating value with this data has been a research focus in the industry. For example, traditional face localization and identification technology yields a locally optimal solution, leaving much room for improvement in accuracy.
    This special issue of ZTE Communications reflects the industry’s efforts to improve the performance of cloud computing and big data technologies and applications. We invited four peer-reviewed papers based on projects supported by ZTE Industry-Academic-Research Cooperation Funds.
    Hancong Duan et al. propose a disk mapping solution integrated with virtual desktop technology in “A New Virtual Disk Mapping Method for the Cloud Desktop Storage Client.” The virtual disk driver has a user-friendly mode for accessing desktop data and a flexible cache space management mechanism. The file system filter driver intelligently checks I/O requests of upper-layer applications and synchronizes file access requests to users’ cloud storage services. Experimental results show that the read-write performance of the virtual disk mapping method with customizable local cache storage is almost the same as that of a local hard disk.
    “HMIBase: An Hierarchical Indexing System for Storing and Querying Big Data,” by Shengmei Luo et al., presents the design and implementation of a complete hierarchical indexing and query system called HMIBase. This system efficiently queries a value or a range of values according to non-primary-key attributes and has good expandability. Test results based on 10 million to 1 billion data records show that, regardless of whether the number of query results is large or small, HMIBase responds to cold and hot queries one to four levels faster than standard HBase and five to twenty times faster than the open-source Hindex system.
    In “MBGM: A Graph-Mining Tool Based on MapReduce and BSP,” Zhenjiang Dong et al. propose a MapReduce and BSP-based Graph Mining (MBGM) tool. This tool uses a BSP model-based parallel graph mining algorithm and a MapReduce-based extraction-transformation-loading (ETL) algorithm, and an optimized workflow engine for cloud computing is designed for the tool. Experiments show that the graph mining algorithm components, including PageRank, K-means, InDegree Count, and Closeness Centrality, in the MBGM tool have higher performance than the corresponding algorithm components of BC-PDM and BC-BSP.
    Bofei Wang et al., in “Facial Landmark Localization by Gibbs Sampling,” present an optimized solution for key-point-based face localization. Instead of the traditional gradient descent algorithm, this solution uses the Gibbs sampling algorithm, which converges easily and can achieve the globally optimal solution for key-point-based face localization. In this way, locally optimal solutions are avoided. The posterior probability function used by the Gibbs sampling algorithm comprises the prior probability function and the likelihood function. The prior probability function is assumed to follow a Gaussian distribution and is learned from features after dimensionality reduction. The likelihood function is obtained through a local linear SVM algorithm. The system was tested on the LFW data, and the test results show that the accuracy of face localization is high.
    I would like to thank all the authors for their contributions and all the reviewers who helped improve the quality of the papers.
    A New Virtual Disk Mapping Method for the Cloud Desktop Storage Client
    Hancong Duan, Xiaoqin Wang, Ping Lu, Shengmei Luo, and Zhiyong Wang
    2014, 12(4):  3-7.  DOI: 10.3969/j.issn.1673-5188.2014.04.001
    Integration of the cloud desktop and cloud storage platform is urgent for enterprises. However, current cloud disk proposals are not satisfactory in terms of decoupling virtual computing from business data storage in the cloud desktop environment. In this paper, we present a new virtual disk mapping method for cloud desktop storage. In Windows, compared with the virtual hard disk method of popular cloud disks, the proposed client implementation, based on the virtual disk driver and the file system filter driver, is suitable for widespread desktop environments, especially cloud desktops with limited storage resources. Furthermore, our method supports customizable local cache storage, resulting in a user-friendly experience for thin clients of the cloud desktop. The evaluation results show that our virtual disk mapping method performs well in read-write throughput for files of different scales.
    HMIBase: An Hierarchical Indexing System for Storing and Querying Big Data
    Shengmei Luo, Di Zhao, Wei Ge, Rong Gu, Chunfeng Yuan, and Yihua Huang
    2014, 12(4):  8-15.  DOI: 10.3969/j.issn.1673-5188.2014.04.002
    Relational database management systems are usually deployed on single-node machines and have strict limitations in terms of data structure. This means they do not work well with big data, and NoSQL has been proposed as a solution. To make data querying more efficient, indexes and memory cache techniques are used in NoSQL databases. In this paper, we propose a hierarchical indexing mechanism and a prototype distributed data-storage system, called HMIBase, which has hierarchical indexes for non-primary keys in tables and makes data querying more efficient. HMIBase uses HBase as the lower data storage and creates a memory cache for more efficient data transmission. HMIBase supports coprocessors to process update requests. It also provides a client with query and update APIs and a server to support RPCs from the client and finish jobs. To improve the cache hit ratio, we propose a memory cache replacement strategy, called the Hot Score algorithm, in HMIBase. The experimental results show that the Hot Score algorithm is better than other cache-replacement strategies.
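The Hot Score formula itself is defined in the paper. As a hypothetical sketch of the general idea only (the class name, the decay parameter, and the scoring rule below are illustrative assumptions, not the paper's algorithm), a replacement policy that ranks entries by access frequency discounted by age, and evicts the coldest, might look like:

```python
# Hypothetical score-based cache replacement sketch. A logical clock is
# used instead of wall time so behavior is deterministic.

class HotScoreCache:
    def __init__(self, capacity, decay=0.5):
        self.capacity = capacity
        self.decay = decay   # assumed weight of recency vs. frequency
        self.clock = 0       # logical time, bumped on every access
        self.data = {}       # key -> value
        self.freq = {}       # key -> access count
        self.last = {}       # key -> last-access time

    def _score(self, key):
        # Higher frequency and more recent access => hotter entry
        age = self.clock - self.last[key]
        return self.freq[key] - self.decay * age

    def get(self, key):
        self.clock += 1
        if key in self.data:
            self.freq[key] += 1
            self.last[key] = self.clock
            return self.data[key]
        return None

    def put(self, key, value):
        self.clock += 1
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.data, key=self._score)  # evict coldest entry
            for d in (self.data, self.freq, self.last):
                del d[victim]
        self.data[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1
        self.last[key] = self.clock
```

With this scoring, an entry that is hit often and recently survives eviction even when an older entry arrived first.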
    MBGM: A Graph-Mining Tool Based on MapReduce and BSP
    Zhenjiang Dong, Lixia Liu, Bin Wu, and Yang Liu
    2014, 12(4):  16-22.  DOI: 10.3969/j.issn.1673-5188.2014.04.003
    This paper proposes an analytical mining tool for big graph data based on the MapReduce and bulk synchronous parallel (BSP) computing models. The tool is named MapReduce and BSP based Graph-Mining tool (MBGM). The core of this mining system comprises four sets of parallel graph-mining algorithms programmed in the BSP parallel model and one set of data extraction-transformation-loading (ETL) algorithms implemented in MapReduce. To invoke these algorithm sets, we designed a workflow engine optimized for cloud computing. Finally, a well-designed data management function enables users to view, delete, and input data in the Hadoop distributed file system (HDFS). Experiments on artificial data show that the graph-mining algorithm components in MBGM are efficient.
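As a minimal sketch of the BSP model that MBGM's algorithm components build on (not MBGM's own code), PageRank fits naturally into supersteps: each vertex sends its rank share to its neighbors, a global barrier synchronizes, and each vertex then combines its incoming messages. Dangling vertices simply drop their rank here, a simplification.

```python
# PageRank in the BSP style: computation proceeds in supersteps separated
# by a global barrier, with vertices exchanging messages in between.

def bsp_pagerank(adj, supersteps=30, d=0.85):
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(supersteps):
        # Superstep: every vertex sends rank/out_degree along its edges
        inbox = {v: [] for v in adj}
        for v, neighbors in adj.items():
            if neighbors:
                share = rank[v] / len(neighbors)
                for u in neighbors:
                    inbox[u].append(share)
        # Barrier, then each vertex combines its incoming messages
        rank = {v: (1 - d) / n + d * sum(inbox[v]) for v in adj}
    return rank
```

The same superstep/barrier skeleton carries the other components named above (K-means, InDegree Count, Closeness Centrality); only the per-vertex compute step changes.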
    Facial Landmark Localization by Gibbs Sampling
    Bofei Wang, Diankai Zhang, Chi Zhang, Jiani Hu, and Weihong Deng
    2014, 12(4):  23-29.  DOI: 10.3969/j.issn.1673-5188.2014.04.004
    In this paper, we introduce a novel method for facial landmark detection. We localize facial landmarks according to the MAP criterion. Conventional gradient ascent algorithms get stuck in locally optimal solutions. Gibbs sampling is a kind of Markov chain Monte Carlo (MCMC) algorithm. We choose it for optimization because it is easy to implement and it guarantees global convergence. The posterior distribution is obtained by learning the prior distribution and the likelihood function. The prior distribution is assumed to be Gaussian. We use Principal Component Analysis (PCA) to reduce the dimensionality and learn the prior distribution. A Local Linear Support Vector Machine (LL-SVM) is used to obtain the likelihood function of every key point. In our experiment, we compare our detector with some other well-known methods. The results show that the proposed method is simple and efficient, and it avoids becoming trapped in locally optimal solutions.
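A toy illustration of the sampler's mechanics (a bivariate Gaussian stands in for the landmark posterior; this is not the paper's implementation): each step draws one coordinate from its exact conditional given the other, so no gradient is needed and the chain converges to the joint distribution.

```python
import random

# Gibbs sampler for a bivariate Gaussian with zero mean, unit variance,
# and correlation rho: the conditionals are x|y ~ N(rho*y, 1 - rho^2)
# and y|x ~ N(rho*x, 1 - rho^2), both sampled exactly.

def gibbs_bivariate_gaussian(rho, steps, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sigma = (1 - rho * rho) ** 0.5   # conditional standard deviation
    samples = []
    for _ in range(steps):
        x = rng.gauss(rho * y, sigma)  # resample x given current y
        y = rng.gauss(rho * x, sigma)  # resample y given new x
        samples.append((x, y))
    return samples
```

Running the chain long enough, the empirical mean approaches zero and the empirical correlation approaches rho, which is the global-convergence property the abstract relies on.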
    Research Paper
    Angle-Based Interference-Aware Routing Algorithm for Multicast over Wireless D2D Networks
    Qian Xu, Pinyi Ren, Qinghe Du, Gang Wu, Qiang Li, and Li Sun
    2014, 12(4):  30-39.  DOI: 10.3969/j.issn.1673-5188.2014.04.005
    Wireless device-to-device (D2D) communications sharing the spectrum of cellular networks is important for improving spectrum efficiency. Furthermore, introducing multicast and multihop communications to D2D networks can expand D2D service functions. In this paper, we propose an angle-based interference-aware routing algorithm for D2D multicast communications. This algorithm reuses the uplink cellular spectrum. Our proposed algorithm aims to reduce the outage probability and minimize the average hop count over all multicast destinations (i.e., multicast receivers), while limiting interference to cellular users to a tolerable level. In particular, our algorithm integrates two design principles for hop-by-hop route selection. First, we minimize the distance ratio of the candidate-to-destination link to the candidate-to-base-station link, such that the selected route advances closer to a subset of multicast receivers. Second, we design an angle-threshold-based merging strategy to divide multicast receivers into subsets with geographically close destinations. By applying the two principles for selection of each hop and further deriving an adaptive power-allocation strategy, the message can be delivered to destinations more efficiently, with fewer branches when constructing the multicast tree. This means fewer duplicated data transmissions. Analyses and simulations are presented to show the impact of system parameters on routing performance. Simulation results also demonstrate the superiority of our algorithm over baseline schemes in terms of outage probability and average hop count.
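The first design principle can be sketched as follows (the coordinates and the function name are illustrative assumptions, not the paper's algorithm): among candidate relays, choose the one minimizing the ratio of its distance to the destination over its distance to the base station, so the route advances toward the receiver while staying far from the base station, limiting interference to cellular users.

```python
import math

# Pick the next hop minimizing d(candidate, destination) / d(candidate, BS).
# Points are (x, y) tuples on a plane.

def pick_next_hop(candidates, dest, bs):
    def ratio(c):
        d_dest = math.hypot(c[0] - dest[0], c[1] - dest[1])
        d_bs = math.hypot(c[0] - bs[0], c[1] - bs[1])
        return d_dest / d_bs
    return min(candidates, key=ratio)
```

With the base station at the origin and the destination far to the right, the candidate closest to the destination and farthest from the base station wins, which is exactly the trade-off the abstract describes.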
    Digital Signal Processing for Optical Access Networks
    Jianjun Yu
    2014, 12(4):  40-48.  DOI: 10.3969/j.issn.1673-5188.2014.04.006
    In this paper, we investigate advanced digital signal processing (DSP) at the transmitter and receiver sides for signal pre-equalization and post-equalization in order to improve spectrum efficiency (SE) and transmission distance in an optical access network. A novel DSP scheme for optical super-Nyquist filtered 9-Quadrature-Amplitude-Modulation-like (9-QAM-like) signals, based on multi-modulus equalization without post filtering, is proposed. This scheme recovers the Nyquist-filtered Quadrature Phase-Shift Keying (QPSK) signal to a 9-QAM-like one. With this technique, SE can be increased to 4 b/s/Hz for QPSK signals. A novel digital super-Nyquist signal generation scheme is also proposed to further suppress the Nyquist signal bandwidth and reduce channel crosstalk without the need for optical pre-filtering. Only optical couplers are needed for super-Nyquist wavelength-division-multiplexing (WDM) channel multiplexing. We extend the DSP to short-haul optical transmission networks by using high-order QAMs. We propose a high-speed Carrierless Amplitude/Phase 64-QAM (CAP-64QAM) system using a directly modulated laser (DML) based on direct detection and digital equalization. Decision-directed least mean square is used to equalize the CAP-64QAM signal. Using this scheme, we generate and transmit up to 60 Gbit/s CAP-64QAM over 20 km of standard single-mode fiber based on the DML and direct detection. Finally, several key problems are solved for real-time orthogonal-frequency-division-multiplexing (OFDM) signal transmission and processing. With coherent detection, up to 100 Gbit/s 16-QAM-OFDM real-time transmission is possible.
    Influence on Multimode Rectangular Optical Waveguide Propagation Loss by Surface Roughness
    Chuanlu Deng, Li Zhao, Zhe Liu, Nana Jia, Fufei Pang, and Tingyun Wang
    2014, 12(4):  49-53.  DOI: 10.3969/j.issn.1673-5188.2014.04.007
    The optical scattering loss coefficient of a multimode rectangular waveguide is analyzed in this work. First, the effective refractive index and the mode field distribution of the waveguide modes are obtained using the Marcatili method. The influence of waveguide surface roughness on the scattering loss coefficient is then analyzed. Finally, the mode coupling efficiencies for the SMF-Optical-Waveguide (SOW) structure and the MMF-Optical-Waveguide (MOW) structure are presented. The total scattering loss coefficient depends on the scattering loss coefficients of the modes and the mode coupling efficiency between fiber and waveguide. The simulation results show that the total scattering loss coefficient for the MOW structure is affected more strongly by surface roughness than that for the SOW structure. When surface roughness decreases from 300 nm to 20 nm and waveguide length is 100 cm, the total scattering loss coefficient of the waveguide decreases from 3.97 × 10⁻² dB/cm to 2.96 × 10⁻⁴ dB/cm for the SOW structure and from 5.24 × 10⁻² dB/cm to 4.7 × 10⁻⁴ dB/cm for the MOW structure.
    An MAS Framework for Speculative Trading Research in Stock Index Futures Market
    Junneng Nie and Haopeng Chen
    2014, 12(4):  54-60.  DOI: 10.3969/j.issn.1673-5188.2014.04.008
    In this paper, we develop a futures trading simulation system to determine how speculative behavior affects the futures market. A configurable client is designed to simulate traders, and users can define trade strategies using different programming languages. A lightweight server is designed to handle large-scale and highly concurrent access requests from clients. HBase is chosen as the database to guarantee scalability of the system. As HBase only supports single-row transactions, a transaction support mechanism is developed to improve data consistency for HBase. This mechanism supports multi-row and multi-table transactions by using the two-phase commit protocol. The experiments indicate that our system shows high efficiency in the face of large-scale, highly concurrent access requests, and the read/write performance loss of HBase introduced by the transaction support mechanism is also acceptable.
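The two-phase commit idea behind the transaction mechanism can be sketched as follows (an illustration of the generic protocol, not the paper's HBase implementation): the coordinator commits only if every participant votes yes in the prepare phase; any no vote aborts all participants.

```python
# Minimal two-phase commit: phase 1 collects prepare votes, phase 2
# applies a single commit-or-abort decision everywhere.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "init"

    def prepare(self):
        # Phase 1: stage the pending writes durably, then vote
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        # Phase 2: apply or discard the staged writes
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]
    decision = all(votes)          # commit only on a unanimous yes
    for p in participants:
        p.finish(decision)
    return decision
```

In a multi-row or multi-table HBase transaction, each affected row or table region would play the participant role, which is how a single-row store can offer cross-row atomicity.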