
Table of Contents

    25 December 2013, Volume 11 Issue 4
    Download the whole issue (PDF)
    The whole issue of ZTE Communications December 2013, Vol. 11 No. 4
    2013, 11(4):  0. 
    PDF (2025KB)
    Special Topic
    Guest Editorial: Cloud Computing
    Hong Cai
    2013, 11(4):  1-1. 
    PDF (187KB)
    In the last five years, great progress has been made in cloud computing, especially in virtualization, standardization, and automation. This has resulted in numerous cloud services, such as Infrastructure as a Service, Platform as a Service, and Software as a Service. Many cloud technologies have matured and have been commercialized. However, issues such as information security, mobility, energy efficiency, and infrastructure optimization are becoming more serious. The key causes of these issues are the increased scale of data centers, the convergence of IT and CT, greater user concern about information security, and a shift in spending from capex to opex.

    For this special issue of ZTE Communications, researchers from industry and academia were invited to submit articles detailing the latest progress in cloud computing.

    The first paper, “Software-Defined Data Center,” by Ali et al., gives an overview of key technologies and standardization of the software-defined data center (SDDC) as well as its associated challenges. The paper points out that a unified control plane enables rich resource abstractions for purpose-fit orchestration systems and/or programmable infrastructures, which allows dynamic optimization in response to business requirements.

    In their paper “Computation Partitioning in Mobile Cloud Computing: A Survey,” Yang et al. address the issue of computation partitioning in the mobile cloud, that is, partitioning the execution of applications between the mobile side and the cloud side so that execution cost is minimized. The authors survey state-of-the-art approaches in terms of application modeling, profiling, and optimization.

    In the paper “MapReduce in the Cloud: Data-Location-Aware VM Scheduling,” Nguyen et al. address the inefficiency of MapReduce in the cloud and develop a distributed cache system and a virtual machine scheduler. They show that their prototype can significantly improve performance when running different applications.

    The paper “Preventing Data Leakage in a Cloud Environment,” by Cang et al., deals with customer information security and the prevention of unauthorized data access in practical multiparty clouds. The authors survey techniques for preventing data leakage that can be used in three trust models.

    In the next paper, “CPPL: A New Chunk-Based Proportional-Power Layout with Fast Recovery,” by Yin et al., the authors observe that the size and number of data centers and cloud storage systems are dramatically increasing, which in turn dramatically increases energy consumption and disk failures in emerging facilities. The authors propose a new chunk-based proportional-power layout, called CPPL, to address these problems.

    In the last paper, “Virtualizing Network and Service Functions: Impact on ICT Transformation and Standardization,” Khasnabish et al. review trends in the virtualization of network/service functions. They also discuss the standardization of these functions and the management and orchestration they require.

    With this special issue, we intend to inform the reader about state-of-the-art research and technology on current cloud computing topics and to bring to the attention of the cloud computing community problems that must be investigated.

    This special issue would not have been possible without the help of many. We would like to thank all the authors for their contributions and all the reviewers for their efforts and dedication. We also want to thank the editorial office of ZTE Communications for its support.
    Software-Defined Data Center
    Ghazanfar Ali, Jie Hu, and Bhumip Khasnabish
    2013, 11(4): 2-7. doi: 10.3969/j.issn.1673-5188.2013.04.001
    PDF (471KB)
    The software-defined data center (SDDC) is a vision of the future. An SDDC brings together software-defined compute, software-defined networking, software-defined storage, software-defined hypervisors, software-defined availability, and software-defined security, and it unifies the control planes of these individual software-defined components. A unified control plane enables rich resource abstractions for purpose-fit orchestration systems and/or programmable infrastructures, which enables dynamic optimization according to business requirements.
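    As a purely illustrative sketch of the unified-control-plane idea in this abstract (all class and method names below are hypothetical, not from the paper), a single facade can expose one provisioning abstraction over separate software-defined compute, network, and storage controllers:

```python
# Illustrative only: a unified control plane as a facade over per-domain
# software-defined controllers. Names and request formats are hypothetical.

class ComputeController:
    def provision(self, spec): return f"vm:{spec['cpus']}cpu"

class NetworkController:
    def provision(self, spec): return f"net:{spec['bandwidth']}"

class StorageController:
    def provision(self, spec): return f"vol:{spec['size']}"

class UnifiedControlPlane:
    """Single entry point that orchestration systems program against."""
    def __init__(self):
        self.domains = {"compute": ComputeController(),
                        "network": NetworkController(),
                        "storage": StorageController()}

    def provision(self, request):
        # Fan a business-level request out to each software-defined domain.
        return {d: c.provision(request[d])
                for d, c in self.domains.items() if d in request}

ucp = UnifiedControlPlane()
print(ucp.provision({"compute": {"cpus": 4},
                     "network": {"bandwidth": "1Gbps"},
                     "storage": {"size": "100GB"}}))
```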
    Computation Partitioning in Mobile Cloud Computing: A Survey
    Lei Yang and Jiannong Cao
    2013, 11(4): 8-17. doi: 10.3969/j.issn.1673-5188.2013.04.002
    PDF (563KB)
    Mobile devices are increasingly interacting with clouds, and mobile cloud computing has emerged as a new paradigm. A central topic in mobile cloud computing is computation partitioning, which involves partitioning the execution of applications between the mobile side and the cloud side so that execution cost is minimized. This paper discusses computation partitioning in mobile cloud computing. We first present the background and system models of mobile cloud computation partitioning systems. We then describe and compare state-of-the-art mobile computation partitioning approaches in terms of application modeling, profiling, optimization, and implementation. We point out the main research issues and directions and summarize our own work.
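    To make the cost trade-off concrete, the toy Python sketch below offloads a task only when its estimated cloud execution cost plus data-transfer cost is lower than its local cost. This is a minimal illustration under assumed cost values, not a reconstruction of any surveyed system; real partitioners also model dependencies between tasks.

```python
# Toy computation-partitioning rule: offload a task to the cloud only when
# remote execution plus data transfer is cheaper than local execution.
# All task names and costs below are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    local_cost: float     # estimated execution time on the device (s)
    cloud_cost: float     # estimated execution time in the cloud (s)
    transfer_cost: float  # time to move input/output data over the network (s)

def partition(tasks):
    """Assign each task to 'mobile' or 'cloud' to minimize per-task cost."""
    placement = {}
    for t in tasks:
        if t.cloud_cost + t.transfer_cost < t.local_cost:
            placement[t.name] = "cloud"
        else:
            placement[t.name] = "mobile"
    return placement

tasks = [
    Task("decode", local_cost=2.0, cloud_cost=0.2, transfer_cost=0.5),
    Task("render", local_cost=0.3, cloud_cost=0.1, transfer_cost=1.0),
]
print(partition(tasks))  # {'decode': 'cloud', 'render': 'mobile'}
```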
    MapReduce in the Cloud: Data-Location-Aware VM Scheduling
    Tung Nguyen and Weisong Shi
    2013, 11(4): 18-26. doi: 10.3969/j.issn.1673-5188.2013.04.003
    PDF (435KB)
    We have witnessed the fast-growing deployment of Hadoop, an open-source implementation of the MapReduce programming model, for data-intensive computing in the cloud. However, Hadoop was not originally designed to run transient jobs in which users need to move data back and forth between storage and computing facilities. As a result, Hadoop is inefficient and wastes resources when operating in the cloud. This paper discusses the inefficiency of MapReduce in the cloud. We study the causes of this inefficiency and propose a solution. Inefficiency mainly occurs during data movement. Transferring large data sets to computing nodes is very time-consuming and also violates the rationale of Hadoop, which is to move computation to the data. To address this issue, we developed a distributed cache system and a virtual machine scheduler. We show that our prototype can significantly improve performance when running different applications.
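    The following is a hedged Python sketch of the data-location-aware placement idea (the paper's actual scheduler may differ): place a new VM on the host whose local cache already holds the most of the data blocks the VM needs, breaking ties by load. Host names, caches, and loads are hypothetical.

```python
# Data-location-aware VM placement, in the spirit of the abstract above.
# host_cache maps each (hypothetical) host to the data blocks it caches.

def place_vm(needed_blocks, host_cache, host_load):
    """Pick the host caching the most needed blocks; break ties by load."""
    def score(host):
        locality = len(needed_blocks & host_cache[host])
        return (-locality, host_load[host])  # prefer locality, then low load
    return min(host_cache, key=score)

host_cache = {"h1": {"b1", "b2"}, "h2": {"b3"}, "h3": set()}
host_load  = {"h1": 4, "h2": 1, "h3": 0}

# 'h1' and 'h2' each hold one needed block; 'h2' wins on lower load.
print(place_vm({"b1", "b3"}, host_cache, host_load))  # h2
```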
    Preventing Data Leakage in a Cloud Environment
    Fuzhi Cang, Mingxing Zhang, Yongwei Wu, and Weimin Zheng
    2013, 11(4): 27-31. doi: 10.3969/j.issn.1673-5188.2013.04.004
    PDF (346KB)
    Despite the multifaceted advantages of cloud computing, concerns about data leakage and abuse impede its adoption for security-sensitive tasks. Recent investigations have revealed that the risk of unauthorized data access is one of the biggest concerns of users of cloud-based services. Transparency and accountability for data managed in the cloud are necessary. Specifically, when using a cloud-hosted service, a user typically has to trust both the cloud service provider and the cloud infrastructure provider to properly handle private data. This is a multiparty system. Three particular trust models can be used according to the credibility of these providers. This paper describes techniques for preventing data leakage that can be used with these different models.
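    As one concrete leakage-prevention technique suited to trust models where the providers are not fully trusted (a standard approach, not necessarily one of the paper's techniques), data can be encrypted on the client before upload so that the cloud only ever handles ciphertext. The sketch below uses the third-party Python cryptography package.

```python
# Client-side encryption before upload: a standard leakage-prevention
# technique for untrusted-provider trust models (illustrative; not taken
# from the paper). Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept by the data owner, never uploaded
f = Fernet(key)

ciphertext = f.encrypt(b"customer record")  # what the cloud actually stores
# ... upload ciphertext to the cloud-hosted service ...
plaintext = f.decrypt(ciphertext)           # only the key holder can read it
assert plaintext == b"customer record"
```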
    CPPL: A New Chunk-Based Proportional-Power Layout with Fast Recovery
    Jiangling Yin, Junyao Zhang, and Jun Wang
    2013, 11(4): 32-39. doi: 10.3969/j.issn.1673-5188.2013.04.005
    PDF (460KB)
    In recent years, the number and size of data centers and cloud storage systems have increased. These trends are dramatically increasing energy consumption and disk failures in emerging facilities. This paper describes a new chunk-based proportional-power layout, called CPPL, to address these issues. Our basic idea is to improve on current proportional-power layouts by using declustering techniques so that power can be managed at a much finer-grained level. CPPL includes a primary disk group and a large number of secondary disks. The primary disk group contains one copy of the available datasets and is always active in order to respond to incoming requests. Other copies of the data are placed on the secondary disks in a declustered way for power efficiency and finer-grained parallel recovery. Through comprehensive theoretical proofs and experiments, we conclude that CPPL can save more power and achieve a higher recovery speed than current solutions.
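    The Python sketch below illustrates the layout idea described in this abstract under assumed parameters (the real CPPL placement rules are more involved): the primary copy of every chunk lands in a small always-on primary group, while additional replicas are scattered (declustered) across a large pool of secondary disks, so idle secondary disks can be powered down and recovery can proceed in parallel.

```python
# Simplified chunk placement in the spirit of CPPL (details hypothetical):
# primary copies go to a small always-active group; replicas are declustered
# over many secondary disks.

def place_chunks(num_chunks, primary_disks, secondary_disks, replicas=2):
    layout = {}
    for c in range(num_chunks):
        locations = [f"P{c % primary_disks}"]  # primary copy, always active
        for r in range(1, replicas):
            # Shift by replica index so replicas of chunks sharing a primary
            # disk spread over many different secondary disks.
            locations.append(f"S{(c * replicas + r) % secondary_disks}")
        layout[c] = locations
    return layout

for chunk, disks in place_chunks(6, primary_disks=2, secondary_disks=8).items():
    print(chunk, disks)
```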
    Virtualizing Network and Service Functions: Impact on ICT Transformation and Standardization
    Bhumip Khasnabish, Jie Hu, and Ghazanfar Ali
    2013, 11(4): 40-46. doi: 10.3969/j.issn.1673-5188.2013.04.006
    PDF (516KB)
    Virtualization of network/service functions means time-sharing network/service (and affiliated) resources in a hyper-speed manner. The concept of time sharing was popularized in the 1970s with mainframe computing. The same concept has recently resurfaced under the guise of cloud computing and virtualized computing. Although cloud computing was originally used in IT for server virtualization, the ICT industry is taking a new look at virtualization. This paradigm shift is shaking up the computing, storage, networking, and service industries. The hope is that virtualizing and automating configuration and service management/orchestration will save both capex and opex during network transformation. A complementary trend is the separation (over an open interface) of control and transmission, commonly referred to as software-defined networking (SDN). This paper reviews trends in network/service functions, efforts to standardize these functions, and the management and orchestration they require.
    Research Papers
    Cooperative Communication Protocols for Performance Improvement in Mobile Satellite Systems
    Ashagrie Getnet Flattie
    2013, 11(4): 47-52. doi: 10.3969/j.issn.1673-5188.2013.04.007
    PDF (399KB)
    A mobile satellite indoor signal model is proposed for evaluating the performance of cooperative communication protocols and maximal ratio combining. Cooperative diversity can improve the reliability of a satellite system and increase data speed or expand cell radius by lessening the effects of fading. Performance is determined from measured bit error rates (BERs) for different types of cooperative protocols and indoor systems (e.g., GSM and WCDMA networks). The effect on performance of cooperative terminals located at different distances from an indoor cellular system is also discussed. The proposed schemes provide higher signal-to-noise ratio (SNR): gains of around 1.6 dB and 2.6 dB at a BER of 10^-2 for the amplify-and-forward (AF) and decode-and-forward (DF) cooperative protocols, respectively, when the cooperative terminal is located 10 m from the WCDMA indoor system. Cooperative protocols improve effective power utilization and, hence, improve the performance and cell coverage of the mobile satellite network.
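    For reference, two standard results from the cooperative-diversity literature (quoted here for context, not derived in the paper): maximal ratio combining sums the instantaneous SNRs of its branches, and an amplify-and-forward relay path has a well-known equivalent end-to-end SNR:

```latex
% Standard cooperative-diversity results (for context): MRC output SNR over
% N branches, and the equivalent SNR of an AF relay path with source-relay
% SNR \gamma_{sr} and relay-destination SNR \gamma_{rd}.
\gamma_{\mathrm{MRC}} = \sum_{i=1}^{N} \gamma_i ,
\qquad
\gamma_{\mathrm{AF}} = \frac{\gamma_{sr}\,\gamma_{rd}}{\gamma_{sr} + \gamma_{rd} + 1}
```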
    Capacity Scaling Limits and New Advancements in Optical Transmission Systems
    Zhensheng Jia
    2013, 11(4): 53-58. doi: 10.3969/j.issn.1673-5188.2013.04.008
    PDF (521KB)
    Optical transmission technologies have gone through several generations of development, and spectral efficiency has significantly improved. Industry has begun to search for an answer to a basic question: what are the fundamental linear and nonlinear channel limits, in the Shannon sense, of an uncompensated optical fiber transmission system? Next-generation technologies should exceed the 100G transmission capability of coherent systems in order to approach the Shannon limit. Spectral efficiency first needs to be improved before overall transmission capability can be improved. The means of improving spectral efficiency include more complex modulation formats and channel encoding/decoding algorithms, prefiltering with multisymbol detection, optical OFDM and Nyquist WDM multicarrier technologies, and nonlinearity compensation. With further optimization, these technologies will most likely be incorporated into beyond-100G optical transport systems to meet bandwidth demand.
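    For context, the textbook Shannon relation that underlies the question posed in this abstract is:

```latex
% Textbook Shannon limit for a linear additive-noise channel: capacity C over
% bandwidth B, and the corresponding spectral-efficiency bound (bit/s/Hz).
C = B \log_2\left(1 + \mathrm{SNR}\right),
\qquad
\mathrm{SE} = \frac{C}{B} = \log_2\left(1 + \mathrm{SNR}\right)
```

    In optical fiber, Kerr nonlinearity causes the achievable rate to peak at a finite launch power rather than growing indefinitely with SNR, which is why the abstract distinguishes linear from nonlinear limits.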