This paper reviews task scheduling frameworks, methods, and evaluation metrics for central processing unit-graphics processing unit (CPU-GPU) heterogeneous clusters. Task scheduling in CPU-GPU heterogeneous clusters can be carried out at the system level, node level, and device level. Most task-scheduling technologies are heuristic, built on experts' experience, while some are statistical, using machine learning, deep learning, or reinforcement learning. Many metrics have been adopted to evaluate and compare task scheduling technologies, which optimize different scheduling goals. Although statistical task scheduling has produced fewer research achievements than heuristic task scheduling so far, it still holds significant research potential.
Open-set recognition (OSR) is a realistic problem in wireless signal recognition: unknown classes that were not seen during training may appear during inference. The method of intra-class splitting (ICS), which splits samples of known classes to imitate unknown classes, has achieved strong performance. However, this approach relies heavily on a predefined splitting ratio and may suffer severe performance degradation in new environments. In this paper, we train a multi-task learning (MTL) network based on the characteristics of wireless signals to improve performance in new scenarios. In addition, we provide a dynamic method that decides the splitting ratio per class to obtain more precise outer samples. Specifically, we perturb a sample from the center of a class toward its adversarial direction, and the change point of the confidence scores during this process is used as the splitting threshold. We conduct experiments on a wireless signal dataset collected in the 2.4 GHz ISM band with LimeSDR and on an open modulation recognition dataset, and the results demonstrate the effectiveness of the proposed method.
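The per-class splitting idea above can be sketched in a minimal form: walk from a class center along an adversarial direction, track a confidence score, and take the step where confidence drops most sharply as the change point. The confidence function below is a toy sigmoidal stand-in for a real classifier's score, and all parameter values (`radius`, `tau`, step sizes) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def confidence(x, center, radius=2.0, tau=0.3):
    # Toy stand-in for a classifier's confidence (an assumption):
    # high near the class center, dropping sigmoidally past `radius`.
    d = np.linalg.norm(x - center)
    return 1.0 / (1.0 + np.exp((d - radius) / tau))

def splitting_change_point(center, adv_direction, steps=50, step_size=0.1):
    """Walk from the class center along the (normalized) adversarial
    direction, record the confidence at each step, and return the step
    with the sharpest confidence drop as the per-class change point."""
    u = adv_direction / np.linalg.norm(adv_direction)
    scores = np.array([confidence(center + i * step_size * u, center)
                       for i in range(steps)])
    drops = -np.diff(scores)  # positive where confidence falls
    return int(np.argmax(drops)), scores

# Example: 2-D class centered at the origin, perturbed along (1, 1).
idx, scores = splitting_change_point(np.zeros(2), np.array([1.0, 1.0]))
```

With a real model, `confidence` would be replaced by the network's softmax score for the class, and `idx` would set how many of the class's outermost samples are split off as imitated unknowns.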
A new optimization method is proposed to realize the synthesis of duplexers. The traditional optimization method takes all variables of the duplexer into account, so when the order of the duplexer is high there are too many variables to optimize and the result easily falls into a local solution. To solve this problem, a new optimization strategy is proposed in this paper: the two channel filters are optimized separately, which reduces the number of optimization variables and greatly lowers the probability of falling into local solutions. The optimization method combines the self-adaptive differential evolution (SADE) algorithm with the Levenberg-Marquardt (LM) algorithm to reach a global solution more easily and to accelerate optimization. To verify its practical value, we design a 5G duplexer based on the proposed method. The duplexer requires a large external coupling, and how to achieve a feed structure with a large coupling bandwidth at the source is also discussed. The experimental results show that the proposed optimization method can realize the synthesis of higher-order duplexers compared with traditional methods.
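The global-then-local hybrid described above can be illustrated with a small sketch: a differential evolution stage supplies a good starting point, and a Levenberg-Marquardt stage refines it. The objective below is a toy multimodal curve-fitting problem standing in for the duplexer's filter-response fitting (an assumption), and plain SciPy differential evolution is used in place of the paper's self-adaptive variant (SADE).

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Toy surrogate objective (an assumption): recover (a, b) of
# y = a*sin(b*x), which is multimodal in b, so a purely local (LM)
# search started from a poor point can stall in a local solution.
x = np.linspace(0, 2 * np.pi, 100)
true_params = np.array([1.5, 3.0])
y = true_params[0] * np.sin(true_params[1] * x)

def residuals(p):
    return p[0] * np.sin(p[1] * x) - y

# Stage 1: differential evolution for a global starting point
# (standing in for the self-adaptive DE used in the paper).
de = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                            bounds=[(0.1, 5.0), (0.1, 10.0)], seed=0)

# Stage 2: Levenberg-Marquardt refinement from the DE solution.
lm = least_squares(residuals, de.x, method="lm")
```

The design rationale mirrors the abstract: the evolutionary stage avoids local solutions over a wide search space, while LM converges quickly once the basin of the global solution has been found.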
Massive multiple-input multiple-output (MIMO) has emerged as one of the most promising technologies for 5G mobile communication systems. Channel research and measurements show that, compared with conventional MIMO channels, massive MIMO channels exhibit significant non-stationary properties. An accurate channel model is therefore indispensable for massive MIMO system design and performance evaluation. This article presents an overview of methods for modeling non-stationary properties on both the array and time axes, which fall mainly into two categories: the birth-death (BD) process and the cluster visibility region (VR) method. The main concepts and theories are described, together with useful implementation guidelines, and the two methods are compared in conclusion.
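A minimal sketch of the first category, the birth-death (BD) process, can clarify the idea: as one moves along the antenna-array axis, each scattering cluster survives a step with some probability and new clusters are born, so the active cluster set evolves non-stationarily. All parameter values below (step size, decorrelation distance, initial cluster count) are illustrative assumptions, not values from any specific channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve_clusters(n_antennas, dx=0.5, D_c=10.0, n0=20):
    """Birth-death sketch along the array axis: per antenna step each
    cluster survives with p = exp(-dx / D_c); Poisson births are sized
    so the mean cluster count stays near n0 (a modeling assumption)."""
    p_survive = np.exp(-dx / D_c)
    mean_births = n0 * (1 - p_survive)  # keeps E[N] approximately at n0
    counts = [n0]
    n = n0
    for _ in range(n_antennas - 1):
        survivors = rng.binomial(n, p_survive)   # clusters that persist
        births = rng.poisson(mean_births)        # newly born clusters
        n = survivors + births
        counts.append(n)
    return np.array(counts)

counts = evolve_clusters(128)  # cluster count seen at each antenna
```

The VR method, by contrast, would assign each cluster a fixed visibility region and mark it active only for antennas falling inside that region, rather than evolving the set step by step.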