Fiber-to-the-Room (FTTR) has emerged as the core architecture for next-generation home and enterprise networks, offering gigabit-level bandwidth and seamless wireless coverage. However, the complex multi-device topology of FTTR networks presents significant challenges in identifying sources of network performance degradation and conducting accurate root cause analysis. Conventional approaches often fail to deliver efficient and precise operational improvements. To address this issue, this paper proposes a Transformer-based multi-task learning model designed for automated root cause analysis in FTTR environments. The model integrates multidimensional time-series data collected from access points (APs), enabling the simultaneous detection of APs experiencing performance degradation and the classification of underlying root causes, such as weak signal coverage, network congestion, and signal interference. To facilitate model training and evaluation, a multi-label dataset is generated using a discrete-event simulation platform implemented in MATLAB. Experimental results demonstrate that the proposed Transformer-based multi-task learning model achieves a root cause classification accuracy of 96.75%, significantly outperforming baseline models including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Random Forest, and eXtreme Gradient Boosting (XGBoost). This approach enables the rapid identification of performance degradation causes in FTTR networks, offering actionable insights for network optimization, reduced operational costs, and enhanced user experience.
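The multi-task setup described above — one shared encoder feeding a degradation-detection head and a multi-label root-cause head — can be illustrated with a minimal numpy sketch. All dimensions, parameter names, and the mean-pooled encoder stand-in are illustrative assumptions, not the paper's actual architecture; training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 12 time steps of 8 KPI features per AP,
# 3 candidate root causes (weak coverage, congestion, interference).
SEQ_LEN, N_FEATURES, D_MODEL, N_CAUSES = 12, 8, 16, 3

def shared_encoder(x, W_in):
    """Stand-in for the Transformer encoder: projects the KPI time
    series and mean-pools over time into one embedding per AP."""
    h = np.tanh(x @ W_in)      # (SEQ_LEN, D_MODEL)
    return h.mean(axis=0)      # (D_MODEL,)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialized parameters (training is out of scope here).
W_in = rng.normal(size=(N_FEATURES, D_MODEL))
w_detect = rng.normal(size=D_MODEL)             # degradation-detection head
W_cause = rng.normal(size=(D_MODEL, N_CAUSES))  # multi-label root-cause head

kpi_series = rng.normal(size=(SEQ_LEN, N_FEATURES))  # one AP's KPI series
z = shared_encoder(kpi_series, W_in)

degraded_prob = sigmoid(w_detect @ z)  # task 1: is this AP degraded?
cause_probs = sigmoid(z @ W_cause)     # task 2: per-cause probabilities
print(float(degraded_prob), cause_probs)
```

Sigmoid (rather than softmax) outputs on the root-cause head allow multiple causes to be flagged simultaneously, matching the multi-label nature of the dataset.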
Recent years have witnessed rapid progress in federated learning, which coordinates model training across multiple participants while protecting their data privacy. However, low communication efficiency becomes a bottleneck when federated learning is deployed on edge computing and IoT devices, because a huge number of parameters must be transmitted during co-training. In this paper, we verify that the outputs of the last hidden layer can capture the characteristics of the training data. Accordingly, we propose a communication-efficient strategy based on model splitting and representation aggregation. Specifically, each client uploads the outputs of its last hidden layer instead of all model parameters when participating in aggregation, and the server distributes gradients computed from this global information so that clients can revise their local models. Empirical evidence from experiments verifies that our method can complete training by uploading less than one-tenth of the model parameters, while preserving the usability of the model.
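The communication saving behind this strategy can be sketched as follows. The model size, batch size, hidden dimension, and the use of a simple mean as the server's aggregation step are all illustrative assumptions; the paper's actual global-information and gradient-distribution rules are not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: a client model with ~100k parameters whose last
# hidden layer emits a 64-dim representation for each local example.
N_PARAMS, HIDDEN_DIM, BATCH = 100_000, 64, 32

full_model_upload = N_PARAMS  # floats sent per round if all parameters go up

# Under the split strategy, a client uploads only its last-hidden-layer
# outputs for the local batch instead of the full parameter vector.
client_reprs = rng.normal(size=(BATCH, HIDDEN_DIM))
repr_upload = client_reprs.size

# Server side: aggregate the representations (a mean here, as a stand-in
# for the paper's global-information step) and form a correction signal
# of matching shape for the client to revise its local model.
global_repr = client_reprs.mean(axis=0)
correction = client_reprs - global_repr  # broadcast per-example revision

traffic_fraction = repr_upload / full_model_upload
print(traffic_fraction)  # well under the one-tenth bound reported above
```

Even with these toy sizes, the per-round upload is about 2% of a full parameter transfer, which is consistent in spirit with the sub-one-tenth figure the abstract reports.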
Self-organizing network (SON) and minimization of drive tests (MDT) are functions designed for the Long Term Evolution (LTE) system. SON automates network deployment through automatic configuration, while MDT evaluates network performance through automatic signalling procedures. However, these functions do not support new features of the new radio (NR) access technology, e.g., multi-radio access technology (RAT) dual connectivity (MR-DC), the central unit-distributed unit (CU-DU) split architecture, beams, etc. How to support these features is therefore a challenge for the industry. This paper analyzes these problems and summarizes the progress of SON/MDT functions in 3GPP. The analysis covers sub-functions such as inter/intra-system mobility robustness enhancement, inter/intra-system mobility load balancing, MDT measurement quantities and mechanisms, energy saving mechanisms and procedures, RACH procedure optimization, PCI selection optimization, coverage and capacity optimization, and quality of service (QoS) monitoring. In addition, this paper provides an initial thought on applying artificial intelligence (AI) algorithms to SON/MDT functions in NR, the so-called Smart Grid.
The 5G radio access network (RAN) architecture is expected to be split into a central unit (CU) and a distributed unit (DU) in order to support more flexible transport networks and provide an enhanced user experience. However, this functional split may also introduce new technical issues. In this paper, we study the fast data retransmission issue introduced by the functional split in different scenarios, and we provide solutions to handle it. With the fast data retransmission mechanism proposed in this paper, retransmitted data packets can be identified and handled with high priority. In this way, data delivery between the CU and DU in the 5G RAN is assured.
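The core idea — identifying retransmitted packets and serving them ahead of fresh traffic — can be sketched with a stdlib priority queue. The `Packet` model and its retransmission flag are hypothetical illustrations, not the signalling defined by the paper or by 3GPP.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical packet model: a retransmitted PDU carries a flag so the
# CU/DU transport layer can identify it and schedule it with priority.
@dataclass(order=True)
class Packet:
    sort_key: tuple = field(init=False, repr=False)
    seq: int
    is_retx: bool = False

    def __post_init__(self):
        # Retransmissions (is_retx=True) sort ahead of fresh packets;
        # within each class, the lower sequence number goes first.
        self.sort_key = (0 if self.is_retx else 1, self.seq)

queue = []
arrivals = [Packet(10), Packet(11), Packet(3, is_retx=True),
            Packet(12), Packet(5, is_retx=True)]
for pkt in arrivals:
    heapq.heappush(queue, pkt)

send_order = [heapq.heappop(queue).seq for _ in range(len(queue))]
print(send_order)  # retransmitted PDUs 3 and 5 are delivered first
```

Draining the queue sends the retransmitted sequence numbers 3 and 5 before the fresh packets 10, 11, 12, which is the high-priority handling the abstract describes.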
In the new radio (NR) access technology, the radio access network (RAN) architecture is split into two kinds of entities, i.e., the centralized unit (CU) and the distributed unit (DU), to enhance network flexibility. In this split architecture, one CU can control several DUs, enabling centralized baseband control and remote service for users. This paper introduces the general aspects of the CU-DU split architecture, including the split method, interface functions (control-plane and user-plane functions), mobility scenarios, and other CU-DU related issues. Simulations compare the performance of split Options 2 and 3 for the CU-DU architecture.