Conventional optical time-domain reflectometry (OTDR) schemes for passive optical network (PON) link monitoring are limited by insufficient dynamic range and spatial resolution. The expansion of PONs, with growing numbers of optical network units (ONUs) and cascaded splitters, imposes even more stringent demands on the dynamic range of monitoring systems. To address these challenges, we propose a time-gated digital optical frequency-domain reflectometry (TGD-OFDR) system for PON monitoring that breaks the inherent trade-off between spatial resolution and pulse width. The proposed system simultaneously achieves high spatial resolution (~0.3 m) and high dynamic range (~30 dB), marking a significant advance in optical link monitoring.
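As a rough numeric illustration of this decoupling (not the authors' implementation), the standard OFDR relation δz = c/(2nΔF) ties two-point resolution to the sweep bandwidth alone, independent of pulse width; the fiber group index and sweep bandwidth below are assumed values.

```python
# Illustrative sketch: why OFDR resolution depends on sweep bandwidth,
# not pulse width. Values are assumptions, not the paper's parameters.
C = 3e8          # speed of light in vacuum, m/s
N_GROUP = 1.468  # typical group index of standard single-mode fiber (assumed)

def ofdr_spatial_resolution(sweep_bandwidth_hz: float) -> float:
    """Two-point spatial resolution of an OFDR measurement: delta_z = c / (2 * n * delta_F)."""
    return C / (2 * N_GROUP * sweep_bandwidth_hz)

# A ~340 MHz effective sweep already yields sub-meter resolution:
print(f"{ofdr_spatial_resolution(340e6):.3f} m")  # ~0.300 m
```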
Wireless local area networks (WLANs) have grown rapidly over the past 20 years, with maximum throughput as the key technical objective. However, quality of experience (QoE) remains the primary concern for wireless network users. We point out that poor QoE is the most challenging issue in current WLANs and analyze the key technical problems behind it, including fully distributed networking architectures, chaotic random access, the awkward “high capability” issue, coarse-grained quality of service (QoS) architectures, ubiquitous and complicated interference, the “no place for AI” issue, and the heavy burden of standard evolution. To the best of our knowledge, this is the first work to identify poor QoE as the most challenging problem in current WLANs, and the first to systematically analyze the technical problems that cause it. We strongly suggest that achieving high experience (HEX) be the key objective of next-generation WLANs.
Millimeter-wave (mmWave) technology has been extensively studied for indoor short-range communications. In such fixed-network applications, the emerging Fiber-to-the-Room (FTTR) architecture allows mmWave technology to be seamlessly cascaded with in-room optical network terminals, supporting high-speed communication at rates above tens of Gbit/s. In this FTTR-mmWave system, the severe signal attenuation over distance and high penetration loss through room walls are no longer bottlenecks for practical mmWave deployment. Instead, these properties create high spatial isolation, which prevents mutual interference between data streams and ensures information security. This paper surveys the promising integration of FTTR and mmWave access for next-generation indoor high-speed communications, with a particular focus on the Ultra-Converged Access Network (U-CAN) architecture. It is structured in two main parts: it first traces this new FTTR-mmWave architecture from the perspective of Wi-Fi and mmWave communication evolution, and then focuses on the development of key mmWave chipsets for FTTR-mmWave Wi-Fi applications. This work aims to provide a comprehensive reference for researchers working toward immersive, untethered indoor wireless experiences.
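A back-of-the-envelope sketch of the spatial-isolation argument, using the textbook free-space path-loss formula; the link distances and the per-wall penetration loss are assumed illustrative values, not measurements from the survey.

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Illustrative 60 GHz link budget: in-room vs. through one interior wall.
# The ~12 dB/wall penetration loss is an assumed ballpark figure.
WALL_LOSS_DB = 12.0
in_room = fspl_db(5.0, 60e9)                      # ~82 dB over 5 m
next_room = fspl_db(8.0, 60e9) + WALL_LOSS_DB     # ~98 dB: strong isolation
print(f"in-room: {in_room:.1f} dB, next room: {next_room:.1f} dB")
```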
The increasing demand for high throughput and low latency in Wi-Fi 7 necessitates a robust receiver design. Traditional receiver architectures, which rely on a cascade of complex, independent signal processing modules, often face performance bottlenecks. Rather than focusing on semantic-level tasks or simplified Additive White Gaussian Noise (AWGN) channels, this paper investigates a bit-level end-to-end receiver for a practical Wi-Fi 7 Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) physical layer. A lightweight Transformer-based encoder-only architecture is proposed to directly map synchronized OFDM signals to decoded bitstreams, replacing conventional channel estimation, equalization, and data detection. By leveraging the multi-head self-attention mechanism of the Transformer encoder, our model effectively captures long-range spatial–temporal dependencies across antennas and subcarriers, learning to compensate for channel distortions and to extract the crucial channel and signal features directly, without explicit channel state information. Experimental results validate the efficacy of the proposed design, demonstrating the significant potential of deep learning for future wireless receiver architectures.
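A minimal PyTorch sketch of the kind of encoder-only mapping described above, with one token per subcarrier carrying the stacked real/imaginary receive-antenna samples; all dimensions, layer counts, and the bit mapping are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class BitLevelReceiver(nn.Module):
    """Sketch of an encoder-only Transformer receiver: synchronized
    per-subcarrier MIMO-OFDM samples in, soft bit estimates out."""
    def __init__(self, n_rx=4, n_sc=256, bits_per_sc=8, d_model=128):
        super().__init__()
        # Real/imag parts of each receive antenna stacked per subcarrier token.
        self.embed = nn.Linear(2 * n_rx, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_sc, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, bits_per_sc)  # per-subcarrier bit logits

    def forward(self, y):            # y: (batch, n_sc, 2 * n_rx)
        h = self.embed(y) + self.pos
        h = self.encoder(h)          # self-attention across subcarriers
        return self.head(h)          # (batch, n_sc, bits_per_sc) logits

rx = BitLevelReceiver()
logits = rx(torch.randn(2, 256, 8))
print(logits.shape)  # torch.Size([2, 256, 8])
```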
Fiber-to-the-Room (FTTR) has emerged as the core architecture for next-generation home and enterprise networks, offering gigabit-level bandwidth and seamless wireless coverage. However, the complex multi-device topology of FTTR networks presents significant challenges in identifying sources of network performance degradation and conducting accurate root cause analysis. Conventional approaches often fail to deliver efficient and precise operational improvements. To address this issue, this paper proposes a Transformer-based multi-task learning model designed for automated root cause analysis in FTTR environments. The model integrates multidimensional time-series data collected from access points (APs), enabling the simultaneous detection of APs experiencing performance degradation and the classification of underlying root causes, such as weak signal coverage, network congestion, and signal interference. To facilitate model training and evaluation, a multi-label dataset is generated using a discrete-event simulation platform implemented in MATLAB. Experimental results demonstrate that the proposed Transformer-based multi-task learning model achieves a root cause classification accuracy of 96.75%, significantly outperforming baseline models including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Random Forest, and eXtreme Gradient Boosting (XGBoost). This approach enables the rapid identification of performance degradation causes in FTTR networks, offering actionable insights for network optimization, reduced operational costs, and enhanced user experience.
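A minimal sketch of the multi-task structure described above: a shared Transformer encoder over per-AP time-series features, with one head flagging degraded APs and one multi-label head over the three named causes; all dimensions are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class FTTRRootCauseModel(nn.Module):
    """Sketch of a multi-task root cause model: shared encoder over per-AP
    features, separate degradation-detection and cause-classification heads."""
    def __init__(self, n_features=16, d_model=64, n_causes=3):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.degraded_head = nn.Linear(d_model, 1)      # per-AP degradation flag
        self.cause_head = nn.Linear(d_model, n_causes)  # weak signal / congestion / interference

    def forward(self, x):                 # x: (batch, n_aps, n_features)
        h = self.encoder(self.proj(x))    # attention across APs in one FTTR network
        return self.degraded_head(h), self.cause_head(h)

model = FTTRRootCauseModel()
deg, cause = model(torch.randn(4, 8, 16))
print(deg.shape, cause.shape)  # (4, 8, 1) (4, 8, 3)
```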
As Fiber-to-the-Room (FTTR) networks proliferate, multi-device deployments pose significant energy consumption challenges. This paper proposes a Quality of Service (QoS)-aware multi-threshold buffer energy saving (MBES) scheme that reduces energy consumption while ensuring QoS. MBES leverages the centralized control of the main fiber unit (MFU) and the wireless-state awareness of subordinate fiber units (SFUs) for synergistic fiber-wireless energy savings. The scheme assigns independent, dynamic buffer thresholds to service queues on SFUs, enabling low-latency reporting for high-priority traffic while accumulating low-priority data to extend sleep cycles. At the MFU, a coordinated scheduling algorithm accounts for Wi-Fi access delay and forms an adaptive closed loop by adjusting SFUs’ buffer thresholds based on end-to-end delay feedback. Simulation results show that, while satisfying strict latency requirements, MBES achieves a maximum energy saving of 17.75% compared with the no energy saving (NES) scheme and provides a superior trade-off between latency control and energy efficiency compared with the single-threshold buffer energy saving (SBES) scheme.
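The per-queue threshold idea can be sketched as follows; the report/sleep mechanics and the fixed adjustment step are assumptions for illustration, not the exact MBES algorithm.

```python
# Minimal sketch of the per-queue threshold idea, under assumed mechanics.
from dataclasses import dataclass

@dataclass
class ServiceQueue:
    priority: str
    threshold_bytes: int   # SFU reports to the MFU once buffered bytes exceed this
    buffered: int = 0

    def enqueue(self, nbytes: int) -> bool:
        """Returns True once the queue crosses its threshold and the SFU should wake and report."""
        self.buffered += nbytes
        return self.buffered >= self.threshold_bytes

def adjust_threshold(q: ServiceQueue, measured_delay_ms: float,
                     target_delay_ms: float, step: int = 512) -> None:
    """Closed-loop tuning at the MFU: shrink the threshold when end-to-end
    delay overshoots its bound, grow it to extend sleep when there is slack."""
    if measured_delay_ms > target_delay_ms:
        q.threshold_bytes = max(step, q.threshold_bytes - step)
    else:
        q.threshold_bytes += step

voice = ServiceQueue("high", threshold_bytes=1024)   # low threshold: fast reporting
bulk = ServiceQueue("low", threshold_bytes=16384)    # high threshold: longer sleep
print(voice.enqueue(1500), bulk.enqueue(1500))       # True False
adjust_threshold(bulk, measured_delay_ms=12.0, target_delay_ms=10.0)
print(bulk.threshold_bytes)                          # 15872
```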
Fiber-to-the-Room (FTTR) networks with multi-access point (AP) coordination face significant challenges in implementing Joint Transmission (JT), particularly the high overhead of Channel State Information (CSI) acquisition. While the centralized wireless access network (C-WAN) architecture inherently provides high-precision synchronization through fiber-based clock distribution and centralized scheduling, efficient JT still requires accurate CSI with low signaling cost. In this paper, we propose a deep learning-based hybrid model that synergistically integrates temporal prediction and spatial reconstruction to exploit spatiotemporal correlations in indoor channels. By leveraging the centralized data and computational capability of the C-WAN architecture, the model reduces sounding frequency and the number of antennas required per sounding instance. Experimental results on a real-world synchronized channel dataset show that the proposed method lowers over-the-air resource consumption while maintaining JT performance close to that achieved with ideal CSI, offering a practical low-overhead solution for high-performance FTTR systems.
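A minimal sketch of such a hybrid temporal-plus-spatial model, assuming a GRU for extrapolating past CSI snapshots, an MLP for completing the full array from a sounded antenna subset, and simple averaging as the fusion rule; none of these layer choices are confirmed by the paper.

```python
import torch
import torch.nn as nn

class HybridCSIModel(nn.Module):
    """Sketch: a GRU predicts the next CSI snapshot from history, an MLP
    reconstructs the full antenna set from a sounded subset."""
    def __init__(self, n_ant=8, n_sounded=4, hidden=128):
        super().__init__()
        d_full, d_sub = 2 * n_ant, 2 * n_sounded      # real/imag stacked
        self.temporal = nn.GRU(d_full, hidden, batch_first=True)
        self.predict = nn.Linear(hidden, d_full)       # next-instant CSI
        self.spatial = nn.Sequential(                  # subset -> full array
            nn.Linear(d_sub, hidden), nn.ReLU(), nn.Linear(hidden, d_full))

    def forward(self, csi_history, csi_subset):
        # csi_history: (batch, T, 2*n_ant); csi_subset: (batch, 2*n_sounded)
        _, h = self.temporal(csi_history)
        predicted = self.predict(h[-1])                # temporal extrapolation
        reconstructed = self.spatial(csi_subset)       # spatial completion
        return 0.5 * (predicted + reconstructed)       # simple fusion (assumed)

model = HybridCSIModel()
out = model(torch.randn(2, 10, 16), torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 16])
```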
Open-set object detectors, as exemplified by Grounding DINO, have attracted significant attention for their remarkable performance on in-domain datasets such as Common Objects in Context (COCO) after only few-shot fine-tuning. However, their generalization in cross-domain scenarios remains substantially inferior to their in-domain few-shot performance. Prior work on fine-tuning Grounding DINO for cross-domain few-shot object detection has focused primarily on data augmentation, leaving broader systemic optimizations unexplored. To bridge this gap, we propose a comprehensive end-to-end fine-tuning framework specifically designed to optimize Grounding DINO for cross-domain few-shot scenarios. In addition, we propose MoE-Grounding DINO, a novel variant that integrates a Mixture-of-Experts (MoE) architecture to enhance adaptability in cross-domain settings. Our approach achieves a significant 15.4 Mean Average Precision (mAP) improvement over the Grounding DINO baseline on the Roboflow20-VL benchmark, establishing a new state of the art for cross-domain few-shot object detection (CD-FSOD). The source code and models will be made available upon publication.
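For reference, a token-level mixture-of-experts feed-forward block of the general kind such an integration would use; the expert count and the soft (dense) routing shown here are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    """Sketch of a token-level MoE feed-forward block: a gate weights the
    outputs of several expert MLPs per token (soft routing, assumed)."""
    def __init__(self, d_model=256, d_ff=1024, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                       # x: (batch, tokens, d_model)
        weights = self.gate(x).softmax(dim=-1)  # per-token routing weights
        out = torch.stack([e(x) for e in self.experts], dim=-1)
        return (out * weights.unsqueeze(-2)).sum(dim=-1)

layer = MoEFeedForward()
print(layer(torch.randn(2, 100, 256)).shape)  # torch.Size([2, 100, 256])
```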
Shortening is a standard rate-matching method for polar codes in wireless communications. Since polarization-adjusted convolutional (PAC) codes likewise have block lengths limited to integer powers of two, they also require rate matching. To this end, we first analyze the limitations of existing shortening patterns for PAC codes and explore their feasibility. We then propose a novel shortening scheme for PAC codes based on list decoding, in which the receiver treats the values of the deleted bits as undetermined. The approach constructs a specialized PAC codeword and activates multiple decoding paths during the initialization of list decoding, enabling it to achieve the desired reliability.
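The initialization idea can be sketched as follows: rather than pinning shortened bits to a known value (infinite LLR), their channel LLRs are zeroed and one initial decoding path is spawned per hypothesis on those bits; the data structures below are illustrative, not the authors' decoder.

```python
from itertools import product

def init_paths_for_shortening(shortened_positions, base_llrs):
    """Sketch: mark shortened (deleted) code bits as undetermined and spawn
    one initial list-decoding path per hypothesis on their values."""
    llrs = list(base_llrs)
    for pos in shortened_positions:
        llrs[pos] = 0.0                       # undetermined: no channel knowledge
    paths = []
    for hypothesis in product((0, 1), repeat=len(shortened_positions)):
        paths.append({"llrs": llrs,
                      "shortened_bits": dict(zip(shortened_positions, hypothesis)),
                      "metric": 0.0})
    return paths                              # list decoding then prunes to size L

paths = init_paths_for_shortening([6, 7],
                                  [1.2, -0.4, 0.9, -2.1, 0.3, 1.7, 0.0, 0.0])
print(len(paths))  # 4 initial paths for 2 undetermined bits
```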
The complexities of hardware and signal processing make it especially challenging to develop self-interference cancellation (SIC) techniques for full-duplex (FD) massive multiple-input multiple-output (MIMO) systems. This paper examines an FD massive MIMO system featuring multi-stream transmission. Specifically, it adopts an architecture in which a single transmit or receive radio frequency (RF) channel is connected to three antennas in the same polarization direction, effectively halving the number of transmit and receive RF channels. The SoftNull algorithm serves as the primary method for SI suppression, leveraging digital precoding during transmission. Additionally, this study outlines a design strategy to enhance SIC in the proposed system. Simulation results highlight the efficacy of the SoftNull algorithm, which achieves a remarkable total SIC of up to 64 dB. Furthermore, when combined with measures such as antenna isolation and increased transceiver array spacing, the proposed system achieves a sum rate twice that of a half-duplex system.
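SoftNull's core step, transmit precoding onto the directions that couple least into the receive array, can be sketched via an SVD of the self-interference channel; the array sizes and the random channel below are illustrative, and the printed figure is not the paper's 64 dB result.

```python
import numpy as np

def softnull_precoder(h_si: np.ndarray, d_tx: int) -> np.ndarray:
    """Sketch of the SoftNull idea: keep only the d_tx transmit dimensions
    that couple least into the receive array. h_si is the (n_rx x n_tx)
    self-interference channel; the returned (n_tx x d_tx) precoder spans the
    right singular vectors with the smallest singular values."""
    _, _, vh = np.linalg.svd(h_si)
    return vh.conj().T[:, -d_tx:]   # weakest SI directions

rng = np.random.default_rng(0)
n_rx, n_tx, d_tx = 8, 8, 4
h_si = rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))
p = softnull_precoder(h_si, d_tx)

# Mean self-interference power per transmit dimension, before vs. after.
before = np.linalg.norm(h_si, "fro") ** 2 / n_tx
after = np.linalg.norm(h_si @ p, "fro") ** 2 / d_tx
print(f"per-dimension SI reduction: {10 * np.log10(before / after):.1f} dB")
```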
In recent years, microservice architecture has gained increasing popularity. However, due to the complex and dynamically changing nature of microservice systems, failure detection has become more challenging. Traditional root cause analysis methods mostly rely on a single data modality, which is insufficient to cover all failure information, while existing multimodal methods require high-quality labeled samples and often struggle to classify unknown failure categories. To address these challenges, this paper proposes a root cause analysis framework based on a masked graph autoencoder (GAE). The pipeline consists of feature extraction, GAE-based feature dimensionality reduction, and online clustering combined with expert input. The method is evaluated on two public datasets against two baseline methods, demonstrating significant advantages even when only 16% of the samples are labeled.
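A minimal sketch of a masked GAE of the kind described, assuming a single GCN-style layer over a service-dependency graph and mean-squared reconstruction of the masked nodes; the low-dimensional codes z would feed the online clustering step. All sizes and the adjacency are placeholders.

```python
import torch
import torch.nn as nn

class MaskedGAE(nn.Module):
    """Sketch of a masked graph autoencoder over service-metric features:
    mask some nodes, encode via a normalized-adjacency GCN layer, and
    reconstruct the masked inputs."""
    def __init__(self, n_feat=32, n_hidden=8, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.enc = nn.Linear(n_feat, n_hidden)
        self.dec = nn.Linear(n_hidden, n_feat)

    def forward(self, x, adj_norm):           # x: (n_nodes, n_feat)
        n_mask = max(1, int(self.mask_ratio * x.size(0)))
        mask = torch.zeros(x.size(0), dtype=torch.bool)
        mask[torch.randperm(x.size(0))[:n_mask]] = True
        x_in = x.clone()
        x_in[mask] = 0.0                      # mask out selected service nodes
        z = torch.relu(adj_norm @ self.enc(x_in))     # one GCN-style layer
        recon = self.dec(adj_norm @ z)
        loss = ((recon[mask] - x[mask]) ** 2).mean()  # reconstruct masked nodes only
        return z, loss

n = 10
adj = torch.eye(n)                            # placeholder call-graph adjacency
model = MaskedGAE()
z, loss = model(torch.randn(n, 32), adj)
print(z.shape, loss.item())                   # torch.Size([10, 8]) <mse>
```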