ZTE Communications ›› 2026, Vol. 24 ›› Issue (1): 65-71.DOI: 10.12142/ZTECOM.202601009

• Special Topic •

Enhancing Code Quality with LLM in Software Static Analysis

Niu Zhi1,2, Dong Luming1,2

  1. State Key Laboratory of Mobile Network and Mobile Multimedia Technology, Shenzhen 518055, China
    2. ZTE Corporation, Shenzhen 518057, China
  • Received: 2024-04-18 Online: 2026-03-25 Published: 2026-03-17
  • About author:Niu Zhi (niu.zhi@zte.com.cn) received his master's degree in control engineering from Chongqing University, China. He is currently working at ZTE Corporation. His research interests include distributed systems, formal verification, and software reliability.
    Dong Luming received his master's degree in control theory and control engineering from Huazhong University of Science and Technology, China. He is currently working at ZTE Corporation. His research interests include distributed systems, formal verification, software reliability, and innovative security technologies for wireless communications.

Abstract:

In the modern era of ubiquitous and highly interconnected information technology, cybersecurity threats stemming from software code vulnerabilities have become increasingly severe, posing significant risks to the confidentiality, integrity, and availability of information systems. To enhance software code quality, enterprises often integrate static code analysis tools into Continuous Integration (CI) pipelines; however, high rates of false positives and false negatives remain a challenge. The advent of large language models (LLMs), such as ChatGPT, presents a new opportunity to address these challenges. In this paper, we propose AI-SCDF, a framework that uses the custom-built Nebula-Coder AI model to detect and fix code security issues in real time during the developer's personal build process. We construct a static code checking rule knowledge base by summarizing and classifying Common Weakness Enumeration (CWE) code security problems identified by security and quality assurance teams. This rule knowledge base, combined with CodeFuse-processed code contexts, serves as input to an AI code security detection microservice that identifies code quality and security issues. Any abnormalities detected are addressed by an AI code security patching microservice, which alerts the developer and requests confirmation before committing the code to the repository. Experimental results show that our approach effectively improves code quality. We also develop a VSCode plugin for LLM-based code alert detection and fixing, which facilitates shift-left testing and lowers the risks of software development.
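The detect-alert-confirm-commit flow described in the abstract can be sketched as follows. This is an illustrative sketch only: the Nebula-Coder model, the CodeFuse preprocessing, and the microservice APIs are not public, so the detection service is stubbed here with a tiny hypothetical CWE rule knowledge base, and the patching/confirmation step is modeled as a callback.

```python
# Hedged sketch of the personal-build gate described in the paper.
# RULE_KB, Finding, detect, and personal_build_gate are all hypothetical
# stand-ins for the AI detection/patching microservices, not the real API.
from dataclasses import dataclass


@dataclass
class Finding:
    cwe_id: str   # e.g. "CWE-242"
    line: int     # 1-based line number in the submitted code
    message: str  # human-readable alert shown to the developer


# Toy rule knowledge base: pattern -> (CWE id, alert message).
RULE_KB = {
    "strcpy(": ("CWE-120", "unbounded copy; prefer a length-checked variant"),
    "gets(": ("CWE-242", "gets() is inherently unsafe; use fgets instead"),
}


def detect(code: str) -> list[Finding]:
    """Stand-in for the AI code security detection microservice."""
    findings = []
    for n, text in enumerate(code.splitlines(), start=1):
        for pattern, (cwe, msg) in RULE_KB.items():
            if pattern in text:
                findings.append(Finding(cwe, n, msg))
    return findings


def personal_build_gate(code: str, confirm) -> str:
    """Run detection during the personal build; block the commit
    unless the developer confirms every suggested fix."""
    findings = detect(code)
    if not findings:
        return "commit"
    # The patching microservice would propose fixes here; the developer
    # is alerted and must confirm each one before the commit proceeds.
    return "commit" if all(confirm(f) for f in findings) else "blocked"
```

A clean file commits directly; a file that trips a CWE rule is committed only after the developer confirms each alert, mirroring the confirmation step the framework requires before code enters the repository.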

Key words: software static analysis, LLM, CWE, knowledge base