Modern Graphics APIs: Design Principles, A Use Case, and New Perspectives
LU Ping, SUN Qi, WANG Chen, GUO Jie, GUO Yanwen, SHI Wenzhe
ZTE Communications    2026, 24 (1): 97-106.   DOI: 10.12142/ZTECOM.202601013

In this paper, we provide a comprehensive examination of the evolution of graphics Application Programming Interfaces (APIs). We begin with traditional graphics APIs, elucidating their distinct features and inherent challenges, which sets the stage for a detailed exploration of modern graphics APIs organized around four critical design principles. These principles are further analyzed through specific case studies and categorical examinations. The paper then introduces MoerEngine, a bespoke rendering engine, as a practical case demonstrating how these modern principles are applied in real-world software engineering. In conclusion, the study offers insights into the potential future trajectory of graphics APIs, spotlighting emerging design patterns and technological innovations, and predicts the development trends and capabilities of next-generation graphics APIs.

Special Topic on Digital Twin Online Channel Modeling for 6G and Beyond
WANG Chengxiang, HUANG Chen
ZTE Communications    2025, 23 (2): 1-2.   DOI: 10.12142/ZTECOM.202502001
Research on High-Precision Stochastic Computing VLSI Structures for Deep Neural Network Accelerators
WU Jingguo, ZHU Jingwei, XIONG Xiankui, YAO Haidong, WANG Chengchen, CHEN Yun
ZTE Communications    2024, 22 (4): 9-17.   DOI: 10.12142/ZTECOM.202404003

Deep neural networks (DNNs) are widely used in image recognition, image classification, and other fields. However, as model sizes grow, DNN hardware accelerators face increasing area overhead and energy consumption. In recent years, stochastic computing (SC) has been considered a way to implement deep neural networks while reducing hardware consumption. We propose a probabilistic compensation algorithm to address the accuracy problem of stochastic computing, and design a fully parallel neural network accelerator based on a deterministic method. Software simulation shows that the probabilistic compensation algorithm achieves 95.32% accuracy on the CIFAR-10 dataset, which is 14.98% higher than the traditional SC algorithm, while the deterministic algorithm achieves 95.06%, which is 14.72% higher than the traditional SC algorithm. Very Large Scale Integration (VLSI) hardware tests show that the normalized energy efficiency of the fully parallel neural network accelerator based on the deterministic method is improved by 31% compared with a circuit based on binary computing.
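The abstract does not detail the proposed compensation algorithm, but the stochastic-computing principle it builds on can be sketched briefly. In SC, a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, and multiplication reduces to a single AND gate per bit; the estimate's variance for finite stream lengths is exactly the accuracy problem that compensation schemes target. The helper names `to_bitstream` and `sc_multiply` below are hypothetical, used only for illustration:

```python
import random

def to_bitstream(p, n, rng):
    # Encode a probability p in [0, 1] as a length-n Bernoulli bitstream:
    # each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a, b, n=4096, seed=0):
    # Stochastic multiplication: bitwise AND of two independent
    # bitstreams produces a stream whose mean approximates a * b.
    rng = random.Random(seed)
    sa = to_bitstream(a, n, rng)
    sb = to_bitstream(b, n, rng)
    return sum(x & y for x, y in zip(sa, sb)) / n

# The estimate converges to the true product as n grows; short streams
# (cheap hardware) give noisy results, motivating compensation methods.
est = sc_multiply(0.8, 0.5)
```

Doubling the stream length roughly halves the estimator's variance, which is why naive SC trades accuracy for hardware savings.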
