A Privacy-Preserving Scheme for Multi-Party Vertical Federated Learning
FAN Mochan, ZHANG Zhipeng, LI Difei, ZHANG Qiming, YAO Haidong
ZTE Communications    2024, 22 (4): 89-96.   DOI: 10.12142/ZTECOM.202404012
Abstract

As an important branch of federated learning, vertical federated learning (VFL) enables multiple institutions to train on the same user samples, bringing considerable industry benefits. However, VFL requires exchanging user features among the participating institutions, which raises concerns about privacy leakage. Moreover, existing multi-party VFL privacy-preserving schemes suffer from poor reliability and high communication overhead. To address these issues, we propose a privacy-preserving scheme for four-institution VFL, named FVFL. A hierarchical framework is first introduced to support federated training among four institutions. We also design a verifiable replicated secret sharing (RSS) protocol, (3, 2)-sharing, and combine it with homomorphic encryption to guarantee the reliability of FVFL while preserving the privacy of the features and intermediate results of the four institutions. Our theoretical analysis proves the reliability and security of the proposed FVFL. Extensive experiments verify that the proposed scheme achieves excellent performance with low communication overhead.
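The core primitive behind (3, 2)-sharing is 2-out-of-3 replicated secret sharing: a secret is split into three additive shares, and each party holds two of them, so any two parties can reconstruct. The sketch below, in Python, illustrates only this basic sharing; the modulus and function names are illustrative, and the paper's verifiability checks and homomorphic encryption layer are omitted.

```python
# A minimal sketch of 2-out-of-3 replicated secret sharing (RSS),
# the basic idea behind the "(3, 2)-sharing" named in the abstract.
# The modulus and names are illustrative assumptions; the paper's
# verifiable variant adds integrity checks and homomorphic encryption.
import secrets

P = 2**61 - 1  # illustrative prime modulus

def share(x):
    """Split x into three additive shares mod P; party i receives
    every share except x_i, so each party holds a pair of shares."""
    x1 = secrets.randbelow(P)
    x2 = secrets.randbelow(P)
    x3 = (x - x1 - x2) % P
    shares = [x1, x2, x3]
    # Party i gets (x_{i+1}, x_{i+2}) with indices taken mod 3.
    return [(shares[(i + 1) % 3], shares[(i + 2) % 3]) for i in range(3)]

def reconstruct(pair_i, pair_j, i, j):
    """Any two parties jointly hold all three additive shares."""
    held = {(i + 1) % 3: pair_i[0], (i + 2) % 3: pair_i[1],
            (j + 1) % 3: pair_j[0], (j + 2) % 3: pair_j[1]}
    return sum(held.values()) % P

x = 123456789
parties = share(x)
assert reconstruct(parties[0], parties[1], 0, 1) == x
```

Because each share pair is replicated across two parties, a party can cross-check values it receives against its neighbor's copy, which is what makes the verifiable variant possible.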

Research on High-Precision Stochastic Computing VLSI Structures for Deep Neural Network Accelerators
WU Jingguo, ZHU Jingwei, XIONG Xiankui, YAO Haidong, WANG Chengchen, CHEN Yun
ZTE Communications    2024, 22 (4): 9-17.   DOI: 10.12142/ZTECOM.202404003
Abstract

Deep neural networks (DNNs) are widely used in image recognition, image classification, and other fields. However, as model sizes grow, DNN hardware accelerators face increasing area overhead and energy consumption. In recent years, stochastic computing (SC) has been considered a way to implement deep neural networks while reducing hardware consumption. A probabilistic compensation algorithm is proposed to address the accuracy loss of stochastic computing, and a fully parallel neural network accelerator based on a deterministic method is designed. Software simulation shows that the probabilistic compensation algorithm achieves 95.32% accuracy on the CIFAR-10 dataset, which is 14.98% higher than the traditional SC algorithm, while the deterministic algorithm achieves 95.06%, which is 14.72% higher than the traditional SC algorithm. Very large-scale integration (VLSI) hardware tests show that the normalized energy efficiency of the fully parallel neural network accelerator based on the deterministic method is improved by 31% compared with a circuit based on binary computing.
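For readers unfamiliar with SC, the principle is that a value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, so multiplication reduces to a single AND gate. The Python sketch below illustrates this basic encoding and its accuracy loss; the stream length and names are illustrative, and it does not reproduce the paper's probabilistic compensation or deterministic designs.

```python
# A minimal sketch of stochastic computing (SC): values in [0, 1] are
# encoded as random bitstreams, and multiplication becomes bitwise AND.
# Stream length and names are illustrative assumptions; the paper's
# compensation and deterministic methods target the noise seen here.
import random

def encode(value, length):
    """Unipolar SC encoding: P(bit == 1) = value."""
    return [1 if random.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Recover the encoded value as the fraction of 1s."""
    return sum(stream) / len(stream)

def sc_multiply(a_stream, b_stream):
    """For independent unipolar streams, P(a AND b) = P(a) * P(b),
    so multiplication is a single AND gate per bit."""
    return [a & b for a, b in zip(a_stream, b_stream)]

random.seed(0)
N = 4096  # longer streams trade latency for accuracy
a, b = 0.8, 0.5
product = decode(sc_multiply(encode(a, N), encode(b, N)))
print(f"exact {a * b:.3f}, stochastic {product:.3f}")  # close but noisy
```

The hardware appeal is clear from the sketch: a multiplier collapses to one AND gate, at the cost of long bitstreams and random fluctuation, which is exactly the accuracy problem the abstract's algorithms address.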
