
ZTE Communications ›› 2024, Vol. 22 ›› Issue (2): 39-48. DOI: 10.12142/ZTECOM.202402006


Hierarchical Federated Learning Architectures for the Metaverse

GU Cheng, LI Baochun   

  1. Department of Electrical and Computer Engineering, University of Toronto, Toronto M5S 2E8, Canada
  • Received: 2024-05-31 Online: 2024-06-25 Published: 2024-06-25
  • About author: GU Cheng received his BASc degree with Honours and Distinction in the Computer Engineering Cooperative Program in 2022, and his MASc degree from the Department of Electrical and Computer Engineering in May 2024, both from the University of Waterloo, Canada. His research interests focus on building next-generation AI-assisted distributed systems.
    LI Baochun (bli@ece.toronto.edu) received his BE degree from Tsinghua University, China in 1995, and his MS and PhD degrees from the University of Illinois at Urbana-Champaign, USA in 1997 and 2000, respectively. Since 2000, he has been with the Department of Electrical and Computer Engineering at the University of Toronto, Canada, where he is currently a Professor. Since August 2005, he has held the Bell Canada Endowed Chair in Computer Engineering. He received the IEEE Communications Society Leonard G. Abraham Award in the field of communications systems in 2000, the Multimedia Communications Best Paper Award from the IEEE Communications Society in 2009, the University of Toronto McLean Award in 2009, the Best Paper Award from IEEE INFOCOM in 2023, and the IEEE INFOCOM Achievement Award in 2024. He is a Fellow of the Canadian Academy of Engineering, the Engineering Institute of Canada, and IEEE. His current research interests include cloud computing, security and privacy, distributed machine learning, federated learning, and networking.

Abstract:

In the context of edge computing environments in general, and the metaverse in particular, federated learning (FL) has emerged as a distributed machine learning paradigm that allows multiple users to collaboratively train a shared machine learning model on their local data, eliminating the need to upload raw data to a central server. It is perhaps the only training paradigm that preserves the privacy of user data, which is essential in computing environments as personal as the metaverse. However, the originally proposed FL architecture does not scale to the large number of user devices in the metaverse community. To mitigate this problem, hierarchical federated learning (HFL) has been introduced as a general distributed learning paradigm, inspiring a number of research works. In this paper, we present several types of HFL architectures, with a special focus on the three-layer client-edge-cloud HFL architecture, which is most pertinent to the metaverse due to its delay-sensitive nature. We also examine works that take advantage of the natural layered organization of three-layer client-edge-cloud HFL to tackle some of the most challenging problems in FL within the metaverse. Finally, we outline some future research directions for HFL in the metaverse.

Key words: federated learning, hierarchical federated learning, metaverse
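
To make the three-layer client-edge-cloud workflow described in the abstract concrete, the following is a minimal sketch of hierarchical FedAvg-style aggregation in Python. It is not taken from the paper: the toy linear-regression model, the synthetic non-IID client data, and all constants (CLIENTS_PER_EDGE, NUM_EDGES, LOCAL_STEPS, EDGE_ROUNDS, CLOUD_ROUNDS, LR) are illustrative assumptions. Clients run a few local SGD steps, each edge server averages the models of its own clients several times, and the cloud then averages the edge models into the new global model.

```python
# A minimal, self-contained sketch of three-layer client-edge-cloud hierarchical
# FedAvg, assuming a toy linear-regression model trained with plain SGD.
# All constants and the synthetic data are illustrative, not from the paper.

import numpy as np

rng = np.random.default_rng(0)

DIM = 5               # model dimension (toy linear model w)
CLIENTS_PER_EDGE = 4  # clients attached to each edge server
NUM_EDGES = 3         # edge servers attached to the cloud
LOCAL_STEPS = 5       # SGD steps per client between edge aggregations
EDGE_ROUNDS = 2       # edge aggregations between cloud aggregations
CLOUD_ROUNDS = 10     # total cloud (global) aggregation rounds
LR = 0.05

# Synthetic non-IID client data: each client holds samples generated from a
# slightly shifted version of a common ground-truth model.
true_w = rng.normal(size=DIM)

def make_client_data():
    X = rng.normal(size=(50, DIM))
    y = X @ (true_w + 0.1 * rng.normal(size=DIM)) + 0.01 * rng.normal(size=50)
    return X, y

clients = [[make_client_data() for _ in range(CLIENTS_PER_EDGE)]
           for _ in range(NUM_EDGES)]

def local_sgd(w, data):
    """Run a few SGD steps on one client's data, starting from the edge model."""
    X, y = data
    w = w.copy()
    for _ in range(LOCAL_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= LR * grad
    return w

w_cloud = np.zeros(DIM)
for _ in range(CLOUD_ROUNDS):
    edge_models = []
    for edge in clients:
        w_edge = w_cloud.copy()              # each edge starts from the global model
        for _ in range(EDGE_ROUNDS):
            # Edge aggregation: average the models returned by this edge's clients.
            client_models = [local_sgd(w_edge, data) for data in edge]
            w_edge = np.mean(client_models, axis=0)
        edge_models.append(w_edge)
    # Cloud aggregation: average the edge models into the new global model.
    w_cloud = np.mean(edge_models, axis=0)

print("distance to ground truth:", np.linalg.norm(w_cloud - true_w))
```

The design point the sketch illustrates is that frequent, low-latency aggregations happen at the edge servers, while the cloud only sees the less frequent edge-level averages; this is what makes the three-layer hierarchy attractive for delay-sensitive metaverse workloads.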