ZTE Communications, 2023, Vol. 21, Issue 3: 3-10. DOI: 10.12142/ZTECOM.202303002

• Special Topic •

Double Deep Q-Network Decoder Based on EEG Brain-Computer Interface

REN Min, XU Renyu, ZHU Ting

  1. Southwest Jiaotong University, Chengdu 611756, China
  • Received: 2023-06-08   Online: 2023-09-21   Published: 2023-09-21
  • About the authors:
  REN Min received her BS degree from the School of Mathematics and Information, China West Normal University in 2021. She is currently working toward an MS degree at Southwest Jiaotong University, China. Her research interests include reinforcement learning and its applications, and data mining.
  XU Renyu (ryxu@swjtu.edu.cn) received her PhD degree in mathematics and information science from Paris-Saclay University, France in 2017. She is currently a lecturer in the School of Mathematics, Southwest Jiaotong University, China. Her current research interests include reasoning with graph theory and machine learning.
  ZHU Ting received her BS degree from the School of Mathematics and Software Science, Sichuan Normal University, China in 2021. She is now working toward an MS degree at Southwest Jiaotong University, China. Her research interests include reinforcement learning and deep reinforcement learning.

Abstract:

Brain-computer interfaces (BCIs) use neural activity as a control signal to enable direct communication between the human brain and external devices. The electrical signals generated by the brain are captured through electroencephalogram (EEG) recordings and translated into neural intentions that reflect the user's behavior; correctly decoding these intentions then makes it possible to control external devices. Reinforcement learning-based BCIs train the decoder to complete tasks using only feedback signals (rewards) from the environment, providing a general framework for dynamically mapping neural intentions to actions that adapts to changing environments. However, traditional reinforcement learning methods suffer from challenges such as the curse of dimensionality and poor generalization. Therefore, in this paper, we use deep reinforcement learning to construct a decoder that correctly decodes EEG signals, demonstrate its feasibility through experiments, and show its stronger generalization on motor imagery (MI) EEG signals with highly dynamic characteristics.
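The decoder named in the title follows the Double Deep Q-Network (Double DQN) scheme, in which an online network selects the greedy action and a separate target network evaluates it, reducing the overestimation bias of standard deep Q-learning. The following is a minimal sketch of one such update step in PyTorch; the network architecture, state dimension (a hypothetical 64-dimensional feature vector per EEG trial), two-action set, and hyperparameters are illustrative assumptions and are not taken from the paper.

    # Minimal Double DQN update sketch for an EEG decoder (illustrative, not the paper's code).
    import torch
    import torch.nn as nn

    state_dim, n_actions, gamma = 64, 2, 0.99   # hypothetical sizes and discount factor

    # Online network (selects actions) and target network (evaluates them).
    q_net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
    target_net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    def ddqn_update(s, a, r, s_next, done):
        """One gradient step on a batch of (state, action, reward, next_state, done)."""
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a) from the online net
        with torch.no_grad():
            a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # action selection: online net
            q_next = target_net(s_next).gather(1, a_star).squeeze(1)  # action evaluation: target net
            target = r + gamma * (1.0 - done) * q_next
        loss = nn.functional.mse_loss(q_sa, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In a BCI setting of this kind, the state would be the features extracted from an EEG trial, the action a decoded command (e.g., a left- or right-hand motor-imagery class), and the reward the environment's feedback on whether the decoded command accomplished the task.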

Key words: brain-computer interface (BCI), electroencephalogram (EEG), deep reinforcement learning (deep RL), motor imagery (MI), generalizability