Self-supervised contrastive learning for EEG-based cross-subject motor imagery recognition.
Wenjie Li, Haoyu Li, Xinlin Sun, Huicong Kang, Shan An, Guoxin Wang, Zhong-Ke Gao
Published in: Journal of Neural Engineering (2024)
Objective. The extensive application of electroencephalography (EEG) in brain-computer interfaces (BCIs) can be attributed to its non-invasive nature and its capability to provide high-resolution data. Although EEG signals are straightforward to acquire, the associated datasets are frequently scarce and require substantial resources for proper labeling. Moreover, the substantial inter-individual variability of EEG signals severely limits the generalization performance of EEG models.

Approach. To address these issues, we propose a novel self-supervised contrastive learning framework for decoding motor imagery (MI) signals in cross-subject scenarios. Specifically, we design an encoder that combines a convolutional neural network with an attention mechanism. During the contrastive pre-training stage, the network is trained on a data-augmentation pretext task that minimizes the distance between pairs of homologous transformations while maximizing the distance between pairs of heterologous transformations. This increases the amount of data available for training and improves the network's ability to extract deep features from the original signals without relying on true labels.

Main results. To evaluate the framework's efficacy, we conduct extensive experiments on three public MI datasets: the BCI IV IIa, BCI IV IIb, and HGD datasets. The proposed method achieves cross-subject classification accuracies of 67.32%, 82.34%, and 81.13% on the three datasets, demonstrating superior performance compared to existing methods.

Significance. This method therefore holds great promise for improving the performance of cross-subject transfer learning in MI-based BCI systems.
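The abstract does not specify the exact contrastive objective, so the following is a minimal sketch of a widely used NT-Xent (SimCLR-style) loss that implements the stated idea: embeddings of homologous (same-trial) augmentations are pulled together, while embeddings of heterologous (different-trial) augmentations in the batch are pushed apart. The function name, temperature value, and encoder usage shown here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an NT-Xent-style contrastive objective for EEG pre-training.
# Assumes two augmented views of each EEG trial are encoded into embeddings.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmentations of the same trials.

    Homologous pairs (row i of z1 vs. row i of z2) are treated as positives;
    all other pairs in the batch act as heterologous negatives.
    """
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim), unit norm
    sim = torch.mm(z, z.t()) / temperature                # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                     # exclude self-similarity
    # The positive for sample i is its other view: i + B (or i - B).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage with a CNN + attention encoder and two augmentations
# of the same EEG batch:
#   loss = nt_xent_loss(encoder(augment(x)), encoder(augment(x)))
```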