
Cross-subject emotion recognition using visibility graph and genetic algorithm-based convolution neural network.

Qing Cai, Jian-Peng An, Hao-Yu Li, Jia-Yi Guo, Zhong-Ke Gao
Published in: Chaos (Woodbury, N.Y.) (2022)
Efficient emotion recognition is an important research branch of electroencephalogram (EEG)-based brain-computer interfaces. However, the input to an emotion recognition model is often the full set of EEG channels recorded by electrodes placed on the subject. The redundant information carried by unnecessary channels degrades the recognition rate and wastes computing resources, hindering practical applications of emotion recognition. In this work, we aim to optimize the set of input EEG channels using visibility graphs (VGs) and a genetic algorithm-based convolutional neural network (GA-CNN). First, we design an experiment that evokes three emotion states using movies and collect multi-channel EEG signals from each subject under the different emotion states. Then, we construct a VG for each EEG channel and derive nonlinear features representing that channel. We employ a genetic algorithm (GA) to search for the optimal subset of EEG channels for emotion recognition, using the recognition results of the CNN as fitness values. The experimental results show that, for each subject, the proposed method using a subset of EEG channels outperforms the CNN using all channels. Finally, based on the subset of EEG channels selected by the GA-CNN, we perform cross-subject emotion recognition using leave-one-subject-out cross-validation. These results demonstrate that the proposed method can recognize emotion states with fewer EEG channels and further enrich nonlinear-feature-based approaches to EEG classification.
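To make the pipeline concrete, the sketch below illustrates two of the algorithmic steps described in the abstract: mapping each EEG channel to a natural visibility graph (using the visibility criterion of Lacasa et al., 2008) and extracting simple degree-based nonlinear features, followed by a genetic-algorithm search over binary channel masks whose fitness is a classifier's cross-validated accuracy on the selected channels. This is not the authors' implementation: a logistic regression stands in for the CNN, and the feature set, GA settings, data shapes, and function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of VG feature extraction and
# GA-based EEG channel selection. All settings below are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def visibility_graph_degrees(x):
    """Degree sequence of the natural visibility graph of a 1-D series x.

    Samples i and j are linked when every intermediate sample k satisfies
    x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)   (Lacasa et al., 2008).
    """
    n = len(x)
    degree = np.zeros(n, dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            if k.size == 0 or np.all(x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)):
                degree[i] += 1
                degree[j] += 1
    return degree


def vg_features(x):
    """Illustrative nonlinear features of one channel's VG: degree statistics."""
    deg = visibility_graph_degrees(x)
    return np.array([deg.mean(), deg.max(), deg.std()])


def ga_channel_selection(features, labels, n_generations=20, pop_size=16,
                         mutation_rate=0.05, seed=0):
    """Search a binary channel mask that maximises cross-validated accuracy.

    features: array of shape (n_trials, n_channels, n_features_per_channel)
    labels:   array of shape (n_trials,)
    """
    rng = np.random.default_rng(seed)
    n_channels = features.shape[1]

    def fitness(mask):
        if not mask.any():
            return 0.0
        X = features[:, mask, :].reshape(len(labels), -1)
        clf = LogisticRegression(max_iter=1000)  # stand-in for the CNN
        return cross_val_score(clf, X, labels, cv=3).mean()

    population = rng.integers(0, 2, size=(pop_size, n_channels)).astype(bool)
    for _ in range(n_generations):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[::-1][: pop_size // 2]]   # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_channels)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_channels) < mutation_rate  # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in population])
    return population[scores.argmax()], scores.max()


if __name__ == "__main__":
    # Toy data: 60 trials, 8 channels, 128 samples per channel, 3 emotion classes.
    rng = np.random.default_rng(1)
    eeg = rng.standard_normal((60, 8, 128))
    y = rng.integers(0, 3, size=60)
    feats = np.array([[vg_features(ch) for ch in trial] for trial in eeg])
    mask, acc = ga_channel_selection(feats, y)
    print("selected channels:", np.flatnonzero(mask), "cv accuracy:", round(acc, 3))
```

In the method described in the abstract, the fitness value would instead be the CNN's recognition result on the selected channels, and the channel subset returned by the search would then be evaluated with leave-one-subject-out cross-validation for the cross-subject task.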