TY - JOUR
T1 - Fusion Convolutional Neural Network for Cross-Subject EEG Motor Imagery Classification
AU - Roots, Karel
AU - Muhammad, Yar
AU - Muhammad, Naveed
N1 - Funding Information: Naveed Muhammad has been funded by the European Social Fund via the IT Academy program. Publisher Copyright: © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2020/9/5
Y1 - 2020/9/5
N2 - Brain–computer interfaces (BCIs) can help people with limited motor abilities to interact with their environment without external assistance. A major challenge in electroencephalogram (EEG)-based BCI development and research is the cross-subject classification of motor imagery data. Due to the highly individualized nature of EEG signals, it has been difficult to develop a cross-subject classification method that achieves sufficiently high accuracy when predicting the subject’s intention. In this study, we propose a multi-branch 2D convolutional neural network (CNN) that uses different hyperparameter values for each branch, making it more adaptable to data from different subjects. Our model, EEGNet Fusion, achieves 84.1% and 83.8% accuracy when tested on the 103-subject eegmmidb dataset for executed and imagined motor actions, respectively. The model achieved statistically significant improvements over three state-of-the-art CNN classifiers: EEGNet, ShallowConvNet, and DeepConvNet. However, the computational cost of the proposed model is up to four times higher than that of the least computationally expensive model used for comparison.
U2 - 10.3390/computers9030072
DO - 10.3390/computers9030072
M3 - Article
SN - 2073-431X
VL - 9
SP - 1
EP - 9
JO - Computers
JF - Computers
IS - 3
ER -