The evolution of AI approaches for motor imagery
EEG-based BCIs
Aurora Saibene¹,²,*, Silvia Corchs²,³, Mirko Caglioni¹ and Francesca Gasparini¹,²
¹ University of Milano-Bicocca, Viale Sarca 336, 20126, Milano, Italy
² NeuroMI, Milan Center for Neuroscience, Piazza dell’Ateneo Nuovo 1, 20126, Milano, Italy
³ University of Insubria, Via J. H. Dunant 3, 21100, Varese, Italy


Abstract
Motor Imagery (MI) electroencephalography (EEG) based Brain Computer Interfaces (BCIs) allow direct communication between humans and machines by exploiting the neural pathways involved in motor imagination. These systems therefore open the possibility of developing applications that span from the medical field to the entertainment industry. In this context, Artificial Intelligence (AI) approaches become of fundamental importance, especially when aiming to provide correct and coherent feedback to BCI users. Moreover, publicly available datasets in the field of MI EEG-based BCIs have been widely exploited to test new techniques from the AI domain. In this work, AI approaches applied to datasets collected in different years and with different devices, but with coherent experimental paradigms, are investigated, with the aim of providing a concise yet sufficiently comprehensive survey of the evolution and influence of AI techniques on MI EEG-based BCI data.

Keywords
artificial intelligence, brain computer interface, electroencephalography, motor imagery




1. Introduction
Translating thoughts into commands understandable by external applications and devices is
the basic principle ruling the development of Brain Computer Interfaces (BCIs) [1]. The most
widely used method to collect neural signals is the electroencephalogram (EEG), since it records
data with non-invasive surface sensors called electrodes, is relatively low-cost, and can offer
high temporal and spatial resolution [2]. Moreover, the EEG signals are
characterized by rhythms, whose fluctuations may be exploited to detect specific brain states
[3]. Among these brain conditions, the imagination of voluntary movements, called Motor
Imagery (MI), may be observed over the primary sensorimotor cortex with amplitude variations
of the 𝜇 and 𝛽 rhythms [4, 5]. These effects can be exploited to create MI EEG-based BCIs that
can be used for a variety of applications spanning from rehabilitation procedures to the control

Italian Workshop on Artificial Intelligence for Human Machine Interaction (AIxHMI 2022), December 02, 2022, Udine,
Italy
* Corresponding author.
aurora.saibene@unimib.it (A. Saibene); silvia.corchs@uninsubria.it (S. Corchs); m.caglioni2@campus.unimib.it
(M. Caglioni); francesca.gasparini@unimib.it (F. Gasparini)
 0000-0002-4405-8234 (A. Saibene); 0000-0002-1739-8110 (S. Corchs); 0000-0002-6279-6660 (F. Gasparini)
                                       © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
of wheelchair movements [1], and in conjunction with virtual and augmented reality [6].
Therefore, feedback is one of the main components of the BCI life-cycle, and it benefits from
the evolution and improvement of Artificial Intelligence (AI) approaches [7] in predicting
the different brain states to be translated into system commands.
This raises a question: how have AI techniques evolved within, and influenced, the field of MI
EEG-based BCIs, considering the great number of possibilities offered by these systems?
This work aims to provide a brief overview and discussion of this topic by analysing the
AI approaches that have been applied to some representative datasets in the domain
literature.
   Therefore, Section 2 first provides an overall view of the timeline related to MI
EEG-based BCIs and justifies the choice of specific datasets, which are described in Section 3. An
overview of the AI techniques applied to these data is provided in Section 4 and observations
on the presented AI approaches are discussed in Section 5. Finally, conclusions are drawn in
Section 6.


2. Overview
Research on MI EEG-based BCIs has become particularly prolific in recent years. In
fact, running a quick query on the title, abstract and keywords of works indexed by Scopus¹, i.e.,
TITLE-ABS-KEY((mi OR motor AND imagery OR motor AND imagination) AND (eeg OR elec-
troencephalographic) AND based AND (bci OR brain AND computer AND interface)), yields
the graph in Figure 1. Notice that the search was conducted during September 2022.
Besides showing a steady increase in publications on these topics starting from 2017, with the
current peak in 2021, Figure 1 also tells the story of the early phases of MI EEG-based BCIs,
which set the foundation for later works.
An initial period (1996-2003), during which the BCI community began to discuss these topics,
was followed by a noticeable boost in research output (2004), probably due to the insightful
work of Schalk et al. [8] and the outcome of the BCI Competition 2003 [9].
Building on the clear direction taken by many laboratories towards the development of
systems capable of providing a means of communication and control for patients with severe
motor disabilities, Schalk et al. presented BCI2000², an open-source platform for the
management of BCI systems that remains active and maintained to the present day.
Moreover, not only were some of the authors of [8] involved in the BCI Competition 2003,
but the researchers behind the pioneering work of 1996 [10], retrieved by the Scopus
search, unsurprisingly appear as well. In fact, they contributed dataset III, titled Motor Imagery, which
presented data acquired on the main central cortical electrodes (C{3,4,z}) during the imagination
of left or right hand movements, with the main aim of providing continuous feedback to
the BCI users. The winning strategy for the proposed problem was presented by Lemm et al.
[11], who tried to disclose motor intention by characterising the rhythmic activity of the EEG signal
through the application of complex Morlet wavelets [12] and by using a probabilistic model to predict left
or right hand MI.
¹ https://www.scopus.com/
² https://www.bci2000.org/mediawiki/index.php/Main_Page
Figure 1: Number of papers per year obtained by querying Scopus on MI EEG-based BCI related
keywords (search conducted during September 2022).


   Afterwards, the BCI Competition - Berlin Brain-Computer Interface³ datasets quickly became
benchmarks on which to test strategies for more efficient and reliable BCI systems.
By adding to the first query the string (bci AND competition AND (2003 OR ii OR iii OR iv)),
representing the BCI Competition II (or 2003), III and IV, and limiting the results to the year 2021 only,
27 papers are returned, of which 23 use the BCI Competition IV datasets (especially
2a and 2b). Considering that Figure 1 reports 97 works published in 2021 and that the Scopus
screening covers only the title, abstract, and keywords of the works indexed by this
search engine, the number of publications using the BCI Competition IV datasets 2a and 2b [13]
appears to be fairly high and justifies a deeper analysis of the AI techniques tested on
them, in order to better understand their evolution over a long time span.
   However, these datasets contain EEG recordings acquired from a restricted number of subjects,
with a restricted number of electrodes and experimental conditions (as detailed in
Section 3). To obtain a more general overview of the evolution of AI approaches in MI EEG-
based BCIs, and given its increasing use as a benchmark over the last 10 years, the EEG Motor
Movement/Imagery Dataset [8, 14] is also considered: it was collected from a larger population,
with a different montage and diverse MI tasks, but still using the BCI2000 system.
   Moreover, great attention has been given to wearable technologies, especially in the last few
years, since the need to move BCIs from medical and laboratory environments to real-world
scenarios is becoming more pressing. This is driven by a variety of needs, such as developing in-
home rehabilitation tools [15], exploiting consumer-grade devices [16], and providing continuous
assistive technologies [17] that patients could easily use on their own without having to buy expensive
equipment.
Following these principles, Peterson et al. [18] collected the MI-OpenBCI dataset using
wearable low-cost technologies to record MI EEG-based BCI data. Therefore, an overview of
this dataset is provided to take a closer look at future developments of these new systems and
³ https://www.bbci.de/competition/
Figure 2: Electrode positioning used for the recording of BCI Competition IV dataset 2a. The positions
of the electrooculogram channels are reported on the right. This setting is also used for BCI
Competition IV dataset 2b, considering electrodes C{3,4,z} only. See the original montage in [13].


the changing role of AI when facing them.


3. Datasets
Considering the brief overview presented in Section 2, the datasets at the center
of this paper's analysis are the BCI Competition IV datasets 2a (2012) and 2b (2012) [13], the EEG
Motor Movement/Imagery Dataset (2009) [8, 14], and the MI-OpenBCI dataset (2020) [18]. In this
section, a concise description of their characteristics is reported for completeness.

3.1. BCI Competition IV dataset 2a
The BCI Competition IV dataset 2a [13] is provided under the name Continuous Multi-class Motor
Imagery. It was collected from 9 subjects executing a cue-based BCI paradigm
consisting of left/right hand, both feet and tongue MI. Each subject participated, on different
days, in 2 experimental sessions containing 6 experimental runs of 48 trials each.
Figure 2 depicts the montage, consisting of 22 Ag/AgCl electrodes for scalp recording, reference
and ground electrodes placed on the left and right mastoids, and 3 monopolar electrooculogram
channels (positioned to provide a reference for artifact removal).
The signals were acquired with a 250 Hz sampling rate and the dataset authors provided
them bandpass (0.5-100 Hz) and notch (50 Hz) filtered. Noisy trials were removed after manual
screening by experts.
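As a practical note, the following is a minimal sketch of how such a recording could be loaded and re-filtered in Python, assuming the GDF files distributed with the competition (e.g., A01T.gdf for subject 1, training session) and the MNE-Python library; file names, paths and channel handling are assumptions, not part of the dataset description.

```python
import mne

# Assumed competition file name: subject 1, training session (A01T.gdf).
raw = mne.io.read_raw_gdf("A01T.gdf", preload=True)

# The 3 EOG channels may need to be typed manually before excluding them,
# depending on how the GDF header labels them (assumed channel names):
# raw.set_channel_types({"EOG-left": "eog", "EOG-central": "eog", "EOG-right": "eog"})

# Reproduce the band-pass (0.5-100 Hz) and notch (50 Hz) filtering described above.
raw.filter(l_freq=0.5, h_freq=100.0)
raw.notch_filter(freqs=50.0)

print(raw.info)  # expected: 250 Hz sampling rate
```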
3.2. BCI Competition IV dataset 2b
The BCI Competition IV dataset 2b [13] comes with the following short description: Session-to-
Session Transfer of a Motor Imagery BCI under Presence of Eye Artifacts. In fact, the motivation
driving its inclusion in the BCI competition was to provide correct EEG signal classification
despite having data affected by ocular noise.
The dataset was collected from 9 right-handed healthy subjects using only the central
cortical electrodes (C{3,4,z}), the reference and ground channels, and the 3 monopolar electroocu-
logram channels (depicted in Figure 2), during a cue-based BCI paradigm of left/right hand MI
[19]. The experiment was designed to have 2 separate sessions consisting of 6 runs of 10 trials
each without user feedback, and 3 separate sessions consisting of 4 runs of 40 trials each with
online feedback.
The EEG signal was sampled at 250 Hz and bandpass (0.5-100 Hz) and notch (50 Hz) filtered.

3.3. EEG Motor Movement/Imagery Dataset
The EEG Motor Movement/Imagery Dataset⁴ [8, 14] was collected using the BCI2000
system introduced in Section 2, with an electrode montage of 64 channels following
the international 10-10 system (excluding electrodes Nz, F{9,10}, FT{9,10}, A{1,2}, TP{9,10},
P{9,10}).
109 subjects were asked to perform a cue-based experiment in a single session consisting of
14 experimental runs, divided into 2 baseline recordings (eyes open and eyes closed) and 3 runs per
experimental condition. The experimental conditions consisted of the motor execution or
imagination of left and right hand movements, or of both hands and both feet movements.
The signal was acquired with a sampling rate of 160 Hz and no pre-processing was performed
on the data.
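For illustration, epochs around the cues could be extracted as in the sketch below, assuming the EDF files as distributed on PhysioNet (e.g., S001R04.edf for subject 1, run 4) and MNE-Python; the run choice, the annotation codes (T0 rest, T1/T2 cued tasks) and the 0-4 s epoch window are assumptions to be checked against the dataset documentation.

```python
import mne

# Assumed PhysioNet file: subject 1, run 4 (one of the imagined left/right fist runs).
raw = mne.io.read_raw_edf("S001R04.edf", preload=True)

# Cues are stored as annotations (T0 = rest, T1/T2 = the two cued tasks).
events, event_id = mne.events_from_annotations(raw)

# Fixed-length epochs after each cue; the 0-4 s window is an illustrative choice.
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0.0, tmax=4.0,
                    baseline=None, preload=True)
X = epochs.get_data()          # (n_trials, n_channels, n_samples) at 160 Hz
y = epochs.events[:, -1]       # numeric labels derived from the annotations
```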

3.4. MI-OpenBCI
The MI-OpenBCI dataset [18, 20] was collected and made available to the research community
very recently (in 2020).
The authors performed a feasibility study on the use of a consumer-grade MI EEG-based BCI
system. They employed the OpenViBE software platform⁵ and the OpenBCI Cyton
sensing board with Daisy Module⁶, and used an Electrocap System II with 19 electrodes to acquire
the EEG signal wirelessly. Of these electrodes, 16 were effectively used (F{z,3,4,7,8}, C{z,3,4},
T{3,4,5,6}, P{z,3,4}). Reference and ground electrodes were placed on the left and right ear lobes.
Notice that electromyographic signals were also acquired, but details are not provided here,
since these physiological data are not the focus of the present work.
12 healthy right-handed subjects with no prior experience with BCIs performed cue-based MI
of the dominant hand grasping, or rested, during a single experimental session composed of
4 runs of 20 trials each.

⁴ Dataset description and recordings available at https://physionet.org/content/eegmmidb/1.0.0/.
⁵ http://openvibe.inria.fr/.
⁶ https://docs.openbci.com/GettingStarted/Boards/DaisyGS/.
The EEG signal was acquired with a sampling rate of 125 Hz and filtered with a 3rd-order
Butterworth band-pass filter (0.5-45 Hz).
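The reported preprocessing can be reproduced, at least in its specification, with a standard SciPy filter design; the sketch below is illustrative and not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 125.0  # sampling rate reported for the MI-OpenBCI recordings

# 3rd-order Butterworth band-pass between 0.5 and 45 Hz, as described above.
sos = butter(N=3, Wn=[0.5, 45.0], btype="bandpass", fs=FS, output="sos")

def bandpass(eeg):
    """eeg: (n_channels, n_samples); zero-phase filtering along the time axis."""
    return sosfiltfilt(sos, eeg, axis=-1)

# Example with synthetic data shaped like one 16-channel, 4-second trial.
trial = np.random.randn(16, int(4 * FS))
filtered = bandpass(trial)
```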


4. Artificial intelligence approaches on the investigated datasets
Having provided a brief description of each dataset under scrutiny, it is now possible to analyse
the AI approaches applied to them over their lifetime.
For each dataset, the analysis starts from the first publication presenting results on it and then
considers some relevant later works that allow further discussion of the influence that AI has
on MI EEG-based BCIs.

4.1. On the BCI Competition IV dataset 2a
The winners of the BCI Competition IV dataset 2a presented their approach in a dedicated
publication [21]. Ang et al. wanted to enhance the performance of the Common Spatial Pattern
(CSP) algorithm, and thus proposed the use of Filter Bank CSP (FBCSP) [22]. FBCSP consists
of 4 phases, i.e., band-pass filtering, spatial filtering, feature selection based on
mutual information, and classification. For this last step, the authors chose
the Naïve Bayesian Parzen Window classifier, which had obtained good results in
a previous competition [22]. Moreover, since the dataset presented 4 conditions and
the algorithm was initially developed for binary classification, they proposed multi-class
extensions of FBCSP based on divide-and-conquer, pairwise, and one-versus-rest approaches.
This last strategy was the one submitted to the competition, since it obtained a better or similar
average Cohen’s kappa value (0.57) at a lower computational cost, a point that is particularly
important given the real-time nature that BCI responses should have.
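To make the four FBCSP phases concrete, the sketch below combines SciPy filtering, MNE's CSP implementation and scikit-learn; the filter-bank boundaries, the number of CSP components and the Gaussian Naïve Bayes classifier are simplifying stand-ins (the winning entry used a Naïve Bayesian Parzen Window classifier), so this illustrates the structure of [21, 22] rather than reproducing it.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from mne.decoding import CSP
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB

# Illustrative filter bank (Hz); the original work uses its own band definition.
BANDS = [(4, 8), (8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32), (32, 36)]

def fbcsp_features(X, y, fs=250, n_csp=4):
    """Phases 1-2: band-pass filter each band, then extract log-variance CSP features.
    X: (n_trials, n_channels, n_samples), y: labels of a binary class pair."""
    feats = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        Xb = sosfiltfilt(sos, X, axis=-1)
        csp = CSP(n_components=n_csp, log=True)
        feats.append(csp.fit_transform(Xb, y))
    return np.concatenate(feats, axis=1)

# Phases 3-4: mutual-information feature selection + a stand-in Naïve Bayes classifier.
# F = fbcsp_features(X_train, y_train)
# selector = SelectKBest(mutual_info_classif, k=8).fit(F, y_train)
# clf = GaussianNB().fit(selector.transform(F), y_train)
```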
   In 2015, Nicolas-Alonso et al. [23] addressed this issue and proposed an adaptive semi-
supervised classification scheme that could also cope with the non-stationarity of the EEG signals.
They first reduced non-stationarity by applying an exponentially weighted moving average before
classification, which was performed with a newly developed method based on spectral regression
kernel discriminant analysis. The authors trained the model with labelled data and then introduced
a self-training algorithm that uses new unlabelled data to sequentially update it. Notice that
feature extraction was done through CSP.
The joint use of semi-supervised learning and adaptive processing provided better results, with
significant computational efficiency, compared to previous works on the 4-class problem
(0.70 Cohen’s kappa coefficient and 77% accuracy).
The authors further improved the performance (0.74 maximum average kappa value) by using
stacked regularised linear discriminant analysis for classification, thus integrating temporal,
spectral, and spatial information [24].
   An adaptive learning strategy was also proposed in 2016 by Raza et al. [25], who
exploited covariate shift detection based on an exponentially weighted moving average to
monitor EEG-based BCIs. The authors obtained an average accuracy of 76.70% with
the upper-bound version of their methodology.
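As an illustration of the underlying idea, an exponentially weighted moving average (EWMA) control chart can flag shifts in a feature stream; the calibration window, smoothing factor and control-limit width below are assumptions, not the parameters of [25].

```python
import numpy as np

def ewma_shift_alerts(feature_stream, lam=0.2, L=3.0, calib=50):
    """Flag covariate shifts when the EWMA of a 1-D feature leaves its control limits."""
    x = np.asarray(feature_stream, dtype=float)
    mu0, sigma0 = x[:calib].mean(), x[:calib].std()  # calibrate on an initial window
    z, alerts = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        # Control-limit width of the EWMA statistic at step t.
        width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        alerts.append(abs(z - mu0) > width)
    return np.array(alerts)

# alerts = ewma_shift_alerts(band_power_per_trial)  # True where a shift is flagged
```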
   In 2017, Jafarifarmand et al. [26] proposed a new framework aimed at improving multi-class
BCIs. The process consisted of a feature extraction step using artifact-rejected
CSP and a self-regulated, adaptive resonance theory based neuro-fuzzy classifier able to
model the non-stationarity of the EEG signal as well as uncertainties in the data. The authors
obtained better results than the competition winners, with an average kappa value of
0.63.
   In 2019, Olivas-Padilla & Chacon-Murguia [27] not only tested their strategy on the BCI Competition IV dataset 2a
but also acquired a proprietary dataset using OpenBCI with a similar experimental paradigm.
Feature extraction was again provided by a novel modification of the CSP algorithm, i.e.,
the discriminative FBCSP, to model the spatial and spectral characteristics of the EEG signal.
A modular network composed of 4 Convolutional Neural Networks (CNNs), each specialised in the
binary classification of a combination of the 4 experimental conditions, was used to provide
the final classification. Notice that Bayesian optimisation was employed for hyperparameter
selection. The authors’ model achieved 80.03% accuracy and a kappa score of 0.61.
   In 2019, deep learning methods were also investigated by Majidov & Whangbo [28], who
highlighted the need of these models for large amounts of data. They therefore proposed
a pipeline consisting of different combinations of the following steps: (i) a data augmentation
step based on sliced moving windows, (ii) the extraction of the power spectral density of
3 EEG rhythms and the application of FBCSP, (iii) information-theoretic feature extraction,
(iv) tangent space mapping feature extraction, and (v) wrapper-based feature selection with
particle swarm optimisation. The classification was performed with a 1D CNN, avoiding deeper
networks that could overfit, and obtaining an average accuracy of 87.94%.
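The sliced moving-window augmentation can be sketched as follows; the window length and step are illustrative values, not those used in [28].

```python
import numpy as np

def sliding_window_crops(X, y, win=500, step=125):
    """Crop each trial (n_channels, n_samples) into overlapping windows of length `win`,
    replicating the label, so the training set grows by roughly n_samples/step per trial."""
    Xa, ya = [], []
    for trial, label in zip(X, y):
        for start in range(0, trial.shape[-1] - win + 1, step):
            Xa.append(trial[:, start:start + win])
            ya.append(label)
    return np.stack(Xa), np.array(ya)

# X_aug, y_aug = sliding_window_crops(X_train, y_train)  # e.g. 2 s windows, 0.5 s step at 250 Hz
```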
   In the same year (2019), an 83% accuracy and a 0.80 Cohen’s kappa value were instead achieved
by Zhang et al. [29]. The authors proposed the use of one-versus-rest FBCSP to extract
characterising features for each of the 4 classes, and a deep architecture based on CNN and
Long Short-Term Memory (LSTM) models. Notice that they performed the classification by
training the model on the data merged from all subjects and then evaluating the data of
each subject separately, obtaining a subject-invariant strategy.
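A compact Keras sketch of a CNN/LSTM hybrid of the kind described in [29] is shown below; the layer sizes, input shape (4 s at 250 Hz, 22 channels) and training settings are illustrative choices, not the original architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn_lstm(n_samples=1000, n_channels=22, n_classes=4):
    """Temporal convolutions followed by an LSTM over the reduced time axis."""
    model = tf.keras.Sequential([
        layers.Conv1D(32, kernel_size=11, padding="same", activation="relu",
                      input_shape=(n_samples, n_channels)),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(64, kernel_size=11, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.LSTM(64),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn_lstm()
# model.fit(X_train, y_train, epochs=50, batch_size=32)  # X_train: (n_trials, n_samples, n_channels)
```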
   A year later (2020), Luo et al. [30] proposed a novel ensemble support vector learning based
approach to combine the event-related synchronisation/desynchronisation features typical of the
MI domain. The authors employed a class discrepancy-guided sub-band filter-based common
spatial pattern to extract more separable and stable features before the classification step. The
proposed approach achieved an average kappa value of 0.60.
   Finally, in 2021, a CNN model built on the inception-time network, called EEG-inception [31],
and, in 2022, a transfer learning-based CNN and LSTM hybrid deep learning model [32] were
proposed. EEG-inception is fed directly with the raw EEG data, augmented through noise
addition, avoiding complex pre-processing steps and decreasing the possibility of overfitting.
An 88.39% average accuracy was obtained on the 4 classes.
Similarly, Khademi et al. [32] first applied data augmentation by cropping and then
provided a time-frequency characterisation of the EEG signal through the continuous wavelet
transform. Afterwards, the authors exploited ResNet-50 and Inception-v3, 2 pre-trained CNNs,
to provide a transfer learning strategy supporting their CNN/LSTM model. The mean kappa
values obtained with the proposed approach were 0.86 and 0.88 for the ResNet-50 and
Inception-v3 models, respectively, with accuracy values around 90% and 92%.
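The scalogram-plus-pretrained-network idea can be sketched as follows; the wavelet, scales, image handling and the choice of ResNet-50 as a frozen feature extractor are assumptions, intended only to illustrate the kind of pipeline used in [32].

```python
import numpy as np
import pywt
import tensorflow as tf

def channel_scalogram(signal, fs=250, n_scales=64):
    """Continuous wavelet transform of one EEG channel into a (n_scales, n_samples) magnitude map."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coeffs)

# Frozen pretrained backbone used as a feature extractor (transfer learning).
# Scalograms would still need to be resized and replicated to 3 channels before
# being fed to the network, which expects image-like inputs.
backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")
backbone.trainable = False
```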
   Table 1 summarises the time-line of the presented AI strategies and highlights (bold) the best
achieved results.
Table 1
Time-line summary of the AI strategies applied to the BCI Competition IV dataset 2a. Best results are highlighted in bold.
 Paper   Year   Strategy                                                        Cohen’s kappa   Accuracy
  [21]   2012   FBCSP + Naïve Bayesian Parzen Window classifier                      0.57          NA
  [23]   2015   CSP + adaptive semi-supervised classification                        0.70        77.00%
  [24]   2015   [23] + stacked regularised linear discriminant analysis              0.74          NA
  [25]   2016   Adaptive learning with covariate shift detection                     NA          76.70%
  [26]   2017   Artifact-rejected CSP + self-regulated neuro-fuzzy framework         0.63          NA
  [27]   2019   FBCSP + 4 CNNs + Bayesian optimisation                               0.61        80.03%
  [28]   2019   Data augmentation + feature engineering + 1D CNN                     NA          87.94%
  [29]   2019   One-vs-rest FBCSP + CNN and LSTM (subject-invariant)                 0.80        83.00%
  [30]   2020   CSP variation + ensemble support vector learning                     0.60          NA
  [31]   2021   EEG-inception                                                        NA          88.39%
  [32]   2022   Data augmentation + Inception-v3                                     NA          92.00%


4.2. On the BCI Competition IV dataset 2b
A good number of works are shared between the two BCI Competition IV datasets under
analysis. In fact, the winners on the BCI Competition IV dataset 2b were again Ang et al. [21],
who used the same approach described for dataset 2a at the beginning of
Section 4.1, obtaining an average accuracy of around 60%.
Similarly, the work of Raza et al. [25] appears again in the performed search: the average
accuracy obtained on dataset 2b was 73.33%, using the upper-bound version of the proposed
strategy.
   In 2019, Zhu et al. [33] proposed an end-to-end deep learning framework based on the transfer
of knowledge obtained from previously analysed subjects, with the main aim of removing the
training phase from MI BCIs. CSP was used to extract features in the temporal domain, and
the authors introduced a separated-channel CNN to properly characterise the
multi-channel EEG signal. Besides testing the proposed strategy on the scrutinised dataset,
they performed analyses on a proprietary set of recordings of left/right hand
MI. Notice that a single subject's data was used as the test set, while the data of the remaining
subjects were exploited for model training. A comparison with widely used traditional machine
learning techniques was also performed, considering the K-nearest neighbour, logistic regression,
linear discriminant analysis and Support Vector Machine (SVM) classifiers. Close
performances were observed between the novel approach without transfer learning and the
traditional models, especially considering the information transfer rate on the BCI Competition
IV dataset 2b. Better average results were instead obtained when transfer learning was
applied: the proposed approach achieved a 0.83 Information Transfer Rate (ITR), while the best
result among the traditional techniques was obtained by the SVM, with an ITR value of 0.02. The
accuracy values were around 64% and 50% for the two strategies, respectively.
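The evaluation scheme just described (one subject's trials held out as the test set, the rest used for training) corresponds to a leave-one-subject-out split, which can be written with scikit-learn as below; the SVC is only a placeholder classifier.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def leave_one_subject_out(F, y, subjects):
    """F: (n_trials, n_features), y: labels, subjects: subject id of each trial."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(F, y, groups=subjects):
        clf = SVC().fit(F[train_idx], y[train_idx])          # placeholder classifier
        scores.append(clf.score(F[test_idx], y[test_idx]))   # accuracy on the held-out subject
    return float(np.mean(scores))
```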
Table 2
Time-line summary of the AI strategies applied to the BCI Competition IV dataset 2b. Best results are highlighted in bold.
 Paper   Year   Strategy                                                                      Cohen’s kappa   Accuracy
  [21]   2012   FBCSP + Naïve Bayesian Parzen Window classifier                                    NA          60.00%
  [25]   2016   Adaptive learning with covariate shift detection                                   NA          73.33%
  [33]   2019   CSP + knowledge transfer + CNN                                                     NA          64.00%
  [34]   2019   Feature selection through regularised neighbourhood component analysis             0.62        80.70%
  [30]   2020   CSP variation + ensemble support vector learning                                   0.71          NA
  [31]   2021   EEG-inception                                                                      NA          88.60%
  [35]   2022   Feature vector optimisation + SVM                                                  0.68        84.00%


   Instead, in the same year (2019), Malan & Sharma [34] proposed a paradigm shift, focusing on
a better definition of the feature vector to enhance the performance of an SVM classifier. After
extracting frequency-related and statistical features, the authors performed feature selection
with a proposed regularised neighbourhood component analysis, with the aim of reducing the
feature vector by selecting the features providing the maximum accuracy or the minimum
generalisation error.
The final results show that the proposed algorithm provided better average performance in
terms of accuracy (80.70%), kappa coefficient (0.62), precision (0.85), recall (0.79), specificity
(0.83) and F1-score (0.81), while using a lower number of features (6 out of 42), when compared
with feature selection techniques based on ReliefF, principal component analysis and a genetic
algorithm.
   A year later (2020), the ensemble support vector learning strategy of Luo et al. [30], described
in Section 4.1, provided a 0.71 average maximum kappa value on the analysed dataset.
Zhang et al. [31] (2021) also tested their EEG-inception model on both BCI Competition IV
datasets and obtained an 88.60% average accuracy on dataset 2b.
   Finally, Malan & Sharma [35] presented a new work in 2022, focusing again on the optimisation
of the feature vector to improve the performance of an SVM classifier. They proposed a
novel methodology consisting of (i) a dual-tree complex wavelet transform based filter bank, (ii)
CSP for spatial feature extraction from the previously obtained EEG sub-bands, and again (iii) a
regularised neighbourhood component analysis. Comparing their approach with other CSP
variations, they obtained better results in terms of accuracy (84%) and kappa coefficient (0.68).
   Table 2 summarises the time-line of the presented AI strategies and highlights (bold) the best
achieved results.

4.3. On the EEG Motor Movement/Imagery Dataset
The EEG Motor Movement/Imagery Dataset has not been used as much as the previous 2 datasets,
but it provides a good starting point for analyses on a larger pool of data, considering both
executed and imagined movements.
The first work reporting results on this dataset, by Sleight et al. (2009) [36], performed a partic-
ularly interesting analysis, since the authors' aim was not to discriminate different MI
tasks, but rather imagined from real movements. They first considered feature
extraction based on independent component analysis over different frequency bands, together
with channel selection, to provide a well characterised feature vector as input to an SVM model. After
performing different analyses, the best average accuracy (69%) was obtained with
a subject-based approach on data normalised per frequency band and without the use of
independent component analysis.
   A few years later, in 2013, Park et al. [37] proposed the augmented complex CSP to deal
with non-circular EEG signals. An SVM model (with a Gaussian kernel) was employed to
classify data from a synthetic dataset and from the one under scrutiny, considering left/right hand
MI. Among the 109 subjects, 56 achieved over 64% accuracy.
   In 2017, an LSTM model was proposed [38] for intent recognition, considering all the
experimental MI conditions of 10 subjects. Hyper-parameter selection through an orthogonal
array was also performed, obtaining a final average accuracy of 95.53%.
Another deep learning approach was chosen in 2018 by Dose et al. [39], who considered
different combinations of MI tasks for signal classification, i.e., left/right hand; left/right hand
plus an eyes-open baseline; and left/right hand, feet, plus an eyes-open baseline. The authors designed a
CNN architecture that could be complemented with a transfer learning strategy. The average
accuracy values obtained on the different task combinations without (with) the transfer learning
approach were 80.38% (86.49%), 69.82% (79.25%) and 58.58% (68.51%), respectively. A general
performance increase was detected when applying the transfer learning strategy.
   An ensemble learning approach for MI data classification on 10 subjects was instead chosen
by Zhang et al. [40] in 2018. First, the authors exploited a recurrent neural network and a
CNN model to learn features, and then an autoencoder for feature adaptation. Afterwards, they
applied eXtreme Gradient Boosting for intent recognition, obtaining an average accuracy of
95.53%.
   The prediction of different MI conditions (left/right hand, plus rest, plus feet) was also
addressed by Wang et al. [41] (2020), who exploited a model based on a pre-trained CNN,
EEG-Net, obtaining the following accuracy values: 82.43%, 75.07% and 65.07%, respectively.
   Finally, in 2021, Varsehi & Firoozabadi [42] focused on a channel selection method that
assumes causal interactions between channels. The authors proposed a novel channel selection
algorithm based on Granger causality analysis and, after different processing steps, classified
the resulting data using linear/quadratic discriminant analysis, kernel Fisher discriminant, SVM,
multi-layer perceptron, learning vector quantisation neural network, k-nearest neighbour and
decision tree classifiers. The best results obtained by the proposed strategy, using 8 channels
only, were 93.03% accuracy, 92.93% sensitivity, and 93.12% specificity.
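A rough sketch of how channels could be ranked by pairwise Granger causality with statsmodels is reported below, only to illustrate the idea behind [42]; the lag order, the significance threshold and the ranking rule are assumptions, not the authors' algorithm.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def rank_channels_by_gc(eeg, maxlag=5, alpha=0.05):
    """eeg: (n_channels, n_samples). Count how many other channels each channel Granger-causes."""
    n_ch = eeg.shape[0]
    counts = np.zeros(n_ch)
    for src in range(n_ch):
        for dst in range(n_ch):
            if src == dst:
                continue
            # Column order is [effect, cause]: the test asks whether the 2nd column
            # helps predict the 1st beyond its own past.
            data = np.column_stack([eeg[dst], eeg[src]])
            res = grangercausalitytests(data, maxlag=maxlag)
            p_value = res[maxlag][0]["ssr_ftest"][1]
            if p_value < alpha:
                counts[src] += 1
    return np.argsort(counts)[::-1]  # channel indices, from most to least "causal"
```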
   Table 3 summarises the time-line of the presented AI strategies and highlights (bold) the best
achieved results.

4.4. On the MI-OpenBCI dataset
Being very recent (2020), the MI-OpenBCI dataset has not yet been deeply analysed by the
research community.
Table 3
Time-line summary of the AI strategies applied to the EEG Motor Movement/Imagery Dataset. Best results are highlighted in bold.
 Paper   Year   Strategy                                                                                     Cohen’s kappa   Accuracy
  [36]   2009   Imagined vs real movements + independent component analysis + SVM                                 NA          69.00%
  [37]   2013   Augmented complex CSP + SVM (56 subjects)                                                          NA          64.00%
  [38]   2017   LSTM (all MI conditions) + hyperparameter selection through orthogonal array                       NA          95.53%
  [39]   2018   CNN (left/right hand MI) + transfer learning                                                       NA          86.49%
  [40]   2018   Recurrent neural network + CNN for feature extraction + eXtreme Gradient Boosting (all MI conditions)   NA     95.53%
  [41]   2020   EEG-Net (left/right hand MI)                                                                       NA          82.43%
  [42]   2021   Channel selection based on Granger causality                                                       NA          93.03%


   The dataset authors, Peterson et al. [18], proposed data denoising and feature extraction based
on CSP. The best results were achieved by the penalised time-frequency band CSP, with an
average accuracy of 83.30%. Notice that generalised sparse discriminant analysis was used for
both feature selection and classification, considering both online and offline scenarios.
The proposed methodology is therefore set as a benchmark for future works.
Notice that the works citing Peterson et al.'s paper have exploited it to make a point about the
necessity of moving to low-cost and consumer-grade technologies in order to provide more
ecological and easy-to-use BCIs.
For example, Koo et al. [43] cite it when observing that locked-in patients lack communication
tools based on BCI paradigms that are easy to maintain, low-cost and comfortable.
Moreover, Peterson et al.'s work is taken as an example of the use of open-source and
low-cost platforms [44, 45] like the OpenBCI one.


5. Discussion
Considering the general overview of the AI techniques applied to the analysed datasets, several
observations can be made with respect to the scope of the present paper.
   1. Initially, the BCI Competition IV datasets were considered as the sole test beds for
      novel MI detection techniques. Afterwards, researchers began to use them as terms of
      comparison for their proprietary data. The EEG Motor Movement/Imagery Dataset seemed
      to immediately take the role of a benchmark.
   2. The CSP and its variations have been widely used and remain among the most applied
      techniques for feature extraction, meant to provide a correct data characterisation as
      input to AI models.
   3. From an initial application of statistical measures or SVM classifiers, the AI techniques
      evolved following this pattern: (i) traditional machine learning approaches, (ii) introduc-
      tion of transfer or adaptive learning, (iii) focus on the feature vector and channel selection,
      (iv) deep learning architectures, especially based on CNN and LSTM, or ensemble tech-
      niques, and (v) exploitation of pre-trained networks.
   4. The necessity of introducing data augmentation techniques also emerged as the research
      conducted with deep learning approaches became more mature.
   5. Performance assessment has shifted from the use of the kappa coefficient alone to a
      marked tendency to assess system behaviour through accuracy only (see the short example
      after this list).
   6. New technologies and demands from the general public have called for a paradigm shift
      towards low-cost and easy-to-use devices, which should enable a rapid response from
      BCI systems and thus require lightweight computational techniques to deliver efficient
      and reliable feedback to the users.
   7. The acquisition of the MI-OpenBCI dataset has set a good standard for providing systems
      in line with the requirements reported in the previous point.
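As a small illustration of the difference between the two metrics mentioned in point 5, Cohen's kappa corrects for chance agreement while accuracy does not; a sketch with scikit-learn:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# A degenerate 4-class predictor that always outputs the majority class.
y_true = [0] * 70 + [1] * 10 + [2] * 10 + [3] * 10
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))     # 0.70, which looks acceptable
print(cohen_kappa_score(y_true, y_pred))  # 0.0, revealing no agreement beyond chance
```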
Therefore, the evolution of AI techniques is clearly detectable and has followed the advances
presented by the AI community. However, this evolution has also demanded changes in how
experimental paradigms and devices are designed. In particular, the need for more data to feed
the models has become especially pressing, as has the need for signals with a higher
signal-to-noise ratio.
Interestingly, a return to simpler and faster traditional machine learning models, in combination
with feature engineering strategies, may prove effective for new real-time and low-cost BCI
systems. In fact, rather than answering these demands from a purely computational perspective,
user needs will probably become the main focus when developing new technologies.


6. Conclusion
In this paper, a brief overall survey of the AI techniques applied to specific MI EEG-based BCI
datasets has been provided.
Starting from the development of the BCI2000 platform and the opening of the BCI Competitions,
the research community in the field has shown great interest in generating new approaches
that enable the correct response of such systems.
In the presented overview, the evolution of AI techniques and their influence on MI EEG-based
BCIs appear to have been twofold: the AI approaches have provided better discrimination
of the experimental conditions, and the devices used to collect data and control applications have
changed according to the needs of the general public.
However, many issues remain open, starting from the necessity of finding reliable metrics
for evaluating feedback procedures, based not only on accuracy but also assessing
the usability, comfort, reliability and reproducibility of the new consumer-grade technologies.


References
 [1] A. Singh, A. A. Hussain, S. Lal, H. W. Guesgen, A comprehensive review on critical issues
     and possible solutions of motor imagery based electroencephalography brain-computer
     interface, Sensors 21 (2021) 2173.
 [2] A. Craik, Y. He, J. L. Contreras-Vidal, Deep learning for electroencephalogram (EEG)
     classification tasks: a review, Journal of neural engineering 16 (2019) 031001.
 [3] S. Vaid, P. Singh, C. Kaur, EEG signal analysis for BCI interface: A review, in: 2015 fifth
     international conference on advanced computing & communication technologies, IEEE,
     2015, pp. 143–147.
 [4] S. C. Wriessnegger, C. Brunner, G. R. Müller-Putz, Frequency specific cortical dynamics
     during motor imagery are influenced by prior physical activity, Frontiers in psychology 9
     (2018) 1976.
 [5] M. Dai, D. Zheng, R. Na, S. Wang, S. Zhang, EEG classification of motor imagery using a
     novel deep learning framework, Sensors 19 (2019) 551.
 [6] V. Kohli, U. Tripathi, V. Chamola, B. K. Rout, S. S. Kanhere, A review on Virtual Reality and
     Augmented Reality use-cases of Brain Computer Interface based applications for smart
     cities, Microprocessors and Microsystems 88 (2022) 104392.
 [7] Z. Cao, A review of artificial intelligence for EEG-based brain-computer interfaces and
     applications, Brain Science Advances 6 (2020) 162–170.
 [8] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, J. R. Wolpaw, BCI2000: a
     general-purpose brain-computer interface (BCI) system, IEEE Transactions on biomedical
     engineering 51 (2004) 1034–1043.
 [9] B. Blankertz, K.-R. Muller, G. Curio, T. M. Vaughan, G. Schalk, J. R. Wolpaw, A. Schlogl,
     C. Neuper, G. Pfurtscheller, T. Hinterberger, et al., The BCI competition 2003: progress
     and perspectives in detection and discrimination of EEG single trials, IEEE transactions
     on biomedical engineering 51 (2004) 1044–1051.
[10] J. Kalcher, D. Flotzinger, C. Neuper, S. Gölly, G. Pfurtscheller, Graz brain-computer interface
     II: towards communication between humans and computers based on online classification
     of three different EEG patterns, Medical and biological engineering and computing 34
     (1996) 382–388.
[11] S. Lemm, C. Schafer, G. Curio, BCI competition 2003-data set III: probabilistic modeling
      of sensorimotor μ rhythms for classification of imaginary hand movements, IEEE
     Transactions on Biomedical Engineering 51 (2004) 1077–1080.
[12] C. Torrence, G. P. Compo, A practical guide to wavelet analysis, Bulletin of the American
     Meteorological society 79 (1998) 61–78.
[13] M. Tangermann, K.-R. Müller, A. Aertsen, N. Birbaumer, C. Braun, C. Brunner, R. Leeb,
     C. Mehring, K. J. Miller, G. Mueller-Putz, et al., Review of the BCI competition IV, Frontiers
     in neuroscience (2012) 55.
[14] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E.
     Mietus, G. B. Moody, C.-K. Peng, H. E. Stanley, PhysioBank, PhysioToolkit, and PhysioNet:
     components of a new research resource for complex physiologic signals, circulation 101
     (2000) e215–e220.
[15] J. J. Daly, J. E. Huggins, Brain-computer interface: current and emerging rehabilitation
     applications, Archives of physical medicine and rehabilitation 96 (2015) S1–S7.
[16] G. A. M. Vasiljevic, L. C. de Miranda, Brain–computer interface games based on consumer-
     grade EEG Devices: A systematic literature review, International Journal of Human–
     Computer Interaction 36 (2020) 105–142.
[17] J. Minguillon, M. A. Lopez-Gordo, F. Pelayo, Trends in EEG-BCI for daily-life: Requirements
     for artifact removal, Biomedical Signal Processing and Control 31 (2017) 407–418.
[18] V. Peterson, C. Galván, H. Hernández, R. Spies, A feasibility study of a complete low-cost
     consumer-grade brain-computer interface system, Heliyon 6 (2020) e03425.
[19] R. Leeb, F. Lee, C. Keinrath, R. Scherer, H. Bischof, G. Pfurtscheller, Brain–computer
     communication: motivation, aim, and impact of exploring a virtual apartment, IEEE
     Transactions on Neural Systems and Rehabilitation Engineering 15 (2007) 473–482.
[20] V. Peterson, C. Galván, H. Hernández, M. P. Saavedra, R. Spies, A motor imagery vs. rest
     dataset with low-cost consumer grade EEG hardware, Data in Brief 42 (2022) 108225.
[21] K. K. Ang, Z. Y. Chin, C. Wang, C. Guan, H. Zhang, Filter bank common spatial pattern
     algorithm on BCI competition IV datasets 2a and 2b, Frontiers in neuroscience 6 (2012) 39.
[22] K. K. Ang, Z. Y. Chin, H. Zhang, C. Guan, Filter bank common spatial pattern (FBCSP) in
     brain-computer interface, in: 2008 IEEE international joint conference on neural networks
     (IEEE world congress on computational intelligence), IEEE, 2008, pp. 2390–2397.
[23] L. F. Nicolas-Alonso, R. Corralejo, J. Gomez-Pilar, D. Álvarez, R. Hornero, Adaptive
     semi-supervised classification to reduce intersession non-stationarity in multiclass motor
     imagery-based brain–computer interfaces, Neurocomputing 159 (2015) 186–196.
[24] L. F. Nicolas-Alonso, R. Corralejo, J. Gomez-Pilar, D. Alvarez, R. Hornero, Adaptive
     stacked generalization for multiclass motor imagery-based brain computer interfaces,
     IEEE Transactions on Neural Systems and Rehabilitation Engineering 23 (2015) 702–712.
[25] H. Raza, H. Cecotti, Y. Li, G. Prasad, Adaptive learning with covariate shift-detection for
     motor imagery-based brain–computer interface, Soft Computing 20 (2016) 3085–3096.
[26] A. Jafarifarmand, M. A. Badamchizadeh, S. Khanmohammadi, M. A. Nazari, B. M.
     Tazehkand, A new self-regulated neuro-fuzzy framework for classification of EEG signals
     in motor imagery BCI, IEEE transactions on fuzzy systems 26 (2017) 1485–1497.
[27] B. E. Olivas-Padilla, M. I. Chacon-Murguia, Classification of multiple motor imagery using
     deep convolutional neural networks and spatial filters, Applied Soft Computing 75 (2019)
     461–472.
[28] I. Majidov, T. Whangbo, Efficient classification of motor imagery electroencephalography
     signals using deep learning methods, Sensors 19 (2019) 1736.
[29] R. Zhang, Q. Zong, L. Dou, X. Zhao, A novel hybrid deep learning scheme for four-class
     motor imagery classification, Journal of neural engineering 16 (2019) 066004.
[30] J. Luo, X. Gao, X. Zhu, B. Wang, N. Lu, J. Wang, Motor imagery EEG classification based
     on ensemble support vector learning, Computer methods and programs in biomedicine
     193 (2020) 105464.
[31] C. Zhang, Y.-K. Kim, A. Eskandarian, EEG-inception: an accurate and robust end-to-end
     neural network for EEG-based motor imagery classification, Journal of Neural Engineering
     18 (2021) 046014.
[32] Z. Khademi, F. Ebrahimi, H. M. Kordy, A transfer learning-based CNN and LSTM hybrid
     deep learning model to classify motor imagery EEG signals, Computers in Biology and
     Medicine 143 (2022) 105288.
[33] X. Zhu, P. Li, C. Li, D. Yao, R. Zhang, P. Xu, Separated channel convolutional neural network
     to realize the training free motor imagery BCI systems, Biomedical Signal Processing and
     Control 49 (2019) 396–403.
[34] N. S. Malan, S. Sharma, Feature selection using regularized neighbourhood component
     analysis to enhance the classification performance of motor imagery signals, Computers
     in biology and medicine 107 (2019) 118–126.
[35] N. Malan, S. Sharma, Motor imagery EEG spectral-spatial feature optimization using
     dual-tree complex wavelet and neighbourhood component analysis, IRBM 43 (2022)
     198–209.
[36] J. Sleight, P. Pillai, S. Mohan, Classification of executed and imagined motor movement
     EEG signals, Ann Arbor: University of Michigan 110 (2009).
[37] C. Park, C. C. Took, D. P. Mandic, Augmented complex common spatial patterns for
     classification of noncircular EEG from motor imagery tasks, IEEE Transactions on neural
     systems and rehabilitation engineering 22 (2013) 1–10.
[38] X. Zhang, L. Yao, C. Huang, Q. Z. Sheng, X. Wang, Intent recognition in smart living through
     deep recurrent neural networks, in: International conference on neural information
     processing, Springer, 2017, pp. 748–758.
[39] H. Dose, J. S. Møller, H. K. Iversen, S. Puthusserypady, An end-to-end deep learning
     approach to MI-EEG signal classification for BCIs, Expert Systems with Applications 114
     (2018) 532–542.
[40] X. Zhang, L. Yao, Q. Z. Sheng, S. S. Kanhere, T. Gu, D. Zhang, Converting your thoughts
     to texts: Enabling brain typing via deep feature learning of eeg signals, in: 2018 IEEE
     international conference on pervasive computing and communications (PerCom), IEEE,
     2018, pp. 1–10.
[41] X. Wang, M. Hersche, B. Tömekce, B. Kaya, M. Magno, L. Benini, An accurate eegnet-based
     motor-imagery brain–computer interface for low-power edge computing, in: 2020 IEEE
     international symposium on medical measurements and applications (MeMeA), IEEE, 2020,
     pp. 1–6.
[42] H. Varsehi, S. M. P. Firoozabadi, An EEG channel selection method for motor imagery based
     brain–computer interface and neurofeedback using Granger causality, Neural Networks
     133 (2021) 193–206.
[43] D. Koo, H. F. H. Polanco, M. Cross, Y. H. Rho, N. Ames, J. Raiti, Demonstration of low-cost
     EEG system providing on-demand communication for locked-in patients, in: 2021 IEEE
     Global Humanitarian Technology Conference (GHTC), IEEE, 2021, pp. 108–111.
[44] R. Butsiy, S. Lupenko, A. Zozulya, Comprehensive justification for the choice of software
     development tools and hardware components of a multi-channel neurointerface system,
     in: 2021 IEEE 16th International Conference on Computer Sciences and Information
     Technologies (CSIT), volume 1, IEEE, 2021, pp. 309–312.
[45] D. Zambrana-Vinaroz, J. M. Vicente-Samper, J. M. Sabater-Navarro, Validation of Con-
     tinuous Monitoring System for Epileptic Users in Outpatient Settings, Sensors 22 (2022)
     2900.