<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Comparison of EEG Data Processing Using Feedforward and Convolutional Neural Network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yu Xie</string-name>
          <email>yu.xie@inf.unideb.hu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Oniga</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tamás</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Informatics, University of Debrecen and the Faculty of Engineering, Technical University of Cluj-Napoca, North University Centre of Baia Mare</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of IT Systems and Networks, Faculty of Informatics, University of Debrecen</institution>
          ,
          <country country="HU">Hungary</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Proceedings of the 1</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Technical University of Cluj-Napoca, North University Centre of Baia Mare</institution>
        </aff>
      </contrib-group>
      <fpage>279</fpage>
      <lpage>289</lpage>
      <abstract>
        <p>EEG signals are the overall reflection of the physiological activities of brain nerve cells in the cerebral cortex and on the scalp. By classifying and processing EEG signals, it is possible to identify states that do not require conscious activity. This article processes the raw data and uses a multi-layer perceptron (MLP) neural network to determine whether the subject's eyes are open or closed, and compares the results with those of a convolutional neural network (CNN).</p>
      </abstract>
      <kwd-group>
        <kwd>EEG</kwd>
        <kwd>signal processing</kwd>
        <kwd>MLP neural network</kwd>
        <kwd>classification</kwd>
        <kwd>CNN</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Brain-Computer Interface (BCI) is a communication control system established
between the brain and external devices (computers or other electronic devices)
through signals generated during brain activity [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The aim of BCI is to create
a communication link between the human brain and the computer. It provides a
way to transform brainwaves into physical effects without using muscles [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In the
decades since the birth of BCI technology, research on electroencephalogram
(EEG) signal classification methods has always been the driving force behind the
continuous development of BCI technology. EEG is a non-invasive acquisition
method in the BCI system [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It detects weak EEG signals by placing electrodes
on the scalp and records changes in electrical signals during brain nerve activity.
However, since the EEG signal is greatly attenuated as it travels through the cerebral
cortex to the scalp, the signal-to-noise ratio of the extracted signal is extremely low,
which increases the difficulty of subsequent feature extraction and classification [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
It is difficult for traditional classification methods to find well-distinguished and
representative features with which to design a classification model with excellent
performance. In recent years, however, deep learning methods have achieved great
success in the fields of image and speech processing, thanks to their good generalization
capabilities and layer-by-layer automatic learning of data features [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>This study created a convolutional neural network that can recognize and
automatically extract the features of EEG signals, and compares its accuracy with
that of traditional methods of feature extraction and classification using data from
the same public database. We used PhysioNet EEG data for this project, which are
composed of over 1500 one- and two-minute EEG recordings obtained from 109 subjects.
The goal of our work is to explore Fast Fourier Transform (FFT) signal analysis
techniques for distinguishing between two states, eyes open (EO) and eyes closed (EC),
through the detection of EEG activity obtained from eight scalp channels.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <sec id="sec-2-1">
        <title>2.1. PhysioNet EEG Database</title>
        <p>
          The PhysioNet EEG data set is composed of over 1500 one- and two-minute EEG
recordings obtained from 109 subjects [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The subjects performed six different
motor/imagery tasks while the EEGs were recorded from 64 electrodes using the
BCI2000 system. Each volunteer completed 14 runs: two one-minute baseline
runs (one with EO, one with EC), and three two-minute runs of each of the four
following tasks:
• An object appears on either the left or the right side of the screen. The
volunteer opens and closes the corresponding fist until the object vanishes. Then
the volunteer relaxes.
• An object appears on either the left or the right side of the screen. The
volunteer imagines opening and closing the corresponding fist until the object
vanishes. Then the volunteer relaxes.
• An object appears on either the top or the bottom of the screen. The volunteer
opens and closes either both fists (if the target is on top) or both feet (if
the target is on the bottom) until the object vanishes. Then the volunteer
relaxes.
• An object appears on either the top or the bottom of the screen. The volunteer
imagines opening and closing either both fists (if the target is on top) or both
feet (if the target is on the bottom) until the object vanishes. Then the
volunteer relaxes. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]
        </p>
        <p>The 64-channel EEG was recorded (each channel sampled at 160 samples per second)
according to the international 10-10 system (excluding electrodes Nz, F9, F10, FT9, FT10,
A1, A2, TP9, TP10, P9, and P10), as shown in Figure 1.</p>
        <p>[Figure 1: 64-channel electrode positions according to the international 10-10 system.]</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Data acquisition</title>
        <p>We plan to use the trained neural network in the future to implement real-time
classification with our own device, an Ultracortex Mark IV biosensing headset
from OpenBCI. In this case, only eight channels were taken into
account: C3, C4, Fp1, Fp2, P7, P8, O1, and O2.</p>
        <p>The original scalp data from the eight channels are around two minutes long,
sampled at 160 samples per second. In order to increase the number
of samples, we cut the data into 4-second segments (640 points each).</p>
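        <p>As an illustrative sketch (in Python/NumPy; the processing in this paper was done in MATLAB), cutting a two-minute, 160-samples-per-second, eight-channel recording into 4-second segments could look like this:</p>

```python
import numpy as np

FS = 160          # sampling rate (Hz)
SEG_LEN = 4 * FS  # 4-second segments: 640 points each

def segment(data, seg_len=SEG_LEN):
    """Cut a (samples, channels) recording into (n_segments, seg_len, channels)."""
    n_segments = data.shape[0] // seg_len
    return data[:n_segments * seg_len].reshape(n_segments, seg_len, data.shape[1])

# two minutes of 8-channel data yields 30 segments of 640 points
recording = np.zeros((2 * 60 * FS, 8))
print(segment(recording).shape)  # (30, 640, 8)
```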
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Signal analysis by EEGLAB</title>
        <p>
          EEGLAB is an interactive MATLAB toolbox for processing continuous and
event-related EEG, MEG, and other electrophysiological signals [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
        <p>
          EEG signals can be analyzed in the frequency domain. We use 8-14 Hz for the
alpha band, 14-30 Hz for beta, 30-80 Hz for gamma, 1-4 Hz for delta, and 4-8 Hz for
the theta band, although the frequency ranges are slightly different in various articles [
          <xref ref-type="bibr" rid="ref11 ref4 ref6">4, 6, 11</xref>
          ].
The Power Spectral Density (PSD) of EC and EO for the first eight seconds of the
first volunteer is shown in Figure 2 and Figure 3 respectively. Each colored trace
represents the spectrum of the activity of one data channel. The leftmost scalp
map indicates the scalp distribution of power at 10.3 Hz, which in these data is
concentrated over the occipital and parietal midline regions respectively. The other
scalp maps show the distribution of power at 15.5 Hz and 20.6 Hz. We can easily
see that the alpha wave power is significantly higher in the EC state than in the
EO state, with obvious differences in the frontal, parietal and occipital regions.
        </p>
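        <p>As a minimal sketch of the band definitions above (Python/NumPy for illustration; the paper's spectra were produced with EEGLAB), the power in a named band can be estimated from the FFT of a signal window:</p>

```python
import numpy as np

FS = 160  # sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 30), "gamma": (30, 80)}

def band_power(x, band, fs=FS):
    """Mean squared spectral magnitude of x inside the band [lo, hi) Hz."""
    lo, hi = band
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = np.logical_and(freqs >= lo, hi > freqs)
    return power[mask].mean()

# a pure 10 Hz sine puts its power in the alpha band, mimicking the EC peak
t = np.arange(0, 4, 1.0 / FS)
x = np.sin(2 * np.pi * 10 * t)
print(band_power(x, BANDS["alpha"]) > band_power(x, BANDS["beta"]))  # True
```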
        <p>
          Figure 4 and Figure 5 respectively show the activity spectrum at the O2 position
in the EC and EO states. We can see a distinct peak at 8-14 Hz in Figure 4 compared
to Figure 5. Alpha waves appear when people are awake, quiet and have their eyes
closed. As soon as subjects open their eyes, think, or receive other stimuli, alpha
waves disappear and turn into fast waves. They reappear when the person becomes
quiet again and closes their eyes. This phenomenon is called “alpha blocking” [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. So
alpha waves are the main manifestation of the electrical activity of the cerebral cortex
in the awake, quiet, EC state.
        </p>
        <p>At the same time, alpha wave activity is unstable according to Figures 4
and 5. When analyzing the same EEG signal, the amplitude of the beta waves is
much smaller than that of the alpha waves. It may be more reasonable to use beta
waves than alpha waves as an analytical indicator for identifying the EO state.</p>
        <p>
          Most studies [
          <xref ref-type="bibr" rid="ref7 ref9">7, 9</xref>
          ] focus on the amplitude and power variation of a single wave type in
different states; they do not examine the correlation
and mutual comparison between the alpha and beta waves. Therefore, beta waves are
added as features for comparison in this paper.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Data processing by Matlab</title>
        <p>In our case the data processing includes data import, normalization, segmentation,
feature extraction, neural network creation, training and test steps.</p>
        <p>To facilitate the activity classification, the raw data was divided into small
segments (windows). The main challenge in this task is to find the proper
window size. In order to improve the accuracy, we applied different degrees of linear
enhancement according to the correlation of each channel. For the PSD
calculation, multiple FFT window sizes were tested. Windows were overlapping: at each
sample the window contained the current sample and the previous N-10 samples.
Before classification, 5-fold cross-validation was used to randomly shuffle and split the
training and testing data sets.</p>
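        <p>The overlapping windowing described above can be sketched as follows (Python/NumPy for illustration; the convention of one window per position, holding that sample and the preceding ones, is a paraphrase of the text):</p>

```python
import numpy as np

def sliding_windows(x, n):
    """One window per position: the current sample plus the n-1 previous ones,
    so neighboring windows overlap in n-1 samples."""
    return np.stack([x[i:i + n] for i in range(len(x) - n + 1)])

signal = np.arange(640.0)              # one 4-second, single-channel segment
windows = sliding_windows(signal, 40)  # 40-sample FFT windows
print(windows.shape)  # (601, 40)
```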
        <p>We tested and compared the performance of a feedforward MLP network and a
CNN in this step. In the case of the MLP, the training function was Levenberg-Marquardt
(trainlm). We achieved the best results with a network of two hidden layers
containing 12 and 7 neurons, using log-sigmoid transfer functions (as shown in Figure 6).
The output layer had only one neuron, as there are only two states (EO and EC) to be
recognized. On each layer the initial weights came from a normal (Gaussian) distribution.
In the training algorithm the epoch limit was 1000 cycles.</p>
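        <p>A forward pass through this architecture (two hidden layers of 12 and 7 neurons, log-sigmoid transfer functions, one output neuron) can be sketched as follows; this Python/NumPy version with random Gaussian initial weights only illustrates the shape of the network, not the MATLAB Levenberg-Marquardt training itself, and the feature-vector length is a hypothetical value:</p>

```python
import numpy as np

def logsig(z):
    """Log-sigmoid transfer function used on every layer."""
    return 1.0 / (1.0 + np.exp(-z))

def init_layer(n_in, n_out, rng):
    # initial weights drawn from a normal (Gaussian) distribution, as in the paper
    return rng.normal(size=(n_in, n_out)), np.zeros(n_out)

def mlp_forward(x, layers):
    for w, b in layers:
        x = logsig(x @ w + b)
    return x

rng = np.random.default_rng(0)
n_features = 16  # hypothetical feature-vector length (e.g. band powers per channel)
layers = [init_layer(n_features, 12, rng),  # first hidden layer
          init_layer(12, 7, rng),           # second hidden layer
          init_layer(7, 1, rng)]            # one output neuron: EO vs. EC
out = mlp_forward(np.ones((1, n_features)), layers)
print(out.shape)  # (1, 1); the output lies strictly between 0 and 1
```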
        <p>The classification-accuracy advantage of deep learning algorithms is usually
reflected only when the sample set is large enough, and the more complex
the network, the more parameters have to be trained and the more training samples are
needed. Therefore, high complexity of the network model should not be pursued
blindly when designing neural networks. The layers of the CNN used are shown in
Table 1. The raw data were converted to grayscale images, and these images were
used as inputs for the CNN. The image size was 64 × 1 × 8 (64 measurement points, 8
channels).</p>
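        <p>A sketch of this input preparation (Python/NumPy for illustration; the per-segment min-max scaling is an assumption, as the paper does not specify its grayscale mapping):</p>

```python
import numpy as np

def to_grayscale_images(segments):
    """Map (n, 64, 8) raw segments to (n, 64, 1, 8) images with values in [0, 1]."""
    lo = segments.min(axis=(1, 2), keepdims=True)
    hi = segments.max(axis=(1, 2), keepdims=True)
    scaled = (segments - lo) / (hi - lo)   # per-segment min-max normalization
    return scaled.reshape(segments.shape[0], 64, 1, 8)

rng = np.random.default_rng(0)
images = to_grayscale_images(rng.normal(size=(10, 64, 8)))
print(images.shape)  # (10, 64, 1, 8)
```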
        <p>In addition to the design of network structure, many hyperparameters still
need to be determined manually in the training process of a CNN classifier. We
created a set of options for training a network using stochastic gradient descent
with momentum.</p>
        <p>We used a mini-batch of 64 observations at each iteration. If the batch size
is set too small, it is difficult for the network to converge and it may under-fit.
If the batch size is set too large, the result is reduced efficiency or memory
overflow.</p>
        <p>The learning rate is a very important hyperparameter in network training. On the
one hand, if the learning rate is set too small, the error curve drops too slowly. On
the other hand, too large a learning rate leads to error explosion, and the network
cannot find the correct direction of gradient descent. We reduced the learning rate
by a factor of 0.1 every 10 epochs, with a 0.2 initial learning rate, and set the maximum
number of epochs for training to 50.</p>
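        <p>This piecewise-constant schedule (initial rate 0.2, multiplied by 0.1 every 10 epochs, 50 epochs in total) can be written out directly; a small Python sketch:</p>

```python
def learning_rate(epoch, initial=0.2, drop=0.1, period=10):
    """Multiply the rate by `drop` once every `period` epochs."""
    return initial * drop ** (epoch // period)

# epochs 0-9 train at 0.2, epochs 10-19 at 0.02, ..., epochs 40-49 at 0.00002
print(learning_rate(0), learning_rate(10), learning_rate(49))
```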
        <p>We set the L2 regularization factor to 0.0005. This ensures that the feature weights
do not grow too large and keeps them relatively even.</p>
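        <p>In an SGD-with-momentum update, the L2 penalty simply adds a term proportional to the weights to the loss gradient. A minimal Python sketch (the momentum coefficient is an assumed value, as the paper does not state it; MATLAB's training options handle this internally):</p>

```python
import numpy as np

LAMBDA = 0.0005   # L2 regularization factor used in the paper
MOMENTUM = 0.9    # momentum coefficient (assumed; not stated in the paper)

def sgd_momentum_step(w, grad, velocity, lr):
    """One update step; the L2 penalty contributes LAMBDA * w to the gradient."""
    g = grad + LAMBDA * w
    velocity = MOMENTUM * velocity - lr * g
    return w + velocity, velocity

w, v = np.ones(4), np.zeros(4)
w, v = sgd_momentum_step(w, grad=np.zeros(4), velocity=v, lr=0.2)
print(w)  # weights shrink slightly even with a zero loss gradient, due to the L2 term
```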
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Result</title>
      <p>All the following experimental results are from the same computer (CPU: Intel Core
i5-7300HQ, RAM: 16 GB, GPU: GeForce GTX 1050 Ti) with MATLAB R2020a.</p>
      <p>The classification model evaluation metric was accuracy, calculated in the following
way:</p>
      <p>Accuracy = (number of correct predictions) / (number of all predictions).</p>
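      <p>In code, the metric is simply the fraction of correct predictions, for example:</p>

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```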
      <p>To determine a proper window size for band power calculation, different window
sizes were used, and training and testing were repeated 1000 times to decrease statistical
uncertainty. The neural network used in this case was the two-hidden-layer MLP with 12 and 7
neurons in the hidden layers. The mean and best accuracy on test data using different
windows are summarized in Table 2.
For real-time data processing, we hope to find a window that is small enough.
We found a window size of 40 samples to be the best choice, as we are able to identify the
activities of the subjects from the EEG signal in approximately 0.25 seconds. This is
an acceptable latency and gives significantly better accuracy than the other windows.
Therefore, all of the following experiments are based on this window.</p>
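      <p>The quoted latency follows directly from the window size divided by the sampling rate:</p>

```python
FS = 160      # samples per second
WINDOW = 40   # chosen window size in samples
latency = WINDOW / FS
print(latency)  # 0.25 seconds of signal needed per decision
```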
      <p>In order to further improve the accuracy of the network, we enlarged the data
set: each segment overlaps the previous segment by 40 points.</p>
      <p>The best result obtained after enlargement was 96.63% for the test data. The
comparison of training time and mean accuracy is shown in Table 3. In Table 3,
the suffix A means that only alpha waves were considered as features, and the suffix
B that only beta waves were considered. The suffix A and B means that the feature
combined alpha waves and beta waves, and the suffix L
that the expanded data set was used. All the accuracy results are the average values
of 100 runs of 10-fold cross-validation.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this paper we proposed two artificial neural network approaches for EO and EC
task recognition from EEG data. We used data from 50 volunteers and tried to
recognize their activities with a feedforward MLP network and a CNN in combination
with different data processing approaches. Our results demonstrate that the
determination of the activity from the EEG signal is possible with high classification
accuracy.</p>
      <p>In the case of the MLP, the results obtained show that the accuracies obtained using
only alpha or only beta waves are not significantly different, but using the extended
data set makes a significant difference. So it would be more reasonable to use beta
waves than alpha waves as an analytical indicator for this purpose. This can
provide some reference for studying the EEG signal of the subject's eye state.</p>
      <p>We reached a higher accuracy rate and a shorter training time using the MLP
instead of the CNN for this purpose. Compared with the MLP, however, the CNN combines
signal preprocessing, feature extraction and classification. It avoids the blindness
and cumbersomeness of manual EEG signal processing, and it also achieves a good accuracy rate.
It is essential to produce a CNN architecture with highly robust performance
and a high accuracy rate for EEG signals.</p>
      <p>The data set in this project was subjected to only simple preprocessing (a bandpass
filter and normalization) without special processing for ECG, EMG
and noise. The data set itself has much room for improvement, which would also give the
accuracy of the model great potential to rise. Due to the limited
research time of the project, more details of parameter adjustment have not been
studied. However, the efficiency of deep neural network models such as CNNs
depends largely on parameter adjustment. Therefore, further improvement
of model performance requires further work on parameter adjustment.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Carpi</surname>
          </string-name>
          , D. De Rossi,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Menon: Non invasive brain-machine interfaces</article-title>
          ,
          <source>ESA Ariadna Study</source>
          <volume>5</volume>
          (
          <year>2006</year>
          ), p.
          <fpage>6402</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Delorme</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          <article-title>Makeig: EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis</article-title>
          ,
          <source>Journal of neuroscience methods 134</source>
          .1 (
          <issue>2004</issue>
          ), pp.
          <fpage>9</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Goldberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Amaral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Glass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hausdorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Ivanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mietus</surname>
          </string-name>
          , G. Moody, C. Peng, H. Stanley:
          <article-title>Components of a new research resource for complex physiologic signals</article-title>
          , PhysioBank, PhysioToolkit, and
          <string-name>
            <surname>Physionet</surname>
          </string-name>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I. S.</given-names>
            <surname>Isa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S.</given-names>
            <surname>Zainuddin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hussain</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. N.</surname>
          </string-name>
          <article-title>Sulaiman: Preliminary study on analyzing EEG alpha brainwave signal activities based on visual stimulation</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>42</volume>
          (
          <year>2014</year>
          ), pp.
          <fpage>85</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kawabata</surname>
          </string-name>
          :
          <article-title>Nonstationary power spectrum analysis of the photic alpha blocking</article-title>
          ,
          <source>Kybernetik</source>
          <volume>12</volume>
          .1 (
          <issue>1972</issue>
          ), pp.
          <fpage>40</fpage>
          -
          <lpage>44</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>D. Kučikiene˙</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Praninskiene</surname>
          </string-name>
          <article-title>˙: The impact of music on the bioelectrical oscillations of the brain</article-title>
          ,
          <source>Acta Medica Lituanica 25.2</source>
          (
          <issue>2018</issue>
          ), p.
          <fpage>101</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Merzagora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bunce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Izzetoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Onaral</surname>
          </string-name>
          :
          <article-title>Wavelet analysis for EEG feature extraction in deception detection</article-title>
          ,
          <source>in: 2006 International Conference of the IEEE Engineering in Medicine and Biology Society</source>
          , IEEE,
          <year>2006</year>
          , pp.
          <fpage>2434</fpage>
          -
          <lpage>2437</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Schalk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>McFarland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hinterberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Birbaumer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Wolpaw</surname>
          </string-name>
          <article-title>: BCI2000: a general-purpose brain-computer interface (BCI) system</article-title>
          ,
          <source>IEEE Transactions on biomedical engineering 51</source>
          .6 (
          <issue>2004</issue>
          ), pp.
          <fpage>1034</fpage>
          -
          <lpage>1043</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G. B.</given-names>
            <surname>Seco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Gerhardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Biazotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Molan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Schönwald</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Rybarczyk-Filho</surname>
          </string-name>
          :
          <article-title>EEG alpha rhythm detection on a portable device</article-title>
          ,
          <source>Biomedical Signal Processing and Control</source>
          <volume>52</volume>
          (
          <year>2019</year>
          ), pp.
          <fpage>97</fpage>
          -
          <lpage>102</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Shih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Krusienski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Wolpaw</surname>
          </string-name>
          :
          <article-title>Brain-computer interfaces in medicine</article-title>
          ,
          <source>in: Mayo Clinic Proceedings</source>
          , vol.
          <volume>87</volume>
          ,
          <issue>3</issue>
          , Elsevier,
          <year>2012</year>
          , pp.
          <fpage>268</fpage>
          -
          <lpage>279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Suto</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          <article-title>Oniga: Music stimuli recognition in electroencephalogram signal</article-title>
          ,
          <source>Elektronika ir Elektrotechnika 24.4</source>
          (
          <issue>2018</issue>
          ), pp.
          <fpage>68</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <surname>F.</surname>
          </string-name>
          <article-title>Zhou: Rehabilitation Treatment of Motor Dysfunction Patients Based on Deep Learning Brain-Computer Interface Technology</article-title>
          ,
          <source>Frontiers in Neuroscience</source>
          <volume>14</volume>
          (
          <year>2020</year>
          ), p.
          <volume>1038</volume>
          ,
          <issue>issn</issue>
          :
          <fpage>1662</fpage>
          -
          <lpage>453X</lpage>
          , doi: 10.3389/fnins.
          <year>2020</year>
          .
          <volume>595084</volume>
          , url: https://www.frontiersin.org/article/10.3389/fnins.
          <year>2020</year>
          .
          <volume>595084</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.</surname>
          </string-name>
          <article-title>Oniga: A Review of Processing Methods and Classification Algorithm for EEG Signal</article-title>
          ,
          <source>Carpathian Journal of Electronic and Computer Engineering</source>
          <volume>13</volume>
          .3 (
          <issue>2020</issue>
          ), pp.
          <fpage>23</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>