<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Acquisition, analysis and classification of EEG signals for control design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Paula Ivone Rodriguez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jose Mejia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Boris Mederos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nayeli Edith Moreno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victor Manuel Mendoza</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universidad Autónoma de Ciudad Juárez Avenida Plutarco Elías Calles 1210 Fovissste Chamizal</institution>
          ,
          <addr-line>32310 Ciudad Juárez, Chihuahua</addr-line>
        </aff>
      </contrib-group>
      <fpage>41</fpage>
      <lpage>52</lpage>
      <abstract>
        <p>In the design of brain-machine interfaces it is common to use motor imagery, the mental simulation of a motor act: the signals emitted when imagining the movement of different parts of the body are acquired. In this paper we propose a machine learning algorithm for the analysis of electroencephalographic (EEG) signals in order to detect body-movement intention, combined with the signals emitted in a state of relaxation and in a state of mathematical activity, which can be applied to a brain-computer interface (BCI). The algorithm is based on recurrent neural networks (RNN) and can recognize four tasks that can be used for the control of machinery. The proposed algorithm achieves an average classification accuracy of 80.13%. This method can be used to translate the motor imagery signals, relaxation signals and mathematical-activity signals into a four-state control signal, for example to control the directional movement of a drone.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Brain-computer interfaces (BCIs) are mostly used to help people severely
disabled by a neuromuscular disorder to restore some functions; BCIs are also
used by healthy people to improve their performance [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. BCI
experiments based on the electroencephalogram (EEG) have the advantage of being
non-invasive for the subject, besides having no environmental restrictions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In
the case of motor imagery, brain signals are in most cases obtained using EEG,
due to its ease of use and its high temporal resolution. EEG signals are obtained
from multiple channels placed on the scalp, which makes the signal
more accurate [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Recent studies have shown that EEG-based BCIs allow users
to control machines with multiple-state classification. In some studies the
electroencephalographic signals have been extracted using motor imagery of the left
hand, right hand, foot or tongue [
        <xref ref-type="bibr" rid="ref37 ref43 ref44 ref45 ref46">37, 43–46</xref>
        ], when listening to the English vowels a, i and u
[
        <xref ref-type="bibr" rid="ref39">39</xref>
        ], in states of relaxation, reading, spelling and mathematical activity [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ], and when imagining certain actions without any physical movement
[
        <xref ref-type="bibr" rid="ref36 ref41">36, 41</xref>
        ]. Recently, different classification methods have been used, among which
are the wavelet transform [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ], feed-forward back-propagation artificial neural network (ANN)
designs [
        <xref ref-type="bibr" rid="ref37 ref40">37, 40</xref>
        ], recurrent neural networks (RNN) and deep neural networks
(DNN) with Adam back-propagation [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ], and deep recurrent convolutional neural
networks (CNNs) [
        <xref ref-type="bibr" rid="ref35 ref41 ref42 ref43 ref44 ref45">35, 41–45</xref>
        ]. In this article we propose a new architecture based
on Long Short-Term Memory (LSTM) networks. This type of recurrent network
is used to connect past information with current information, and it is capable
of storing a large quantity of information over long periods of time [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. For
this purpose, we use signals obtained from the imagination of movement of
the left hand and the left foot, a state of relaxation and mathematical activity.
Additionally, in order to make the system as simple as possible, the EEG signals are
extracted from a headset using four EEG channels.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Materials and methods</title>
      <p>In this section we describe the materials and methods used for the
acquisition of the EEG signals and for their analysis and classification.</p>
      <sec id="sec-2-1">
        <title>Experimental protocol</title>
        <p>
          For the acquisition of EEG signals outside shielded laboratory settings
we used easy-to-use equipment with few electrodes. The Muse (an EEG headband
created by InteraXon) is a device that detects signals from the brain using
EEG-sensing circuitry. The superficial EEG obtained with the headband is
non-invasive, so it is harmless when acquiring the electrical signals emitted by
brain neurons, showing brain activity in real time
[
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. The Muse device has four acquisition channels and an Android application.
For this work, 30-second recordings were acquired: 40 using motor imagery of
left-hand movement, 40 using motor imagery of the left foot, 40 in the state of
relaxation, and 40 during mathematical activity, for a total of 160
records. The recordings were made in a silent environment without external
disturbances. In this experiment the EEG signal is segmented into window frames
of 3000 samples, equivalent to 13 seconds. The features are extracted for the
four tasks and for a single subject.
        </p>
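        <p>The segmentation step described above can be sketched as follows. This is an illustration only, not the authors' code: the helper name and the 250 Hz sampling rate (reported later in the EEG recording section) are assumptions of the sketch.</p>
```python
import numpy as np

# Hypothetical illustration of the windowing step: each 30-second
# recording is cut into non-overlapping frames of 3000 samples.
def segment(recording, frame_len=3000):
    """Split a 1-D recording into non-overlapping windows of frame_len samples."""
    n_frames = len(recording) // frame_len
    return recording[:n_frames * frame_len].reshape(n_frames, frame_len)

# A 30 s recording at an assumed 250 Hz gives 7500 samples,
# i.e. two full 3000-sample windows (the remainder is discarded).
recording = np.random.default_rng(0).standard_normal(30 * 250)
windows = segment(recording)
print(windows.shape)  # prints: (2, 3000)
```
        <p>The same function applied to all 160 recordings would yield the window set fed to the classifier.</p>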
      </sec>
      <sec id="sec-2-2">
        <title>Deep learning, CNN and LSTM</title>
        <p>
          Deep neural networks contain layers of superimposed neurons, using more
hidden layers than classic artificial neural networks. These additional, deeper
layers improve the accuracy of the network. They make it possible to extract
the features automatically, unlike most learning algorithms, in which
human intervention is required. Each layer is trained on the output of the
previous layer; as training progresses, more complex features are learned [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ].
        </p>
        <p>
          Recurrent neural networks (RNN) have the capacity to learn characteristics
of the data set through time, due to their feedback connections [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ]. An RNN uses
the recurrent connections to create loops among the neurons of the network, which
allows tracking of temporal interactions in the incoming signal.
        </p>
        <p>
          The processing of temporal information in an RNN is facilitated
because the network generates patterns that behave according to the value of the
previously given pattern; that is, the inclusion of recurrent connections
generates a dynamic behavior in which the information is updated over time [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ].
Unlike feed-forward neural networks, RNNs have the ability to process
arbitrary sequences of inputs thanks to their internal memory. The LSTM is an RNN
that has the ability to learn from the signal by observing events over long
periods of time [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ]. The LSTM is a type of recurrent network that, unlike other neural
networks, connects previous information with the current task and learns to use
information stored over long periods of time. It is an effective model used
in sequential data learning problems [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. LSTMs are also used to capture
long-term temporal dependencies [
          <xref ref-type="bibr" rid="ref49">49</xref>
          ]. The architecture of an LSTM network is a
memory cell arranged in a chain sequence; the cell maintains its state over time,
and its non-linear gates regulate the flow of information into and out of the cell
[
          <xref ref-type="bibr" rid="ref49">49</xref>
          ].
        </p>
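        <p>The gating mechanism described above can be written, in the standard LSTM formulation (our notation, not taken from the cited works), as:</p>
```latex
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t = o_t \odot \tanh(c_t)
```
        <p>Here the input, forget and output gates regulate what enters, remains in, and leaves the cell state, which is what lets the cell retain information over long periods.</p>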
        <p>
          Convolutional neural networks (CNN) extract abstract features
progressively by means of convolution operations;
convolutional models usually learn through the training of their different layers. In each layer
the CNN extracts information or characteristics from the input signal [
          <xref ref-type="bibr" rid="ref51">51</xref>
          ].
        </p>
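        <p>A toy example of the convolutional feature extraction just described: sliding a filter over a signal responds strongly wherever the signal locally matches the filter's shape. The signal and filter here are hypothetical, purely for illustration.</p>
```python
import numpy as np

# A simple symmetric "bump" detector slid over a toy signal containing
# two bumps; the feature map peaks where the pattern occurs.
signal = np.array([0., 0., 1., 2., 1., 0., 0., 1., 2., 1., 0.])
kernel = np.array([1., 2., 1.])
# np.convolve flips the kernel, so pass it reversed to get cross-correlation.
feature_map = np.convolve(signal, kernel[::-1], mode="valid")
peak = int(np.argmax(feature_map))
print(peak)  # prints: 2  (the first bump starts at index 2)
```
        <p>A CNN layer learns many such filters from the data instead of using hand-designed ones.</p>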
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Signal acquisition and preprocessing</title>
      <p>
        We used the four channels of the Muse device to acquire the EEG. Then
we selected a window of 3000 samples per signal, 160 signals in total: 40 for
motor imagery of the left hand, 40 for motor imagery of the left foot, 40 in the
relaxation state and 40 during mathematical activity. Figure 6 shows an EEG
signal obtained during this process [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
One healthy subject participated in the experiments. During the recordings the
subject was instructed not to make overt movements and to keep the hands
relaxed. The motor imagery tasks were performed with closed eyes. Each trial is 30 s long,
and the subject performs four tasks: relaxation, mathematical activity, imagined
left-hand movement and imagined left-foot movement.
      </p>
      <p>– Task 1 – Motor imagery of the left foot: the subject imagines the movement of
the left foot for 30 s, without moving.
– Task 2 – Motor imagery of the left hand: the subject imagines the movement of
the left hand for 30 s, without moving.
– Task 3 – Relaxation state: the subject does not perform any specific task, but
is asked to relax as much as possible and to think of nothing in particular
for 30 s. This task is considered the baseline task for alpha-wave production
and is used as a control measure of the EEG.
– Task 4 – Mathematical activity: the subject thinks about mathematical
operations for 30 s.</p>
      <sec id="sec-3-1">
        <title>EEG Recording</title>
        <p>The EEG is recorded using four gold-plated cup bipolar electrodes placed at the
AF7, AF8, TP9 and TP10 locations as per the International 10-20 Electrode
Placement System. Figure 4 shows the electrode placement locations. For this
experiment, sessions of EEG signal recordings were carried out over several days;
40 recordings were obtained for each task, each with a duration of 30 seconds,
so 160 EEG signals were obtained from channels AF7, AF8, TP9 and TP10,
sampled at 250 Hz. A healthy subject, 30 years old, free of disease or
medication, participated; the subject avoided blinking the eyes and any other external
physical movement. All the information obtained from these electrodes was used
in the classification. See the graph of the accuracy during training.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Proposed network architecture</title>
      <p>
        In this research we propose an architecture combining a CNN layer with an LSTM
layer. We use Python with the Keras library [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] to code the architecture and process
the input data. The architecture, shown in figure 5, consists of:
– a convolutional layer with 10 filters of size 50;
– an LSTM layer with 120 neurons;
– four dense layers of 150, 50, 14, and 4 neurons.
      </p>
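      <p>As a rough sketch of how a 3000-sample window flows through these layers, the following pure-NumPy forward pass mirrors the listed architecture with random, untrained weights. It is not the authors' Keras code; all weight scales, activations and helper names are assumptions chosen only to show the shapes at each stage.</p>
```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda a: 1.0 / (1.0 + np.exp(-a))

def conv1d(x, kernels):
    """Valid 1-D convolution of a univariate signal with F kernels -> (T-K+1, F)."""
    return np.stack([np.convolve(x, k[::-1], mode="valid") for k in kernels], axis=1)

def lstm_last(x, Wx, Wh, b):
    """Run an LSTM over x of shape (T, D); return the final hidden state (H,)."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for xt in x:
        z = xt @ Wx + h @ Wh + b            # packed gates (4H,): i, f, g, o
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

def dense(x, W, b):
    return x @ W + b

x = rng.standard_normal(3000)                       # one 3000-sample EEG window
feats = np.tanh(conv1d(x, rng.standard_normal((10, 50)) * 0.1))   # (2951, 10)
h = lstm_last(feats,
              rng.standard_normal((10, 480)) * 0.1,  # 480 = 4 gates x 120 units
              rng.standard_normal((120, 480)) * 0.1,
              np.zeros(480))                         # (120,)
for n_in, n_out in [(120, 150), (150, 50), (50, 14)]:
    h = np.tanh(dense(h, rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)))
logits = dense(h, rng.standard_normal((14, 4)) * 0.1, np.zeros(4))
probs = np.exp(logits) / np.exp(logits).sum()        # softmax over the four tasks
print(probs.shape, round(float(probs.sum()), 6))     # prints: (4,) 1.0
```
      <p>The final softmax assigns one probability per task, matching the four-neuron output layer.</p>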
      <p>
        The CNN and LSTM architectures have allowed the modeling of temporal
information, and have also been used in other works for speech recognition and
signal classification [
        <xref ref-type="bibr" rid="ref51">51</xref>
        ]. While the CNN layer optimizes the extraction
of characteristics from the set of signals and obtains feature patterns
for their classification, the LSTM uses the previous information,
temporally maintaining the flow of information into and out of the network;
it has a so-called context layer, which keeps a copy of the hidden layer and thereby
stores the state of the previous pattern [
        <xref ref-type="bibr" rid="ref49">49</xref>
        ]. Together, both allow a more efficient
architecture.
      </p>
      <p>The optimization of the LSTM parameters is performed in a sequential manner. First
we feed the network with EEG signals subdivided into windows of 3000 samples. Later
we choose the number of epochs and the best weight initialization on the training
set. The optimized parameter values for the LSTM are detailed in 6.</p>
      <p>We used the categorical cross-entropy loss function, whose objective is to
minimize the classification loss; in Keras it can be specified by the string
identifier of an existing loss function. When using categorical cross-entropy,
the targets should be in categorical format, with a one at the index
corresponding to the class of the sample.</p>
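      <p>The categorical target format and the loss it pairs with can be illustrated as follows; the helper functions are our own NumPy sketch, not the Keras API.</p>
```python
import numpy as np

# One-hot ("categorical") targets: a 1 at the index of the sample's class.
def to_categorical(labels, num_classes):
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

# Categorical cross-entropy against predicted class probabilities.
def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

labels = np.array([0, 2, 3, 1])                    # one sample per task
targets = to_categorical(labels, 4)
perfect = categorical_crossentropy(targets, targets)          # ~0: certain, correct
uniform = categorical_crossentropy(targets, np.full((4, 4), 0.25))  # log(4)
print(targets[1], round(uniform, 4))
```
      <p>A perfectly confident correct prediction gives a loss near zero, while predicting a uniform distribution over the four tasks gives log 4.</p>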
      <p>160 data samples are used in this experiment. The training and testing
samples are normalized using a categorical normalization algorithm. The
training and testing data are selected randomly. All four classifiers are trained with
90% of the data samples and tested with the remaining 10%.</p>
    </sec>
    <sec id="sec-5">
      <title>Results and Discussion</title>
      <p>In this section we present the results obtained by the implemented classifier,
as well as its classification accuracy.</p>
      <sec id="sec-5-1">
        <title>Classification performance of the Modeled Classifier</title>
        <p>Figure 7 shows the accuracy of the proposed architecture for a single subject
during training. The classification of the motor imagery signals for the four
states is shown as obtained from the 160 samples of one subject; the network
was trained for 800 epochs. The accuracy curve shows the performance of the CNN
with the LSTM over time. No artifacts were removed from the EEG data, which
demonstrates the robustness of the algorithm.
The performance of the classifier is acceptable with respect to the
amount of data used: 80% accuracy was obtained in the classification, although
the algorithm still needs improvement. Such a system could be used for the restoration
of movement and the rehabilitation of people with paraplegia, and would
allow other people to have direct brain control of external devices in their daily
life. The combination of a convolutional network with an LSTM network
obtained adequate results during feature extraction and training over long periods
of time. The network was able to distinguish between imagined movements
and two different states of brain activity. For future research we propose to
test the classifier using signals acquired only with movement imagination.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. R. Ron Angevin, «
          <article-title>Retroalimentación en el entrenamiento de una interfaz cerebro computadora usando técnicas basadas en realidad virtual</article-title>
          ,» Tesis Doctoral, Universidad de Malaga, p.
          <fpage>256</fpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>LaFleur</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Cassady</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Doud</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Shades</surname>
            , E. Rogin and
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
          </string-name>
          , «
          <article-title>Quadcopter control in three-dimensional space using a noninvasive motor imagery-based braincomputer interface</article-title>
          ,
          <source>» Journal of neural engineering</source>
          , vol.
          <volume>10</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>A. K. Das</surname>
            ,
            <given-names>S. Surech y N.</given-names>
          </string-name>
          <string-name>
            <surname>Sundararajan</surname>
          </string-name>
          , «
          <article-title>A Robust Interval Type-2 Fuzzy Inference based BCI System,» IEEE</article-title>
          , vol.
          <volume>12</volume>
          , p.
          <fpage>6</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>A. K. Das</surname>
            ,
            <given-names>T. T.</given-names>
          </string-name>
          <string-name>
            <surname>Leong</surname>
          </string-name>
          , S. Surech and N. Sundararajan, «
          <article-title>Meta-cognitive Interval Type-2 Fuzzy Controller for Quadcopter Flight Control-An EEG based Approach</article-title>
          ,» IEEE, p.
          <fpage>7</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>B.</given-names>
            <surname>Hyung Kim</surname>
          </string-name>
          , M. Kim and S. Jo, «
          <article-title>Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking,» Computers in Biology and Medicine</article-title>
          , no
          <volume>51</volume>
          , p.
          <fpage>10</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>X.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xian</surname>
          </string-name>
          , M. Zeng and G. Chen, «
          <source>Progress in EEG-Based Brain Robot Interaction Systems,» Computational Intelligence and Neuroscience</source>
          , vol.
          <year>2017</year>
          , p.
          <fpage>25</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Gómez</surname>
          </string-name>
          <string-name>
            <surname>Figueroa</surname>
          </string-name>
          , «Análisis de señales EEG para detección de eventos oculares, musculares y cognitivos,» Trabajo de fin de Máster, p.
          <fpage>121</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>F.</given-names>
            <surname>Ramos-Arguelles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Morales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Egozcue</surname>
          </string-name>
          , R. Pabón y M. Alonso, «
          <article-title>Basic techniques of electroencephalography: principles and clinical applications,» Scielo Analytics</article-title>
          , vol.
          <volume>32</volume>
          , p.
          <fpage>14</fpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Addati y G. Perez</surname>
          </string-name>
          <string-name>
            <surname>Lance</surname>
          </string-name>
          , «
          <article-title>Introducción a los UAV's, Drones o VANTs de uso civil</article-title>
          ,» Econstor, p.
          <fpage>12</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>M. Gestal Pose</surname>
          </string-name>
          , «Introducción a las Redes de Neuronas Artificiales,» p.
          <fpage>20</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. G.
          <string-name>
            <surname>Parra</surname>
            <given-names>V</given-names>
          </string-name>
          , «
          <article-title>Procesos Gaussianos MULTI-</article-title>
          OUTPUT,» Departamento de Ingeniería Matemática , p.
          <fpage>2</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12. «Pilots Brain Controls Drone,» Professional Engineering, p.
          <fpage>2</fpage>
          ,
          March
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Cochocki</surname>
          </string-name>
          , A. and Rolf Unbehauen. «
          <article-title>Neural networks for optimization and signal processing</article-title>
          » John Wiley and Sons, Inc.,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Floreano</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wood</surname>
            ,
            <given-names>R. J.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Science, technology and the future of small autonomous drones</article-title>
          .
          <source>Nature</source>
          ,
          <volume>521</volume>
          (
          <issue>7553</issue>
          ),
          <fpage>460</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Lapedes</surname>
          </string-name>
          , Alan, and Robert Farber. «
          <article-title>Nonlinear signal processing using neural networks: Prediction and system modelling</article-title>
          .»
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Hu</surname>
          </string-name>
          , Yu Hen, and Jeng-Neng Hwang, eds. «
          <article-title>Handbook of neural network signal processing</article-title>
          .» (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>F. T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Jutamulia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>Optical signal processing, computing, and neural networks</article-title>
          . John Wiley &amp; Sons, Inc..
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>Guarnizo</given-names>
            <surname>Lemus</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          (
          <year>2008</year>
          ). Análisis de reducción de ruido en señales EEG orientado al reconocimiento de patrones.
          <source>Tecno Lógicas</source>
          , (
          <volume>21</volume>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Zecua</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caballero</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martınez-Carranza</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Reyes</surname>
            ,
            <given-names>C. A.</given-names>
          </string-name>
          (
          <year>2016</year>
          ). Clasificación de estımulos visuales para control de drones.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <surname>Yipeng</surname>
          </string-name>
          , et al. «
          <article-title>FlyingBuddy2: a brain-controlled assistant for the handicapped</article-title>
          .
          <source>» Ubicomp</source>
          .
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Hansen</surname>
          </string-name>
          , John Paulin, et al. «
          <article-title>”The use of gaze to control drones."</article-title>
          <source>Proceedings of the Symposium on Eye Tracking Research and Applications.» ACM</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Khan</surname>
          </string-name>
          , Muhammad Jawad, and Keum-Shik Hong.
          <article-title>«hybrid eeg-fnirs-Based eightcommand Decoding for Bci: application to Quadcopter control.» Frontiers in neurorobotics</article-title>
          , p.
          <volume>11</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>23. http://adventuresinmachinelearning.com/keras-lstm-tutorial/</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>24. http://developer.choosemuse.com/hardware-firmware/hardware-specifications</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>25. https://www.parrot.com/global/drones/parrot-bebop-2technicals</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>26. http://developer.parrot.com/docs/SDK3/</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Gohritz</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Knobloch</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          <string-name>
            <surname>Vogt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Bonnemann</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <string-name>
            <surname>Aszmann</surname>
          </string-name>
          .
          <article-title>Potential Influence of One-Handedness on Politics and Philosophy of the 20th Century</article-title>
          .
          <source>J.Hand Surg</source>
          .Am.,
          <year>2009</year>
          , Vol.
          <volume>34</volume>
          ,
          <fpage>1161</fpage>
          -
          <lpage>1162</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <given-names>Antona</given-names>
            <surname>Cortés</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Herramientas modernas en redes neuronales: la librería Keras Bachelor's thesis.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Betancourt</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gustavo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suárez</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franco</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fredy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhuang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Fusing Geometric Features for Skeleton-Based Action Recognition using Multilayer LSTM Networks</article-title>
          .
          <source>IEEE Transactions on Multimedia, X(X)</source>
          ,
          <volume>1</volume>
          -
          <fpage>1</fpage>
          . https://doi.org/10.1109/TMM.
          <year>2018</year>
          .2802648
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Ron Angevin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          .
          <article-title>Retroalimentación en el entrenamiento de una interfaz cerebro computadora usando técnicas basadas en realidad virtual</article-title>
          . Doctoral thesis, Universidad de Málaga, p.
          <fpage>256</fpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Bonet Cruz</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salazar Martínez</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodríguez Abed</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grau Ábalo</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>García Lorenzo</surname>
            ,
            <given-names>M. M.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Redes neuronales recurrentes para el análisis de secuencias</article-title>
          .
          <source>Revista Cubana de Ciencias Informáticas</source>
          ,
          <volume>1</volume>
          (
          <issue>4</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saini</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roy</surname>
            ,
            <given-names>P. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sahu</surname>
            ,
            <given-names>P. K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Dogra</surname>
            ,
            <given-names>D. P.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Envisioned speech recognition using EEG sensors</article-title>
          .
          <source>Personal and Ubiquitous Computing</source>
          ,
          <volume>22</volume>
          (
          <issue>1</issue>
          ),
          <fpage>185</fpage>
          -
          <lpage>199</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Mohamed</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yusoff</surname>
            ,
            <given-names>M. Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malik</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bahloul</surname>
            ,
            <given-names>M. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adam</surname>
            ,
            <given-names>D. M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Adam</surname>
            ,
            <given-names>I. K.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Comparison of EEG signal decomposition methods in classification of motor-imagery BCI</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Rahma</surname>
            ,
            <given-names>O. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hendradi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ama</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Classifying Imaginary Hand Movement through Electroencephalograph Signal for Neurorehabilitation</article-title>
          .
          <source>Walailak Journal of Science and Technology (WJST)</source>
          ,
          <volume>15</volume>
          (
          <issue>12</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Maksimenko</surname>
            ,
            <given-names>V. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pavlov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Runnova</surname>
            ,
            <given-names>A. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nedaivozov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grubov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koronovskii</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , ... and
          <string-name>
            <surname>Hramov</surname>
            ,
            <given-names>A. E.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Nonlinear analysis of brain activity, associated with motor action and motor imaginary in untrained subjects</article-title>
          .
          <source>Nonlinear Dynamics</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Szczuko</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lech</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Czyżewski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Comparison of Classification Methods for EEG Signals of Real and Imaginary Motion</article-title>
          .
          <source>In Advances in Feature Selection for Data and Pattern Recognition</source>
          (pp.
          <fpage>227</fpage>
          -
          <lpage>239</lpage>
          ). Springer, Cham.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2018</year>
          , April).
          <article-title>A Hybrid SAE and CNN Classifier for Motor Imagery EEG Classification</article-title>
          . In Computer Science On-line Conference
          (pp.
          <fpage>265</fpage>
          -
          <lpage>278</lpage>
          ). Springer, Cham.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Moinnereau</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brienne</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brodeur</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rouat</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Whittingstall</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Plourde</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Classification of auditory stimuli from EEG signals with a regulated recurrent neural network reservoir</article-title>
          . arXiv preprint arXiv:1804.10322.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Elakkiya</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Emayavaramban</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Biometric Authentication System Using EEG Brain Signature</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Know Your Mind: Adaptive Brain Signal Classification with Reinforced Attentive Convolutional Neural Networks</article-title>
          . arXiv preprint arXiv:1802.03996.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Jiao</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Deep Convolutional Neural Networks for mental load classification based on EEG data</article-title>
          .
          <source>Pattern Recognition</source>
          ,
          <volume>76</volume>
          ,
          <fpage>582</fpage>
          -
          <lpage>595</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <surname>Ozmen</surname>
            ,
            <given-names>N. G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gumusel</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2013</year>
          , July).
          <article-title>Classification of real and imaginary hand movements for a bci design</article-title>
          .
          <source>In Telecommunications and Signal Processing (TSP)</source>
          ,
          <year>2013</year>
          36th International Conference on (pp.
          <fpage>607</fpage>
          -
          <lpage>611</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <surname>Leuthardt</surname>
            ,
            <given-names>E. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schalk</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolpaw</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ojemann</surname>
            ,
            <given-names>J. G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Moran</surname>
            ,
            <given-names>D. W.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>A brain-computer interface using electrocorticographic signals in humans</article-title>
          .
          <source>Journal of neural engineering</source>
          ,
          <volume>1</volume>
          (
          <issue>2</issue>
          ),
          <fpage>63</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gan</surname>
            ,
            <given-names>J. Q.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Sepulveda</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Classifying mental tasks based on features of higher-order statistics from EEG signals in brain-computer interface</article-title>
          .
          <source>Information Sciences</source>
          ,
          <volume>178</volume>
          (
          <issue>6</issue>
          ),
          <fpage>1629</fpage>
          -
          <lpage>1640</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
          <string-name>
            <surname>Pfurtscheller</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neuper</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schlogl</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lugger</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>Separability of EEG signals recorded during right and left motor imagery using adaptive autoregressive parameters</article-title>
          .
          <source>IEEE transactions on Rehabilitation Engineering</source>
          ,
          <volume>6</volume>
          (
          <issue>3</issue>
          ),
          <fpage>316</fpage>
          -
          <lpage>325</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
          <string-name>
            <surname>Forney</surname>
            ,
            <given-names>E. M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>C. W.</given-names>
          </string-name>
          (
          <year>2011</year>
          , July).
          <article-title>Classification of EEG during imagined mental tasks by forecasting with Elman recurrent neural networks</article-title>
          .
          <source>In Neural Networks (IJCNN)</source>
          , The 2011 International Joint Conference on (pp.
          <fpage>2749</fpage>
          -
          <lpage>2755</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          48.
          <string-name>
            <surname>Hema</surname>
            ,
            <given-names>C. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paulraj</surname>
            ,
            <given-names>M. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yaacob</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adom</surname>
            ,
            <given-names>A. H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Nagarajan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2009</year>
          , March).
          <article-title>Single trial motor imagery classification for a four state brain machine interface</article-title>
          .
          <source>In Signal Processing and Its Applications</source>
          ,
          <year>2009</year>
          .
          <source>CSPA</source>
          <year>2009</year>
          . 5th International Colloquium on (pp.
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          49.
          <string-name>
            <surname>Greff</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Srivastava</surname>
            ,
            <given-names>R. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koutník</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Steunebrink</surname>
            ,
            <given-names>B. R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Schmidhuber</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>LSTM: A search space odyssey</article-title>
          .
          <source>IEEE transactions on neural networks and learning systems</source>
          ,
          <volume>28</volume>
          (
          <issue>10</issue>
          ),
          <fpage>2222</fpage>
          -
          <lpage>2232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          50.
          <string-name>
            <surname>Thomas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maszczyk</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sinha</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kluge</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Dauwels</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2017</year>
          , October).
          <article-title>Deep learning-based classification for brain-computer interfaces</article-title>
          .
          <source>In Systems, Man, and Cybernetics (SMC)</source>
          ,
          <year>2017</year>
          IEEE International Conference on (pp.
          <fpage>234</fpage>
          -
          <lpage>239</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          51.
          <string-name>
            <surname>Ordóñez</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roggen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition</article-title>
          .
          <source>Sensors</source>
          ,
          <volume>16</volume>
          (
          <issue>1</issue>
          ),
          <fpage>115</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>