    Acquisition, analysis and classification of EEG
               signals for control design

Paula Ivone Rodriguez, Jose Mejia, Boris Mederos, Nayeli Edith Moreno, and
                         Victor Manuel Mendoza

                     Universidad Autónoma de Ciudad Juárez
    Avenida Plutarco Elías Calles 1210 Fovissste Chamizal, 32310 Ciudad Juárez,
                                     Chihuahua
                                    www.uacj.mx



1    Abstract
In the design of brain-machine interfaces it is common to use motor imagery, the mental
simulation of a motor act: the signals emitted when the movement of different parts of the
body is imagined are acquired. In this paper we propose a machine learning algorithm for
the analysis of electroencephalographic (EEG) signals in order to detect body-movement
intention, combined with the signals emitted in a state of relaxation and in a state of
mathematical activity, which can be applied to a brain-computer interface (BCI). The
algorithm is based on recurrent neural networks (RNN) and can recognize four tasks that
can be used for the control of machinery. The proposed algorithm achieves an average
classification accuracy of 80.13%. The method can be used to translate the motor imagery,
relaxation and mathematical activity signals into a four-state control signal, for example
to control the directional movement of a drone.


2    Introduction
The Brain Computer Interfaces (BCIs) are used mostly to help people to restor-
ing some functions when they are severely disabled by a neuromuscular disorder,
BCIs also used in healthy people to help them improve their functions [2]. BCI
experiments based on electroencephalogram (EEG) have the advantage of being
no-invasive for the subject, besides having no environmental restrictions [5]. In
the case of motor images, brain signals are obtained in most cases, using EEG,
due to its ease of use and its high temporal resolution. EEG signals are obtained
from multiple channels that are placed on the scalp, which makes the signal
more accurate [3]. Recent studies have shown that EEG-based BCIs allow users
to control machines with multiple-state classification. In some studies the elec-
troencephalographic signals have been extracted using imagery motor left, right
hand, food or language [37, 43–46], when listening to English vowels a, i and u
[39], in state of relaxation, read, spell and math activity [40], imaging certain
actions without any physical action, imaging actions without physical movement
42     Rodriguez et al.

[36, 41]. Recently different classification methods have been used, among which
are, wavelet transformation [36], (ANN) Feed forward back-propagation neural
network design [37, 40], recurrent neural networks (RNN) deep neural network
(DNN) Adam back-propagation [38], deep recurrent convolutional neural net-
works (CNNs) [35, 41–45]. In this article we propose a new architecture based
on Long Short-Term Memory networks (LSTM), this type of recurrent network
is used to connect past information with current information, also it is capable
of storing a large quantity of information during long periods of time. [30]. For
this purpose, we will use signals obtained from the imagination of movement of
the left hand and left foot, state of relaxation and mathematical activity. Addi-
tionally, in order to make the system as simple as possible, the EEG signals are
extracted from a headset using four EEG channels.

3     Materials and methods
In this section we describe the materials and methods used for the acquisition of the EEG
signals, their analysis and their classification.

3.1   Experimental protocol
For the acquisition of EEG signals outside shielded laboratory settings we used easy-to-use
equipment with few electrodes. The Muse device (an EEG headband created by InteraXon),
shown in Figure 1, detects signals from the brain using EEG sensing circuitry. The
superficial EEG obtained with the headband is non-invasive, so it is harmless when acquiring
the electrical signals emitted by brain neurons, showing brain activity in real time [24].
The Muse device has four acquisition channels and an Android application. For this work,
160 recordings of 30 seconds each were acquired: 40 using motor imagery of the left-hand
movement, 40 using motor imagery of the left foot, 40 in a state of relaxation, and 40
during mathematical activity. The recordings were made in a silent environment without
external disturbances. In this experiment the EEG signal is segmented into window frames
of 3000 samples, equivalent to 13 seconds. The features are extracted for the four tasks
and for a single subject.
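As an illustration of the segmentation step described above, the following Python sketch
cuts one recorded channel into consecutive 3000-sample windows; the file name, the storage
format and the absence of window overlap are assumptions made only for illustration, since
the text does not specify them.

    import numpy as np

    def segment(signal, window=3000):
        # Split a 1-D EEG recording into consecutive, non-overlapping
        # windows of `window` samples; an incomplete tail is discarded.
        n = len(signal) // window
        return np.stack([signal[i * window:(i + 1) * window] for i in range(n)])

    # Hypothetical example: one 30-second recording stored as a text file.
    recording = np.loadtxt('left_hand_trial_01.txt')
    windows = segment(recording)    # shape: (number_of_windows, 3000)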




          Fig. 1. The MUSE device used for acquisition of the EEG [24].

3.2    Deep learning, CNN and LSTM


Deep neural networks contain layers of stacked neurons, using more hidden layers than
classic artificial neural networks. These additional, deeper layers improve the accuracy
of the network. They make it possible to extract features automatically, unlike most
learning algorithms, in which human intervention is required. Each layer is trained on the
output of the previous layer; as training progresses, more complex characteristics are
learned [50].
    Recurrent neural networks (RNN), illustrated in Figure 2, have the capacity to learn
characteristics of the data set through time thanks to their feedback connections [48].
An RNN uses recurrent connections to create loops among the neurons of the network, which
allows it to track temporal interactions in the incoming signal.
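As a minimal formal sketch of such a recurrent connection (the standard formulation, not
specific to this work), the hidden state at time t is computed from the current input and
the previous hidden state:

    h_t = f(W_x x_t + W_h h_{t-1} + b)

where x_t is the input at time t, h_{t-1} is the previous hidden state, W_x and W_h are
weight matrices, b is a bias vector and f is a non-linear activation function.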




                    Fig. 2. Example of a recurrent neural network.



    The processing of temporal information by an RNN is facilitated because the network
generates patterns that depend on the value of the previously presented pattern; that is,
the inclusion of recurrent connections produces a dynamic behavior in which the information
is updated over time [28]. Unlike feed-forward neural networks, RNNs have the ability to
process arbitrary sequences of inputs thanks to their internal memory. The LSTM is an RNN
that has the ability to learn from the signal by observing events over long periods of
time [50]. Unlike other neural networks, an LSTM connects previous information with the
current task and learns to use information stored over long periods of time. It is an
effective model for sequential data learning problems [27]. LSTMs are also used to capture
long-term temporal dependencies [49]. The architecture of an LSTM network is a chain of
memory cells; each cell maintains its state over time, and its non-linear gates regulate
the flow of information into and out of the cell [49].
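For reference, the gating mechanism of a standard LSTM cell, in a common formulation
without peephole connections (a variant of the cells analysed in [49]), can be written as:

    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)            (input gate)
    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)            (forget gate)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)            (output gate)
    \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)     (candidate cell state)
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t      (cell state update)
    h_t = o_t \odot \tanh(c_t)                           (hidden state)

where \sigma is the logistic sigmoid and \odot denotes the element-wise product; the gates
i_t, f_t and o_t regulate how much information enters, is retained in, and leaves the
memory cell c_t.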
    Convolutional neural networks (CNN) extract abstract features progressively by means of
convolution operations; convolutional models usually learn through the training of
successive layers, and in each layer the CNN extracts information or characteristics from
the input signal [51].
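As a brief formal note (a standard definition, not specific to this work), a
one-dimensional convolutional layer applied to the EEG signal x computes, for each learned
filter j of length K, a feature map of the form:

    y_j[n] = g\left( \sum_{k=0}^{K-1} w_j[k] \, x[n-k] + b_j \right)

where w_j are the filter weights, b_j is a bias and g is a non-linear activation; each
filter therefore responds to a particular local pattern in the signal.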

4     Signal acquisition and preprocessing

We used the four channels of the Muse device to acquire the EEG. We then select a window
of 3000 samples per signal, 160 signals in total: 40 for motor imagery of the left arm,
40 for motor imagery of the left foot, 40 in the relaxation state and 40 during
mathematical activity. Figure 3 shows an EEG signal obtained during this process [6].
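A possible way to assemble the data set just described is sketched below in Python; the
file names, the file format and the label ordering are assumptions made only for
illustration.

    import numpy as np

    tasks = ['left_arm', 'left_foot', 'relaxation', 'math']   # class index = position
    X, y = [], []
    for label, task in enumerate(tasks):
        for trial in range(40):                               # 40 recordings per task
            signal = np.loadtxt('%s_%02d.txt' % (task, trial))
            X.append(signal[:3000])                           # one 3000-sample window
            y.append(label)
    X = np.asarray(X)[..., np.newaxis]   # shape (160, 3000, 1): one EEG channel
    y = np.asarray(y)                    # integer labels 0-3, shape (160,)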




                       Fig. 3. EEG signal during recording.




4.1   Signal Extraction

One healthy subject participated in the experiments. During the recordings the subject was
instructed not to make overt movements and to keep the hands relaxed. The motor imagery
tasks were performed with closed eyes. Each trial is 30 s long, and the subject performs
four tasks: relaxation, mathematical activity, imagining a left-arm movement and imagining
a left-foot movement.

 – Task 1 – Motor imagery of the left foot: the subject imagines the movement of
   the left foot for 30 s, without actually moving.
 – Task 2 – Motor imagery of the left hand: the subject imagines the movement of
   the left hand for 30 s, without actually moving.
 – Task 3 – Relaxation state: the subject does not perform any specific task, but
   is asked to relax as much as possible and to think of nothing in particular for
   30 s. This task is considered the baseline task for alpha-wave production and is
   used as a control measure of the EEG.
 – Task 4 – Mathematical activity: the subject thinks of mathematical operations
   for 30 s.



4.2    EEG Recording


EEG is recorded using four gold-plated cup bipolar electrodes placed at the AF7, AF8
(anterior frontal) and TP9, TP10 (temporoparietal) locations, as per the International
10-20 Electrode Placement System. Figure 4 shows the electrode placement locations. For
this experiment, sessions of EEG signal recordings were carried out over several days;
40 recordings were obtained for each task, and each recording had a duration of 30 seconds.
In total, 160 EEG signals were obtained from channels AF7, AF8, TP9 and TP10, sampled at
250 Hz. A healthy subject, 30 years old, free of disease or medication, participated in the
experiment; the subject avoided blinking and any other external physical movement. All the
information obtained from these electrodes was used in the classification. The accuracy
obtained during training is shown later in Figure 7.




                       Fig. 4. MUSE device configuration [24].

5    Proposed network architecture
In this research we propose an architecture that combines a CNN layer with an LSTM layer.
We use Python with the Keras library [23] to code the architecture and to process the
input data. The architecture, shown in Figure 5, consists of the following layers (a code
sketch is given after the list):

 – A convolutional layer with 10 filters of size 50.
 – An LSTM layer with 120 neurons.
 – Four dense layers of 150, 50, 14, and 4 neurons.
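The following Keras sketch reflects the layer sizes listed above; the activation functions,
the absence of pooling or dropout, and other hyper-parameters are not specified in the text
and are therefore assumptions.

    from keras.models import Sequential
    from keras.layers import Conv1D, LSTM, Dense

    model = Sequential()
    # Convolutional layer: 10 filters of length 50 applied to windows of
    # 3000 samples from one EEG channel.
    model.add(Conv1D(filters=10, kernel_size=50, activation='relu',
                     input_shape=(3000, 1)))
    # LSTM layer with 120 neurons summarising the sequence of features.
    model.add(LSTM(120))
    # Four dense layers of 150, 50, 14 and 4 neurons; the last layer
    # outputs the probabilities of the four classes.
    model.add(Dense(150, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(14, activation='relu'))
    model.add(Dense(4, activation='softmax'))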




                      Fig. 5. Proposed Network Architecture.


    The combination of CNN and LSTM architectures makes it possible to model temporal
information, and it has also been used in other works for speech recognition and signal
classification [51]. The CNN layer optimizes the extraction of characteristics from the
set of signals and obtains patterns of those characteristics for their classification,
while the LSTM uses the previous information, temporally maintaining the flow of
information into and out of the network; it has a so-called context layer that keeps a
copy of the hidden layer and thereby stores the state produced by the previous pattern
[49]. Together they allow a more efficient architecture.
    The optimization of the LSTM parameters is performed in a sequential manner. First we
feed the network with the EEG subdivided into windows of 3000 samples. Then we choose the
number of epochs and the best weight initialization on the training set. The optimized
parameter values for the LSTM are detailed in Figure 6.
    We used the categorical cross-entropy loss function, which training tries to minimize;
in Keras it can be specified by the string identifier of an existing loss function. When
using categorical cross-entropy, the targets should be in categorical (one-hot) format,
with a one at the index corresponding to the class of the sample.
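In Keras this corresponds to a sketch such as the following, where y is the vector of
integer labels from the earlier data-assembly sketch and model is the network of Section 5;
the choice of optimizer is an assumption.

    from keras.utils import to_categorical

    # One-hot targets: a one at the index corresponding to the class of each sample.
    y_onehot = to_categorical(y, num_classes=4)

    model.compile(loss='categorical_crossentropy',   # string identifier of the loss
                  optimizer='adam',                  # optimizer is an assumption
                  metrics=['accuracy'])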




                  Fig. 6. The optimized parameter values for LSTM.


    160 data samples are used in this experiment. The training and testing samples are
normalized using a categorical normalization algorithm, and the training and testing data
are selected randomly. The four-class classifier is trained with 90% of the data samples
and tested with the remaining 10%.
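A minimal sketch of this random split and of the training call is given below, assuming
that X, y, y_onehot and model are the objects from the previous sketches and that the
split is a simple random permutation; the 800 epochs correspond to the training reported
in the next section.

    import numpy as np

    idx = np.random.permutation(len(X))       # random ordering of the 160 windows
    train, test = idx[:144], idx[144:]        # 90% (144 windows) / 10% (16 windows)
    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]

    model.fit(X_train, y_onehot[train], epochs=800,
              validation_data=(X_test, y_onehot[test]))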


6     Results and Discussion
In this section we present the results obtained by the implemented classifier, as well as
its classification accuracy.

6.1    Classification performance of the Modeled Classifier
Figure 7 shows the accuracy of the proposed architecture during training on a single
subject. The classification of the motor imagery and mental-state signals for the four
classes was obtained from the 160 samples of one subject, and the network was trained for
800 epochs. The accuracy curve shows the performance of the CNN with LSTM over time. No
artifacts were removed from the EEG data, which demonstrates the robustness of the
algorithm.




    Fig. 7. Graph of the accuracy during training; only a 160-epoch interval is shown.




    Figure 8 shows the confusion matrix, computed on the 10% of the samples held out for
testing, on which a classification accuracy of 80% was obtained. The matrix shows a correct
classification of 100% for the imagination of the movement of the left foot and of 100%
for the imagination of the left hand. For the relaxation state and the mathematical
activity state, 67% of the samples were classified correctly; the classification of both
was impaired because the classifier confused one of the relaxation samples and one of the
mathematical activity samples with the movement of the left hand.

                         Fig. 8. Normalized confusion matrix.

    The accuracy is computed as:

Classification accuracy = correct predictions / total predictions = 12/15 = 80%
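The following sketch shows one possible way (an assumption, not the authors' published
code) to obtain this accuracy and the row-normalized confusion matrix of Figure 8 from the
held-out windows, reusing the model, X_test and y_test objects of the earlier sketches.

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # Predicted class = index of the largest softmax output for each test window.
    y_pred = model.predict(X_test).argmax(axis=1)
    accuracy = (y_pred == y_test).mean()          # correct predictions / total predictions
    cm = confusion_matrix(y_test, y_pred)
    cm_norm = cm / cm.sum(axis=1, keepdims=True)  # each row sums to 1, as in Fig. 8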


7      Conclusion

The performance of the classifier is acceptable with respect to the amount of data used:
a classification accuracy of 80% was obtained, although the algorithm still needs to be
improved. Such a system could be used for the restoration of movement and the
rehabilitation of people with paraplegia, and would allow other people to have direct
brain control of external devices in their daily life. The combination of a convolutional
network with an LSTM network obtained adequate results during feature extraction and
training over long periods of time. The network was able to distinguish between motor
imagery thoughts and two other states of brain activity. For future research we propose
to use signals acquired only with motor imagery in order to further test the classifier.


8   References

 1. R. Ron Angevin, «Retroalimentación en el entrenamiento de una interfaz cerebro
    computadora usando técnicas basadas en realidad virtual,» Tesis Doctoral, Uni-
    versidad de Malaga, p. 256, 2005.
 2. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin y B. He , «Quadcopter con-
    trol in three-dimensional space using a noninvasive motor imagery-based brain-
    computer interface,» Journal of neural engineering, vol. 10, pp. 1-16, 2013.
 3. A. K. Das, S. Surech y N. Sundararajan, «A Robust Interval Type-2 Fuzzy Infer-
    ence based BCI System,» IEEE, vol. 12, p. 6, 2016.
 4. A. K. Das, T. T. Leong, S. Surech y N. Sundararajan, «Meta-cognitive Inter-
    val Type-2 Fuzzy Controller for Quadcopter Flight Control—An EEG based Ap-
    proach,» IEEE, p. 7, 2016.

 5. B. Hyung Kim, M. Kim y S. Jo, «Quadcopter flight control using a low-cost hybrid
    interface with EEG-based classification and eye tracking,» Computers in Biology
    and Medicine, no 51, p. 10, 2014.
 6. X. Mao, M. Li, W. Li, L. Niu, B. Xian, M. Zeng y G. Chen, «Progress in EEG-Based
    Brain Robot Interaction Systems,» Computational Intelligence and Neuroscience,
    vol. 2017, p. 25, 2017.
 7. L. J. Gómez Figueroa, «Análisis de señales EEG para detección de eventos oculares,
    musculares y cognitivos,» Trabajo de fin de Máster, p. 121, 2016.
 8. F. Ramos-Arguelles, G. Morales, S. Egozcue, R. Pabón y M. Alonso, «Basic tech-
    niques of electroencephalography: principles and clinical applications,» Scielo An-
    alytics, vol. 32, p. 14, 2009.
 9. G. A. Addati y G. Perez Lance, «Introducción a los UAV’s, Drones o VANTs de
    uso civil,» Econstor, p. 12, 2014.
10. M. Gestal Pose, «Introducción a las Redes de Neuronas Artificiales,» p. 20.
11. G. Parra V, «Procesos Gaussianos MULTI-OUTPUT,» Departamento de Inge-
    niería Matemática , p. 2, 2016.
12. «Pilots Brain Controls Drone,» Professional Engineering, p. 2, March 2015.
13. Cochocki, A y Rolf Unbehauen. «Neural networks for optimization and signal pro-
    cessing» John Wiley and Sons, Inc., 1993.
14. Floreano, D., and Wood, R. J. (2015). Science, technology and the future of small
    autonomous drones. Nature, 521(7553), 460.
15. Lapedes, Alan, y Robert Farber. «Nonlinear signal processing using neural net-
    works: Prediction and system modelling.» 1987.
16. Hu, Yu Hen, y Jeng-Neng Hwang, eds. «Handbook of neural network signal pro-
    cessing.» (2002).
17. Yu, F. T., and Jutamulia, S. (1992). Optical signal processing, computing, and
    neural networks. John Wiley & Sons, Inc..
18. Guarnizo Lemus, C. (2008). Análisis de reducción de ruido en señales EEG orien-
    tado al reconocimiento de patrones. Tecno Lógicas, (21).
19. Zecua, E., Caballero, I., Martınez-Carranza, J., and Reyes, C. A. (2016). Clasifi-
    cación de estımulos visuales para control de drones.
20. Yu, Yipeng, et al. «FlyingBuddy2: a brain-controlled assistant for the handi-
    capped.» Ubicomp. 2012.
21. Hansen, John Paulin, et al. «The use of gaze to control drones.» Proceedings of the
    Symposium on Eye Tracking Research and Applications. ACM, 2014.
22. Khan, Muhammad Jawad, y Keum-Shik Hong. «Hybrid EEG–fNIRS-based eight-command
    decoding for BCI: application to quadcopter control.» Frontiers in Neurorobotics,
    vol. 11, 2017.
23. http://adventuresinmachinelearning.com/keras-lstm-tutorial/
24. http://developer.choosemuse.com/hardware-firmware/hardware-specifications
25. https://www.parrot.com/global/drones/parrot-bebop-2technicals
26. http://developer.parrot.com/docs/SDK3/
27. Gohritz, K. Knobloch, P.M. Vogt, C. Bonnemann, O. Aszmann. Potential Influ-
    ence of One-Handedness on Politics and Philosophy of the 20th Century. J.Hand
    Surg.Am.,2009, Vol. 34, 1161-1162.
28. Antona Cortés, C. (2017). Herramientas modernas en redes neuronales: la librería
    Keras Bachelor’s thesis.
29. Betancourt, O., Gustavo, A., Suárez, G., Franco, B., Fredy, J. (2004).
30. Zhang, S., Yang, Y., Xiao, J., Liu, X., Yang, Y., Xie, D., Zhuang, Y.
    (2018). Fusing Geometric Features for Skeleton-Based Action Recognition us-
    ing Multilayer LSTM Networks. IEEE Transactions on Multimedia, X(X), 1–1.
    https://doi.org/10.1109/TMM.2018.2802648

31. Ron Angevin, R., «Retroalimentación en el entrenamiento de una interfaz cere-
    bro computadora usando técnicas basadas en realidad virtual,» Tesis Doctoral,
    Universidad de Malaga, p. 256, 2005.
32. Bonet Cruz, I., Salazar Martínez, S., Rodríguez Abed, A., Grau Ábalo, R., and
    García Lorenzo, M. M. (2007). Redes neuronales recurrentes para el análisis de
    secuencias. Revista Cubana de Ciencias Informáticas, 1(4).
33. Kumar, P., Saini, R., Roy, P. P., Sahu, P. K., and Dogra, D. P. (2018). Envisioned
    speech recognition using EEG sensors.Personal and Ubiquitous Computing,22(1),
    185-199.
34. Mohamed, E. A., Yusoff, M. Z., Malik, A. S., Bahloul, M. R., Adam, D. M., and
    Adam, I. K. (2018). Comparison of EEG signal decomposition methods in classi-
    fication of motor-imagery BCI.Multimedia Tools and Applications, 1-23.
35. Rahma, O. N., Hendradi, R., and Ama, F. (2018). Classifying Imag-
    inary Hand Movement through Electroencephalograph Signal for Neuro-
    rehabilitation.Walailak Journal of Science and Technology (WJST),15(12).
36. Maksimenko, V. A., Pavlov, A., Runnova, A. E., Nedaivozov, V., Grubov, V.,
    Koronovslii, A., ... and Hramov, A. E. (2018). Nonlinear analysis of brain activity,
    associated with motor action and motor imaginary in untrained subjects.Nonlinear
    Dynamics, 1-15.
37. Szczuko, P., Lech, M., and Czyżewski, A. (2018). Comparison of Classification
    Methods for EEG Signals of Real and Imaginary Motion. In Advances in Feature
    Selection for Data and Pattern Recognition (pp. 227-239). Springer, Cham.
38. Tang, X., Yang, J., and Wan, H. (2018, April). A Hybrid SAE and CNN Classifier
    for Motor Imagery EEG Classification. In Computer Science On-line Conference
    (pp. 265-278). Springer, Cham.
39. Moinnereau, M. A., Brienne, T., Brodeur, S., Rouat, J., Whittingstall, K., and
    Plourde, E. (2018). Classification of auditory stimuli from EEG signals with a
    regulated recurrent neural network reservoir. arXiv preprint arXiv:1804.10322.
40. Elakkiya, A., and Emayavaramban, G. (2018). Biometric Authentication System
    Using EEG Brain Signature.
41. Zhang, X., Yao, L., Wang, X., Zhang, W., Yang, Z., and Liu, Y. (2018). Know
    Your Mind: Adaptive Brain Signal Classification with Reinforced Attentive Con-
    volutional Neural Networks.arXiv preprint arXiv:1802.03996.
42. Jiao, Z., Gao, X., Wang, Y., Li, J., and Xu, H. (2018). Deep Convolutional Neural
    Networks for mental load classification based on EEG data.Pattern Recognition,76,
    582-595.
43. Ozmen, N. G., and Gumusel, L. (2013, July). Classification of real and imaginary
    hand movements for a bci design. InTelecommunications and Signal Processing
    (TSP), 2013 36th International Conference on(pp. 607-611). IEEE.
44. Leuthardt, E. C., Schalk, G., Wolpaw, J. R., Ojemann, J. G., and Moran, D.
    W. (2004). A brain–computer interface using electrocorticographic signals in hu-
    mans.Journal of neural engineering,1(2), 63.
45. Zhou, S. M., Gan, J. Q., and Sepulveda, F. (2008). Classifying mental tasks based
    on features of higher-order statistics from EEG signals in brain–computer interface.
    Information Sciences, 178(6), 1629-1640.
46. Pfurtscheller, G., Neuper, C., Schlogl, A., and Lugger, K. (1998). Separability of
    EEG signals recorded during right and left motor imagery using adaptive autore-
    gressive parameters.IEEE transactions on Rehabilitation Engineering,6(3), 316-
    325.

47. Forney, E. M., and Anderson, C. W. (2011, July). Classification of EEG during
    imagined mental tasks by forecasting with Elman recurrent neural networks. In
    Neural Networks (IJCNN), The 2011 International Joint Conference on (pp. 2749-
    2755). IEEE.
48. Hema, C. R., Paulraj, M. P., Yaacob, S., Adom, A. H., and Nagarajan, R. (2009,
    March). Single trial motor imagery classification for a four state brain machine
    interface. In Signal Processing and Its Applications, 2009. CSPA 2009. 5th Inter-
    national Colloquium on (pp. 39-41). IEEE.
49. Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., and Schmidhuber, J.
    (2017). LSTM: A search space odyssey. IEEE transactions on neural networks and
    learning systems, 28(10), 2222-2232.
50. Thomas, J., Maszczyk, T., Sinha, N., Kluge, T., and Dauwels, J. (2017, October).
    Deep learning-based classification for brain-computer interfaces. In Systems, Man,
    and Cybernetics (SMC), 2017 IEEE International Conference on (pp. 234-239).
    IEEE.
51. Ordóñez, F. J., Roggen, D. (2016). Deep convolutional and lstm recurrent neural
    networks for multimodal wearable activity recognition. Sensors, 16(1), 115.