Deep Learning for EEG-Based Motor Imagery Classification: Towards Enhanced Human-Machine Interaction and Assistive Robotics

Nejia Boutarfaia¹, Samuele Russo², Ahmed Tibermacine¹ and Imad Eddine Tibermacine³

¹ Department of Computer Science, University Mohamed Khider of Biskra, Algeria
² Department of Psychology, Sapienza University of Rome, Italy
³ Department of Computer, Control and Management Engineering, Sapienza University of Rome, Italy

Abstract
This study presents a comprehensive exploration of EEG-based motor imagery classification using advanced deep learning architectures. Focusing on six distinct motor imagery classes, we investigate the performance of convolutional neural networks (CNN), CNN with Long Short-Term Memory (CNN-LSTM), and CNN with Bidirectional LSTM (CNN-BILSTM) models. The CNN architecture excels with a remarkable accuracy of 99.86%, while the CNN-LSTM and CNN-BILSTM models achieve 98.39% and 99.27%, respectively, showcasing their effectiveness in decoding EEG signals associated with imagined movements. The results underscore the potential applications of this research in fields such as assistive robotics and automation, showcasing the ability to translate cognitive intent into robotic actions. This study offers valuable insights into the realm of deep learning for EEG analysis, setting the stage for advancements in brain-computer interfaces and human-machine interaction.

Keywords
Electroencephalogram (EEG), Deep Learning, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Bidirectional LSTM (BILSTM), Motor Imagery, Brain-Computer Interface (BCI)

SYSYEM 2023: 9th Scholar's Yearly Symposium of Technology, Engineering and Mathematics, Rome, December 3-6, 2023
nejia.boutarfaia@univ-biskra.dz (N. Boutarfaia); samuele.russo@uniroma1.it (S. Russo); ahmed.tibermacine@univ-biskra.dz (A. Tibermacine); tibermacine@diag.uniroma1.it (I. E. Tibermacine)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org, ISSN 1613-0073), pages 68–74

1. Introduction

Deep learning for EEG signal classification has found broad application in brain research. The advancement of brain-controlled robots capable of direct communication with humans is beneficial in several scenarios. A brain-computer interface (BCI) system is vital in offering an additional communication channel between the human brain and outside entities such as robots [1]. A BCI becomes a tool for determining the goals of people dealing with medical conditions by examining recorded brain signals and interpreting neural responses. Its main goal is to give these people the ability to carry out motor functions, helping them achieve a greater quality of life [2, 3, 4].

For individuals dealing with medical issues, it becomes a lifeline. A BCI is a computer-based communication system designed to analyze signals originating from the neural activity within the central nervous system [5, 6]. BCI systems are categorized into exogenous and endogenous types. Exogenous BCIs rely on external stimuli to evoke specific brain responses, with electroencephalogram (EEG) patterns such as the P300 and steady-state visual evoked potentials (SSVEP) being typical examples [7]. These systems are stable and require little training, but they depend on external cues and may cause user fatigue. In contrast, endogenous BCIs, also known as active BCIs, are based on self-regulation of brain rhythms, specifically motor imagery (MI), reflecting the user's autonomous intentions without external stimulation. MI induces event-related desynchronization/synchronization (ERD/ERS) in the brain's motor area, enabling direct mental control. Active BCIs do not rely on external stimuli and offer more direct user modulation, although they require sustained attention; they have gained interest for their potential in realizing genuine user-controlled interfaces [8, 9].

In neuroscience and brain-computer interface (BCI) research, motor imagery (MI) refers to the cognitive process in which humans mentally simulate or observe a motor activity, such as the movement of a limb or the execution of a particular task, without any accompanying physical motion [8]. Mentally rehearsing actions activates the same brain pathways as executing those movements. In BCIs, electroencephalogram (EEG) data are commonly used to collect and analyze brain activity related to motor imagery. By analyzing brain activity patterns during MI, researchers can decode the intended motor actions and convert them into control signals for external devices [10].

In recent years, deep learning techniques have redefined the landscape of MI classification, offering unprecedented capabilities in extracting complex patterns and representations from EEG data [11, 12]. Its end-to-end methodology sets deep learning apart, eliminating the need for manual feature extraction: the network autonomously learns its essential parameters and identifies valuable information within the data [13].

Various advanced deep learning techniques have been utilized to improve MI classification precision. As an example, the study in [14] explores a convolutional neural network (CNN) architecture with a single convolutional layer for the classification of MI tasks from EEG signals. The designed CNN model includes a convolutional layer, ReLU activation, and max-pooling; the output layer is configured with either 2 or 4 nodes, depending on the number of classes in the MI classification task. The work highlights the incorporation of data augmentation techniques and the use of common spatial patterns (CSP) for effective feature extraction, and the proposed approach demonstrates promising outcomes in both two-class and four-class MI classification scenarios.

In Ref. [15], the authors propose a new approach that combines the continuous wavelet transform (CWT) with a simplified convolutional neural network (SCNN) to enhance the recognition accuracy of MI EEG signals. The CWT maps MI-EEG signals into time-frequency image signals, which are then input into the SCNN for feature extraction and classification.

In addition, several studies have investigated other methods, such as Long Short-Term Memory (LSTM) networks, which have shown good results in motor imagery classification using EEG. In light of promising findings, one study [16] introduces an EEG classification framework for motor imagery tasks in BCI systems. The framework leverages LSTM networks, incorporates a one-dimensional aggregate approximation (1d-AX) for signal representation, and employs a channel weighting technique inspired by common spatial patterns to boost effectiveness. In reference [17], a one-dimensional convolutional neural network (1D CNN) is combined with long short-term memory (LSTM) [18]. The suggested deep learning network improves classification accuracy by using the CNN and LSTM to extract temporal representations of MI tasks [19]. The preprocessing of the EEG data encompasses band-pass filtering and data augmentation using a sliding window. The CNN captures essential time-domain features, and the subsequent LSTM facilitates additional feature extraction, culminating in a robust classifier designed for four MI tasks [20].

Several studies also use BiLSTM networks. One such work [21] introduces a novel approach for decoding imagined finger motions using MI-EEG data. The approach effectively tackles noise challenges in small, noisy signals using Empirical Mode Decomposition (EMD) and a stacked BiLSTM architecture. The method demonstrates notable success, achieving an accuracy of 82.26% on a widely used dataset. The research offers an innovative decoding approach and effective noise reduction through EMD, explicitly enhancing the classification of MI-EEG signals associated with right-hand finger movements [22].

In summary, numerous advanced deep learning methodologies have been employed to enhance the accuracy of motor imagery classification within BCIs. This motivates us to explore innovative strategies for MI classification, contributing to the continuous progress in the domain. MI, a cognitive process involving the mental simulation of movements without physical execution, holds significance in BCI research. In this investigation, we focus on a specific subset of six classes from the EEG dataset, specifically addressing tasks associated with motor imagery actions. Our aim is to evaluate the suitability of these classes for efficient EEG-based classification, ultimately aiming to facilitate intuitive and precise control of robotic systems.
Dataset signals, which are then input into the SCNN for feature The dataset utilized in our study, attributed to Gerwin extraction and classification. Schalk and colleagues [23], is a pivotal asset in Brain- In addition, several research has investigated differ- Computer Interface (BCI) research. ent methods, such as Long Short-Term Memory (LSTM), Obtained from over 1500 EEG recordings with the par- which have shown good results in motor imagery clas- ticipation of 109 volunteers, the dataset offers a com- sification using EEG. In light of promising findings, one prehensive data collection. The experiments, facilitated study [16] introduces an EEG classification framework by the BCI2000 system, involve various motor/imagery for motor imagery tasks in BCI systems. The frame- tasks and baseline measurements. work leverages LSTM networks, incorporates a one- Electrode placement follows the international 10-10 dimensional aggregate approximation (1d-AX) for sig- system. At the same time, detailed information about nal representation, and employs a channel weighting the dataset is accessible through the original publication technique inspired by common spatial patterns to boost on PhysioBank. The participants completed a total of effectiveness. In reference [17], they combine a one- 14 experimental trials, as outlined in Figure 1, which dimensional convolutional neural network (1D CNN) provides a detailed description of the experiment. Each with long short-term memory (LSTM)[18]. The suggested trial comprised two one-minute initial sessions—one with deep learning network improves classification accuracy eyes open and another with eyes closed—and three two- by using CNN and LSTM to extract temporal represen- minute trials for each of the four specified tasks. tations of MI tasks successfully[19]. 
The preprocessing While the original dataset contains continuous multi- of EEG data encompasses band-pass filtering and data channel data with a substantial number of users in our augmentation using a sliding window. The CNN captures study, we concentrated on the EEG signals obtained from essential time domain features, and the subsequent LSTM a subset of seven subjects selected randomly. Specifically, facilitates additional feature extraction, culminating in a our focus was on tasks related to imagined movements, robust classifier designed for four MI tasks[20]. namely tasks 4, 6, 8, 10, 12, and 14. Tasks 4, 8, and 12 Many research utilize BiLSTM as an excellent case involve imagined movements associated with both the study. This research [21] introduces a novel approach right and left fists, as well as periods of relaxation. On the for decoding imagined finger motions using MI-EEG other hand, tasks 6, 10, and 14 involve imagined move- data. The approach effectively tackles noise challenges ments of both fists and both feet. in small, noisy signals using Empirical Mode Decompo- 69 Nejia Boutarfaia et al. CEUR Workshop Proceedings 68–74 Tanh function: 𝑒𝑥 − 𝑒−𝑥 Tanh(𝑥) = (2) 𝑒𝑥 + 𝑒−𝑥 ReLU function: ReLU(𝑥) = max(0, 𝑥) (3) In the design of a CNN, the final layers play a crucial role in handling classification tasks. These layers, known as fully connected layers, establish connections between every neuron within a layer and those from its preceding layer. The ultimate layer of fully connected layers serves as the output layer, functioning as the classifier in the CNN architecture. 2.3. Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM) LSTM designed to overcome the vanishing gradient prob- lem in traditional RNNs, introduces memory cells with gating mechanisms, including input, forget, and output gates, to control the flow of information. 
Figure 1: Overview of the 14-trial EEG experiment.

2.2. Convolutional Neural Networks (CNN)

CNNs excel at image classification because their stacked layers learn hierarchically organized features, and there is growing interest in applying them to non-image data such as time series. CNNs are well suited to extracting features and recognizing patterns from EEG data, since they can capture both the spatial layout of the electrodes and the temporal evolution of brain activity [24, 25]. Convolutional layers, pooling layers, activation functions, and fully connected layers are the main building blocks of a CNN. Convolutional layers apply convolutional kernels to produce output feature maps, while pooling layers subsample the feature maps while retaining their most essential characteristics. Activation functions such as Sigmoid, Tanh, and ReLU introduce non-linearity into the network, which is vital for mapping inputs to outputs correctly.

Sigmoid function:
    Sigmoid(x) = 1 / (1 + e^(-x))    (1)

Tanh function:
    Tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))    (2)

ReLU function:
    ReLU(x) = max(0, x)    (3)

In the design of a CNN, the final layers play a crucial role in handling the classification task. These fully connected layers connect every neuron within a layer to all neurons of the preceding layer; the last fully connected layer serves as the output layer, functioning as the classifier of the CNN architecture.

2.3. Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM)

The LSTM, designed to overcome the vanishing gradient problem of traditional RNNs, introduces memory cells with gating mechanisms (input, forget, and output gates) to control the flow of information. It comprises a cell state representing long-term memory and a hidden state representing short-term memory or output [26].

Figure 2: The architecture of an LSTM model [27].

Bidirectional Long Short-Term Memory (Bi-LSTM) is an extension of the traditional LSTM, a type of recurrent neural network (RNN). LSTMs are adept at capturing and retaining long-term dependencies in sequential data, making them suitable for applications such as natural language processing, time series prediction, and speech recognition [28]. The term "bidirectional" refers to the fact that a Bi-LSTM processes input sequences in both the forward and backward directions. This bidirectional processing helps the network capture information from both the past and the future of a given time step, allowing it to better understand the context and dependencies in the sequence. The Bi-LSTM architecture consists of two LSTM layers, one processing the input sequence in the forward direction and the other in the backward direction. The results from both directions are typically combined before being passed to the next layer or used for the final prediction [28, 30].

Figure 3: Bidirectional LSTM model showing the input and output layers. The red arrows represent the backward sequence track and the green arrows the forward sequence track [29].

2.4. Proposed Architecture

The dataset is subjected to a preprocessing phase that includes an 8–30 Hz band-pass filter and a notch filter, followed by resampling at a frequency of 125 Hz. This essential preprocessing step refines the raw EEG signals, effectively eliminating undesirable frequency components and ensuring the data's suitability for further analysis. The 8–30 Hz filter concentrates on the frequency bands most relevant to motor-related neural activity, while the notch filter removes specific unwanted frequencies, such as those associated with power line interference.

The CNN architecture, comprising Conv1D, Batch Normalization, MaxPooling1D, Dropout, Flatten, and Dense layers, demonstrates effectiveness in classifying the EEG data across 6 classes. The CNN with Long Short-Term Memory (CNN-LSTM) model integrates CNN and LSTM layers to capture spatial and temporal features; this architecture includes CNN, Batch Normalization, MaxPooling1D, Dropout, LSTM, Flatten, and Dense layers, and achieves commendable accuracy by combining spatial and temporal aspects. The CNN with Bidirectional LSTM (CNN-BiLSTM) architecture enhances EEG classification by combining CNN and Bidirectional LSTM networks; the model incorporates CNN, Batch Normalization, MaxPooling1D, Dropout, Bidirectional LSTM, Dropout, Flatten, and Dense layers. This hybrid approach synergizes the strengths of the CNN for spatial features and the Bidirectional LSTM for temporal dependencies, resulting in a robust classification model.

Five-fold stratified cross-validation is implemented using the StratifiedKFold function from scikit-learn. The dataset is divided into training and testing sets for each fold, and each model is compiled with categorical cross-entropy loss, the Adam optimizer, and accuracy as the metric. Training spans 100 epochs with a batch size of 32, facilitating a thorough assessment of each model's performance across diverse data subsets.
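To make equations (1)–(3) and the Conv1D/MaxPooling1D building blocks concrete, the following NumPy sketch implements them directly on a single-channel signal. It is didactic only: the models themselves use standard deep-learning layers, and the "convolution" follows the usual deep-learning convention of an un-flipped kernel (cross-correlation).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # eq. (1)

def tanh(x):
    return np.tanh(x)                 # eq. (2)

def relu(x):
    return np.maximum(0.0, x)         # eq. (3)

def conv1d(signal, kernel):
    """Valid 1-D cross-correlation of a single-channel signal with one kernel."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

def max_pool1d(x, pool=2):
    """Non-overlapping max pooling, dropping any incomplete tail."""
    n = len(x) // pool
    return x[:n * pool].reshape(n, pool).max(axis=1)

# Tiny forward pass: convolve, apply ReLU non-linearity, then pool.
x = np.array([1.0, -2.0, 3.0, 0.5, -1.0, 2.0])
feat = max_pool1d(relu(conv1d(x, np.array([1.0, -1.0]))))
```

A real Conv1D layer applies many such kernels across many channels and learns their weights by backpropagation; the composition shown here (convolution, non-linearity, pooling) is exactly the pattern the CNN blocks above stack repeatedly.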
3. Results

To assess the performance of the three models, we employed key scoring metrics: Accuracy, Recall, Precision, and F1-Score. Each metric captures a different aspect of a model's behaviour, and together they provide a comprehensive evaluation of the models' effectiveness on the classification task:

    Precision = TP / (TP + FP)
    Recall = TP / (TP + FN)
    F1-score = 2 · (Precision · Recall) / (Precision + Recall)
    Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP are the true positives, FP the false positives, TN the true negatives, and FN the false negatives.

Table 1
Comparison of Architectures.

Architecture   Precision   Recall   F1     Accuracy
CNN            1.00        1.00     1.00   99.86%
CNN-LSTM       0.98        0.98     0.98   98.39%
CNN-BILSTM     1.00        0.99     0.99   99.27%

The CNN architecture showcased exceptional performance, achieving near-perfect precision, recall, and F1-Score and an overall accuracy of 99.86%, which underscores the model's effectiveness in accurately classifying the EEG data. The CNN-LSTM model, although slightly less accurate, still demonstrated commendable results, with precision, recall, and F1-Score values of 0.98 and an overall accuracy of 98.39%; this model effectively combines spatial and temporal features for EEG classification, emphasizing a balance between complexity and accuracy. The CNN-BILSTM architecture displayed a well-rounded performance, with precision, recall, and F1-Score reaching 1.00, 0.99, and 0.99, respectively. Combining a CNN for spatial features with a Bidirectional LSTM for temporal dependencies, this hybrid approach achieved an accuracy of 99.27%, highlighting its efficacy in accurate EEG data classification.

Figure 4: Confusion matrix for the proposed CNN model.
Figure 5: Confusion matrix for the proposed CNN-LSTM model.
Figure 6: Confusion matrix for the proposed CNN-BILSTM model.
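The four metrics above follow directly from the confusion counts. The minimal sketch below illustrates the computation for a binary (per-class) view; it is not the evaluation code used in the study, and the counts in the example are invented.

```python
# Scoring metrics computed from raw confusion counts (binary, per-class view).
def classification_metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Toy example: 90 true positives, 5 false positives,
# 95 true negatives, 10 false negatives.
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
```

For the six-class problem reported in Table 1, per-class precision/recall would be averaged (e.g. macro-averaged) across classes, while accuracy is computed over all predictions at once.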
4. Discussion

The evaluation results of the three architectures (CNN, CNN-LSTM, and CNN-BILSTM) shed light on their respective performances in classifying EEG data with 6 classes. The CNN model exhibits exceptional performance across all metrics: it achieves precision, recall, and an F1-score of 1.00 for most classes, highlighting its ability to classify each class accurately, and its overall accuracy of 99.86% underscores its effectiveness in capturing intricate patterns within the EEG data. The precision-recall curves for each class demonstrate the model's robustness and reliability. The CNN-LSTM model, incorporating both convolutional and long short-term memory layers, displays a commendable performance but with a slight decrease in precision, recall, and F1-score compared to the pure CNN model, suggesting a potential trade-off between model complexity and overall performance; its accuracy of 98.39% indicates reliable classification, though lower than that of the CNN architecture. The CNN-BILSTM model, combining the strengths of the CNN and the Bidirectional LSTM, provides an excellent balance between precision, recall, and F1-score. With a precision of 1.00 for most classes and an accuracy of 99.27%, it demonstrates the model's ability to capture both spatial and temporal dependencies in the EEG data; the bidirectional processing contributes to understanding the context and dependencies in the sequence.
The outcomes obtained from the implemented models indicate their proficiency in recognizing patterns and extracting features from EEG data, resulting in successful classification. This underscores the appropriateness and effectiveness of the selected models for the specific EEG signal classification task. The models' capability to discern complex neural patterns contributes significantly to the overall success of the classification process, offering valuable implications for applications in neuroscientific research and brain-computer interface systems.

The focused analysis of a subset of six classes derived from motor imagery tasks opens up intriguing possibilities for the practical deployment of brain-computer interface (BCI) technologies in the field of robot navigation. Consider a scenario in which a user, equipped with an EEG-based BCI system, intends to control a robot's movements seamlessly. In this scenario, the selected six classes correspond to distinct motor imagery actions with direct relevance to robot navigation commands: closing and opening the left hand for turning left, closing and opening the right hand for turning right, and simultaneously closing and opening both fists or both feet for stopping and moving forward, respectively. This subset aligns with the intuitive, fundamental commands essential for controlling the robot's spatial movements.

Table 2
Mapping Between Movements and Commands for Robotic Control.

Real or Imagined Movement      Corresponding Command
Closing/opening left hand      Turning left
Closing/opening right hand     Turning right
Opening/closing both fists     Stopping
Opening/closing both feet      Going forward

As the user engages in motor imagery actions, the EEG signals associated with the specific classes are decoded in real time by the implemented classification models. The system translates these decoded signals into corresponding robot commands, enabling the user to navigate the robot effortlessly. For instance, by simply imagining the closure of the left hand, the robot seamlessly executes a left turn, offering an intuitive and natural interaction mechanism.

However, challenges and considerations arise in the implementation of such a scenario. Real-time processing, user adaptation to the BCI system, and robustness under various environmental conditions are factors that require careful attention. The user's cognitive load and comfort, along with the need for continuous improvement of the classification models, become focal points for refinement.

Despite these challenges, the envisioned scenario highlights the potentially transformative impact of motor imagery classification in robot navigation. The seamless fusion of cognitive intent and robotic action could revolutionize human-robot interaction, paving the way for intuitive and accessible control mechanisms in diverse applications, ranging from assistive robotics to smart home automation.

The scenario encourages future exploration into refining the proposed approach, addressing practical challenges, and expanding its applicability in real-world settings. This discussion signals a step toward unlocking the full potential of BCIs in enhancing the synergy between human cognition and robotic systems.

5. Conclusions

This study advances EEG-based motor imagery classification, evaluating CNN, CNN-LSTM, and CNN-BILSTM models on six selected classes. The results demonstrate exceptional accuracy (CNN: 99.86%, CNN-LSTM: 98.39%, CNN-BILSTM: 99.27%). The discussion introduces a compelling scenario envisioning the practical application of the six selected classes in robot navigation through brain-computer interface technology, exemplifying the potential real-world impact of motor imagery classification as a seamless link between cognitive intent and robotic actions. Despite these successes, challenges in real-time processing and model robustness persist, and the study encourages further refinement and attention to practical considerations for broader implementation. The findings contribute to the field, shaping the future of human-machine interaction, particularly in assistive robotics and intelligent automation.
References

[1] S. Pepe, S. Tedeschi, N. Brandizzi, S. Russo, L. Iocchi, C. Napoli, Human attention assessment using a machine learning approach with gan-based data augmentation technique trained using a custom dataset, OBM Neurobiology 6 (2022). doi:10.21926/obm.neurobiol.2204139.
[2] K. M. Hossain, M. A. Islam, S. Hossain, A. Nijholt, M. A. R. Ahad, Status of deep learning for eeg-based brain–computer interface applications, Frontiers in Computational Neuroscience 16 (2023) 1006763.
[3] A. Alfarano, G. De Magistris, L. Mongelli, S. Russo, J. Starczewski, C. Napoli, A novel convmixer transformer based architecture for violent behavior detection, 14126 LNAI (2023) 3–16. doi:10.1007/978-3-031-42508-0_1.
[4] G. De Magistris, M. Romano, J. Starczewski, C. Napoli, A novel dwt-based encoder for human pose estimation, volume 3360, 2022, pp. 33–40.
[5] E. H. Houssein, A. Hammad, A. A. Ali, Human emotion recognition from eeg-based brain–computer interface using machine learning: a comprehensive review, Neural Computing and Applications 34 (2022) 12527–12557.
[6] G. Borowik, M. Woźniak, A. Fornaia, R. Giunta, C. Napoli, G. Pappalardo, E. Tramontana, A software architecture assisting workflow executions on cloud resources, International Journal of Electronics and Telecommunications 61 (2015) 17–23. doi:10.1515/eletel-2015-0002.
[7] S. Falciglia, F. Betello, S. Russo, C. Napoli, Learning visual stimulus-evoked eeg manifold for neural image classification, Neurocomputing (2024) 127654.
[8] J. Zhang, M. Wang, A survey on robots controlled by motor imagery brain-computer interfaces, Cognitive Robotics 1 (2021) 12–24.
[9] V. Ponzi, S. Russo, A. Wajda, R. Brociek, C. Napoli, Analysis pre and post covid-19 pandemic rorschach test data of using em algorithms and gmm models, volume 3360, 2022, pp. 55–63.
[10] K. Zhang, G. Xu, Z. Han, K. Ma, X. Zheng, L. Chen, N. Duan, S. Zhang, Data augmentation for motor imagery signal classification based on a hybrid neural network, Sensors 20 (2020) 4485.
[11] L. Bozhkov, P. Georgieva, Deep learning models for brain machine interfaces, Annals of Mathematics and Artificial Intelligence 88 (2020) 1175–1190.
[12] G. De Magistris, R. Caprari, G. Castro, S. Russo, L. Iocchi, D. Nardi, C. Napoli, Vision-based holistic scene understanding for context-aware human-robot interaction, in: International Conference of the Italian Association for Artificial Intelligence, Springer, 2021, pp. 310–325.
[13] Z. Chang, C. Zhang, C. Li, Motor imagery eeg classification based on transfer learning and multi-scale convolution network, Micromachines 13 (2022) 927.
[14] N. Shajil, S. Mohan, P. Srinivasan, J. Arivudaiyanambi, A. Arasappan Murrugesan, Multiclass classification of spatially filtered motor imagery eeg signals using convolutional neural network for bci based applications, Journal of Medical and Biological Engineering 40 (2020) 663–672.
[15] F. Li, F. He, F. Wang, D. Zhang, Y. Xia, X. Li, A novel simplified convolutional neural network classification algorithm of motor imagery eeg signals based on deep learning, Applied Sciences 10 (2020) 1605.
[16] P. Wang, A. Jiang, X. Liu, J. Shang, L. Zhang, Lstm-based eeg classification in motor imagery tasks, IEEE Transactions on Neural Systems and Rehabilitation Engineering 26 (2018) 2086–2095.
[17] P. Lu, N. Gao, Z. Lu, J. Yang, O. Bai, Q. Li, Combined cnn and lstm for motor imagery classification, in: 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), IEEE, 2019, pp. 1–6.
[18] B. Nail, B. Djaidir, I. E. Tibermacine, C. Napoli, N. Haidour, R. Abdelaziz, Gas turbine vibration monitoring based on real data and neuro-fuzzy system, Diagnostyka 25 (2024).
[19] E. Iacobelli, V. Ponzi, S. Russo, C. Napoli, Eye-tracking system with low-end hardware: Development and evaluation, Information 14 (2023) 644.
[20] I. E. Tibermacine, A. Tibermacine, W. Guettala, C. Napoli, S. Russo, Enhancing sentiment analysis on seed-iv dataset with vision transformers: A comparative study, in: Proceedings of the 2023 11th International Conference on Information Technology: IoT and Smart City, 2023, pp. 238–246.
[21] T. Mwata-Velu, J. G. Avina-Cervantes, J. M. Cruz-Duarte, H. Rostro-Gonzalez, J. Ruiz-Pinales, Imaginary finger movements decoding using empirical mode decomposition and a stacked bilstm architecture, Mathematics 9 (2021) 3297.
[22] B. Nail, M. A. Atoussi, S. Saadi, I. E. Tibermacine, C. Napoli, Real-time synchronisation of multiple fractional-order chaotic systems: An application study in secure communication, Fractal and Fractional 8 (2024) 104.
[23] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, J. R. Wolpaw, EEG motor movement/imagery dataset, 2022. doi:10.18112/openneuro.ds004362.v1.0.0.
[24] Y. Roy, H. Banville, I. Albuquerque, A. Gramfort, T. H. Falk, J. Faubert, Deep learning-based electroencephalography analysis: a systematic review, Journal of Neural Engineering 16 (2019) 051001.
[25] J. León, J. J. Escobar, A. Ortiz, J. Ortega, J. González, P. Martín-Smith, J. Q. Gan, M. Damas, Deep learning for eeg-based motor imagery classification: Accuracy-cost trade-off, PLoS ONE 15 (2020) e0234178.
[26] B. Lindemann, B. Maschler, N. Sahlab, M. Weyrich, A survey on anomaly detection for technical systems using lstm networks, Computers in Industry 131 (2021) 103498.
[27] S. Yan, Understanding lstm and its diagrams, MLReview.com (2016).
[28] I. Ariza, A. M. Barbancho, L. J. Tardón, I. Barbancho, Energy-based features and bi-lstm neural network for eeg-based music and voice classification, Neural Computing and Applications 36 (2024) 791–802.
[29] I. K. Ihianle, A. O. Nwajana, S. H. Ebenuwa, R. I. Otuka, K. Owa, M. O. Orisatoki, A deep learning approach for human activities recognition from multimodal sensing devices, IEEE Access 8 (2020) 179028–179038.
[30] I. Ariza, L. J. Tardón, A. M. Barbancho, I. De-Torres, I. Barbancho, Bi-lstm neural network for eeg-based error detection in musicians' performance, Biomedical Signal Processing and Control 78 (2022) 103885.