<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Low-Cost EEG: Feature Engineering and Interpretability Insights</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tommaso Colafiglio</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Angela Lombardi</string-name>
          <email>angela.lombardi@poliba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Sorino</string-name>
          <email>paolo.sorino@poliba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Domenico Lofù</string-name>
          <email>domenico.lofu@poliba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Danilo Danese</string-name>
          <email>danilo.danese@poliba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fedelucio Narducci</string-name>
          <email>fedelucio.narducci@poliba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugenio Di Sciascio</string-name>
          <email>eugenio.disciascio@poliba.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tommaso Di Noia</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Emotion Recognition, Brain Computer Interface, Artificial Intelligence, BCI, XAI</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer, Control, and Management Engineering (DIAG), Sapienza University of Rome</institution>
          ,
          <addr-line>Rome, RM, 00185</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Electrical and Information Engineering</institution>
          ,
          <addr-line>Politecnico di Bari, Bari, BA, 70125</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>0</volume>
      <fpage>9</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>Emotion recognition from EEG signals is a core task in affective computing, with growing relevance for real-world applications. In this study, we analyze the NeuroSense Emotion Recognition Dataset, acquired with the Muse 2 brain-computer interface (BCI), a portable, low-cost EEG device with only four electrodes. We implement a complete pipeline that includes signal preprocessing, handcrafted feature extraction (spectral, entropy, autoregressive), and classification using machine learning models under a Leave-One-Subject-Out (LOSO) cross-validation scheme. The models achieve an average accuracy and F1-score of 70% across the four quadrants of Russell's emotional model. To improve transparency, we apply SHAP to evaluate feature importance across subjects and emotional states. The analysis reveals both shared and emotion-specific EEG markers but also highlights a high degree of inter-subject variability in SHAP values. These findings underscore the challenges of generalization in EEG-based emotion recognition and point to the need for adaptive and personalized approaches. This work contributes preliminary but actionable insights toward interpretable, lightweight, and user-aware emotion-recognition BCI systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Emotion Recognition</kwd>
        <kwd>Brain Computer Interface</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>BCI</kwd>
        <kwd>XAI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Emotion recognition from EEG signals is gaining momentum in affective computing, with applications
in adaptive interfaces, education, and healthcare [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Despite the success of high-density EEG datasets,
real-world usability remains limited by the complexity and cost of traditional acquisition systems [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>
        Recent studies have introduced advanced deep learning architectures [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], domain adaptation
strategies [6], and graph-based methods [7]. However, these often require dense montages and lack
transparency. In contrast, handcrafted features—especially in low-density setups—offer interpretability and
efficiency, yet their role in emotion recognition with portable EEG devices remains underexplored [8].
      </p>
      <p>Furthermore, the integration of eXplainable AI (XAI) is still in its infancy in EEG-based affective
systems. SHapley Additive Explanations (SHAP) [9] provide a principled method to assess feature
importance in black-box models. Applied to EEG, SHAP can illuminate the neurophysiological basis of
emotional states and support trust and personalization in BCI systems [10]. Still, little is known about
the consistency of SHAP-derived attributions across subjects, or whether shared EEG features underpin
different emotions.</p>
      <p>In this work, we analyze the NeuroSense dataset [11], collected with Muse 2, a low-cost four-electrode
EEG device. Our pipeline includes preprocessing, feature extraction, classification via
Leave-One-Subject-Out cross-validation, and SHAP-based interpretability analysis.</p>
      <p>We address the following research questions:
• RQ1: How do engineered features influence EEG-based emotion classification performance?
• RQ2: Are feature importance explanations consistent across subjects for the same emotion?
• RQ3: Do different emotions rely on shared or distinct EEG features?</p>
      <p>To this end, we propose and evaluate a complete pipeline that integrates handcrafted EEG feature
extraction, machine learning-based classification, and post-hoc explainability, applied to data collected
with a low-cost, four-channel BCI device.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>This study adopts a structured pipeline to evaluate EEG-based emotion recognition using interpretable
machine learning techniques. We used the NeuroSense Emotion Recognition Dataset [11], which
includes EEG signals from 30 participants recorded with the Muse 2 BCI, a low-cost device with four
electrodes (AF3, AF4, TP9, TP10). Each subject was exposed to 40 music video clips, carefully selected
to elicit different emotional responses. The stimuli were categorized according to Russell’s circumplex
model [12], which organizes emotions based on valence (positive/negative) and arousal (high/low
activation). EEG data were recorded during both a baseline phase (before stimulus presentation) and
an emotion induction phase, ensuring a structured analysis of emotional states. More details on the
protocol and characteristics of the dataset can be found in the original work [11]. This section outlines
the processing pipeline, including EEG preprocessing and feature extraction, classification strategies,
and model interpretability via SHAP.</p>
      <sec id="sec-2-1">
        <title>2.1. Preprocessing and feature engineering</title>
        <p>EEG data were preprocessed through bandpass filtering (1–40 Hz), normalization, detrending, and
artifact removal via the Artifact Subspace Reconstruction (ASR) method [13]. Signals were segmented
into 5-second epochs for both baseline and stimulus periods.</p>
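        <p>As an illustration of these steps, the following minimal sketch, written under stated assumptions (a
numpy/scipy stack and the nominal 256 Hz Muse 2 sampling rate), reproduces the band-pass filtering,
detrending, normalization, and 5-second epoching; the ASR stage relies on a dedicated implementation
and is only marked as a placeholder:</p>
        <preformat><![CDATA[
# Minimal preprocessing sketch (assumptions: numpy/scipy available,
# raw EEG shaped (n_channels, n_samples), 256 Hz sampling rate).
import numpy as np
from scipy.signal import butter, filtfilt, detrend

FS = 256  # assumed Muse 2 sampling rate

def preprocess(raw, fs=FS):
    """Return an array of 5-second epochs shaped (n_epochs, n_channels, win)."""
    # 1) Zero-phase band-pass filter, 1-40 Hz (4th-order Butterworth)
    b, a = butter(4, [1, 40], btype="bandpass", fs=fs)
    x = filtfilt(b, a, raw, axis=-1)
    # 2) Linear detrending and per-channel z-score normalization
    x = detrend(x, axis=-1)
    x = (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)
    # 3) ASR artifact removal would be applied here (dedicated library, not shown)
    # 4) Segment into non-overlapping 5-second epochs
    win = 5 * fs
    n_epochs = x.shape[-1] // win
    return x[:, : n_epochs * win].reshape(x.shape[0], n_epochs, win).swapaxes(0, 1)
]]></preformat>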
        <p>To enhance model performance in this low-density setup, we extracted handcrafted features using
TorchEEG (https://torcheeg.readthedocs.io/en/latest/), grouped into the following families (sketched
after the list):
• Frequency-domain features (e.g., PSD in the delta to gamma bands),
• Entropy-based features (e.g., approximate/sample entropy, fractal dimensions, Hurst exponent) [14],
• Autoregressive coefficients modeling temporal dependencies [15].</p>
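        <p>A hedged per-epoch sketch of these feature families, assuming scipy for the Welch PSD, antropy for
the entropy and fractal measures, and statsmodels for the autoregressive coefficients (the actual
TorchEEG transforms used in the study are not reproduced here):</p>
        <preformat><![CDATA[
# Illustrative feature extraction for one channel of one 5 s epoch.
import numpy as np
import antropy as ant
from scipy.signal import welch
from statsmodels.tsa.ar_model import AutoReg

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 40)}

def epoch_features(sig, fs=256, ar_order=4):
    feats = {}
    # Frequency-domain: band power integrated from the Welch PSD
    freqs, psd = welch(sig, fs=fs, nperseg=fs * 2)
    for name, (lo, hi) in BANDS.items():
        idx = (freqs >= lo) & (freqs < hi)
        feats[f"psd_{name}"] = np.trapz(psd[idx], freqs[idx])
    # Entropy-based and fractal descriptors
    feats["app_entropy"] = ant.app_entropy(sig)
    feats["sample_entropy"] = ant.sample_entropy(sig)
    feats["higuchi_fd"] = ant.higuchi_fd(sig)
    # Autoregressive coefficients modeling temporal dependencies
    ar_fit = AutoReg(sig, lags=ar_order).fit()
    for k, coef in enumerate(ar_fit.params[1:], start=1):  # skip intercept
        feats[f"ar_{k}"] = coef
    return feats
]]></preformat>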
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Classification strategy</title>
        <p>We employed a Leave-One-Subject-Out (LOSO) cross-validation scheme [16], which simulates
real-world deployment by testing generalization on unseen individuals [17]. For each iteration, models were
trained on data from 29 participants and tested on the remaining one, repeated over all subjects.</p>
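        <p>A minimal sketch of the LOSO loop, assuming scikit-learn and precomputed arrays X (features),
y (quadrant labels), and groups (subject identifiers):</p>
        <preformat><![CDATA[
# LOSO cross-validation with each subject as one group.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=groups):
    clf = SVC(kernel="rbf", C=1.0)            # one of the five tested models
    clf.fit(X[train_idx], y[train_idx])       # train on 29 subjects
    scores.append(clf.score(X[test_idx], y[test_idx]))  # test on the held-out one
print(f"LOSO mean accuracy: {np.mean(scores):.3f}")
]]></preformat>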
        <p>Five classifiers were tested: SVM, Ridge Classifier, Random Forest, Multi-Layer Perceptron (MLP),
and K-Nearest Neighbors (KNN). Hyperparameters for each model were optimized through grid search,
as detailed in Table 1.</p>
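        <p>A hedged sketch of the model pool and grid search; the grids below are placeholders, as the actual
search spaces are those listed in Table 1:</p>
        <preformat><![CDATA[
# Grid search over the five classifiers (placeholder grids; see Table 1).
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

models = {
    "SVM":   (SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}),
    "Ridge": (RidgeClassifier(), {"alpha": [0.1, 1.0, 10.0]}),
    "RF":    (RandomForestClassifier(), {"n_estimators": [100, 300]}),
    "MLP":   (MLPClassifier(max_iter=1000), {"hidden_layer_sizes": [(64,), (128, 64)]}),
    "KNN":   (KNeighborsClassifier(), {"n_neighbors": [3, 5, 11]}),
}
# Inner 3-fold CV on the training subjects of the current LOSO fold
best = {name: GridSearchCV(est, grid, scoring="f1_macro", cv=3).fit(X_train, y_train)
        for name, (est, grid) in models.items()}
]]></preformat>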
        <p>Model performance was assessed using accuracy, precision, recall, and F1-score, calculated
independently for each of the four emotional quadrants defined by the circumplex model. This process yielded
a total of 120 trained models (30 subjects × 4 emotions).</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Explainability with SHAP</title>
        <p>To interpret model decisions, we applied SHapley Additive Explanations (SHAP) [9], a post-hoc technique
based on cooperative game theory. SHAP values quantify each feature’s contribution to a model’s
prediction, enabling fine-grained attribution analyses [17]. Given a feature <italic>i</italic>, its Shapley value <inline-formula><tex-math><![CDATA[\phi_i]]></tex-math></inline-formula> is
defined as:</p>
        <disp-formula id="eq1">
          <label>(1)</label>
          <tex-math><![CDATA[
\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \,\bigl[ f(S \cup \{i\}) - f(S) \bigr]
]]></tex-math>
        </disp-formula>
        <p>where <italic>F</italic> is the set of all features and <italic>f</italic>(<italic>S</italic>) is the model output when only the features in <italic>S</italic> are considered.</p>
        <p>For each emotional quadrant, we selected the best-performing model based on mean accuracy across
subjects. These models were used to compute SHAP values for each test subject and for each trial. This
enabled the identification of both shared and emotion-specific feature relevance patterns.</p>
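        <p>A sketch of this attribution step with the shap library, assuming a fitted scikit-learn model
best_model for one quadrant and the held-out subject's feature matrix X_test:</p>
        <preformat><![CDATA[
# Per-trial SHAP values and the quadrant-level attribution profile.
import numpy as np
import shap

explainer = shap.Explainer(best_model.predict, X_train)  # model-agnostic explainer
shap_values = explainer(X_test)                          # one vector per trial

# Average |SHAP| per feature and rank the most influential ones
mean_abs = np.abs(shap_values.values).mean(axis=0)
ranking = np.argsort(mean_abs)[::-1]
top10 = [(feature_names[i], mean_abs[i]) for i in ranking[:10]]
]]></preformat>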
        <p>Moreover, to summarize feature importance, SHAP vectors were:
• averaged within each emotion to obtain quadrant-specific attribution profiles,
• ranked to identify the most influential features.</p>
        <sec id="sec-2-3-1">
          <title>2.3.1. Inter-subject similarity</title>
          <p>To evaluate consistency across individuals, Pearson correlation matrices were computed on SHAP
vectors for each emotion. The resulting correlations quantified the similarity in feature attribution
patterns among subjects and were summarized using boxplots to assess variability.</p>
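          <p>A minimal sketch of this similarity analysis, assuming shap_by_subject is the (n_subjects ×
n_features) matrix of per-subject mean |SHAP| vectors for one emotion:</p>
          <preformat><![CDATA[
# Pairwise Pearson correlations between subjects' SHAP attribution vectors.
import numpy as np

corr = np.corrcoef(shap_by_subject)      # (30, 30) Pearson correlation matrix
iu = np.triu_indices_from(corr, k=1)     # unique subject pairs only
pairwise = corr[iu]                      # values summarized as boxplots
print(f"median inter-subject r = {np.median(pairwise):.2f}")
]]></preformat>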
        </sec>
        <sec id="sec-2-3-2">
          <title>2.3.2. Cross-emotion comparison</title>
          <p>To identify common or emotion-specific features, SHAP value distributions were compared across
emotional quadrants. This analysis revealed whether the same features played similar roles in different
emotional states or contributed in a differentiated, emotion-dependent manner.</p>
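          <p>One simple way to operationalize such a comparison, sketched here under the assumption that
profiles maps each quadrant name to its mean |SHAP| vector, is the overlap of top-ranked features
between quadrants (a Jaccard overlap, named plainly as an illustrative choice rather than the study's
exact procedure):</p>
          <preformat><![CDATA[
# Top-k feature overlap (Jaccard) between emotional quadrants.
import numpy as np

def top_k(profile, k=10):
    return set(np.argsort(profile)[::-1][:k])

quadrants = list(profiles)
for i, a in enumerate(quadrants):
    for b in quadrants[i + 1:]:
        union = top_k(profiles[a]) | top_k(profiles[b])
        shared = top_k(profiles[a]) & top_k(profiles[b])
        print(f"{a} vs {b}: Jaccard@10 = {len(shared) / len(union):.2f}")
]]></preformat>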
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <p>This section presents the results obtained from the EEG-based emotion classification models and the
SHAP-based feature attribution analysis, structured according to the research questions outlined in the
Introduction.</p>
      <sec id="sec-3-1">
        <title>3.1. RQ1: How do engineered features influence EEG-based emotion classification performance?</title>
        <p>The impact of feature engineering on classification performance was evaluated by training multiple
machine learning models on the extracted feature set. Using LOSO, the best-performing classifier was
selected for each of Russell’s emotional quadrants based on accuracy, precision, recall, and F1-score. The
performance metrics of all models are reported in Table 2.</p>
        <p>The results show that SVM outperformed the other classifiers in three quadrants (Excited, Sad, Angry),
while Random Forest (RF) yielded the highest accuracy for the Relaxed state. The difference in model
performance suggests that linear separability in feature space is particularly relevant for these emotions,
while decision-tree-based models may better handle the variability present in low-arousal states.</p>
        <p>Compared to the previous study introducing the NeuroSense dataset [11], which reported an average
accuracy of 75% across the four emotional quadrants, the performance obtained in this work is slightly
lower. The previous study employed MiniRocket, an algorithm that applies random convolutional
kernels followed by global max pooling, generating a high-dimensional feature space that captures
complex temporal patterns in the EEG signals. The extracted features were then classified using SVM,
benefiting from the rich and diverse representations learned through the MiniRocket transformation.</p>
        <p>In contrast, this work relied on a feature engineering approach, extracting spectral and entropy-based
descriptors from the raw EEG signals. Although these features provided meaningful physiological
insights into EEG-based emotion classification, their performance was slightly lower than that of the
MiniRocket-based method. This suggests that convolutional feature extraction techniques, such as MiniRocket,
may be particularly effective at leveraging hidden temporal dependencies in EEG signals, whereas
engineered features may be better suited to improving model interpretability and explainability.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. RQ2: Are feature importance explanations consistent across subjects for the same emotion?</title>
        <p>To assess the consistency of feature attributions across subjects for a given emotional state, we computed
Pearson correlation coefficients between the SHAP value vectors of all participant pairs. The distribution
of these correlation values, summarized in the box plots of Figure 1, provides insight into the variability
of feature importance rankings across participants for all the emotions. The results showed generally
low inter-subject correlations across all four emotions, with values clustered near zero, indicating high
variability.</p>
        <p>These findings suggest that the EEG features contributing to emotion classification differ substantially
between individuals, regardless of the emotional quadrant. No specific emotion exhibited significantly
higher consistency. This inter-subject variability may stem from differences in brain physiology,
cognitive processing, or emotional perception.</p>
        <p>Overall, the results highlight a key limitation of subject-independent models in EEG-based emotion
recognition and support the development of adaptive or personalized approaches that can accommodate
individual diferences in feature attribution.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. RQ3: Do different emotions rely on shared or distinct EEG features?</title>
        <p>To investigate whether specific EEG features contribute similarly across emotions, we compared
SHAP-based feature rankings for each emotional quadrant. Figure 2 shows the top 10 most important features
for each emotional state, averaged across subjects.</p>
        <p>The analysis revealed both shared and emotion-specific neural signatures. Excited states were
dominated by spectral features in the theta and alpha bands at frontal electrodes (AF3, AF4), consistent
with prior findings linking these rhythms to attentional engagement and arousal [18]. Relaxation was
primarily associated with beta-band kurtosis and delta-band entropy at TP9, which may reflect reduced
cortical excitability and idling activity [19].</p>
        <p>Sad states showed a predominance of entropy-based measures—especially sample and approximate
entropy—in delta and alpha bands at frontal and temporal sites. This supports evidence that emotional
distress is associated with higher EEG complexity [20]. In contrast, anger was characterized by nonlinear
and higher-order features, including delta power and fractal dimension, consistent with increased
cortical activation and arousal [21].</p>
        <p>From a spatial perspective (reported in Figure 3), the top features for excitement were primarily
left-lateralized (AF3), supporting the frontal asymmetry hypothesis [22], whereas relaxed and sad
states involved TP9, suggesting stronger temporal-parietal engagement [23]. Anger exhibited a more
bilateral distribution, which may reflect motor-preparatory activity tied to action-oriented affective
responses [24].</p>
        <p>In terms of frequency content, theta and alpha bands were most relevant for high-arousal states
(excited, angry), while delta- and beta-related entropy measures were dominant for low-arousal states
(relaxed, sad). These findings underscore the importance of combining generalizable spectral features
with emotion-specific nonlinear descriptors to effectively characterize affective EEG responses.</p>
        <p>Overall, the results suggest that no single feature set universally applies across emotions, and that
emotion-specific feature selection strategies could enhance classification performance in EEG-based
affective computing.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and future work</title>
      <p>This study explored interpretable EEG-based emotion recognition using a low-cost, sparse-electrode
device, leveraging engineered features and SHAP analysis. Our results confirmed that spectral and
entropy-based features enable competitive classification performance, yet highlighted substantial
inter-subject variability in feature relevance.</p>
      <p>While some EEG features showed cross-emotion generalizability, others were clearly emotion-specific,
reflecting distinct neural mechanisms. These findings emphasize the limitations of subject-independent
models and support the use of adaptive, personalized strategies in real-world affective BCI systems.</p>
      <p>Future work will focus on dynamic feature selection and online adaptation techniques to improve
robustness and user specificity. While the current study was limited to a single dataset, we plan
to validate our findings on additional datasets to assess generalizability across acquisition setups.
Furthermore, we aim to explore hybrid pipelines that combine engineered and learned representations
to better capture complex EEG patterns without sacrificing interpretability. Finally, we intend to deepen
the neurophysiological interpretation of EEG markers by involving domain experts and integrating
insights from affective neuroscience.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was carried out while Tommaso Colafiglio was enrolled in the Italian National Doctorate on Artificial
Intelligence run by Sapienza University of Rome in collaboration with Politecnico di Bari. This work
was partially supported by the following projects: “LIFE: the itaLian system wIde Frailty nEtwork”;
DEMETRA: “Development of an ensemble learning-based, multidimensional sensory impairment score
to predict cognitive impairment in an elderly cohort of Southern Italy”.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A comprehensive review of deep learning in eeg-based emotion recognition: classifications, trends, and practical implications</article-title>
          ,
          <source>PeerJ Computer Science</source>
          <volume>10</volume>
          (
          <year>2024</year>
          )
          <fpage>e2065</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Colafiglio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lombardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Di Noia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L. N.</given-names>
            <surname>De Bonis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Narducci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Proverbio</surname>
          </string-name>
          ,
          <article-title>Machine learning classification of motivational states: Insights from eeg analysis of perception and imagery</article-title>
          ,
          <source>Expert Systems with Applications</source>
          (
          <year>2025</year>
          )
          <fpage>127076</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Koelstra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Muhl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Soleymani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yazdani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ebrahimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nijholt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Patras</surname>
          </string-name>
          ,
          <article-title>Deap: A database for emotion analysis; using physiological signals</article-title>
          ,
          <source>IEEE Transactions on Affective Computing</source>
          <volume>3</volume>
          (
          <year>2011</year>
          )
          <fpage>18</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Katsigiannis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ramzan</surname>
          </string-name>
          ,
          <article-title>Dreamer: A database for emotion recognition through eeg and ecg signals from wireless low-cost of-the-shelf devices</article-title>
          ,
          <source>IEEE Journal of Biomedical and Health Informatics</source>
          <volume>22</volume>
          (
          <year>2018</year>
          )
          <fpage>98</fpage>
          -
          <lpage>107</lpage>
          . doi:10.1109/JBHI.2017.2688239.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N. K.</given-names>
            <surname>Gunda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Khalaf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhatnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Quraishi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gudala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K. P.</given-names>
            <surname>Venkata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. Y.</given-names>
            <surname>Alghayadh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Alsubai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bhatnagar</surname>
          </string-name>
          ,
          <article-title>Lightweight attention mechanisms for eeg emotion recognition for brain computer interface</article-title>
          ,
          <source>Journal of Neuroscience Methods</source>
          <volume>410</volume>
          (
          <year>2024</year>
          )
          <fpage>110223</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] P. Yu, X. He, H. Li, H. Dou, Y. Tan, H. Wu, B. Chen, Fmlan: a novel framework for cross-subject and cross-session eeg emotion recognition, Biomedical Signal Processing and Control 100 (2025) 106912.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] C. Li, P. Li, Y. Zhang, N. Li, Y. Si, F. Li, Z. Cao, H. Chen, B. Chen, D. Yao, et al., Effective emotion recognition by learning discriminative graph topologies in eeg brain networks, IEEE Transactions on Neural Networks and Learning Systems (2023).</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] M.-P. Hosseini, A. Hosseini, K. Ahi, A review on machine learning for eeg signal processing in bioengineering, IEEE Reviews in Biomedical Engineering 14 (2020) 204–218.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] S. M. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, Advances in Neural Information Processing Systems 30 (2017).</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] N. Sharma, T. K. R. Bollu, Explainable AI methods for interpreting emotions in brain–computer interface eeg data, in: Discovering the Frontiers of Human-Robot Interaction: Insights and Innovations in Collaboration, Communication, and Control, Springer, 2024, pp. 419–436.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] T. Colafiglio, A. Lombardi, P. Sorino, E. Brattico, D. Lofù, D. Danese, E. Di Sciascio, T. Di Noia, F. Narducci, Neurosense: A novel eeg dataset utilizing low-cost, sparse electrode devices for emotion exploration, IEEE Access (2024).</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] J. A. Russell, A circumplex model of affect, Journal of Personality and Social Psychology 39 (1980) 1161.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] S. Blum, N. S. Jacobsen, M. G. Bleichner, S. Debener, A Riemannian modification of artifact subspace reconstruction for eeg artifact handling, Frontiers in Human Neuroscience 13 (2019) 141.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] S. Kesić, S. Z. Spasić, Application of Higuchi’s fractal dimension from basic to clinical neurophysiology: a review, Computer Methods and Programs in Biomedicine 133 (2016) 55–70.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] J. Pardey, S. Roberts, L. Tarassenko, A review of parametric modelling techniques for eeg analysis, Medical Engineering &amp; Physics 18 (1996) 2–11.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] S. Katsigiannis, N. Ramzan, Dreamer: A database for emotion recognition through eeg and ecg signals from wireless low-cost off-the-shelf devices, IEEE Journal of Biomedical and Health Informatics 22 (2017) 98–107.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] M. L. N. De Bonis, G. Fasano, A. Lombardi, C. Ardito, A. Ferrara, E. Di Sciascio, T. Di Noia, Explainable brain age prediction: a comparative evaluation of morphometric and deep learning pipelines, Brain Informatics 11 (2024) 33.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] M. X. Cohen, Error-related medial frontal theta activity predicts cingulate-related structural connectivity, NeuroImage 55 (2011) 1373–1383.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] E. Niedermeyer, F. L. da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, Lippincott Williams &amp; Wilkins, 2005.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] B. M. Hager, A. C. Yang, J. N. Gutsell, Measuring brain complexity during neural motor resonance, Frontiers in Neuroscience 12 (2018) 758.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] Y. Liu, O. Sourina, Eeg-based subject-dependent emotion recognition algorithm using fractal dimension, in: 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2014, pp. 3166–3171.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] J. J. Allen, P. M. Keune, M. Schönenberg, R. Nusslock, Frontal eeg alpha asymmetry and emotion: From neural underpinnings and methodological considerations to psychopathology and social cognition, 2018.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] H. Kober, L. F. Barrett, J. Joseph, E. Bliss-Moreau, K. Lindquist, T. D. Wager, Functional grouping and cortical–subcortical interactions in emotion: a meta-analysis of neuroimaging studies, NeuroImage 42 (2008) 998–1031.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] M. Nikolic, P. Pezzoli, N. Jaworska, M. C. Seto, Brain responses in aggression-prone individuals: A systematic review and meta-analysis of functional magnetic resonance imaging (fMRI) studies of anger- and aggression-eliciting tasks, Progress in Neuro-Psychopharmacology and Biological Psychiatry 119 (2022) 110596.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>