<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Data Augmentation for Domain-Adversarial Training in EEG-based Emotion Recognition</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Moscow State University</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>397</fpage>
      <lpage>410</lpage>
      <abstract>
<p>Emotion recognition is an important and challenging task in modern affective computing systems. Neuronal action potentials measured by electroencephalography (EEG) provide an important data source with a high temporal resolution and direct relevance to human brain activity. EEG-based evaluation of the emotional state is complicated by the lack of labeled training data and by strong subject- and session-dependencies. Various adaptation techniques can be applied to train a model that would be robust to domain mismatch in EEG data, but the amount of available training data is still insufficient. In this work we propose a new approach based on domain-adversarial training that combines the available training corpus with a much larger unlabeled dataset in a semi-supervised training framework. A detailed analysis of available datasets and existing methods for the emotion recognition task is presented. The degradation of emotion recognition performance caused by subject- and session-dependencies was measured on the DEAP dataset, demonstrating the need to develop approaches that utilize larger datasets in order to obtain a better generalized model.</p>
      </abstract>
      <kwd-group>
        <kwd>Electroencephalography (EEG)</kwd>
        <kwd>Emotion recognition</kwd>
        <kwd>Signal processing</kwd>
        <kwd>Deep learning</kwd>
        <kwd>Domain adaptation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
Recently, there has been growing interest in using the EEG signal to analyze
the functioning of the human brain. The results of EEG processing are used
in the creation of brain-computer interfaces (BCIs) and in neurophysiology
studies. Emotion recognition is one of the essential tasks in these fields.
Works on affective disorders report that analysing the EEG signal during emotion
task manipulations could provide an assessment of risk for major depressive
disorder [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. There are many works on the subject of affective brain-computer
interactions. The authors of these works believe that recognizing emotions from the
EEG signal will allow robots and machines to read people's interactive intentions
and states and respond to human emotions [2-4]. Moreover, solving the
problem of recognizing emotions may contribute to the development of neuromarketing
to determine consumer preferences [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Other application areas are
workload estimation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and driving fatigue detection [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Electroencephalography</title>
      <p>
Electroencephalography is a multichannel continuous signal recorded with
electrodes that measure differences between the electric potentials registered
in two areas of the brain. During this recording, electrodes are placed on the
surface of the scalp. To improve the conductivity of the skin, a gel is applied
to the contact surface of the electrodes. Elastic helmets are used to fixate the
electrodes on the head. In recent years a number of accessible consumer-level
brain-computer interfaces (BCI) became available on the market [8-10]. These
devices usually include fewer electrodes, which are often used without
conductive/adhesive gel. This makes BCI technology cheaper and more affordable,
and as a result more data is becoming available. EEG recordings are always
contaminated with artifacts, such as EOG (ocular), ECG (cardiac), EMG (muscle),
and noise. Therefore, the work pipeline should contain signal preprocessing to
handle this problem automatically. Different processes are reflected in different
frequency bands of the electrical activity of the brain. For example, the alpha
rhythm (8 to 12 Hz) reflects attentional demands and beta activity (16 to 24 Hz)
reflects emotional and cognitive processes in the brain [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
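      <p>
        To make the band decomposition concrete, the following sketch filters a raw
multichannel signal into the alpha band and computes per-channel band power.
It is a minimal illustration using SciPy; the sampling rate, the band edges and
the synthetic input are assumptions rather than values taken from this paper.
      </p>
      <preformat>
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, low_hz, high_hz, fs, order=4):
    """Zero-phase Butterworth band-pass along the time axis."""
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, data, axis=-1)

fs = 128.0                                   # assumed sampling rate, Hz
eeg = np.random.randn(32, 60 * int(fs))      # stand-in (channels, samples) recording
alpha = bandpass(eeg, 8.0, 12.0, fs)         # alpha rhythm, 8-12 Hz
alpha_power = np.mean(alpha ** 2, axis=-1)   # mean band power per channel
print(alpha_power.shape)                     # (32,)
      </preformat>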
      <p>
As possible variants of the experimental protocol, the following systems for
recording EEG signals are used:
1. Resting states with eyes open (REO) or with eyes closed (REC). The patient
is in a relaxed state and does not think about anything. This procedure is used
to analyze the general condition of the patient, and it is suitable for anyone,
including people with disabilities.
2. Event-related potentials (ERPs) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In such experiments, a signal representing a stimulus is sent
from one computer to the computer recording the EEG
whenever a stimulus or response occurs. Such stimuli may be periodic light
exposure at different frequencies. Segments of EEG
data that are time-locked to the event signals are extracted from the overall
EEG and averaged (see the sketch after this list).
3. Task-related. Neural activity is recorded under various cognitive tasks. The
patient should also be relaxed and his attention should be focused only on
the implementation of the task. These can be tasks such as counting in the
mind or reading.
4. Somnography [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. EEG is recorded during sleep. The sleep
electroencephalogram can be recorded for analyzing the stages of sleep or the
causes of sleep deprivation.
      </p>
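      <p>
        The time-locked extraction and averaging described in item 2 can be sketched
as follows; the event sample indices, the epoch window and the synthetic signal
are assumptions used only for illustration.
      </p>
      <preformat>
import numpy as np

def erp_average(eeg, event_samples, pre, post):
    """Average EEG epochs time-locked to event onsets.
    eeg: (channels, samples); pre/post: samples kept before/after each event."""
    epochs = [eeg[:, s - pre:s + post]
              for s in event_samples
              if s - pre >= 0 and s + post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)           # (channels, pre + post)

fs = 128
eeg = np.random.randn(32, 60 * fs)           # stand-in recording
events = np.arange(fs, 59 * fs, 2 * fs)      # hypothetical stimulus onsets
erp = erp_average(eeg, events, pre=fs // 4, post=fs)
print(erp.shape)                             # (32, 160)
      </preformat>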
    </sec>
    <sec id="sec-3">
      <title>Emotions</title>
      <p>
Emotion is a mental state and an affective reaction towards an event based on
a subjective experience. It is hard to measure because it is a subjective feeling.
Emotions can be evaluated in terms of "positive", "negative" or "like",
"dislike" [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It is also possible to distinguish a set of basic emotions such as anger,
fear, sadness, disgust, happiness, surprise [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ] and to solve the corresponding classification problem. Researchers often use
a two- or three-dimensional space to model
emotions [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ], where different emotion points can be plotted on a 2D plane
consisting of a Valence axis and an Arousal axis (Fig. 1a) or in a 3D space with
an additional Dominance axis (Fig. 1b).
      </p>
      <p>[Fig. 1: (a) the 2D valence-arousal plane; (b) the 3D space with an additional Dominance axis]</p>
      <p>
One of the earliest works on emotion recognition from EEG was presented in
1997 [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The machine-learning approach is one of the classic ways of solving the
problem of emotion recognition. In this approach it is necessary to extract reliable
informative features closely related to the emotional state of the subject. The
signal is divided into components by Independent Component Analysis (ICA) [
        <xref ref-type="bibr" rid="ref19 ref20">19,
20</xref>
        ] to separate artifacts. The main method used for extracting spectral
features is the Fourier transform [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. A detailed description of popular features
for analysing the EEG signal is presented in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. In the machine-learning
approach, discriminant analysis [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ] or Bayesian analysis [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ] can be used for
classification.
      </p>
      <p>
In addition to hand-crafted ML features, deep learning methods can be used,
which is a more modern way of solving the problem. This approach often
combines machine-learning feature extraction with neural network
classification. For classification, such neural networks as SAE [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and LSTM [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] are
often configured [
        <xref ref-type="bibr" rid="ref23 ref37 ref39">37, 39, 23</xref>
        ]. A fully neural network solution has recently been
proposed: the SAE+LSTM method [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] on the DEAP dataset. In this work a Stacked
AutoEncoder (SAE) is used for solving the ICA problem and the emotion timing
modeling is based on the Long Short-Term Memory Recurrent Neural Network
(LSTM-RNN).
      </p>
    </sec>
    <sec id="sec-4">
      <title>Data Augmentation</title>
      <p>
Neural networks require a big amount of training data. Usually EEG datasets
contain data from a small number of subjects. This is due to the fact that special
devices and correct experimental conditions are required to collect the data.
Several datasets could be combined to increase the amount of training data,
but each dataset was collected with different devices, a different experimental
protocol and different stimuli. Therefore, it is difficult to conduct training on
data from several sources. Another problem is the low accuracy of prediction
for subjects whose data were not available in the training set. Various domain
adaptation techniques are used to reduce data variability [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ].
      </p>
      <p>
The volume of the union of the datasets labeled by emotions is still not large
enough. A solution that expands the data with other EEG datasets is proposed
in this work. It is possible to use EEG datasets without emotional labels if they
contain video recordings of the experiment: the data can be labeled with emotions
detected from the video. A similar approach was suggested for the problem of
emotion recognition from speech [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. It increases the amount of work, but helps
to expand the training set.
      </p>
      <sec id="sec-4-1">
        <title>Datasets</title>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>DEAP</title>
      <p>There are several datasets for the EEG-based emotion recognition task. Every corpus
was collected according to a unique protocol. The available datasets for solving the
problem are described below.</p>
      <p>
The DEAP dataset (A Database for Emotion Analysis Using Physiological Signals) [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]
is widely used in the EEG-based emotion recognition area [
        <xref ref-type="bibr" rid="ref23 ref39 ref40">39, 23, 40</xref>
        ]. This dataset
was collected as a part of the development of an adaptive music video
recommendation system. The experiment was attended by 32 people. Data was collected from
subjects while they watched 40 one-minute music video stimuli. During the
experiment, participants performed self-assessment of their levels of arousal, valence
and dominance (Fig. 2). As a result, a 32-channel electroencephalogram and
peripheral physiological signals were recorded. For 22 of the 32 participants, frontal
face video was also recorded. The dataset is convenient, as it contains not only the
original data in BDF (Biosemi Data Format), but also preprocessed data in
MATLAB and Python formats. The dataset is open only for academic research and is
available for download after signing the EULA (End User License Agreement).
      </p>
    </sec>
    <sec id="sec-5a">
      <title>eNTERFACE-2006</title>
      <p>
        Another popular dataset was made as a part of the eNTERFACE-2006 project [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ].
The purpose of the project is to collect sufficient data to build an integrated
framework for multi-modal emotion recognition. Data collection was carried out
for 5 male subjects in 3 sessions. Stimuli are images from the IAPS
(International Affective Picture System) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], which consists of 1196 pictures evaluated in the
arousal-valence dimensions. For the experiment, 3 groups of images were selected: 106
calm, 71 positive exciting, 150 negative exciting. Each session lasted 15 minutes
and consisted of 30 blocks, each block being a succession of 5 images corresponding
to a single emotion. EEG and fNIRS signals with peripheral information were
recorded in .bdf format. Eventually the data were marked not only with a
preliminary evaluation of the images, but also with the participants' self-assessment.
      </p>
    </sec>
    <sec id="sec-6">
      <title>SEED, SEED-IV</title>
      <p>
        SEED (SJTU Emotion EEG Dataset) [
        <xref ref-type="bibr" rid="ref28 ref38">28, 38</xref>
] contains data from 15 subjects recorded
in 3 sessions with an interval of about one week. As stimuli, 15 video clips
lasting 4 minutes each were selected. During the experiment, subjects conducted
self-assessments based on "positive," "negative," or "neutral" terms for evaluating
emotions. The dataset contains preprocessed EEG in 45 .mat (MATLAB) files. The EEG
data is downsampled, preprocessed and segmented. In addition, the dataset
comprises files with extracted features: the differential entropy
(DE) of the EEG signals, which is convenient for testing classifiers.
      </p>
      <p>
        SEED-IV [
        <xref ref-type="bibr" rid="ref29">29</xref>
] is another dataset collected later. In this experiment, a different
system of emotion classification was used: happy, sad, neutral, fear. In
addition to EEG, eye movement information was recorded with eye-tracking
glasses, which makes SEED-IV a multi-modal dataset for emotion recognition.
The dataset contains raw EEG data, features extracted from the EEG (differential
entropy and power spectral density), and raw data and extracted features of eye
movements, all in .mat format. Both of these datasets can be downloaded after
signing the license agreement.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Neuromarketing</title>
      <p>
Neuromarketing is the field of marketing research that helps to determine
consumers' preferences and predict their behavior using unconscious processes, which
ensures effective utilization of the product. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] the Neuromarketing dataset
was created for building a predictive modeling framework to better understand
consumer choice. This corpus of data was made by recording an EEG signal
from 40 subjects while they viewed consumer products. During the experiment,
participants marked E-commerce products in terms of "likes" and "dislikes". The
resulting dataset is publicly available and can be used in scientific works and
marketing research.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Imagined Emotions</title>
      <p>
A different experiment design that included cue-based emotion stimuli was
presented in [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]. Each participant listened to a sample of a voice recording that
suggested a specific emotional state. A participant had to imagine a
corresponding emotional scenario or to recall a related emotional experience. The presented
dataset consists of EEG signals collected from 32 subjects who experienced
15 emotional states, and participants' assessments of the authenticity and
intensity of the tested emotions on a scale of 1 to 9.
      </p>
      <sec id="sec-8-1">
        <title>Related Works</title>
        <p>Emotion recognition is an analysis of multi-channel samples of EEG data. Each
sample is considered to have a single emotional state that is supposed to be
constant during the recording. Depending on the system of classification of
emotions that was used in the experiment design, either the emotion must be
determined from a preassigned set, or an assessment should be given on the
Arousal-Valence(-Dominance) scales. Thus, the emotion recognition task can be
considered a classification or a regression problem.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Preprocessing and Feature Extraction</title>
      <p>Electroencephalogram data consists not only of the recordings of brain activity
but also of a number of artifact components of various origins. Therefore,
extensive filtering and artifact removal procedures must be included as a necessary
part of the analysis pipeline. Deletion of recording sections with artifacts can
be performed by specialists, but that requires a thorough and expensive analysis
of each sample. After the initial cleaning step, the multi-channel signal can be
decomposed into quasi-independent components by solving a blind source
separation task. This can be achieved with Independent Component Analysis
(ICA) or with more recent autoencoder-based approaches.</p>
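      <p>
        As an illustration of this cleaning step, the sketch below runs ICA with the
MNE-Python library. It is a minimal sketch, not the pipeline used in this paper;
the file name, the filter band and the excluded component indices are assumptions.
      </p>
      <preformat>
import mne
from mne.preprocessing import ICA

# hypothetical Biosemi recording; any mne.io reader returning a Raw object works
raw = mne.io.read_raw_bdf("recording.bdf", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)      # band-pass before ICA fitting

ica = ICA(n_components=20, random_state=0)
ica.fit(raw)
ica.exclude = [0, 1]                     # artifact components chosen by inspection
clean = ica.apply(raw.copy())            # reconstruct the signal without them
      </preformat>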
      <p>
During the feature extraction step, the EEG signal is divided into short time
frames. The EEG features are extracted from each frame and combined into a
feature sequence. The signal is represented as a set of overlapping frames using a
window function. It can be a rectangular window, but usually a smoothing
window, such as the Hanning window, is used. For spectral analysis of the EEG data,
the Fourier transform [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] is used to obtain a frequency domain representation
of each window. Then, feature extraction can be performed independently for
each frequency band. The following metrics and statistics can be utilized as
informative features: max, min, average amplitude and power spectral density (PSD).
The following cross-channel features can be calculated:
      </p>
      <p>
        1. Root Mean Square:
        $$RMS = \sqrt{\frac{1}{N}\sum_{n=1}^{N} S_n^2} \qquad (1)$$
        where $S_n$ is the $n$th channel amplitude.
        2. Pearson Correlation Coefficient between 2 channels $x$ and $y$:
        $$PCC = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i - \bar{y})^2}} \qquad (2)$$
        3. Magnitude Squared Coherence Estimate:
        $$MSCE = \frac{|P_{ij}|^2}{P_i P_j} \qquad (3)$$
        where $P_{ij}$ is the cross-PSD of the $i$th and $j$th channels and $P_i$ is the PSD of the $i$th channel.
        A more detailed review of feature extraction methods can be found in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
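      <p>
        A minimal NumPy/SciPy sketch of the three cross-channel features above; the
sampling rate, the window length and the synthetic two-channel input are
assumptions used only for illustration.
      </p>
      <preformat>
import numpy as np
from scipy.signal import coherence

fs = 128.0
x = np.random.randn(int(10 * fs))            # stand-in channel i
y = np.random.randn(int(10 * fs))            # stand-in channel j

# (1) RMS over channel amplitudes at a single time instant
frame = np.vstack([x, y])
rms = np.sqrt(np.mean(frame[:, 0] ** 2))

# (2) Pearson correlation coefficient between the two channels
pcc = np.corrcoef(x, y)[0, 1]

# (3) Magnitude squared coherence, |P_ij|^2 / (P_i * P_j), Welch-based,
# averaged here over the alpha band as an example
f, msce = coherence(x, y, fs=fs, nperseg=256)
alpha_msce = msce[(f >= 8) & (f <= 12)].mean()

print(rms, pcc, alpha_msce)
      </preformat>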
    </sec>
    <sec id="sec-10">
      <title>Model Training</title>
      <p>
The emotion recognition problem in feature space can be approached with one of
the machine learning methods for classification. In [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ] an emotion recognition
method using a Naive Bayes model was proposed. The classification problem
under the maximum likelihood framework was formulated as:
        $$\hat{y} = \arg\max_{y} P(X \mid y) \qquad (4)$$
        where $y$ is the label and $X$ is the feature vector. The Naive Bayes framework assumes
that the features in $X$ are independent of each other conditioned upon the class
label. This paper compares two model distribution assumptions. It is shown that
the Cauchy distribution assumption typically provides better results than the
Gaussian distribution assumption.
      </p>
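      <p>
        A minimal sketch of a Naive Bayes classifier with the Cauchy likelihood
assumption discussed above, using SciPy. The toy data and the median/half-IQR
parameter estimates are assumptions; the cited paper does not specify its fitting
procedure.
      </p>
      <preformat>
import numpy as np
from scipy.stats import cauchy

class CauchyNaiveBayes:
    """Naive Bayes with per-class, per-feature Cauchy likelihoods."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            loc = np.median(Xc, axis=0)                 # robust location estimate
            iqr = np.subtract(*np.percentile(Xc, [75, 25], axis=0))
            self.params_[c] = (loc, np.maximum(iqr / 2, 1e-6))  # half-IQR = Cauchy scale
        return self

    def predict(self, X):
        scores = np.stack([cauchy.logpdf(X, *self.params_[c]).sum(axis=1)
                           for c in self.classes_], axis=1)  # log P(X|y), naive independence
        return self.classes_[np.argmax(scores, axis=1)]      # eq. (4), uniform prior

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)); y = rng.integers(0, 2, 100)
print(CauchyNaiveBayes().fit(X, y).predict(X[:5]))
      </preformat>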
      <p>
In [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ] a comparison of the K Nearest Neighbours classifier and Linear
Discriminant Analysis is presented. The experiment was conducted on a private dataset
and showed a maximum average classification rate of 83.26% using KNN and
75.21% using LDA. These solutions are suitable for the classification problem
when it is necessary to recognize an emotion from a given set. If affective labeling
is presented as a vector of real values (such as the Arousal-Valence scale), this
approach can also be applied with regression methods instead of classification [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ].
Despite this, labels are often binarized when evaluating the accuracy of an
algorithm.
      </p>
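      <p>
        A minimal scikit-learn sketch of such a KNN/LDA comparison; the synthetic
features, the number of neighbours and the cross-validation setup are assumptions,
not the protocol of the cited work.
      </p>
      <preformat>
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 10))           # stand-in EEG feature vectors
y = rng.integers(0, 3, 400)                  # stand-in emotion labels

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(name, round(acc, 3))
      </preformat>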
    </sec>
    <sec id="sec-11">
      <title>Deep Learning Approach</title>
      <p>Today, neural network algorithms are used everywhere, since they can recognize
deeper, sometimes unexpected patterns in data. In the studied area, deep
neural network-based feature extraction and emotion recognition began to be
applied intensively.</p>
      <p>
In [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ] a Deep Belief Network (DBN) was trained with differential entropy
features. The experiment performed classification for three emotional categories on
the SEED dataset. The results show that the DBN models obtain higher
accuracy than previously considered models such as kNN, LR and SVM approaches.
      </p>
      <p>
An emotion recognition system that uses deep learning models at two stages
of the work pipeline was introduced in [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. A stacked autoencoder was used for the
decomposition of the source signal (as a substitute for Independent Component Analysis)
and for extracting EEG channel correlations. An LSTM-RNN network is used for
emotion classification based on frequency band power features extracted from the
SAE output. The mean accuracy of emotion recognition, calculated on binarized
labels, achieved 81.10% for valence and 74.38% for arousal on the DEAP dataset.
      </p>
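      <p>
        A minimal PyTorch sketch of an LSTM classifier over per-frame band-power
feature sequences, in the spirit of the pipeline above (the SAE stage is omitted);
all dimensions and the synthetic batch are assumptions.
      </p>
      <preformat>
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    """Sequence of feature frames -> emotion logits."""
    def __init__(self, n_features=160, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, frames, features)
        _, (h, _) = self.lstm(x)             # h: (num_layers, batch, hidden)
        return self.head(h[-1])              # classify from the last hidden state

model = EmotionLSTM()
x = torch.randn(8, 20, 160)                  # 8 trials, 20 frames, 160 features
print(model(x).shape)                        # torch.Size([8, 2])
      </preformat>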
    </sec>
    <sec id="sec-12">
      <title>Domain Adaptation</title>
      <p>Training an accurate model requires an approach that is robust to
variations in the individual characteristics of participants and recording devices, since
EEG data suffers from an intense dependence on the device and the subject.
It is important to apply a domain adaptation technique that would
compensate for the subject variability or for heterogeneity in various technical
specifications.</p>
      <p>
The paper [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ] compares different domain adaptation techniques on two
datasets: DEAP and SEED. Transfer Component Analysis (TCA) [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ] and
Maximum Independence Domain Adaptation (MIDA) [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ] showed the best results
for within-dataset subject domain adaptation. It is shown that applying these
techniques leads to an improvement of up to 20.66% over the baseline accuracy
where no domain adaptation technique was used. A study of the application of
these techniques to cross-dataset domain adaptation was also conducted. The article
concluded that TCA and MIDA can effectively improve the accuracy by 7.25%
to 13.40% compared to the baseline accuracy where no domain adaptation
technique was used.
      </p>
      <p>
In [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ] another approach to domain adaptation was considered, based on
neural networks that are trained to solve emotion and domain recognition
problems. Samples of feature vectors from two domains in the same quantity are
fed to the model, producing an emotion label for each EEG sample. The first several
layers of the neural network act as a feature extractor, producing a fixed-dimension
representation of EEG samples in a latent space. These representations are used
to solve two different tasks: emotion label classification and domain recognition.
A gradient reversal layer is applied to the domain predictor [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ], leading to an
adversarial training scheme during which the parameters of the feature extractor
layers are updated to make the embedding distributions of different domains
statistically similar. Fully connected layers make representations for the label predictor,
which estimates the emotion class for each sample. During training, samples from
one domain contain labels, whereas the second domain is unlabeled. The label
predictor is optimized to minimize the classification error on the first domain.
At test time, the model inputs are unlabeled data only. This method was
compared with multiple domain adaptation algorithms on the SEED and
DEAP benchmarks and proved to be superior in both cross-subject and cross-session
adaptation.
      </p>
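      <p>
        The gradient reversal layer admits a compact implementation: the forward pass
is the identity, and the backward pass multiplies the incoming gradient by a
negative factor, so the feature extractor is pushed to confuse the domain classifier.
Below is a minimal PyTorch sketch; the lambda value is an assumption.
      </p>
      <preformat>
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity forward; gradient scaled by -lambda in backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reversed gradient; none for lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# features flow normally to the label predictor, and through the
# reversal layer to the domain classifier
features = torch.randn(8, 64, requires_grad=True)
domain_input = grad_reverse(features, lambd=0.5)
      </preformat>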
      <sec id="sec-12-1">
        <title>A Proposed Approach</title>
      </sec>
    </sec>
    <sec id="sec-13">
      <title>Domain-Adversarial Training</title>
      <p>
The problem of domain adaptation is crucial in the emotion recognition task. The
architectures of the proposed approach are presented in Fig. 3 and 4. This approach
combines the ideas presented in works [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ] and [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ]. In Fig. 3 the domain classifier
predicts which domain the data belongs to, and the set of feature extractor
parameters is updated by adversarial training to make the distributions of data
representations of different domains more similar. In Fig. 4 the input is data from
two domains: labeled and unlabeled. Data representations of the labeled domain are
sent to the label predictor and the domain discriminator; representations of unlabeled
data are transmitted only to the domain discriminator, which determines whether
these domains match or not.
      </p>
      <p>These architectures differ in that in the first case the classification of domains
occurs independently for input samples, whereas in the second, pairwise comparisons
are performed. In future work, a comparison of these two approaches will be
carried out and the better approach will be defined.</p>
    </sec>
    <sec id="sec-14">
      <title>Data Augmentation</title>
      <p>
In order to improve the performance of the model, a large number of EEG
datasets without affective labels [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ] can be utilized for the emotion recognition task.
To train a domain classifier (for subject identity recognition), more data can be
used, since there is no need for labeled data. Since the domain predictor is trained
on a much larger number of domains and a larger amount of training samples,
it potentially can be more robust to various specific channel characteristic
variability. The emotion classifier is still trained on the same amount of data, but
the performance can be improved since the latent representations are trained to
be domain-independent. The DEAP dataset includes data of only 32 subjects, and
other datasets for EEG-based emotion recognition also contain a limited
variability of subjects. At the same time, EEG datasets without affective labeling
are much larger. For example, the Temple University Hospital (TUH) EEG
data corpus [
        <xref ref-type="bibr" rid="ref47">47</xref>
        ] contains EEG data of more than 10000 participants. It is more
efficient to train neural networks on such a data volume; therefore, as the solution
it is proposed to use unlabeled data. Thus, the neural network will be trained
on a larger set of subjects and, therefore, will provide a better generalized model
for new subjects.
      </p>
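      <p>
        One semi-supervised training step under this scheme can be sketched as follows:
the emotion loss is computed on the labeled batch only, while the domain loss,
routed through the grad_reverse helper from the sketch above, uses both the
labeled and the unlabeled batches. All module sizes, the batch contents and the
lambda value are assumptions.
      </p>
      <preformat>
import torch
import torch.nn as nn

extractor = nn.Sequential(nn.Linear(160, 64), nn.ReLU())  # shared feature extractor
label_head = nn.Linear(64, 2)                             # emotion predictor
domain_head = nn.Linear(64, 2)                            # domain classifier
opt = torch.optim.Adam([*extractor.parameters(),
                        *label_head.parameters(),
                        *domain_head.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

x_lab = torch.randn(8, 160); y_lab = torch.randint(0, 2, (8,))  # labeled batch
x_unl = torch.randn(8, 160)                                     # unlabeled batch

f_lab, f_unl = extractor(x_lab), extractor(x_unl)
d_all = torch.cat([grad_reverse(f_lab, 0.5), grad_reverse(f_unl, 0.5)])
d_tgt = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])

loss = ce(label_head(f_lab), y_lab) + ce(domain_head(d_all), d_tgt)
opt.zero_grad(); loss.backward(); opt.step()
      </preformat>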
    </sec>
    <sec id="sec-15">
      <title>Auto-labeling of EEG Datasets</title>
      <p>
Another possible solution is to enrich the training sample using
multimodal emotion recognition. For this purpose, EEG datasets without labels which
contain other modalities, such as video recordings of a subject's face, can be
used, for example SEED-VIG [
        <xref ref-type="bibr" rid="ref46">46</xref>
        ]. Then the data can be automatically labeled by
recognizing the emotions experienced by the participants from the video.
Unfortunately, EEG datasets with recordings of such modalities are rare,
so this approach probably will not allow significantly expanding the training
data.
      </p>
    </sec>
    <sec id="sec-16">
      <title>A Preliminary Motivation Study</title>
      <p>Below is an illustration of the fact that the problem of cross-subject adaptation
really requires a solution. An experiment was conducted demonstrating a
decrease in the accuracy of emotion recognition in the absence of subject data in
the training sample. The preprocessed data from the DEAP dataset was used. PSD
for five frequency bands was extracted as features. The following ML
classifiers were trained: SVM and Random Forest Regression. The data were divided into
training, validation and test samples in a ratio of 6 : 1 : 1 respectively. In the
first experiment, the data of each subject was divided between the samples. In
the second experiment, the data of each subject belonged entirely to a single
sample. Table 1 shows the differences in the accuracy of determining
emotions for these two experiments. The results demonstrate the presence of learning
problems on held-out subjects.</p>
      <p>Table 1. Accuracy with subject-mixed (1st experiment) and subject-independent
(2nd experiment) splits. (b) For Random Forest Regression:</p>
      <preformat>
Rating scale   1st experiment   2nd experiment
Valence        83.2%            47.6%
Arousal        82.6%            60.3%
Dominance      81.8%            53.1%
      </preformat>
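      <p>
        The two splitting schemes compared in Table 1 can be sketched with
scikit-learn: a random split mixes each subject's trials across samples, while
GroupShuffleSplit holds out whole subjects. The synthetic features, the split
sizes, and the use of a classifier in place of the paper's regression setup are
assumptions.
      </p>
      <preformat>
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((32 * 40, 160))      # stand-in PSD features per trial
y = rng.integers(0, 2, len(X))               # stand-in binarized labels
subjects = np.repeat(np.arange(32), 40)      # 40 trials per subject

# 1st experiment: each subject's trials are mixed across train/test
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.125, random_state=0)

# 2nd experiment: whole subjects are held out of training
tr, te = next(GroupShuffleSplit(test_size=0.125, random_state=0)
              .split(X, y, groups=subjects))
clf = RandomForestClassifier().fit(X[tr], y[tr])
print(clf.score(X[te], y[te]))
      </preformat>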
      <sec id="sec-16-1">
        <title>Conclusion and Future Work</title>
        <p>This paper describes the EEG-based emotion recognition task and its existing
solution methods. The problems of domain mismatch and of an insufficient
amount of data for training neural networks were formulated. As a solution,
the application of existing domain adaptation techniques with data
augmentation from datasets without emotional labels was proposed.</p>
        <p>In the future, it is planned to conduct testing on the DEAP dataset, using the TUH
EEG data corpus, to evaluate how robust emotion classification would be to
subject, session and channel differences. It is also planned to use the SEED
dataset and perform the same analysis to study the task of training a
dataset-independent emotion recognition model. A detailed validation study will be
performed to compare the results with existing methods of domain adaptation.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Stewart, J. L., Coan, J. A., Towers, D. N., Allen, J. J. B.: Frontal EEG asymmetry during emotional challenge differentiates individuals with and without lifetime major depressive disorder. Journal of Affective Disorders 129(1-3), 167-174 (2011). https://doi.org/10.1016/j.jad.2010.08.029</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Yin, Z., Wang, Y., Liu, L., Zhang, W., Zhang, J.: Cross-Subject EEG Feature Selection for Emotion Recognition Using Transfer Recursive Feature Elimination. Frontiers in Neurorobotics 11 (2017). https://doi.org/10.3389/fnbot.2017.00019</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Urgen, B., Plank, M., Ishiguro, H., Poizner, H., Saygin, A.: EEG theta and mu oscillations during perception of human and robot actions. Frontiers in Neurorobotics 7 (2013). https://doi.org/10.3389/fnbot.2013.00019</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Calvo, R. A., D'Mello, S.: Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on Affective Computing 1(1), 18-37 (2010).</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Yadava, M., Kumar, P., Saini, R., et al.: Analysis of EEG signals and its application to neuromarketing. Multimedia Tools and Applications 76, 19087-19111 (2017). https://doi.org/10.1007/s11042-017-4580-6</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Kothe, C. A., Makeig, S.: Estimation of task workload from EEG data: new and current tools and perspectives. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6547-6551. EMBC, IEEE (2011).</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Shi, L. C., Lu, B. L.: EEG-based vigilance estimation using extreme learning machines. Neurocomputing 102, 135-143 (2013).</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Muse Homepage, https://choosemuse.com. Last accessed 30 May 2020.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Emotiv Homepage, https://www.emotiv.com. Last accessed 30 May 2020.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Neurosky Homepage, http://neurosky.com. Last accessed 30 May 2020.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Ray, W. J., Cole, H. W.: EEG alpha activity reflects attentional demands, and beta activity reflects emotional and cognitive processes. Science 228(4700), 750-752 (1985).</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. Lopez-Calderon, J., Luck, S. J.: ERPLAB: an open-source toolbox for the analysis of event-related potentials. Frontiers in Human Neuroscience 8 (2014). https://doi.org/10.3389/fnhum.2014.00213</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. Brunner, D. P., Munch, M., Biedermann, K., Huch, R., Huch, A., Borbely, A. A.: Changes in Sleep and Sleep Electroencephalogram During Pregnancy. Sleep 17(7), 576-582 (1994). https://doi.org/10.1093/sleep/17.7.576</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. Lang, P. J.: The emotion probe: studies of motivation and attention. American Psychologist 50(5), 372 (1995).</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. Scherer, K. R.: What Are Emotions? And How Can They Be Measured? Social Science Information 44(4), 695-729 (2005). https://doi.org/10.1177/0539018405058216</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. Shu, L., Xie, J., Yang, M., Li, Z., Li, Z., Liao, D., Yang, X.: A review of emotion recognition using physiological signals. Sensors 18(7), 2074 (2018).</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>17. Musha, T., Terasaki, Y., Haque, H. A., Ivamitsky, G. A.: Feature extraction from EEGs associated with emotions. Artificial Life and Robotics 1(1), 15-19 (1997).</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>18. Jenke, R., Peer, A., Buss, M.: Feature extraction and selection for emotion recognition from EEG. IEEE Transactions on Affective Computing 5(3), 327-339 (2014).</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>19. Hyvarinen, A., Oja, E.: Independent component analysis: A tutorial. LCIS, Helsinki University of Technology, Finland (1999).</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>20. Vinther, M.: Independent Component Analysis of Evoked Potentials in EEG. Orsted, DTU (2002).</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>21. Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P. A., Bottou, L.: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11(12), 3371-3408 (2010).</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>22. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735-1780 (1997).</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>23. Xing, X., Li, Z., Xu, T., Shu, L., Hu, B., Xu, X.: SAE+LSTM: A New Framework for Emotion Recognition From Multi-Channel EEG. Frontiers in Neurorobotics 13(37) (2019). https://doi.org/10.3389/fnbot.2019.00037</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>24. Albanie, S., Nagrani, A., Vedaldi, A., Zisserman, A.: Emotion recognition in speech using cross-modal transfer in the wild. In: Proceedings of the 26th ACM International Conference on Multimedia, pp. 292-301. ACM Press (2018).</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>25. Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., Patras, I.: DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing 3(1), 18-31 (2011).</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>26. Savran, A., Ciftci, K., Chanel, G., Mota, J., Hong Viet, L., Sankur, B., Rombaut, M.: Emotion detection in the loop from brain signals and facial images. In: Proceedings of the eNTERFACE 2006 Workshop (2006).</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>27. Lang, P. J.: International affective picture system (IAPS): Digitized photographs, instruction manual and affective ratings. Technical report (2005).</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>28. Duan, R. N., Zhu, J. Y., Lu, B. L.: Differential entropy feature for EEG-based emotion classification. In: 6th International IEEE/EMBS Conference on Neural Engineering (NER), pp. 81-84. IEEE (2013).</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>29. Zheng, W. L., Liu, W., Lu, Y., Lu, B. L., Cichocki, A.: EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics 49(3), 1110-1122 (2018). https://doi.org/10.1109/TCYB.2018.2797176</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name><surname>Bradley</surname>, <given-names>M. M.</given-names></string-name>,
          <string-name><surname>Lang</surname>, <given-names>P. J.</given-names></string-name>:
          <article-title>Measuring emotion: the self-assessment manikin and the semantic differential</article-title>.
          <source>Journal of Behavior Therapy and Experimental Psychiatry</source>
          <volume>25</volume>(<issue>1</issue>),
          <fpage>49</fpage>–<lpage>59</lpage>
          (<year>1994</year>).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name><surname>Onton</surname>, <given-names>J. A.</given-names></string-name>,
          <string-name><surname>Makeig</surname>, <given-names>S.</given-names></string-name>:
          <article-title>High-frequency broadband modulation of electroencephalographic spectra</article-title>.
          <source>Frontiers in Human Neuroscience</source>
          <volume>3</volume>,
          <fpage>61</fpage>
          (<year>2009</year>). https://doi.org/10.3389/neuro.09.061.2009
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name><surname>Grass</surname>, <given-names>A. M.</given-names></string-name>,
          <string-name><surname>Gibbs</surname>, <given-names>F. A.</given-names></string-name>:
          <article-title>A Fourier transform of the electroencephalogram</article-title>.
          <source>Journal of Neurophysiology</source>
          <volume>1</volume>(<issue>6</issue>),
          <fpage>521</fpage>–<lpage>526</lpage>
          (<year>1938</year>). https://doi.org/10.1152/jn.1938.1.6.521
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name><surname>Korats</surname>, <given-names>G.</given-names></string-name>,
          <string-name><surname>Le Cam</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Ranta</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>Hamid</surname>, <given-names>M.</given-names></string-name>:
          <article-title>Applying ICA in EEG: choice of the window length and of the decorrelation method</article-title>.
          <source>In: International Joint Conference on Biomedical Engineering Systems and Technologies</source>,
          pp. <fpage>269</fpage>–<lpage>286</lpage>. Vilamoura, Springer (<year>2012</year>).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name><surname>Sebe</surname>, <given-names>N.</given-names></string-name>,
          <string-name><surname>Lew</surname>, <given-names>M. S.</given-names></string-name>,
          <string-name><surname>Cohen</surname>, <given-names>I.</given-names></string-name>,
          <string-name><surname>Garg</surname>, <given-names>A.</given-names></string-name>,
          <string-name><surname>Huang</surname>, <given-names>T. S.</given-names></string-name>:
          <article-title>Emotion recognition using a Cauchy Naive Bayes classifier</article-title>.
          <source>In: Object recognition supported by user interaction for service robots, vol. 1</source>,
          pp. <fpage>17</fpage>–<lpage>20</lpage>. IEEE (<year>2002</year>).
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name><surname>Murugappan</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Ramachandran</surname>, <given-names>N.</given-names></string-name>,
          <string-name><surname>Sazali</surname>, <given-names>Y.</given-names></string-name>:
          <article-title>Classification of human emotion from EEG using discrete wavelet transform</article-title>.
          <source>Journal of Biomedical Science and Engineering</source>
          <volume>3</volume>(<issue>4</issue>),
          <fpage>390</fpage>
          (<year>2010</year>).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name><surname>Soleymani</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Asghari-Esfeden</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Fu</surname>, <given-names>Y.</given-names></string-name>,
          <string-name><surname>Pantic</surname>, <given-names>M.</given-names></string-name>:
          <article-title>Analysis of EEG signals and facial expressions for continuous emotion detection</article-title>.
          <source>IEEE Transactions on Affective Computing</source>
          <volume>7</volume>(<issue>1</issue>),
          <fpage>17</fpage>–<lpage>28</lpage>
          (<year>2015</year>).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name><surname>Jirayucharoensak</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Pan-Ngum</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Israsena</surname>, <given-names>P.</given-names></string-name>:
          <article-title>EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation</article-title>.
          <source>The Scientific World Journal</source>
          (<year>2014</year>).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name><surname>Zheng</surname>, <given-names>W. L.</given-names></string-name>,
          <string-name><surname>Lu</surname>, <given-names>B. L.</given-names></string-name>:
          <article-title>Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks</article-title>.
          <source>IEEE Transactions on Autonomous Mental Development</source>
          <volume>7</volume>(<issue>3</issue>),
          <fpage>162</fpage>–<lpage>175</lpage>
          (<year>2015</year>). https://doi.org/10.1109/TAMD.2015.2431497
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name><surname>Yan</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Chen</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Deng</surname>, <given-names>S.</given-names></string-name>:
          <article-title>A EEG-based emotion recognition model with rhythm and time characteristics</article-title>.
          <source>Brain Informatics</source>
          <volume>6</volume>(<issue>1</issue>),
          <fpage>7</fpage>
          (<year>2019</year>).
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name><surname>Lan</surname>, <given-names>Z.</given-names></string-name>,
          <string-name><surname>Sourina</surname>, <given-names>O.</given-names></string-name>,
          <string-name><surname>Wang</surname>, <given-names>L.</given-names></string-name>,
          <string-name><surname>Scherer</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>Müller-Putz</surname>, <given-names>G. R.</given-names></string-name>:
          <article-title>Domain adaptation techniques for EEG-based emotion recognition: a comparative study on two public datasets</article-title>.
          <source>IEEE Transactions on Cognitive and Developmental Systems</source>
          <volume>11</volume>(<issue>1</issue>),
          <fpage>85</fpage>–<lpage>94</lpage>
          (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name><surname>Yan</surname>, <given-names>K.</given-names></string-name>,
          <string-name><surname>Kou</surname>, <given-names>L.</given-names></string-name>,
          <string-name><surname>Zhang</surname>, <given-names>D.</given-names></string-name>:
          <article-title>Learning domain-invariant subspace using domain features and independence maximization</article-title>.
          <source>IEEE Transactions on Cybernetics</source>
          <volume>48</volume>(<issue>1</issue>),
          <fpage>288</fpage>–<lpage>299</lpage>
          (<year>2017</year>).
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name><surname>Pan</surname>, <given-names>S. J.</given-names></string-name>,
          <string-name><surname>Tsang</surname>, <given-names>I. W.</given-names></string-name>,
          <string-name><surname>Kwok</surname>, <given-names>J. T.</given-names></string-name>,
          <string-name><surname>Yang</surname>, <given-names>Q.</given-names></string-name>:
          <article-title>Domain adaptation via transfer component analysis</article-title>.
          <source>IEEE Transactions on Neural Networks</source>
          <volume>22</volume>(<issue>2</issue>),
          <fpage>199</fpage>–<lpage>210</lpage>
          (<year>2010</year>).
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name><surname>Li</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Qiu</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Du</surname>, <given-names>C.</given-names></string-name>,
          <string-name><surname>Wang</surname>, <given-names>Y.</given-names></string-name>,
          <string-name><surname>He</surname>, <given-names>H.</given-names></string-name>:
          <article-title>Domain Adaptation for EEG Emotion Recognition Based on Latent Representation Similarity</article-title>.
          <source>IEEE Transactions on Cognitive and Developmental Systems</source>
          (<year>2019</year>). https://doi.org/10.1109/TCDS.2019.2949306
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name><surname>Ganin</surname>, <given-names>Y.</given-names></string-name>,
          <string-name><surname>Lempitsky</surname>, <given-names>V.</given-names></string-name>:
          <article-title>Unsupervised domain adaptation by backpropagation</article-title>.
          <source>In: International Conference on Machine Learning</source>,
          pp. <fpage>1180</fpage>–<lpage>1189</lpage>. PMLR (<year>2015</year>).
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <source>The Institute for Signal and Information Processing, Temple University EEG Corpus</source>,
          https://www.isip.piconepress.com/projects/tuh_eeg. Last accessed 31 May 2020.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
          <string-name><surname>Zheng</surname>, <given-names>W. L.</given-names></string-name>,
          <string-name><surname>Lu</surname>, <given-names>B. L.</given-names></string-name>:
          <article-title>A multimodal approach to estimating vigilance using EEG and forehead EOG</article-title>.
          <source>Journal of Neural Engineering</source>
          <volume>14</volume>(<issue>2</issue>),
          <fpage>026017</fpage>
          (<year>2017</year>).
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
          <string-name><surname>Obeid</surname>, <given-names>I.</given-names></string-name>,
          <string-name><surname>Picone</surname>, <given-names>J.</given-names></string-name>:
          <article-title>The Temple University Hospital EEG data corpus</article-title>.
          <source>Frontiers in Neuroscience</source>
          <volume>10</volume>,
          <fpage>196</fpage>
          (<year>2016</year>). https://doi.org/10.3389/fnins.2016.00196
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>