<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Data Augmentation for Domain-Adversarial Training in EEG-based Emotion Recognition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Ekaterina</forename><surname>Lebedeva</surname></persName>
							<email>kate.1ebedeva@yandex.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Moscow State University</orgName>
								<address>
									<settlement>Moscow</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Data Augmentation for Domain-Adversarial Training in EEG-based Emotion Recognition</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A0A3EBD3A5D50FDDCDF254D5D0D70F9D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T02:00+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Electroencephalography (EEG)</term>
					<term>Emotion recognition</term>
					<term>Signal processing</term>
					<term>Deep learning</term>
					<term>Domain adaptation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Emotion recognition is an important and challenging task for modern affective computing systems. Neuronal action potentials measured by electroencephalography (EEG) provide an important data source with high temporal resolution and direct relevance to human brain activity. EEG-based evaluation of the emotional state is complicated by the lack of labeled training data and by strong subject- and session-dependencies. Various adaptation techniques can be applied to train a model robust to domain mismatch in EEG data, but the amount of available training data is still insufficient. In this work we propose a new approach based on domain-adversarial training that combines the available training corpus with a much larger unlabeled dataset in a semi-supervised training framework. A detailed analysis of available datasets and existing methods for the emotion recognition task is presented. The degradation of emotion recognition performance caused by subject- and session-dependencies was measured on the DEAP dataset, demonstrating the need for approaches that utilize larger datasets in order to obtain a better generalized model.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Recently, there has been growing interest in using the EEG signal to analyze the functioning of the human brain. EEG processing results are now used in the creation of brain-computer interfaces (BCIs) and in neurophysiology studies. Emotion recognition is one of the essential tasks in these fields. Works on affective disorders report that analysing the EEG signal during emotion task manipulations could provide an assessment of risk for major depressive disorder <ref type="bibr" target="#b0">[1]</ref>. There are many works on the subject of affective brain-computer interactions. The authors of these works believe that recognizing emotions from the EEG signal will allow robots and machines to read people's interactive intentions and states and respond to human emotions <ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref>. Moreover, solving the problem of recognizing emotions may contribute to the development of neuromarketing for determining consumer preferences <ref type="bibr" target="#b4">[5]</ref>. Other areas of application are workload estimation <ref type="bibr" target="#b5">[6]</ref> and driving fatigue detection <ref type="bibr" target="#b6">[7]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Electroencephalography</head><p>Electroencephalography is a multichannel continuous signal recorded with electrodes that measure differences between electric potentials registered in two areas of the brain. During recording, the electrodes are placed on the surface of the scalp. To improve skin conductivity, a gel is applied to the contact surface of the electrodes, and elastic helmets are used to fixate the electrodes on the head. In recent years a number of accessible consumer-level brain-computer interfaces (BCI) became available on the market <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref>. These devices usually include fewer electrodes, often used without conductive/adhesive gel. This makes BCI technology cheaper and more affordable, and as a result more data is becoming available. EEG recordings are always contaminated with artifacts, such as EOG (ocular), ECG (cardiac), and EMG (muscle) components and noise. Therefore, the processing pipeline should include signal preprocessing to handle this problem automatically. Different processes are reflected in different frequency bands of the electrical activity of the brain: for example, the alpha rhythm (8 to 12 Hz) reflects attentional demands, and beta activity (16 to 24 Hz) reflects emotional and cognitive processes in the brain <ref type="bibr" target="#b10">[11]</ref>.</p><p>The following experimental protocols are used for recording EEG signals:</p><p>1. Resting state with eyes open (REO) or with eyes closed (REC). The patient is in a relaxed state and does not think about anything. This procedure is used to analyze the general condition of the patient and is suitable for anyone, including people with disabilities. 2. Event-related potentials (ERPs) <ref type="bibr" target="#b11">[12]</ref>. In such experiments, a signal representing a stimulus is sent from one computer to the computer recording the EEG whenever a stimulus or response occurs. Such stimuli may be periodic light exposure at different frequencies. Segments of EEG data that are time-locked to the event signals are extracted from the overall EEG and averaged. 3. Task-related. Neural activity is recorded during various cognitive tasks. The patient should be relaxed, with attention focused only on the task, such as counting in the mind or reading. 4. Somnography <ref type="bibr" target="#b12">[13]</ref>. The EEG is recorded during sleep, for analyzing the stages of sleep or the causes of sleep deprivation.</p></div>
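The frequency bands mentioned above can be quantified with standard spectral tools. Below is a minimal illustrative sketch, not taken from the paper: per-band power for one EEG channel estimated with Welch's method, where the sampling rate and the synthetic test signal are assumptions for the example.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate, Hz

def band_power(signal, fs, lo, hi):
    """Average PSD of `signal` within the [lo, hi] Hz band (Welch estimate)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic 10 s channel: a 10 Hz (alpha-band) oscillation plus weak noise.
t = np.arange(0, 10, 1 / FS)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(x, FS, 8, 12)   # alpha rhythm, 8-12 Hz
beta = band_power(x, FS, 16, 24)   # beta activity, 16-24 Hz
# For this signal the alpha band dominates the beta band.
```

In a real pipeline the same function would be applied per channel and per analysis frame after artifact removal.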
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Emotions</head><p>Emotion is a mental state and an affective reaction towards an event based on subjective experience. It is hard to measure because it is a subjective feeling.</p><p>Emotions can be evaluated in terms of "positive" and "negative" or "like" and "dislike" <ref type="bibr" target="#b4">[5]</ref>. It is also possible to distinguish a set of basic emotions such as anger, fear, sadness, disgust, happiness, and surprise <ref type="bibr" target="#b33">[34]</ref> and solve the corresponding classification problem. Researchers often use a two- or three-dimensional space to model emotions <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>, where different emotion points can be plotted on a 2D plane consisting of a Valence axis and an Arousal axis (Fig. <ref type="figure">1a</ref>) or in a 3D space with an additional Dominance axis (Fig. <ref type="figure">1b</ref>). Fig. <ref type="figure">1</ref>: Emotion space models <ref type="bibr" target="#b15">[16]</ref>.</p></div>
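The 2D valence-arousal plane can be discretized into emotion quadrants. The sketch below illustrates this common convention; the quadrant names and the assumption of a 1-9 SAM rating scale with midpoint 5 are illustrative choices, not taken from the paper.

```python
# Map a 2D valence-arousal rating to one of four emotion quadrants.
# Assumes ratings on a 1-9 scale (SAM-style) with neutral midpoint 5.

def va_quadrant(valence, arousal, midpoint=5.0):
    if valence >= midpoint and arousal >= midpoint:
        return "high-arousal positive"   # e.g. excited, happy
    if valence < midpoint and arousal >= midpoint:
        return "high-arousal negative"   # e.g. angry, afraid
    if valence < midpoint:
        return "low-arousal negative"    # e.g. sad, bored
    return "low-arousal positive"        # e.g. calm, relaxed

print(va_quadrant(7.5, 8.0))  # high-arousal positive
print(va_quadrant(2.0, 3.0))  # low-arousal negative
```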
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">Methods</head><p>One of the earliest works on emotion recognition from EEG was presented in 1997 <ref type="bibr" target="#b16">[17]</ref>. The classical machine-learning approach requires extracting reliable, informative features closely related to the emotional state of the subject. The signal is divided into components by Independent Component Analysis (ICA) <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20]</ref> to separate artifacts. The main method for extracting spectral features is the Fourier transform <ref type="bibr" target="#b31">[32]</ref>. A detailed description of popular features for analysing the EEG signal is presented in <ref type="bibr" target="#b17">[18]</ref>. In the machine-learning approach, discriminant analysis <ref type="bibr" target="#b34">[35]</ref> or Bayesian analysis <ref type="bibr" target="#b33">[34]</ref> can be used for classification.</p><p>A more modern alternative to hand-crafted ML features is deep learning. This approach often combines machine-learning feature extraction with neural-network classification: networks such as SAE <ref type="bibr" target="#b20">[21]</ref> and LSTM <ref type="bibr" target="#b21">[22]</ref> are often employed <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b38">39,</ref><ref type="bibr" target="#b22">23]</ref>. A fully neural solution has recently been proposed: the SAE+LSTM method <ref type="bibr" target="#b22">[23]</ref> on the DEAP dataset. In this work a Stacked AutoEncoder (SAE) is used for solving the ICA problem, and the emotion timing modeling is based on a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN).</p></div>
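The ICA step described above can be sketched with scikit-learn's FastICA. This is an illustrative example under stated assumptions: the two synthetic sources and the 2x2 mixing matrix stand in for real EEG channels; a real pipeline would inspect the estimated components, zero out artifact-like ones, and reconstruct the signal.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000
t = np.linspace(0, 8, n_samples)

# Two synthetic sources: a fast "neural" oscillation and a slow
# square wave playing the role of an eye-blink-like artifact.
s1 = np.sin(2 * np.pi * 10 * t)
s2 = np.sign(np.sin(2 * np.pi * 1 * t))
S = np.c_[s1, s2]

# Mix the sources into two "electrodes".
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = S @ A.T

# Unmix with FastICA; each column of `components` is an estimated source.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)  # shape (n_samples, n_components)
# An artifact component would now be zeroed before ica.inverse_transform.
```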
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.4">Data Augmentation</head><p>Neural networks require large amounts of training data, while EEG datasets usually contain data from a small number of subjects, since special devices and controlled experimental conditions are required to collect the data. Several datasets could be combined to increase the amount of training data, but each dataset was collected with different devices, a different experimental protocol, and different stimuli, which makes training on data from several sources difficult. Another problem is the low prediction accuracy for subjects whose data were not available in the training set. Various domain adaptation techniques are used to reduce data variability <ref type="bibr" target="#b39">[40]</ref>.</p><p>The volume of the union of emotion-labeled datasets is still not large enough. In this work a solution for expanding the data with other EEG datasets is proposed. EEG datasets without emotional labels can be used if they contain video recordings of the experiment: the data can be labeled with emotions detected from the video. A similar approach was suggested for emotion recognition from speech <ref type="bibr" target="#b23">[24]</ref>. This increases the amount of work but helps to expand the training set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Datasets</head><p>There are several datasets for the EEG-based emotion recognition task. Every corpus was collected according to a unique protocol. Available datasets for solving the problem are described below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">DEAP</head><p>The DEAP dataset (A Database for Emotion Analysis Using Physiological Signals) <ref type="bibr" target="#b24">[25]</ref> is widely used in the EEG-based emotion recognition area <ref type="bibr" target="#b38">[39,</ref><ref type="bibr" target="#b22">23,</ref><ref type="bibr" target="#b39">40]</ref>. This dataset was collected as part of the development of an adaptive music video recommendation system. The experiment involved 32 people. Data was collected from subjects while they watched 40 one-minute music video stimuli. During the experiment, participants performed self-assessment of their levels of arousal, valence and dominance (Fig. <ref type="figure">2</ref>). As a result, a 32-channel electroencephalogram and peripheral physiological signals were recorded. For 22 of the 32 participants, frontal face video was also recorded. The dataset is convenient, as it contains not only the original data in BDF (Biosemi Data Format) but also preprocessed data in MATLAB and Python formats. The dataset is open only for academic research and is available for download after signing the EULA (End User License Agreement).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">eNTERFACE-2006</head><p>Another popular dataset was made as part of the eNTERFACE-2006 project <ref type="bibr" target="#b25">[26]</ref>. The purpose of the project was to collect sufficient data to build an integrated framework for multi-modal emotion recognition. Data collection was carried out for 5 male subjects in 3 sessions. The stimuli are images from the IAPS (International Affective Picture System) <ref type="bibr" target="#b26">[27]</ref>, which consists of 1196 pictures evaluated in the arousal-valence dimensions. For the experiment 3 groups of images were selected: 106 calm, 71 positive exciting, and 150 negative exciting. Each session lasted 15 minutes and consisted of 30 blocks, each block being a succession of 5 images corresponding to a single emotion. EEG and fNIRS signals with peripheral information were recorded in .bdf format. Eventually the data were marked not only with a preliminary evaluation of the images but also with the participants' self-assessment. Fig. <ref type="figure">2</ref>: The Self-Assessment Manikin (SAM) for rating the affective dimensions of valence, arousal, and dominance levels <ref type="bibr" target="#b29">[30]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">SEED, SEED-IV</head><p>SEED (SJTU Emotion EEG Dataset) <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b37">38]</ref> contains data from 15 subjects in 3 sessions with an interval of about one week. As stimuli, 15 video clips lasting 4 minutes were selected. During the experiment, subjects conducted self-assessments based on "positive," "negative," or "neutral" terms for evaluating emotions. The dataset contains preprocessed EEG in 45 .mat (MATLAB) files; the EEG data is downsampled, preprocessed, and segmented. In addition, the dataset comprises files with extracted features, including the differential entropy (DE) of the EEG signals, which is convenient for testing classifiers. SEED-IV <ref type="bibr" target="#b28">[29]</ref> is another dataset collected later. In this experiment a different system of emotion classification was used: happy, sad, neutral, and fear. In addition to EEG, eye movement information was recorded with eye-tracking glasses, making SEED-IV a multi-modal dataset for emotion recognition. The dataset contains raw EEG data, features extracted from the EEG (differential entropy and power spectral density), and raw data and extracted features of eye movements, all in .mat format. Both of these datasets can be downloaded after signing the license agreement.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Neuromarketing</head><p>Neuromarketing is the field of marketing research that helps to determine consumers' preferences and predict their behavior using unconscious processes, which helps ensure effective utilization of the product. In <ref type="bibr" target="#b4">[5]</ref> the Neuromarketing dataset was created for building a predictive modeling framework to better understand consumer choice. This corpus was made by recording an EEG signal from 40 subjects while they viewed consumer products. During the experiment, participants marked e-commerce products in terms of "likes" and "dislikes". The resulting dataset is publicly available and can be used in scientific works and marketing research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5">Imagined Emotions</head><p>A different experiment design that included cue-based emotion stimuli was presented in <ref type="bibr" target="#b30">[31]</ref>. Each participant listened to a sample of voice recording that suggested a specific emotional state. A participant had to imagine a corresponding emotional scenario or to recall a related emotional experience. The presented dataset consists of EEG signals collected from 32 subjects who have experienced 15 emotional states, and participants' assessments of the authenticity and intensity of the tested emotions on a scale of 1 to 9.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Related Works</head><p>Emotion recognition is an analysis of multi-channel samples of EEG data. Each sample is considered to have a single emotional state that is supposed to be constant during the recording. Depending on the system of classification of emotions that was used in the experiment design, either the emotion must be determined from a preassigned set, or an assessment should be given on the Arousal-Valence(-Dominance) scales. Thus, the emotion recognition task can be considered a classification or a regression problem.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Preprocessing and Feature Extraction</head><p>Electroencephalogram data consists not only of recordings of brain activity but also of a number of artifact components of various origins. Therefore extensive filtering and artifact removal procedures must be included as a necessary part of the analysis pipeline. Deletion of recording sections with artifacts can be performed by specialists, but that requires a thorough and expensive analysis of each sample. After the initial cleaning step, the multi-channel signal can be decomposed into quasi-independent components by solving a blind source separation task. This can be achieved with Independent Component Analysis (ICA) or with more recent autoencoder-based approaches.</p><p>During the feature extraction step, the EEG signal is divided into short time frames. EEG features are extracted from each frame and combined into a feature sequence. The signal is represented as a set of overlapping frames obtained with a window function. This can be a rectangular window, but usually a smoothing window, such as the Hanning window, is used. For spectral analysis of the EEG data, the Fourier transform <ref type="bibr" target="#b31">[32]</ref> is used to obtain a frequency domain representation of each window. Feature extraction can then be performed independently for each frequency band. Metrics and statistics such as the maximum, minimum, and average amplitude and the power spectral density (PSD) can be utilized as informative features. The following cross-channel features can be calculated:</p><p>1. Root Mean Square</p><formula xml:id="formula_0">RMS = \sqrt{\frac{1}{N}\sum_{n=1}^{N} S_n^2}<label>(1)</label></formula><p>where S_n is the amplitude of the n-th channel. 2. Pearson Correlation Coefficient between two channels</p><formula xml:id="formula_2">PCC = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i - \bar{y})^2}}<label>(2)</label></formula><p>3. Magnitude Squared Coherence Estimate</p><formula xml:id="formula_1">MSCE = \frac{|P_{ij}|^2}{P_i \cdot P_j}<label>(3)</label></formula><p>where P_{ij} is the cross-PSD of channels i and j, and P_i is the PSD of channel i. A more detailed review of feature extraction methods can be found in <ref type="bibr" target="#b17">[18]</ref>.</p></div>
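The three cross-channel features above can be computed directly with NumPy and SciPy. This is a sketch for a single analysis frame; the sampling rate, frame length, and random stand-in data are assumptions for the example.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 128                                   # assumed sampling rate, Hz
frame = rng.standard_normal((2, fs * 4))   # 2 channels, 4 s frame

# (1) Root mean square of the channel amplitudes
rms = np.sqrt(np.mean(frame ** 2))

# (2) Pearson correlation coefficient between channels 0 and 1
pcc = np.corrcoef(frame[0], frame[1])[0, 1]

# (3) Magnitude-squared coherence estimate between the two channels,
#     averaged over frequency bins
freqs, msc = coherence(frame[0], frame[1], fs=fs, nperseg=fs)
msce = msc.mean()
```

`scipy.signal.coherence` computes |P_xy|^2 / (P_xx P_yy) per frequency bin via Welch's method; averaging over bins (or over a band of interest) yields a scalar feature, as in Eq. (3).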
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Model Training</head><p>The emotion recognition problem in feature space can be approached with one of the machine learning methods for classification. In <ref type="bibr" target="#b33">[34]</ref> an emotion recognition method using a Naive Bayes model was proposed. The classification problem under the maximum likelihood framework was formulated as:</p><formula xml:id="formula_3">\hat{y} = \arg\max_{y} P(X \mid y)<label>(4)</label></formula><p>where y is the label and X is the feature vector. The Naive Bayes framework assumes that the features in X are independent of each other conditioned on the class label. The paper compares two model distribution assumptions and shows that the Cauchy distribution assumption typically provides better results than the Gaussian one.</p><p>In <ref type="bibr" target="#b34">[35]</ref> a comparison of the K Nearest Neighbours classifier and Linear Discriminant Analysis is presented. The experiment was conducted on a private dataset and showed maximum average classification rates of 83.26% using KNN and 75.21% using LDA. These solutions are suitable for the classification problem where an emotion must be recognized from a given set. If affective labeling is presented as a vector of real values (such as the Arousal-Valence scale), this approach can also be applied with regression methods instead of classification <ref type="bibr" target="#b35">[36]</ref>. Nevertheless, labels are often binarized when evaluating the accuracy of an algorithm.</p></div>
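The maximum-likelihood classification of Eq. (4) can be sketched with scikit-learn's Gaussian Naive Bayes (the Cauchy class-conditional variant discussed in the cited paper is not provided by sklearn, so the Gaussian assumption is used here). The synthetic features below are illustrative stand-ins for EEG feature vectors.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Two well-separated synthetic classes of 4-dimensional feature vectors.
X_pos = rng.normal(loc=1.0, size=(100, 4))   # class 1 ("positive")
X_neg = rng.normal(loc=-1.0, size=(100, 4))  # class 0 ("negative")
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

# GaussianNB fits P(X|y) per class and predicts argmax_y P(X|y)P(y),
# which reduces to Eq. (4) under a uniform class prior.
clf = GaussianNB().fit(X, y)
pred = clf.predict([[1.0, 1.0, 1.0, 1.0]])
# With classes centered at +1 and -1, this point falls in class 1.
```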
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Deep Learning Approach</head><p>Today, neural networks are used widely, since they can recognize deeper, sometimes unexpected patterns in data. In the studied area, deep neural-network-based feature extraction and emotion recognition have begun to be applied intensively.</p><p>In <ref type="bibr" target="#b37">[38]</ref> a Deep Belief Network (DBN) was trained with differential entropy features. The experiment performed classification for three emotional categories on the SEED dataset. The results show that the DBN models obtain higher accuracy than previously considered models such as kNN, LR, and SVM.</p><p>An emotion recognition system that uses deep learning models at two stages of the pipeline was introduced in <ref type="bibr" target="#b22">[23]</ref>. A stacked autoencoder was used for decomposition of the source signal (as a substitute for Independent Component Analysis) and for extracting EEG channel correlations. An LSTM-RNN network is used for emotion classification based on frequency band power features extracted from the SAE output. The mean accuracy of emotion recognition, calculated on binarized labels, reached 81.10% for valence and 74.38% for arousal on the DEAP dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Domain Adaptation</head><p>Training an accurate model requires an approach that is robust to variations in the individual characteristics of participants and in recording devices, since EEG data depends strongly on both the device and the subject. It is important to apply a domain adaptation technique so that the model compensates for subject variability and for heterogeneity in technical specifications.</p><p>The paper <ref type="bibr" target="#b39">[40]</ref> compares different domain adaptation techniques on two datasets: DEAP and SEED. Transfer Component Analysis (TCA) <ref type="bibr" target="#b41">[42]</ref> and Maximum Independence Domain Adaptation (MIDA) <ref type="bibr" target="#b40">[41]</ref> achieved the best results for within-dataset subject adaptation. It is shown that applying these techniques leads to an improvement of up to 20.66% over the baseline accuracy obtained without any domain adaptation technique. The application of these techniques to cross-dataset domain adaptation was also studied; the article concludes that TCA and MIDA can effectively improve the accuracy by 7.25%-13.40% compared to the same baseline.</p><p>In <ref type="bibr" target="#b42">[43]</ref> another approach to domain adaptation was considered, based on neural networks that are trained to solve emotion and domain recognition problems jointly. Samples of feature vectors from two domains are fed to the model in equal quantities, producing an emotion label for each EEG sample. The first several layers of the neural network act as a feature extractor, producing a fixed-dimension representation of EEG samples in a latent space. These representations are used to solve two different tasks: emotion label classification and domain recognition. A gradient reversal layer is applied to the domain predictor <ref type="bibr" target="#b43">[44]</ref>, leading to an adversarial training scheme during which the parameters of the feature extractor layers are updated to make the embedding distributions of different domains statistically similar. Fully connected layers build the representations for the label predictor, which estimates the emotion class for each sample. During training, samples from one domain carry labels, whereas the second domain is unlabeled. The label predictor is optimized to minimize the classification error on the first domain.</p><p>At test time, the model receives only unlabeled data. This method was compared with multiple domain adaptation algorithms on the SEED and DEAP benchmarks and proved superior in both cross-subject and cross-session adaptation.</p></div>
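The gradient reversal layer is simple to state: the forward pass is the identity, and the backward pass multiplies the incoming gradient by a negative factor. The NumPy sketch below makes this explicit with hand-written backpropagation through a tiny linear feature extractor and linear domain classifier; the toy architecture is an illustration, not the paper's model.

```python
import numpy as np

def grl_forward(features):
    """Forward pass of the gradient reversal layer: identity."""
    return features

def grl_backward(grad, lam=1.0):
    """Backward pass: negate (and scale) the incoming gradient."""
    return -lam * grad

rng = np.random.default_rng(0)
x = rng.standard_normal(8)             # one input sample
W_feat = rng.standard_normal((4, 8))   # linear "feature extractor"
w_dom = rng.standard_normal(4)         # linear "domain classifier"

# Forward: features -> GRL -> domain logit
h = W_feat @ x
logit = w_dom @ grl_forward(h)

# Backward for a domain loss with upstream gradient g = dL/dlogit:
g = 1.0
grad_h = grl_backward(g * w_dom, lam=1.0)  # gradient passing through the GRL
grad_W_feat = np.outer(grad_h, x)          # gradient reaching the extractor

# The feature extractor is pushed *against* the domain objective:
# its gradient is the exact negation of the unreversed one (for lam=1).
assert np.allclose(grad_W_feat, -np.outer(g * w_dom, x))
```

In a full DANN the same reversed gradient flows into every feature-extractor layer, while the label predictor's gradient flows through unchanged, so the shared features become domain-invariant yet label-discriminative.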
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">A Proposed Approach</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Domain-Adversarial Training</head><p>The problem of domain adaptation is crucial in the emotion recognition task. The architectures of the proposed approach are presented in Fig. <ref type="figure" target="#fig_2">3, 4</ref>. This approach combines the ideas presented in <ref type="bibr" target="#b42">[43]</ref> and <ref type="bibr" target="#b43">[44]</ref>. In Fig. <ref type="figure" target="#fig_1">3</ref> the domain classifier predicts which domain the data belongs to, and the feature extractor parameters are updated by adversarial training to make the distributions of data representations of different domains more similar. In Fig. <ref type="figure" target="#fig_2">4</ref> the input is data from two domains, labeled and unlabeled. Representations of the labeled domain are sent to the label predictor and the domain discriminator; representations of the unlabeled data are transmitted only to the domain discriminator, which determines whether the two domains match.</p><p>These architectures differ in that in the first case the classification of domains occurs independently for input samples, while in the second pairwise comparisons are performed. In future work these two approaches will be compared and the better one identified.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Data Augmentation</head><p>To improve the performance of the model, the large number of EEG datasets without affective labels <ref type="bibr" target="#b44">[45]</ref> can be utilized for the emotion recognition task. To train a domain classifier (for subject identity recognition), more data can be used, since labeled data is not needed. Because the domain predictor is trained on a much larger number of domains and a larger number of training samples, it can potentially be more robust to channel-specific variability. The emotion classifier is still trained on the same amount of data, but its performance can improve since the latent representations are trained to be domain-independent. The DEAP dataset includes data from only 32 subjects, and other datasets for EEG-based emotion recognition likewise contain limited subject variability. At the same time, EEG datasets without affective labeling are much larger: for example, the Temple University Hospital (TUH) EEG data corpus <ref type="bibr" target="#b46">[47]</ref> contains EEG data from more than 10000 participants. It is more efficient to train neural networks on such data volumes, so the proposed solution uses unlabeled data. The neural network will thus be trained on a larger set of subjects and will therefore provide a model that generalizes better to new subjects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Auto-labeling of EEG Datasets</head><p>Another possible solution is to enrich the training sample using multimodal emotion recognition. For this purpose, unlabeled EEG datasets that contain other modalities, such as video recordings of the subject's face, can be used, for example SEED-VIG <ref type="bibr" target="#b45">[46]</ref>. The data can then be labeled automatically by recognizing the emotions experienced by the participants from the video. Unfortunately, EEG datasets with recordings of such modalities are rare, so this approach will probably not allow the training data to be expanded significantly.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">A Preliminary Motivation Study</head><p>Below is an illustration of the fact that the problem of cross-subject adaptation really requires a solution. An experiment was conducted demonstrating the decrease in emotion recognition accuracy when a subject's data is absent from the training sample. The preprocessed data from the DEAP dataset was used. The PSD for five frequency bands was extracted as features. The following classifiers were trained: SVM and Random Forest Regression. The data were divided into training, validation, and test samples in a ratio of 6 : 1 : 1 respectively. In the first experiment, the data of each subject was divided between the samples. In the second experiment, the data of each subject belonged entirely to a single sample. Table 1 shows the differences in emotion recognition accuracy for these two experiments. The results confirm that learning on held-out subjects is problematic.</p></div>
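The subject-wise split of the second experiment can be implemented with scikit-learn's GroupShuffleSplit, which keeps all trials of a subject on one side of the split so that test subjects are never seen during training. The subject and trial counts below follow DEAP (32 subjects, 40 trials each); the 0.125 test fraction mirrors the 6 : 1 : 1 ratio's test share and is an assumption for the example.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_subjects, trials_per_subject = 32, 40
# groups[i] identifies the subject that produced trial i.
groups = np.repeat(np.arange(n_subjects), trials_per_subject)
X = np.zeros((groups.size, 5))  # placeholder 5-band PSD feature matrix

splitter = GroupShuffleSplit(n_splits=1, test_size=0.125, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=groups))

# No subject appears on both sides of the split.
assert not set(groups[train_idx]) & set(groups[test_idx])
```

In contrast, the first experiment corresponds to an ordinary shuffled split that ignores `groups`, letting every subject contribute trials to training, validation, and test alike.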
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion and Future Work</head><p>This paper describes the EEG-based emotion recognition task and its existing solution methods. The problem of domain mismatch and of insufficient data for training neural networks was formulated. As a solution, the application of existing domain adaptation techniques with data augmentation from datasets without emotional labels was proposed.</p><p>In the future, it is planned to conduct testing on the DEAP dataset, using the TUH EEG data corpus, to evaluate how robust emotion classification would be to subject, session, and channel differences. It is also planned to use the SEED dataset and perform the same analysis to study the task of training a dataset-independent emotion recognition model. A detailed validation study will be performed to compare the results with existing methods of domain adaptation.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 :</head><label>3</label><figDesc>Fig. 3: The model architecture of the domain-adversarial training with domain classifier.</figDesc><graphic coords="10,134.77,141.01,345.83,207.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 :</head><label>4</label><figDesc>Fig. 4: The model architecture of the domain-adversarial training with domain discriminator.</figDesc><graphic coords="10,134.77,440.71,345.81,163.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Experiment results</figDesc><table><row><cell></cell><cell cols="2">(a) For SVM classifier</cell></row><row><cell cols="3">Rating scale 1 st experiment 2 nd experiment</cell></row><row><cell>Valence</cell><cell>68.4%</cell><cell>52.2%</cell></row><row><cell>Arousal</cell><cell>65.1%</cell><cell>57.7%</cell></row><row><cell cols="2">Dominance 68.9%</cell><cell>52.4%</cell></row><row><cell cols="3">(b) For Random Forest Regression</cell></row><row><cell cols="3">Rating scale 1 st experiment 2 nd experiment</cell></row><row><cell>Valence</cell><cell>83.2%</cell><cell>47.6%</cell></row><row><cell>Arousal</cell><cell>82.6%</cell><cell>60.3%</cell></row><row><cell cols="2">Dominance 81.8%</cell><cell>53.1%</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Frontal EEG asymmetry during emotional challenge differentiates individuals with and without lifetime major depressive disorder</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Stewart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Coan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">N</forename><surname>Towers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J B</forename><surname>Allen</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jad.2010.08.029</idno>
		<ptr target="https://doi.org/10.1016/j.jad.2010.08.029" />
	</analytic>
	<monogr>
		<title level="j">Journal of Affective Disorders</title>
		<imprint>
			<biblScope unit="volume">129</biblScope>
			<biblScope unit="issue">1-3</biblScope>
			<biblScope unit="page" from="167" to="174" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Cross-Subject EEG Feature Selection for Emotion Recognition Using Transfer Recursive Feature Elimination</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnbot.2017.00019</idno>
		<ptr target="https://doi.org/10.3389/fnbot.2017.00019" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Neurorobotics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">EEG theta and mu oscillations during perception of human and robot actions</title>
		<author>
			<persName><forename type="first">B</forename><surname>Urgen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Plank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ishiguro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Poizner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saygin</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnbot.2013.00019</idno>
		<ptr target="https://doi.org/10.3389/fnbot.2013.00019" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Neurorobotics</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Affect detection: An interdisciplinary review of models, methods, and their applications</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Calvo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>D'Mello</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on affective computing</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="18" to="37" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Analysis of EEG signals and its application to neuromarketing</title>
		<author>
			<persName><forename type="first">M</forename><surname>Yadava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Saini</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11042-017-4580-6</idno>
		<ptr target="https://doi.org/10.1007/s11042-017-4580-6" />
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">76</biblScope>
			<biblScope unit="page" from="19087" to="19111" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Estimation of task workload from EEG data: new and current tools and perspectives</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Kothe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Makeig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Annual International Conference of the IEEE Engineering in Medicine and Biology Society</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="6547" to="6551" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">EEG-based vigilance estimation using extreme learning machines</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">C</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">102</biblScope>
			<biblScope unit="page" from="135" to="143" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://choosemuse.com" />
		<title level="m">Muse Homepage</title>
				<imprint>
			<date type="published" when="2020-05-30">30 May 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="https://www.emotiv.com" />
		<title level="m">Emotiv Homepage</title>
				<imprint>
			<date type="published" when="2020-05-30">30 May 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="http://neurosky.com" />
		<title level="m">Neurosky Homepage</title>
				<imprint>
			<date type="published" when="2020-05-30">30 May 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">EEG alpha activity reflects attentional demands, and beta activity reflects emotional and cognitive processes</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">J</forename><surname>Ray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Cole</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">228</biblScope>
			<biblScope unit="issue">4700</biblScope>
			<biblScope unit="page" from="750" to="752" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">ERPLAB: an open-source toolbox for the analysis of event-related potentials</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lopez-Calderon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Luck</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnhum.2014.00213</idno>
		<ptr target="https://doi.org/10.3389/fnhum.2014.00213" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Human Neuroscience</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Changes in Sleep and Sleep Electroencephalogram During Pregnancy</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Brunner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Münch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Biedermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Huch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Huch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Borbély</surname></persName>
		</author>
		<idno type="DOI">10.1093/sleep/17.7.576</idno>
		<ptr target="https://doi.org/10.1093/sleep/17.7.576" />
	</analytic>
	<monogr>
		<title level="j">Sleep</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="576" to="582" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The emotion probe: studies of motivation and attention</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Lang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">American psychologist</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">372</biblScope>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">What Are Emotions? And How Can They Be Measured</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Scherer</surname></persName>
		</author>
		<idno type="DOI">10.1177/0539018405058216</idno>
		<ptr target="https://doi.org/10.1177/0539018405058216" />
	</analytic>
	<monogr>
		<title level="j">Social Science Information</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="695" to="729" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A review of emotion recognition using physiological signals</title>
		<author>
			<persName><forename type="first">L</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page">2074</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Feature extraction from EEGs associated with emotions</title>
		<author>
			<persName><forename type="first">T</forename><surname>Musha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Terasaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Haque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Ivamitsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Life and Robotics</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="15" to="19" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Feature extraction and selection for emotion recognition from EEG</title>
		<author>
			<persName><forename type="first">R</forename><surname>Jenke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Peer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Buss</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Affective computing</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="327" to="339" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Independent component analysis: A tutorial</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hyvärinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Oja</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<pubPlace>Finland</pubPlace>
		</imprint>
		<respStmt>
			<orgName>LCIS, Helsinki University of Technology</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Independent Component Analysis of Evoked Potentials in EEG</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vinther</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
			<publisher>Orsted, DTU</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion</title>
		<author>
			<persName><forename type="first">P</forename><surname>Vincent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Lajoie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Manzagol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bottou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of machine learning research</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="3371" to="3408" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Long short-term memory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">SAE+LSTM: A New Framework for Emotion Recognition From Multi-Channel EEG</title>
		<author>
			<persName><forename type="first">X</forename><surname>Xing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Xu</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnbot.2019.00037</idno>
		<ptr target="https://doi.org/10.3389/fnbot.2019.00037" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Neurorobotics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">37</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Emotion recognition in speech using cross-modal transfer in the wild</title>
		<author>
			<persName><forename type="first">S</forename><surname>Albanie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nagrani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vedaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th ACM international conference on Multimedia</title>
				<meeting>the 26th ACM international conference on Multimedia</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="292" to="301" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">DEAP: A database for emotion analysis using physiological signals</title>
		<author>
			<persName><forename type="first">S</forename><surname>Koelstra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Muhl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Soleymani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yazdani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ebrahimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Patras</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on affective computing</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="18" to="31" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Emotion detection in the loop from brain signals and facial images</title>
		<author>
			<persName><forename type="first">A</forename><surname>Savran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ciftci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chanel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mota</surname></persName>
		</author>
		<author>
			<persName><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Viet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sankur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rombaut</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the eNTERFACE 2006 Workshop</title>
				<meeting>the eNTERFACE 2006 Workshop</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Lang</surname></persName>
		</author>
		<title level="m">International affective picture system (IAPS): Digitized photographs, instruction manual and affective ratings</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Differential entropy feature for EEG-based emotion classification</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">N</forename><surname>Duan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">6th International IEEE/EMBS Conference on Neural Engineering (NER)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="81" to="84" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Emotionmeter: A multimodal framework for recognizing human emotions</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cichocki</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCYB.2018.2797176</idno>
		<ptr target="https://doi.org/10.1109/TCYB.2018.2797176" />
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on cybernetics</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1110" to="1122" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Measuring emotion: the self-assessment manikin and the semantic differential</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Bradley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Lang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of behavior therapy and experimental psychiatry</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="49" to="59" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">High-frequency broadband modulation of electroencephalographic spectra</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Onton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Makeig</surname></persName>
		</author>
		<idno type="DOI">10.3389/neuro.09.061.2009</idno>
		<ptr target="https://doi.org/10.3389/neuro.09.061.2009" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in human neuroscience</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page">61</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">A Fourier transform of the electroencephalogram</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Grass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Gibbs</surname></persName>
		</author>
		<idno type="DOI">10.1152/jn.1938.1.6.521</idno>
		<ptr target="https://doi.org/10.1152/jn.1938.1.6.521" />
	</analytic>
	<monogr>
		<title level="j">Journal of Neurophysiology</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="521" to="526" />
			<date type="published" when="1938">1938</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Applying ica in choice of the window length and of the decorrelation method</title>
		<author>
			<persName><forename type="first">G</forename><surname>Korats</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Le Cam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ranta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hamid</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Biomedical Engineering Systems and Technologies</title>
				<meeting><address><addrLine>Vilamoura,</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="269" to="286" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Emotion recognition using a cauchy naive bayes classifier</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sebe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Lew</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">S</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Object recognition supported by user interaction for service robots 1</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="17" to="20" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Classification of human emotion from EEG using discrete wavelet transform</title>
		<author>
			<persName><forename type="first">M</forename><surname>Murugappan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ramachandran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sazali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of biomedical science and engineering</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">04</biblScope>
			<biblScope unit="page">390</biblScope>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Analysis of EEG signals and facial expressions for continuous emotion detection</title>
		<author>
			<persName><forename type="first">M</forename><surname>Soleymani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Asghari-Esfeden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pantic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Affective Computing</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="17" to="28" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Jirayucharoensak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pan-Ngum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Israsena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Scientific World Journal</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Lu</surname></persName>
		</author>
		<idno type="DOI">10.1109/tamd.2015.2431497</idno>
		<ptr target="https://doi.org/10.1109/tamd.2015.2431497" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Autonomous Mental Development</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="162" to="175" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">A EEG-based emotion recognition model with rhythm and time characteristics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Deng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Brain Informatics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">7</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Domain adaptation techniques for EEG-based emotion recognition: a comparative study on two public datasets</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Lan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sourina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Scherer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">R</forename><surname>Müller-Putz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Cognitive and Developmental Systems</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="85" to="94" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Learning domain-invariant subspace using domain features and independence maximization</title>
		<author>
			<persName><forename type="first">K</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Cybernetics</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="288" to="299" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Domain adaptation via transfer component analysis</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">W</forename><surname>Tsang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Kwok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="199" to="210" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Domain Adaptation for EEG Emotion Recognition Based on Latent Representation Similarity</title>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCDS.2019.2949306</idno>
		<ptr target="https://doi.org/10.1109/TCDS.2019.2949306" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Cognitive and Developmental Systems</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Unsupervised domain adaptation by backpropagation</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ganin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lempitsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
		<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1180" to="1189" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<monogr>
		<ptr target="https://www.isip.piconepress.com/projects/tuheeg" />
		<title level="m">The Institute for Signal and Information Processing</title>
		<imprint>
			<date type="published" when="2020-05-31">31 May 2020</date>
		</imprint>
		<respStmt>
			<orgName>Temple University EEG Corpus</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">A multimodal approach to estimating vigilance using EEG and forehead EOG</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of neural engineering</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page">026017</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">The temple university hospital EEG data corpus</title>
		<author>
			<persName><forename type="first">I</forename><surname>Obeid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Picone</surname></persName>
		</author>
		<idno type="DOI">10.3389/fnins.2016.00196</idno>
		<ptr target="https://doi.org/10.3389/fnins.2016.00196" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in neuroscience</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">196</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
