<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Facial Expression Recognition using Facial Mask with EMG Sensors</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ivana Kiprijanovska</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Borjan Sazdov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Majstoroski</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simon Stankoski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Gjoreski</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Charles Nduka</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hristijan Gjoreski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Emteq Ltd.</institution>
          ,
          <addr-line>Brighton BN1 9SB</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Electrical Engineering and Information Technologies, Ss. Cyril and Methodius University in Skopje</institution>
          ,
          <addr-line>Skopje</addr-line>
          ,
          <country country="MK">North Macedonia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Faculty of Informatics, Università della Svizzera Italiana</institution>
          ,
          <addr-line>6900 Lugano</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this study, we examine the relationship between surface electromyography (sEMG) and facial expressions using a novel Virtual Reality multi-sensor mask insert - emteqPROtm, equipped with seven sEMG sensors. We designed a dataset collection scenario to analyze the effects of expression intensity, expression duration, and head movements. Using data from 30 participants, we developed a machine learning pipeline that included preprocessing of the sensor data, de-noising, filtering, segmentation, feature engineering, and training a classification model. The experimental results indicate that the mask is suitable for recognizing five posed facial expressions (smile, frown, eyebrows raise, squeezed eyes, and neutral expression). The best-performing model achieved an F1-Macro score of 0.86. Head movement decreased the results to an F1-Macro score of 0.82. The facial expressions that activate the same muscles were the most challenging to differentiate. We also present results on the influence of different scaling and oversampling techniques. Finally, expression duration, intensity, and head movements influence the performance of the models for expression recognition and should be considered in the development of recognition algorithms.</p>
      </abstract>
      <kwd-group>
        <kwd>Facial expressions</kwd>
        <kwd>Surface EMG</kwd>
        <kwd>Wearable sensors</kwd>
        <kwd>Machine learning</kwd>
        <kwd>emteqPRO</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The book “The Expression of the Emotions in Man and Animals” by Charles Darwin reports on the first studies on human emotions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], arguing that the emotions are a universal language. These findings were later supported by Ekman’s groundbreaking work on emotions and their relation to facial expressions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The human face is considered one of the primary affect expression mediators, and as such, it has been explored as the primary marker of human affect. Generally, facial expressions result from the contraction of a set of facial muscles, from which affective states can be inferred [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Besides their relation to affective states, facial expressions account for a large proportion of nonverbal communication [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Affective states can lead to different physiological and behavioral responses [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Measuring these responses is a key factor in understanding human behavior [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and how these behaviors affect one’s mental health. Mental health monitoring is a growing scientific field striving to help people in need. An important goal of the field is to detect the first signs of mental health problems so that they can be identified and acted upon to reduce risks, build resilience, and establish supportive environments [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Virtual reality (VR) has been a growing trend in the past decade. It enables the simulation of ecologically validated scenarios, which are ideal for studying behaviour in controllable conditions. Physiological measures captured in such conditions provide a deeper insight into how an individual responds to a given stimulus, making VR tools suitable for diagnosis, intervention, and monitoring of mental health and wellbeing outcomes. Such solutions for improved emotion tracking will positively impact the lives of over one hundred million people in the EU alone who experience mental health problems.</p>
      <p>
        Automatic facial expression recognition has been an active scientific subject since the early 1990s [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Recent studies have considered EMG sensing for facial expression and emotion recognition, and classification methods have seen significant improvements in recent years. Mithbavkar et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] focused on the recognition of emotions through facial expressions using data collected in a musical environment. They trained several neural networks to classify four emotions: joy, anger, sadness, and pleasure, and achieved the highest accuracy of 99.1% using a nonlinear autoregressive exogenous network (NARX). A comparison between an EMG-based facial expression detection model and an image processing model was made by Kulke et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The Affectiva iMotions software was compared with EMG measurements of the zygomaticus major and corrugator supercilii muscles in identifying happy, angry, and neutral faces. They concluded that the outputs from both systems were highly correlated, showing that an EMG-based model can identify facial expressions and produce results comparable to an image processing-based model. Chen et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] intended to recognize facial emotions from sEMG data in a human–computer interaction scenario. They used a specially designed headband to record sEMG signals from the frontalis and corrugator supercilii muscles of six participants who were instructed to pose the facial expressions of anger, fear, sadness, surprise, and disgust. They achieved 95% accuracy using an Elman neural network (ENN).
      </p>
      <p>Our study aims to explore the usage of a novel VR facial mask equipped with seven surface electromyography (sEMG) sensors to monitor facial muscle activity and classify five different facial expressions. Our approach is based on signal-processing and machine learning (ML) techniques to detect smiles, frowns, eyebrows raise, squeezed eyes, and neutral facial expressions with different intensities (high and low) and durations (short and long). We chose these five facial expressions because of their relation to specific affective states: smiles are related to positive affect and happiness; frowns are related to negative affect, depression, and anxiety; eyebrows raise is related to surprise, which can be positive and negative in terms of affective valence; and squeezed eyes is a facial expression generally related to negative affective states like fear and disgust.</p>
      <sec id="sec-1-1">
        <title>B includes the same posed expressions as those in Task A.</title>
        <p>The main diference was the inclusion of head movement
in a specific direction (left, right, up, down) while doing
the expressions. Also, as a diference from task A, the
expressions in task B were only of high intensity and long
duration. The data collection process was uninterrupted.
The participants had a neutral expression on their faces
between the posed expressions, making the neutral class
the most common in the dataset, with 55.6%, while the
rest of the classes comprised 11.1% each.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Methodology</title>
      <sec id="sec-2-1">
        <title>3.1. Sensor Data Preprocessing and</title>
      </sec>
      <sec id="sec-2-2">
        <title>Feature Extraction</title>
        <sec id="sec-2-2-1">
          <title>During the data collection procedure, the sEMG data</title>
          <p>
            2. Data were continuously recorded at a fixed rate of 1000 Hz.
These data underwent a data preparation process,
includThe experiment was done on 30 participants aged 16 - 23 ing data filtering, segmentation, and feature engineering.
(20.8 ± 1.4), eighteen males and twelve females. All par- To increase the data quality, we performed signal
deticipants were healthy and had no family history of facial noising and filtering. The EMG signals were initially
neuromuscular and nervous disorders or heart problems. filtered with a Hampel filter to remove sudden peaks in
The data were recorded using the emteqPROtm mask the signals that appear because of rapid movements.
Ad[
            <xref ref-type="bibr" rid="ref12 ref7">7, 12</xref>
            ]. It is a face-mounted mask that can be combined ditionally, to reduce the noise caused by electromagnetic
with a VR head-mounted display, or it can be used as interference, which has visible components at 50 Hz and
an open-face mask. The EMG sensors in the mask are its harmonics, we utilized a frequency-based filtering
positioned to overlap the zygomaticus muscles (which method based on spectrum interpolation [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]. A
slidspread from the cheekbones to the corners of the lips), ing window technique was utilized for the sensor data
the frontalis muscles (which cover parts of the forehead segmentation. The signals were segmented using a
0.5above the eyebrows), the orbicularis muscles (which are second window and a 0.1-second stride. Eventually, we
close to the outside of the eyes), and the corrugator (a extracted 34 features per EMG channel, resulting in a total
small muscle between the eyebrows). The sensor mask of 238 features. The features included various
amplitudemounted on a VR device and the sensor positions are based features (e.g., average amplitude change and mean
depicted in Figure 1. absolute value), amplitude derivatives, auto-regressive
          </p>
          <p>The participants were asked to perform two tasks (Task coeficients, cepstral coeficients, frequency-based
feaA and Task B) that included five posed expressions: smile, tures (e.g., main frequency), and statistical features (e.g.,
frown, eyebrows raise, squeezed eyes, and neutral expres- statistical moments).
sion. Task A contains the five posed expressions with
diferent durations (short and long) and intensities (low
and high), with three repetitions of each expression. Task</p>
        </sec>
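        <p>To make the described chain concrete, the following Python sketch outlines one possible implementation of these steps: Hampel filtering, suppression of 50 Hz harmonics by interpolating the magnitude spectrum, 0.5-second windows with a 0.1-second stride, and a few example amplitude features. It is an illustrative approximation under assumed parameters, not the code used in this study.</p>
        <preformat>
# Illustrative preprocessing sketch (assumed implementation, not the study's code).
# One sEMG channel sampled at 1000 Hz: Hampel filtering, removal of 50 Hz harmonics
# by spectrum interpolation, sliding-window segmentation, and a few example features.
import numpy as np

FS = 1000  # sampling rate in Hz

def hampel_filter(x, k=25, n_sigmas=3):
    """Replace outlier samples with the local median to remove sudden peaks."""
    y = x.copy()
    for i in range(k, len(x) - k):
        window = x[i - k:i + k + 1]
        med = np.median(window)
        mad = 1.4826 * np.median(np.abs(window - med))
        if np.abs(x[i] - med) > n_sigmas * mad:
            y[i] = med
    return y

def remove_power_line(x, fs=FS, base=50.0, width=1.0):
    """Suppress 50 Hz harmonics by interpolating the magnitude spectrum across them."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mag, phase = np.abs(spec), np.angle(spec)
    for h in np.arange(base, fs / 2.0, base):
        lo = np.searchsorted(freqs, h - width)
        hi = np.searchsorted(freqs, h + width)
        mag[lo:hi] = np.interp(freqs[lo:hi],
                               [freqs[lo - 1], freqs[hi]],
                               [mag[lo - 1], mag[hi]])
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(x))

def sliding_windows(x, win_s=0.5, stride_s=0.1, fs=FS):
    """Yield 0.5-second windows with a 0.1-second stride."""
    win, stride = int(win_s * fs), int(stride_s * fs)
    for start in range(0, len(x) - win + 1, stride):
        yield x[start:start + win]

def window_features(w):
    """A small subset of amplitude/statistical features (the study uses 34 per channel)."""
    return {
        "mean_abs_value": np.mean(np.abs(w)),
        "avg_amplitude_change": np.mean(np.abs(np.diff(w))),
        "rms": np.sqrt(np.mean(w ** 2)),
        "variance": np.var(w),
    }

# raw -> de-noised -> one feature row per window (per channel)
raw = np.random.randn(10 * FS)  # placeholder signal
clean = remove_power_line(hampel_filter(raw))
feature_rows = [window_features(w) for w in sliding_windows(clean)]
</preformat>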
      </sec>
      <sec id="sec-2-3">
        <title>3.2. Modeling</title>
        <sec id="sec-2-3-1">
          <title>Due to the class imbalance, we experimented with three</title>
          <p>diferent data resampling techniques to achieve a
balanced class distribution. These were: (i) Random
Undersampling – instances from the majority class are
randomly chosen and removed from the training dataset; (ii)
Synthetic Minority Oversampling Technique (SMOTE)
– an oversampling technique that creates a synthetic
example of the minority class based on the features of
K-nearest neighbors; and (iii) One-Sided Selection
Undersampling (OSS) – an undersampling technique that
combines Tomek Links and the Condensed Nearest Neighbor
(CNN) Rule to remove ambiguous points on the class
boundary and to eliminate redundant examples from the
majority class that are far from the decision boundary.
For feature scaling, we implemented standardization and
normalization. Both techniques were applied participant-wise, i.e., to each participant’s data separately.</p>
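          <p>A minimal sketch of these options, assuming the scikit-learn and imbalanced-learn implementations and placeholder arrays X_train, y_train, and participants (one participant id per window), could look as follows; it is not the exact configuration used in this study.</p>
          <preformat>
# Illustrative sketch (assumed implementation, not the study's code) of
# participant-wise feature scaling and the three resampling options.
# X_train, y_train and participants (one id per window) are placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler, OneSidedSelection

def scale_per_participant(X, participant_ids, scaler_cls=StandardScaler):
    """Fit a separate scaler on each participant's windows (unsupervised personalization)."""
    X_scaled = np.empty_like(X, dtype=float)
    for pid in np.unique(participant_ids):
        mask = participant_ids == pid
        X_scaled[mask] = scaler_cls().fit_transform(X[mask])
    return X_scaled

# Standardization per participant; MinMaxScaler would give the normalization variant
X_train_std = scale_per_participant(X_train, participants, StandardScaler)

# The three class-balancing options, applied to the training set only
resamplers = {
    "random_undersampling": RandomUnderSampler(random_state=0),
    "smote": SMOTE(random_state=0),
    "one_sided_selection": OneSidedSelection(random_state=0),
}
X_res, y_res = resamplers["smote"].fit_resample(X_train_std, y_train)
</preformat>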
          <p>
            The data resampling and feature scaling techniques
were combined, and such processed data were used as
input to several ML algorithms, including Decision Tree
Classifier [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ], Random Forest, and Extreme Gradient
Boost (XGBoost) [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ]. Eventually, the best performing
classifiers were combined with a Hidden Markov Model
(HMM) [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ].
          </p>
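          <p>The following sketch illustrates this modeling stage: the three candidate classifiers and one possible way to realize HMM-style smoothing of the per-window predictions, here via Viterbi decoding of the classifier posteriors with an assumed "sticky" transition matrix. The hyperparameters, the transition probabilities, and the placeholder arrays (X_res, y_res from the previous sketch, X_test_std) are assumptions, not the authors' settings.</p>
          <preformat>
# Illustrative modeling sketch (assumed configuration, not the study's code):
# candidate classifiers plus HMM-style smoothing of per-window class posteriors
# using Viterbi decoding with a sticky transition matrix.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

classifiers = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "xgboost": XGBClassifier(booster="gbtree", eval_metric="mlogloss"),
}

def viterbi_smooth(probs, self_transition=0.95):
    """Smooth a sequence of class posteriors (n_windows x n_classes) and return
    the most likely state sequence under a sticky transition model."""
    n, k = probs.shape
    trans = np.full((k, k), (1.0 - self_transition) / (k - 1))
    np.fill_diagonal(trans, self_transition)
    log_emis = np.log(probs + 1e-12)
    log_trans = np.log(trans)
    delta = np.zeros((n, k))          # best log-score ending in each state
    back = np.zeros((n, k), dtype=int)  # backpointers for path recovery
    delta[0] = log_emis[0]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emis[t]
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Fit one candidate on the resampled training data and smooth its test predictions
model = classifiers["random_forest"].fit(X_res, y_res)
state_path = viterbi_smooth(model.predict_proba(X_test_std))
smoothed_labels = model.classes_[state_path]
</preformat>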
          <p>The dataset was divided into three disjoint subsets: a
validation set (five randomly selected participants), a test
set (five randomly selected participants), and a training
set consisting of the remaining 20 participants’ data. The
validation set was used for tuning, and all the models
were evaluated on the test set. As performance metrics,
accuracy and F1-Macro scores were used.</p>
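          <p>A sketch of this participant-wise split and of the two reported metrics, again with placeholder arrays (participants, y, y_pred_test), is shown below; the random seed and the split mechanics are assumptions.</p>
          <preformat>
# Illustrative sketch (assumed, not the study's code): 20/5/5 participant-wise split
# and the two reported metrics. participants, y, and y_pred_test are placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
unique_ids = rng.permutation(np.unique(participants))
val_ids, test_ids, train_ids = unique_ids[:5], unique_ids[5:10], unique_ids[10:]

train_mask = np.isin(participants, train_ids)
val_mask = np.isin(participants, val_ids)    # used for model/hyperparameter tuning
test_mask = np.isin(participants, test_ids)  # used only for the final evaluation

# after fitting and tuning on the train/validation data:
accuracy = accuracy_score(y[test_mask], y_pred_test)
f1_macro = f1_score(y[test_mask], y_pred_test, average="macro")
</preformat>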
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Experimental Results</title>
      <sec id="sec-3-1">
        <title>A) Task A – Long and short-duration expressions with</title>
        <p>diferent intensity.</p>
        <p>The results obtained when data from Task A were
used for training and testing are shown in Table 1. All
the results presented in the table were achieved with the</p>
      </sec>
      <sec id="sec-3-2">
        <title>Random Forest algorithm, which proved to be the most</title>
        <p>efective one out of the ML algorithms used in our
experiments (on the validation set). Depending on the scaling
and the over/undersampling technique, the accuracy
values range from 84.2% to 89.48%, while F1-Macro score
values are between 0.75 and 0.86.</p>
        <p>Regarding the feature scaling, both the standardization
and the normalization improved the model’s performance
compared to the default (no scaling and no additional
data sampling). The improvement was greater in the
case where feature standardization was used. We believe
there are two reasons for the improvement: (i) the
scaling was performed for each participant’s data separately,
thus it acts as an unsupervised personalization technique
reducing the inter-participant differences; (ii) besides
scaling the feature ranges (e.g., to the range -3 to 3), the
standardization also shifts the data distribution for each
participant separately, whereas the normalization only
scales the feature ranges (e.g., to 0 to 1). Thus, the
additional distribution shift that the standardization causes
for each feature may be why standardization is better
than normalization.</p>
        <p>Regarding the data subsampling technique, both OSS
undersampling and SMOTE oversampling performed
better than the random undersampling. The SMOTE
oversampling technique was the best performing one,
achieving an accuracy of 86.34% and an F1-score of 0.8. By
combining feature scaling (standardization) and SMOTE,
we achieved the highest results (an F1-score of 0.84).
Finally, a Hidden Markov Model was applied in combination with standardization and SMOTE, achieving an F1-Macro score of 0.86 and an accuracy of 89.48%.</p>
        <p>To further inspect the best-performing model, we
present the confusion matrix in Figure 2. It indicates
that the model struggles to correctly predict the smiling
expressions. We speculate that this may be the case
because smiling activates only the zygomatic face muscles.</p>
        <p>A high-intensity smile is easily distinguishable from a neutral expression, as the zygomatic muscle activity is high. However, a low-intensity smile leads to low activation of the zygomatic muscles. A low-intensity smile resembles a neutral expression, and it does not affect the sEMG sensors enough to notice the difference between these two expressions.</p>
        <p>B) Task B – Long-duration expressions with high
intensity and head movements.</p>
        <p>The results obtained when data from Task B were used
for training and testing are shown in Table 2. All the
results presented in the table were achieved with Extreme
Gradient Boost (XGBoost) with gbtree, which proved to
be the most effective one on the validation set.</p>
        <p>The accuracy values range from 82.6% to 86.4%, and
the F1-Macro scores are between 0.76 and 0.82. Feature
scaling and resampling are not as critical preprocessing
steps as in Task A. The method with the highest accuracy
is the one where only participant-wise standardization
was performed. It achieved 86.46% accuracy and an
F1-Macro score of 0.82. However, the method that combines
standardization and random undersampling has the
highest F1-Macro score of 0.82 and 85.88% accuracy. Although
this method’s accuracy is lower, we consider it the best
performing one since the F1-Macro score is more suitable
for evaluations on an unbalanced dataset.</p>
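        <p>For illustration of this choice of metric: a trivial baseline that always predicts the majority (neutral) class, which makes up 55.6% of the windows, would reach roughly 55.6% accuracy, yet its F1-Macro score would be only about 0.14 (an F1 of about 0.71 for the neutral class and 0 for each of the remaining four classes), which is why F1-Macro is the more informative metric on this unbalanced dataset.</p>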
        <p>Figure 3 presents the confusion matrix for the best-performing model. We can see from the confusion matrix that the model can differentiate all the classes from the neutral class, which was not the case in Task A.</p>
        <p>This is because, in Task B, only high-intensity expressions
with a long duration were examined. In this case, the
muscles were highly activated, making the expressions
more distinguishable from the neutral expression.</p>
        <p>The problem with this model is that it struggles to distinguish frowns and squeezed eyes. Both expressions activate the same facial muscles, mainly the frontalis and corrugator muscles, which leads to these expressions being wrongly predicted.</p>
        <p>Overall, it seems that the head movements had a minor influence, i.e., the best-performing model achieved an F1-score of 0.82, which is on par with the best-performing model from Task A (Table 1), where the method without HMM achieved an F1-score of 0.84. We excluded the HMM-based method from this analysis and the following one because it adds a layer of complexity to the training process.</p>
        <p>C) Task A and Task B combined – Long and short-duration expressions with head movements.</p>
        <p>Table 3 shows the performance of the methods for six train-test combinations: (i) training on Task A data and testing on Task A data; (ii) training on Task B data and testing on Task A data; (iii) training on both Task A and Task B data and testing on Task A data; (iv) training on Task B data and testing on Task B data; (v) training on Task A data and testing on Task B data; and (vi) training on both Task A and Task B data and testing on Task B data. With this, we want to examine whether mixing no-movement and movement data for training and testing will substantially influence the method’s performance. For evaluating the methods’ performance on Task A and Task B data, we used the best-performing methods from Table 1 and Table 2, respectively.</p>
        <p>From Table 3, we can see that for Task A, the best results are achieved when only no-movement data is used in the training set (i.e., Task A is used), and the inclusion of movement data (Task B data) reduces the method’s accuracy by 3 percentage points, while the F1-score drops by 0.06. On the other hand, when Task B data are used for testing, the inclusion of no-movement data (Task A data) in the training set has a lower influence on the results for Task B, as the F1-score drops by 0.02. By mixing the training sets, we did not observe any improvements in the results for either Task A or Task B data. These results indicate that scenario specificity is important for the model’s accuracy, i.e., if we expect movement during the usage of the models, then it is better to include training data that involves movement.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>
        This study examined the relationship between sEMG sensor data from facial muscles and posed facial expressions using the novel emteqPROtm VR multi-sensor facial mask. We analyzed sEMG data from 30 participants who performed five facial expressions while wearing the device. The data collection scenario was specifically designed to inspect several aspects of facial expression recognition: duration (short vs. long), intensity (low vs. high), and head movements. The collected data were then used to develop models that recognize smiles, frowns, eyebrows raise, squeezed eyes, and neutral facial expressions. We explicitly inspected the influence of normalization techniques and data oversampling and undersampling techniques. On the test data of five unseen participants, the best-performing model achieved an accuracy of 89.48% and an F1-Macro score of 0.86. The approach is based on Random Forest in combination with standardization and oversampling (SMOTE) steps, and a Hidden Markov Model as a prediction-smoothing technique [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The best-performing model evaluated on the data that includes head movement achieved an F1-Macro score of 0.82 (a decrease from 0.84). These results indicate that there is an influence of the head movement on the detection of facial expressions. The main weakness of the models was observed in distinguishing between frown and squeezed eyes. Both expressions activate forehead muscles placed close to each other (the corrugator and frontalis muscles). In the future, we plan to investigate feature selection, model personalization, and end-to-end deep learning to overcome this weakness.
      </p>
      <p>It should also be noted that in some cases, the differences in the results may have been due to the random steps in the processing pipeline and in the learning ensembles that have built-in randomness. Nonetheless, this does not diminish the main findings of the study: (i) the novel VR mask equipped with sEMG sensors in combination with ML is suitable for recognizing facial expressions (smile, frown, eyebrows raise, squeezed eyes, and neutral); (ii) facial expressions that activate the same muscles are the most challenging to differentiate; (iii) feature scaling is an important step that enables minimizing inter-participant feature differences; (iv) the results regarding data oversampling and undersampling were inconclusive, as resampling improved the results in some cases (Table 1) but not in others (Table 2); and finally, (v) expression duration, intensity, and head movements influence the performance of the models for expression recognition and should be taken into account in the development of facial expression recognition algorithms. These conclusions contribute to affect sensing in VR, which has potential in symptom monitoring during VR-delivered therapy for mental health disorders.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This study was partially supported by the WideHealth project (European Horizon 2020) under grant agreement</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Darwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prodger</surname>
          </string-name>
          ,
          <source>The expression of the emotions in man and animals</source>
          , Oxford University Press, USA,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ekman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Friesen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ellsworth</surname>
          </string-name>
          ,
          <article-title>Emotion in the human face</article-title>
          ,
          <source>Studies in Emotion and Social Interaction</source>
          (
          <year>1972</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>A. Van Boxtel</surname>
          </string-name>
          ,
          <article-title>Facial EMG as a tool for inferring affective states</article-title>
          ,
          <source>in: Proceedings of measuring behavior</source>
          , volume
          <volume>2</volume>
          ,
          <year>2010</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Oh Kruzic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kruzic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bailenson</surname>
          </string-name>
          ,
          <article-title>Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments</article-title>
          ,
          <source>Scientific reports 10</source>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Myers</surname>
          </string-name>
          ,
          <article-title>Theories of emotion in psychology: Seventh edition (</article-title>
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          World Health Organization,
          <article-title>Mental health: strengthening our response</article-title>
          (June
          <year>2022</year>
          ), https://www.who.int/news-room/fact-sheets/detail/mental-health-strengthening-our-response. Online; accessed 30 July 2022
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gnacek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Broulidakis</surname>
          </string-name>
          , I. Mavridou,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fatoorechi</surname>
          </string-name>
          , E. Seiss,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kostoulas</surname>
          </string-name>
          , E. Balaguer-Ballester, I. Kiprijanovska,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rosten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Nduka</surname>
          </string-name>
          ,
          <article-title>emteqPRO - fully integrated biometric sensing array for non-invasive biomedical research in virtual reality</article-title>
          ,
          <source>Frontiers in Virtual Reality</source>
          <volume>3</volume>
          (
          <year>2022</year>
          )
          <fpage>3</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Bettadapura</surname>
          </string-name>
          ,
          <article-title>Face expression recognition and analysis: the state of the art</article-title>
          ,
          <source>arXiv preprint arXiv:1203.6722</source>
          (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Mithbavkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Recognition of emotion through facial expressions using emg signal, in: 2019 international conference on nascent technologies in engineering (ICNTE)</article-title>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kulke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Feyerabend</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schacht</surname>
          </string-name>
          ,
          <article-title>Comparing the Affectiva iMotions facial expression analysis software with EMG (</article-title>
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Eyebrow emotional expression recognition using surface emg signals</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>168</volume>
          (
          <year>2015</year>
          )
          <fpage>871</fpage>
          -
          <lpage>879</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.</given-names>
            <surname>Gjoreski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. I.</given-names>
            <surname>Mavridou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fatoorechi</surname>
          </string-name>
          , I. Kiprijanovska,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gjoreski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Nduka</surname>
          </string-name>
          ,
          <article-title>emteqPRO: Face-mounted mask for emotion recognition and affective computing</article-title>
          ,
          <source>in: Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Mewett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. J.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Nazeran</surname>
          </string-name>
          ,
          <article-title>Reducing power line interference in digitised electromyogram recordings by spectrum interpolation</article-title>
          ,
          <source>Medical and Biological Engineering and Computing</source>
          <volume>42</volume>
          (
          <year>2004</year>
          )
          <fpage>524</fpage>
          -
          <lpage>531</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Maqsood</surname>
          </string-name>
          ,
          <article-title>Random forests and decision trees</article-title>
          ,
          <source>International Journal of Computer Science Issues (IJCSI) 9</source>
          (
          <year>2012</year>
          )
          <fpage>272</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bentéjac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Csörgő</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Martínez-Muñoz</surname>
          </string-name>
          ,
          <article-title>A comparative analysis of XGBoost</article-title>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Awad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Khanna</surname>
          </string-name>
          ,
          <article-title>Efficient learning machines: theories, concepts, and applications for engineers and system designers</article-title>
          , Springer nature,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gjoreski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Janko</surname>
          </string-name>
          , G. Slapničar,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mlakar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Reščič</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bizjak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Drobnič</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marinko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mlakar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Luštrek</surname>
          </string-name>
          , et al.,
          <article-title>Classical and deep learning methods for recognizing human activities and modes of transportation with smartphone sensors</article-title>
          ,
          <source>Information Fusion</source>
          <volume>62</volume>
          (
          <year>2020</year>
          )
          <fpage>47</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>