<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Assessing Emotion Mitigation through Robot Facial Expressions for Human-Robot Interaction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luigi D'Arco</string-name>
          <email>luigi.darco@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandra Rossi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Silvia Rossi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Electrical Engineering and Information Technologies, University of Naples Federico II</institution>
          ,
          <addr-line>Via Claudio 21, 80125 Naples</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Affective responses are one of the primary and clearest signals used by agents for communicating their internal state. These internal states can represent a positive or negative acceptance of a robotic agent's behavior during a human-robot interaction (HRI). In these scenarios, it is fundamental for robots to be able to interpret people's emotional responses and to adjust their behaviors accordingly, to appease them, and to provoke an emotional change in them. This research investigates the impact of robot facial expressions on human emotional experiences within HRI, focusing specifically on whether a robot's expressions can amplify or mitigate users' emotional responses when viewing emotion-eliciting videos. To evaluate participants' emotional states, an AI-based multimodal emotion recognition approach was employed, combining analysis of facial expressions and physiological signals, complemented by a self-assessment questionnaire. Findings indicate that participants responded more positively when the robot's facial expressions aligned with the emotional tone of the videos, suggesting that emotion-coherent displays could enhance user experience and strengthen engagement. These results underscore the potential for expressive social robots to influence human emotions effectively, offering promising applications in therapy, education, and entertainment. By incorporating emotional facial expressions, socially assistive robots could foster behavior change and emotional engagement in HRI, broadening their role in supporting human emotional well-being.</p>
      </abstract>
      <kwd-group>
        <kwd>Emotion elicitation</kwd>
        <kwd>Socially Assistive Robotics</kwd>
        <kwd>Human-Robot Interaction</kwd>
        <kwd>Emotion Recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Socially Assistive Robotics (SAR) is an emerging field of robotics that focuses on developing robots
that can assist users with hands-off interaction strategies, providing emotional and cognitive assistance
[
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. To improve the Human-Robot Interaction (HRI) experience, SARs must be capable of interpreting,
mimicking, and responding to emotional cues, with facial expressions being a primary mode of emotional
communication. This ability is essential when robots are used in contexts where emotional engagement
can facilitate positive outcomes, such as therapy, learning, and behavior change. In human-human
communication, facial expressions are critical for conveying emotions, improving understanding, and
guiding social interactions. Several studies showed that facial expressions not only reflect how a person
is feeling but also influence how others feel [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This phenomenon, known as emotional contagion [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
suggests that emotions can spread from one person to another through non-verbal cues, influencing
the emotional state of the observer. If SARs are to be effective in emotionally engaging users, they must
be able to use facial expressions in ways that influence the user’s emotional experience, particularly
in situations where emotional states can influence behavior and decision-making. Staffa et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
investigated whether positive or negative robot personalities can affect the mental state of users
during HRI by assessing participants’ Electroencephalogram (EEG) signals. They used an
anthropomorphic robot with two engagement personalities, one more prone to engage the user and
the other less so, modeled through voice, dialogues, and head and body movements. The results showed that
participants perceived the robot’s personality, which affected their emotional state and engagement.
      </p>
      <p>
        Similarly, Fiorini et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] explored the impact of a robot’s behavior on the emotional state of users during
exposure to emotion-eliciting images. The robot displayed emotions that were coherent or incoherent
with those experienced by the user to assess the level of influence it could have. The robot recognized
three emotional states (positive, negative, and neutral) with an accuracy of up to 98%, and these states
were better identified when the robot performed coherent or incoherent behaviors rather than remaining
neutral. Rossi et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] conducted a study to evaluate the
impact of the non-verbal behaviors of an anthropomorphic robot on users’ emotional responses. The
robot’s non-verbal cues were modeled as emotional gestures expressing coherent, incoherent, and neutral
behaviors. Findings revealed that eliciting emotional reactions with high arousal can be challenging
using emotional gestures alone, and that additional interaction strategies are needed.
      </p>
      <p>
In light of the different achievements in the literature, the impact of robot facial expressions on human
emotions during HRI has yet to be fully investigated. Hence, the present study focuses on assessing
whether the facial expressions of a robot can affect the mood of users while watching emotion-eliciting
videos. By displaying facial expressions that either match or contrast the user’s emotional state, the
robot could promote the general efect of mirroring or emotional contagion, whereby an observer
tends to covertly and unconsciously mimic the behavior of the person being observed [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The study
design is based on the approach outlined by Rossi et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], with the modification of including only two
conditions: the robot’s facial expressions either align with the emotional content of the videos or display
opposing emotions. To evaluate the emotion felt by the user, an Artificial Intelligence (AI) approach
has been developed that predicts the users’ emotional state based on a fusion of facial expressions
and physiological signals. Furthermore, participants of the study were provided with a questionnaire
at the beginning to ascertain their empathetic capacity and one questionnaire at the end to evaluate
their perception of the robot’s emotional display. By demonstrating the potential of robots to influence
human emotions through facial expressions, the study can contribute to the development of SARs that
are more emotionally intelligent and capable of supporting users in emotionally meaningful ways,
unleashing their application in scenarios where emotional engagement can facilitate positive outcomes,
such as therapy, learning, and behavior change.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and Methods</title>
      <p>This study evaluates a robot’s ability to mitigate emotions in participants watching emotion-eliciting
videos. A multimodal emotion recognition system assessed participants’ emotional states through facial
expression and physiological signal analysis. Pre-experiment questionnaires evaluated participants’
empathy levels, while post-experiment questionnaires assessed their perceptions of the robot’s emotional
displays. The study was conducted in a controlled environment to ensure reliable results.</p>
      <sec id="sec-2-1">
        <title>2.1. Robotic Agent and Sensing Elements</title>
        <p>
          The robotic agent involved in this study is a Furhat robot [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], which is a human-like, rear-projected
robotic head that uses computer animations and neck movements to provide facial expressions [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
The robot is equipped with a camera and a microphone to capture information from the surrounding
environment. However, since the emotion-eliciting videos are shown on a laptop, the laptop
camera is used instead to obtain a frontal view of the user’s face, which is better suited for identifying the felt emotion. Although
facial expression may be the most significant nonverbal form of emotional expression [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], some people
can mask their facial emotions by adopting a neutral expression and using non-intuitive human
body language that can lead to misinterpretations [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Therefore, a multimodal information-based
solution for emotion recognition has been pursued to produce a more reliable emotion recognition
system. Alongside the facial expressions, physiological signals have been considered, including the
Electrocardiogram (ECG) and Galvanic Skin Response (GSR) signals, which can be considered more
reliable indicators of emotions, as they are more difficult to mask or deliberately alter [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
These signals are acquired by the BITalino biosignal platform provided by PLUX Biosignals [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. The
settings of the equipment involved in the study are shown in Figure 1.
        </p>
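        <p>As a minimal illustration of the acquisition step described above, the following sketch reads the ECG and GSR channels from the BITalino unit through its Python API; the MAC address, channel mapping, sampling rate, and recording length are placeholders rather than the study's actual configuration.</p>
        <preformat>
# Sketch of ECG/GSR acquisition with the BITalino Python API
# (pip install bitalino). The MAC address, channel mapping, sampling rate,
# and recording length are illustrative placeholders, not the study setup.
from bitalino import BITalino

MAC_ADDRESS = "00:00:00:00:00:00"  # placeholder device address
SAMPLING_RATE = 100                # Hz (assumed; BITalino supports 1/10/100/1000)
CHANNELS = [0, 1]                  # assumed analog inputs: 0 = ECG, 1 = GSR/EDA
BLOCK_SIZE = 100                   # samples per read call

device = BITalino(MAC_ADDRESS)
device.start(SAMPLING_RATE, CHANNELS)
try:
    frames = []
    # Collect roughly 60 seconds of data in fixed-size blocks.
    for _ in range(60 * SAMPLING_RATE // BLOCK_SIZE):
        frames.append(device.read(BLOCK_SIZE))
finally:
    device.stop()
    device.close()
        </preformat>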
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Emotion Recognition Model</title>
        <p>
          The emotion recognition model employed in this study builds upon a baseline architecture previously
established in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. The model was selected due to its proven effectiveness in recognizing emotions from
facial expressions and physiological signals. The model was trained using the AMIGOS dataset [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ],
which contains multimodal data, including EEG, ECG, GSR, and facial video recordings. The dataset
provides affective level annotations for the participants based on the Self-Assessment Manikin (SAM)
scale [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] and evaluations made by the dataset’s authors. The annotations include valence, arousal,
dominance, and basic emotions (Neutral, Disgust, Happiness, Surprise, Anger, Fear, and Sadness)
for each participant for every video. The model is based on a multimodal approach that combines
facial expressions and physiological signals to predict the user’s emotional state. Each modality has
been processed individually by an artificial intelligence model, with the facial images processed by a
Convolutional Neural Network (CNN)-based architecture, and the physiological signals processed by a
Support Vector Machine (SVM) model. The predictions of the extracted emotions from each modality
are then fused to provide a final prediction. The model was trained on the AMIGOS dataset using a
train-test split approach of 70-30.
        </p>
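        <p>The two-branch pipeline described above can be sketched as follows: a small CNN over face crops, an SVM over ECG/GSR feature vectors, and a late fusion that averages the per-modality class probabilities, evaluated on a 70-30 split. The layer sizes, feature representation, and fusion rule shown here are illustrative assumptions, not the exact architecture used in the study.</p>
        <preformat>
# Illustrative sketch of the multimodal pipeline described above: a CNN over
# face crops, an SVM over ECG/GSR feature vectors, and late fusion of the
# per-modality class probabilities. Layer sizes, features, and the fusion
# rule are assumptions for illustration, not the paper's exact model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from tensorflow import keras

NUM_CLASSES = 7  # Neutral, Disgust, Happiness, Surprise, Anger, Fear, Sadness


def build_face_cnn(input_shape=(48, 48, 1)):
    """Small CNN over grayscale face crops; the layer sizes are placeholders."""
    return keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


def train_and_fuse(X_faces, X_phys, y):
    """Train both branches on a 70-30 split and fuse their predictions."""
    # X_faces: face images, X_phys: physiological feature vectors, y: labels 0..6.
    idx_train, idx_test = train_test_split(np.arange(len(y)), test_size=0.3)

    cnn = build_face_cnn()
    cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    cnn.fit(X_faces[idx_train], y[idx_train], epochs=10, verbose=0)

    svm = SVC(probability=True)
    svm.fit(X_phys[idx_train], y[idx_train])

    # Late fusion: average the per-class probabilities of the two branches
    # (assumes labels 0..6 are all present so the class orderings align).
    p_face = cnn.predict(X_faces[idx_test], verbose=0)
    p_phys = svm.predict_proba(X_phys[idx_test])
    return np.argmax((p_face + p_phys) / 2.0, axis=1)
        </preformat>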
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Emotion Elicitation Videos</title>
        <p>
          The videos for emotion elicitation have been selected from the DECAF database [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], which is a
multimodal dataset for decoding user physiological responses to affective multimedia content. Videos
with a total length not exceeding 120 seconds have been chosen to avoid fatigue and to maintain the
participants’ attention, but also to ensure that only one emotion is elicited at a time. Three videos were
selected for each of the four emotional categories: Low Arousal - Negative Valence (LALV), Low Arousal
- Positive Valence (LAHV), High Arousal - Negative Valence (HALV), and High Arousal - Positive
Valence (HAHV) based on the annotations provided in the DECAF dataset, resulting in a total of 12
videos. For instance, the scene from Bambi where “Bambi’s mother gets killed” was categorized under
LALV due to its emotionally distressing content, while the scene from Wall-E where “Wall-E and Eve
spend a romantic night together” was classified under LAHV to evoke positive but calm emotions.
Videos have been presented to the participants in random order to avoid any bias in the results.
        </p>
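        <p>A short sketch of how the stimulus list could be composed and randomized per participant is given below; the clip identifiers are placeholders and do not correspond to the actual DECAF excerpts.</p>
        <preformat>
# Illustrative composition of the stimulus list: three clips per affective
# quadrant, shown in a random order per participant. Clip identifiers are
# placeholders, not the actual DECAF excerpts.
import random

VIDEOS = {
    "LALV": ["lalv_1", "lalv_2", "lalv_3"],
    "LAHV": ["lahv_1", "lahv_2", "lahv_3"],
    "HALV": ["halv_1", "halv_2", "halv_3"],
    "HAHV": ["hahv_1", "hahv_2", "hahv_3"],
}


def playlist_for(participant_id):
    """Return the 12 clips in a reproducible random order for one participant."""
    clips = [clip for quadrant in VIDEOS.values() for clip in quadrant]
    rng = random.Random(participant_id)  # per-participant seed for reproducibility
    rng.shuffle(clips)
    return clips
        </preformat>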
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Questionnaires</title>
        <p>
          Two questionnaires have been prepared for the study, one to be completed before the experiment
and one after. The pre-experiment questionnaire aimed to collect demographic information about the
participants, such as age and gender, as well as information about their previous experience with robots
and their empathetic capacity. The empathetic capacity is assessed using the Empathy Quotient test
[
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], which is a self-report questionnaire designed to measure empathy in adults. The short version of
the test has been chosen, which consists of 40 questions, where each question is scored on a scale from
0 to 2, with higher scores indicating higher levels of empathy. The test provides a total score that ranges
from 0 to 80. To distinguish participants’ level of empathy, four categories have been identified: low
empathy (0-20), medium-low empathy (21-40), medium-high empathy (41-60), and high empathy (61-80).
On the other hand, the post-experiment questionnaire aimed to evaluate the participant’s perception of
the robot’s emotional display during the experiment. The post-questionnaire included questions about
the robot’s facial expressions, the perceived emotions, and the impact of the robot’s expressions on the
participants’ emotional state. The post-questionnaire was designed following the System Usability Scale (SUS)
scoring principles [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], and each item is scored on a scale of 1 to 5, with 1 being completely disagree and
5 being completely agree. The post-questionnaire results were scored with a range from 0 to 100, with
higher scores indicating a more positive perception of the robot’s emotional display.
        </p>
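        <p>The two scoring rules can be summarized in the following sketch: the Empathy Quotient total (0 to 80) is mapped to the four empathy bands, and the 1-to-5 post-questionnaire items are rescaled to a 0-to-100 score. Since the exact item count and keying of the post-questionnaire are not detailed here, the rescaling assumes positively keyed items and is a simplification of the SUS-style procedure.</p>
        <preformat>
# Sketch of the two scoring rules described above. The empathy bands follow
# the text; the post-questionnaire rescaling to 0-100 assumes positively
# keyed items, a simplification of the SUS-style procedure mentioned by the
# authors.
def eq_score(item_scores):
    """Empathy Quotient: 40 items scored 0-2, total 0-80, mapped to a band."""
    total = sum(item_scores)
    if total &lt;= 20:
        band = "low empathy"
    elif total &lt;= 40:
        band = "medium-low empathy"
    elif total &lt;= 60:
        band = "medium-high empathy"
    else:
        band = "high empathy"
    return total, band


def post_questionnaire_score(item_scores):
    """Rescale 1-5 Likert items to a 0-100 score (SUS-style normalization)."""
    n_items = len(item_scores)
    return sum(score - 1 for score in item_scores) * 100.0 / (4 * n_items)
        </preformat>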
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>A total of 60 subjects, aged 18 to 34, voluntarily participated in the study. Participants included 34
males, 17 females, and 1 non-binary individual. Of these, 24 participants reported no prior experience
with robots. Two participants withdrew from the study before completing the session due to personal
commitments. The participants were randomly assigned to two groups: coherent ( = 29) and
incoherent ( = 29). This preliminary analysis aims to assess participants’ experiences and perceptions
of the robot, comparing post-experiment responses between the two groups. Statistical analyses were
conducted using the Student’s t-test for independent samples.</p>
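      <p>For illustration, the between-group comparison can be computed as in the following sketch, which runs an independent-samples Student’s t-test on one post-questionnaire item; SciPy is assumed as tooling and the arrays stand in for the per-participant ratings.</p>
      <preformat>
# Independent-samples Student's t-test between the coherent and incoherent
# groups for one post-questionnaire item. SciPy is assumed as tooling; the
# arrays below are placeholders, not the study data.
import numpy as np
from scipy import stats

coherent = np.array([4.0, 3.0, 5.0, 4.0, 3.0])    # placeholder ratings
incoherent = np.array([2.0, 3.0, 2.0, 4.0, 2.0])  # placeholder ratings

# equal_var=True gives the classic Student's t-test (as opposed to Welch's).
t_stat, p_value = stats.ttest_ind(coherent, incoherent, equal_var=True)
significant = p_value &lt; 0.05
print("t = %.3f, p = %.3f, significant at 0.05: %s" % (t_stat, p_value, significant))
      </preformat>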
      <p>Participants in the coherent group rated the robot’s behavior as more natural (M = 3.393, SD = 1.197)
than those in the incoherent group (M = 2.448, SD = 1.213), with a statistically significant difference
(p &lt; 0.05). Similarly, participants in the incoherent group reported a higher level of discomfort created by
the robot (M = 3.000, SD = 1.363) than those in the coherent group (M = 1.786, SD = 0.994, p &lt; 0.05).
However, both groups did not perceive a significant influence from the robot while watching the videos
(M = 2.579, SD = 1.224, p &gt; 0.05). Furthermore, participants in the coherent group perceived the robot
as more aware of the video content (M = 4.143, SD = 1.02) compared to those in the incoherent group
(M = 2.414, SD = 1.21). They also rated the robot as less incoherent relative to the video scenes presented
(M = 1.964 for the coherent group, M = 3.793 for the incoherent group). The robot’s expressions were
more distracting for participants in the incoherent group (M = 3.000, SD = 1.15) than for those in the
coherent group (M = 2.357, SD = 1.07). For other aspects, such as the social acceptability of the robot
and its potential utility in communicating emotions, no significant differences were observed between
the two groups (p &gt; 0.05), with both groups agreeing on the robot’s usefulness.</p>
      <p>Overall, these findings suggest that coherent emotional expressions in the robot enhance perceptions
of it as more natural, aware, and non-intrusive, whereas incoherent expressions increase perceptions of
discomfort, distraction, and incoherence. Although participants did not report feeling influenced by
the robot, future studies will explore potential unconscious emotional changes in participants using
emotion recognition models.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>This preliminary study explores how robot facial expressions influence human emotional experiences
in HRI. In the experiment, participants watched emotion-eliciting videos while interacting with a
robot that displayed facial expressions either aligned or misaligned with the emotional content of the
videos. The findings indicate that participants generally responded positively to interactions where the
robot’s expressions matched the emotional content of the videos, underscoring the potential of using
facial expressions in SARs to enhance user engagement. This study lays a foundation for incorporating
emotionally expressive robots in SAR applications across therapeutic, educational, and entertainment
settings. Future research will delve further into the collected data to determine whether participants
experienced unconscious emotional changes, advancing the understanding of how emotionally aware
robots might foster behavioral change and enhance emotional connection in HRI.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This research is supported by the Italian MUR and EU under the project ADVISOR (ADaptiVe legIble
robotS for trustwORthy health coaching) - PRIN PNRR 2022 PE6 - Cod. P202277EJ2 and under the
complementary actions to the NRRP “Fit4MedRob - Fit for Medical Robotics” Grant (# PNC0000007).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Matarić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Scassellati</surname>
          </string-name>
          , Socially assistive robotics, Springer handbook of robotics (
          <year>2016</year>
          )
          <fpage>1973</fpage>
          -
          <lpage>1994</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
           <string-name>
             <given-names>L.</given-names>
             <surname>D'Arco</surname>
           </string-name>
           ,
           <string-name>
             <given-names>H.</given-names>
             <surname>Zheng</surname>
           </string-name>
           ,
           <string-name>
             <given-names>H.</given-names>
             <surname>Wang</surname>
           </string-name>
          ,
          <article-title>Sensebot: A wearable sensor enabled robotic system to support health and well-being</article-title>
          ,
          <source>in: 6th Collaborative European Research Conference</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>30</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Frith</surname>
          </string-name>
          ,
          <article-title>Role of facial expressions in social interactions</article-title>
          ,
          <source>Philosophical Transactions of the Royal Society B: Biological Sciences</source>
          <volume>364</volume>
          (
          <year>2009</year>
          )
          <fpage>3453</fpage>
          -
          <lpage>3458</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.-E.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Emotional contagion in human-robot interaction</article-title>
          ,
          <source>E-review of Tourism Research</source>
          <volume>17</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
             <surname>Staffa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
           <article-title>Enhancing affective robotics via human internal state monitoring</article-title>
          ,
          <source>in: 31st IEEE Intern. Conf. on Robot and Human Interactive Communication (RO-MAN)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>884</fpage>
          -
          <lpage>890</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fiorini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. G.</given-names>
            <surname>Loizzo</surname>
          </string-name>
          ,
           <string-name>
             <given-names>G.</given-names>
             <surname>D'Onofrio</surname>
           </string-name>
           ,
           <string-name>
             <given-names>A.</given-names>
             <surname>Sorrentino</surname>
           </string-name>
           ,
           <string-name>
             <given-names>F.</given-names>
             <surname>Ciccone</surname>
           </string-name>
           ,
           <string-name>
             <given-names>S.</given-names>
             <surname>Russo</surname>
           </string-name>
           ,
           <string-name>
             <given-names>F.</given-names>
             <surname>Giuliani</surname>
           </string-name>
           ,
           <string-name>
             <given-names>D.</given-names>
             <surname>Sancarlo</surname>
           </string-name>
           ,
           <string-name>
             <given-names>F.</given-names>
             <surname>Cavallo</surname>
           </string-name>
          ,
           <article-title>Can I feel you? Recognizing human's emotions during human-robot interaction</article-title>
          ,
          <source>in: International Conference on Social Robotics</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>511</fpage>
          -
          <lpage>521</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sangiovanni</surname>
          </string-name>
          ,
          <article-title>Towards the Evaluation of the Role of Embodiment in Emotions Elicitation</article-title>
          ,
           <source>in: 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)</source>
          , IEEE,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>U.</given-names>
            <surname>Dimberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Thunberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Elmehed</surname>
          </string-name>
          ,
          <article-title>Unconscious facial reactions to emotional facial expressions</article-title>
          ,
          <source>Psychological science 11</source>
          (
          <year>2000</year>
          )
          <fpage>86</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
           <string-name>
             <surname>Furhat Robotics</surname>
           </string-name>
           , Furhat robot, www.furhatrobotics.com/,
           <year>2024</year>
           . Accessed: 2024-03-01.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Al Moubayed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Beskow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Skantze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Granström</surname>
          </string-name>
          ,
          <article-title>Furhat: A back-projected human-like robot head for multiparty human-machine interaction</article-title>
          ,
          <source>in: Cognitive Behavioural Systems</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2012</year>
          , pp.
          <fpage>114</fpage>
          -
          <lpage>130</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rescigno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spezialetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <article-title>Personalized models for facial emotion recognition through transfer learning</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          <volume>79</volume>
          (
          <year>2020</year>
          )
          <fpage>35811</fpage>
          -
          <lpage>35828</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
           <string-name>
             <given-names>G.</given-names>
             <surname>Zhao</surname>
           </string-name>
           ,
           <article-title>Facial-video-based physiological signal measurement: Recent advances and affective applications</article-title>
          ,
          <source>IEEE Signal Processing Magazine</source>
          <volume>38</volume>
          (
          <year>2021</year>
          )
          <fpage>50</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
           <string-name>
             <surname>PLUX Biosignals</surname>
           </string-name>
           , Bitalino, www.pluxbiosignals.com/collections/bitalino,
           <year>2024</year>
           . Accessed: 2024-03-01.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Miranda-Correa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sebe</surname>
          </string-name>
          ,
           <string-name>
             <given-names>I.</given-names>
             <surname>Patras</surname>
           </string-name>
           ,
           <article-title>Amigos: A dataset for affect, personality and mood research on individuals and groups</article-title>
           ,
           <source>IEEE Transactions on Affective Computing 12</source>
          (
          <year>2018</year>
          )
          <fpage>479</fpage>
          -
          <lpage>493</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Morris</surname>
          </string-name>
           , Observations: SAM:
           <article-title>the self-assessment manikin; an efficient cross-cultural measurement of emotional response</article-title>
          ,
          <source>Journal of advertising research 35</source>
          (
          <year>1995</year>
          )
          <fpage>63</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
           <string-name>
             <given-names>M. K.</given-names>
             <surname>Abadi</surname>
           </string-name>
           ,
           <string-name>
             <given-names>R.</given-names>
             <surname>Subramanian</surname>
           </string-name>
           ,
           <string-name>
             <given-names>S. M.</given-names>
             <surname>Kia</surname>
           </string-name>
           ,
           <string-name>
             <given-names>P.</given-names>
             <surname>Avesani</surname>
           </string-name>
           ,
           <string-name>
             <given-names>I.</given-names>
             <surname>Patras</surname>
           </string-name>
           ,
           <string-name>
             <given-names>N.</given-names>
             <surname>Sebe</surname>
           </string-name>
           , Decaf:
           <article-title>MEG-based multimodal database for decoding affective physiological responses</article-title>
           ,
           <source>IEEE Transactions on Affective Computing</source>
          <volume>6</volume>
          (
          <year>2015</year>
          )
          <fpage>209</fpage>
          -
          <lpage>222</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Lawrence</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shaw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baron-Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>David</surname>
          </string-name>
          ,
          <article-title>Measuring empathy: reliability and validity of the empathy quotient</article-title>
          ,
          <source>Psychological medicine 34</source>
          (
          <year>2004</year>
          )
          <fpage>911</fpage>
          -
          <lpage>920</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bangor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kortum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <article-title>Determining what individual sus scores mean: Adding an adjective rating scale</article-title>
          ,
          <source>Journal of usability studies 4</source>
          (
          <year>2009</year>
          )
          <fpage>114</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>