<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop on sociAL roboTs for peRsonalized, continUous and adaptIve aSsisTance,
Workshop on Behavior Adaptation and Learning for Assistive Robotics, Workshop on Trust, Acceptance and Social Cues in
Human-Robot Interaction, and Workshop on Weighing the benefits of Autonomous Robot persoNalisation</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Exploring how users across cultures design and perceive multimodal robot emotion - Abstract</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mathieu DePaul</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dagoberto Cruz-Sandoval</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alyssa Kubota</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>San Francisco State University, School of Engineering</institution>
          ,
          <addr-line>1600 Holloway Ave, San Francisco, CA 94132</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of California San Diego, Computer Science and Engineering</institution>
          ,
          <addr-line>9500 Gilman Dr, La Jolla, CA 92093</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>26</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>As robots enter more human-centered spaces, such as homes, and engage with more diverse populations, they will need to interact with people in a culturally appropriate manner. This interaction plays an important role in maintaining engagement over long periods of time to maximize efficacy for applications such as delivering health interventions. In our work, we seek to understand how a user's cultural background influences how they design expressions to convey different emotions on robots, as well as how they perceive those emotions. We explore how cultural factors impact how people perceive robot emotions composed of different modalities, including sounds (verbal and non-verbal expressions) and color. Our proposed work will contribute towards design considerations to make robots more culturally sensitive and inclusive.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-robot interaction</kwd>
        <kwd>Cross-cultural perception</kwd>
        <kwd>Multimodal robot expression</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Perception of robot expressions may vary widely across cultures and contexts [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], as cultural values
influence users’ perception, acceptance, and trust of robots [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, inappropriate design of
these expressions has the potential to perpetuate cultural biases and stereotypes, particularly if
designers are not familiar with the culture of the intended end users [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Understanding these factors
together may help reduce the perpetuation of cultural stereotypes and biases in robots while promoting
social equity [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        However, it is unclear how robots can leverage multiple modalities to most effectively convey emotion
across cultures. The lack of universality of perceptions of robot emotion and social cues across cultures
presents new design considerations for researchers seeking to increase the quality of human-robot
interactions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Furthermore, robot expressions are typically designed by roboticists rather than the
intended end users of these systems, possibly leading to misalignments between an intended robot emotion
and how users perceive it.
      </p>
      <p>
        Our work explores synthesizing multimodal robot expressions, focusing on sound and color, which
effectively communicate robot emotion [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. We aim to identify how combining these modalities affects
human perceptions of robot emotion across cultures and how these perceptions may impact design
considerations for culturally aware robots, with the long-term goal of supporting autonomous
personalization. We propose a mixed-methods study in which participants from various cultural backgrounds
will design different robot expressions that they perceive to convey specific emotions. We will leverage
an online tool for the Cognitively Assistive Robot for Motivation and Neurorehabilitation (CARMEN)
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] to enable participants to design personalized, multimodal robot expressions on a simulated robot
(see Figure 1). We will evaluate how participants perceive expressions designed by other participants
from different cultures.
      </p>
      <p>
        We anticipate two main contributions from our proposed work. First, we will provide insight into how
various modalities of expression and their combinations affect the perception of emotions and social
behaviors across cultures. Second, we will propose design considerations for multimodal expression that
researchers can leverage to make socially assistive robots more culturally sensitive and synthesize higher
quality interactions between users and robots. Our research also extends to maintaining longitudinal
engagement with robot-delivered interventions in the home, where multimodal robot expression may
provide higher quality, more engaging interactions between users and robots across cultures [
        <xref ref-type="bibr" rid="ref10 ref6 ref8 ref9">6, 8, 9, 10</xref>
        ].
Ultimately, our work seeks to create more effective care that promotes inclusiveness across different
cultures.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Robot Emotion and Social Cues</title>
        <p>
          The design of robot expressions can convey complex information to users, such as different emotions or
social cues [
          <xref ref-type="bibr" rid="ref11 ref6">11, 6</xref>
          ]. Researchers have found that robot expressions of emotion and social cues have a
positive effect on user perception and how accurately they can be recognized [
          <xref ref-type="bibr" rid="ref1 ref5 ref6 ref8 ref12 ref13">1, 5, 6, 8, 12, 13</xref>
          ]. For
instance, utilizing colored lights for expression significantly improves participants’ accuracy when
identifying a robot’s internal state and improves trust towards a robot [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Other research on vocal
expression revealed that intonation, pitch, and timbre are the primary sound parameters which impact
its perception [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Nonverbal sounds may also convey important information to users about robot
emotion or social cues [
          <xref ref-type="bibr" rid="ref15 ref5">5, 15</xref>
          ]. However, questions remain on how combinations of these modalities
impact users’ perception of robot expressions across cultures. Thus, we plan to enable participants to
design these expressions to better understand how cultural backgrounds affect the perception of robot
emotion.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Cross-Cultural Emotion Perception</title>
        <p>
          Researchers have identified that multimodal robot expression may provide higher quality, more engaging
interactions between users and robots across cultures [
          <xref ref-type="bibr" rid="ref10 ref6 ref8 ref9">6, 8, 9, 10</xref>
          ]. Research on the cross-cultural impact
of vocal expression shows that people express and perceive emotions and social cues through subtle,
nuanced patterns [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. A related study [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] presented various
human vocal expressions of emotions (happiness, anger, fear, disgust, sadness, and surprise) recorded in
the US to a culturally diverse group of participants. They found that a wider cultural gap led
to decreased emotion recognition accuracy among the participants [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Other work focuses on how
personalizing robots across cultures promotes acceptance of robots during human-robot interaction [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
These studies highlight the importance of understanding how users’ cultural backgrounds influence
their perception of robot expressions and how these expressions can be personalized across cultures.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Methodology</title>
      <p>
        We plan to conduct an online mixed-methods study to identify how multimodal expression, through
sound and color, impacts a user’s perception of robot emotion and social cues, and how these perceptions
change across cultures. We will follow commonly used frameworks from psychology [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and focus on
both innate primary emotions (joy, sadness, anger, fear, and disgust) and acquired secondary emotions
(guilt, regret, pride, and jealousy) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. We plan to recruit participants from the US and Mexico to
design multimodal robot expressions and social cues to convey these emotions. These locations allow
us to explore how cultural elements, such as expressiveness, communication styles, and attitudes towards
technology, impact the design and perception of robot emotions. Furthermore, our research will be
primarily conducted in California where there is a relatively high population of people of Mexican
heritage. Participants will report the culture with which they self-identify.
      </p>
      <p>
        We will use the CARMEN platform [
        <xref ref-type="bibr" rid="ref7 ref19">7, 19</xref>
        ], a cognitively assistive robot with flexible,
expressive modalities, designed to deliver longitudinal interventions at home. Participants can
use CARMEN’s online interface to design their preferred robot expressions to convey emotions with an
easy-to-use block programming system. We will provide participants with a brief tutorial on using the
interface to design their own robot emotions in order to better understand how people from different
cultural backgrounds perceive these emotions in robots. We will present participants with a predefined
neutral robot expression, and they can adjust both the color and sounds in order to isolate the effects of
these two modalities. Colored lights will be visible through the robot’s body to enable participants to
personalize their design preferences. They can consider features such as frequency, light animation,
hue, saturation, and brightness. For sound modalities, verbal and non-verbal sounds will be considered
to maximize personalization options for participants, who can adjust features like intonation,
pitch, and timbre.
      </p>
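      <p>To make this design space concrete, we include a minimal sketch of how a single participant-designed
expression might be represented. The field names and value ranges below are our own illustrative
assumptions for exposition; they are not CARMEN’s actual interface schema.</p>
      <preformat>
# Illustrative sketch of a participant-designed multimodal expression.
# All field names and ranges are hypothetical, chosen for exposition only.
from dataclasses import dataclass
from enum import Enum

class Emotion(Enum):
    # Primary (innate) and secondary (acquired) emotions from Section 3.
    JOY = "joy"
    SADNESS = "sadness"
    ANGER = "anger"
    FEAR = "fear"
    DISGUST = "disgust"
    GUILT = "guilt"
    REGRET = "regret"
    PRIDE = "pride"
    JEALOUSY = "jealousy"

@dataclass
class LightDesign:
    hue: float            # color hue in degrees, 0-360
    saturation: float     # 0 (gray) to 1 (fully saturated)
    brightness: float     # 0 (off) to 1 (full)
    animation: str        # e.g., "solid", "pulse", "fade"
    frequency_hz: float   # animation rate for pulsing or fading

@dataclass
class SoundDesign:
    verbal: bool          # verbal utterance vs. non-verbal sound
    pitch_shift: float    # semitones relative to the neutral baseline
    intonation: str       # e.g., "rising", "falling", "flat"
    timbre: str           # e.g., "soft", "bright"

@dataclass
class ExpressionDesign:
    participant_culture: str  # self-identified culture
    target_emotion: Emotion   # emotion the design is meant to convey
    light: LightDesign
    sound: SoundDesign
      </preformat>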
      <p>After completing the design process, we will ask participants open-ended questions to understand the
reasoning behind their choices, including what features they chose and why. We will conduct a thematic
analysis of the qualitative data, evaluating how participants weigh each modality and its features, the
effect of multimodal expression on emotion and social cue perception, and how this influences users’
perception of the conveyed robot emotion. We will also explore how cultural differences affect users’
perception of the emotions they assign to different robot expressions. In order to understand users’
perception of the robot emotions, we will ask participants to label designs from both their own
culture and the other culture with the emotion they perceive, allowing us to compare cross-cultural differences
in perception.</p>
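      <p>As a rough sketch of how this comparison could be computed, assume each perception trial is logged as
(designer’s culture, rater’s culture, intended emotion, perceived emotion); recognition accuracy can then
be split by whether the rater shares the designer’s culture. The record format and function below are our
illustrative assumptions, not a finalized analysis plan.</p>
      <preformat>
from collections import defaultdict

def recognition_accuracy(trials):
    """Emotion recognition accuracy, split into in-group trials
    (rater and designer share a culture) and out-group trials."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for designer_culture, rater_culture, intended, perceived in trials:
        group = "in-group" if designer_culture == rater_culture else "out-group"
        total[group] += 1
        if intended == perceived:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Two hypothetical trials for illustration.
trials = [
    ("US", "US", "joy", "joy"),        # in-group, correctly recognized
    ("US", "Mexico", "joy", "pride"),  # out-group, misrecognized
]
print(recognition_accuracy(trials))    # {'in-group': 1.0, 'out-group': 0.0}
      </preformat>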
    </sec>
    <sec id="sec-4">
      <title>4. Future Work</title>
      <p>
        This proposed work aims to identify how multimodal expressions of robot emotion and social cues
are perceived by users, understand how users’ respective cultures affect their perception of robot
expressions, and learn how these modalities can be combined to be more culturally aware. Multimodal
expressions have a significantly positive impact on user engagement [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ], which may improve robot
healthcare interventions deployed longitudinally in the home [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In future work, we will also explore
these differences among other cultures and how additional modalities, such as facial expressions, can
produce more effective modality combinations to improve expression recognition accuracy across
cultures [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. With more modalities to consider, there is the possibility of expanding this study to include
more emotions and social cues, or even more complex behavior, such as creating more personalized
robot personalities. Finally, we want to better understand the impact of these culturally influenced
emotions on longitudinal engagement [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ], as well as ethical implications such as trust, attachment,
and reliance on robots with these abilities [
        <xref ref-type="bibr" rid="ref22 ref23 ref24">22, 23, 24</xref>
        ]. The results of this work may allow researchers
to automatically synthesize personalized behaviors based on a user’s cultural background. Our work
will enable robots to interact with more culturally diverse populations and ultimately improve equity
and accessibility of personalized systems.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Terada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yamauchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ito</surname>
          </string-name>
          ,
          <article-title>Artificial emotion expression for a robot by dynamic color change</article-title>
          ,
          <source>in: 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication</source>
          , IEEE,
          <year>2012</year>
          , pp.
          <fpage>314</fpage>
          -
          <lpage>321</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Sauter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Eisner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ekman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <article-title>Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations</article-title>
          ,
          <source>Proceedings of the National Academy of Sciences</source>
          <volume>107</volume>
          (
          <year>2010</year>
          )
          <fpage>2408</fpage>
          -
          <lpage>2412</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rooksby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Cross</surname>
          </string-name>
          ,
          <article-title>Social robots on a global stage: establishing a role for culture during human-robot interaction</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>13</volume>
          (
          <year>2021</year>
          )
          <fpage>1307</fpage>
          -
          <lpage>1333</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Londoño</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Röfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Welschehold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Valada</surname>
          </string-name>
          ,
          <article-title>Doing right by not doing wrong in human-robot collaboration</article-title>
          ,
          <source>arXiv preprint arXiv:2202.02654</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Fitter</surname>
          </string-name>
          ,
          <article-title>Nonverbal sound in human-robot interaction: a systematic review</article-title>
          ,
          <source>ACM Transactions on Human-Robot Interaction</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Löffler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tscharn</surname>
          </string-name>
          ,
          <article-title>Multimodal expression of artificial emotion in social robots using color, motion and sound</article-title>
          ,
          <source>in: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>334</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bouzida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kubota</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cruz-Sandoval</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. W.</given-names>
            <surname>Twamley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Riek</surname>
          </string-name>
          ,
          <article-title>Carmen: A cognitively assistive robot for personalized neurorehabilitation at home</article-title>
          ,
          <source>in: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pörtner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schröder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rasch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sprute</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hofmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>König</surname>
          </string-name>
          ,
          <article-title>The power of color: A study on the effective use of colored light in human-robot interaction</article-title>
          ,
          <source>in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>3395</fpage>
          -
          <lpage>3402</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sandoval</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Laribi</surname>
          </string-name>
          ,
          <article-title>Recent advancements in multimodal human-robot interaction</article-title>
          ,
          <source>Frontiers in Neurorobotics</source>
          <volume>17</volume>
          (
          <year>2023</year>
          )
          <fpage>1084000</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Perugia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. I.</given-names>
            <surname>Barakova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Rauterberg</surname>
          </string-name>
          ,
          <article-title>Context-enhanced human-robot interaction: exploring the role of system interactivity and multimodal stimuli on the engagement of people with dementia</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Habibian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Valdivia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Blumenschein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Losey</surname>
          </string-name>
          ,
          <article-title>A review of communicating robot learning during human-robot interaction</article-title>
          ,
          <source>arXiv preprint arXiv:2312.00948</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Barnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jeon</surname>
          </string-name>
          ,
          <article-title>The effects of robot voices and appearances on users' emotion recognition and subjective perception</article-title>
          ,
          <source>International Journal of Humanoid Robotics</source>
          <volume>20</volume>
          (
          <year>2023</year>
          )
          <fpage>2350001</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Baraka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Veloso</surname>
          </string-name>
          ,
          <article-title>Mobile service robot state revealing through expressive lights: formalism, design, and evaluation</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>10</volume>
          (
          <year>2018</year>
          )
          <fpage>65</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.-S.</given-names>
            <surname>Jee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-J.</given-names>
            <surname>Jeong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kobayashi</surname>
          </string-name>
          ,
          <article-title>Sound design for emotion and intention expression of socially interactive robots</article-title>
          ,
          <source>Intelligent Service Robotics</source>
          <volume>3</volume>
          (
          <year>2010</year>
          )
          <fpage>199</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Orthmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Leite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bresin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Torre</surname>
          </string-name>
          ,
          <article-title>Sounding robots: Design and evaluation of auditory displays for unintentional human-robot interaction</article-title>
          ,
          <source>ACM Transactions on Human-Robot Interaction</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] P. Laukka, H. A. Elfenbein, Cross-cultural emotion recognition and in-group advantage in vocal expression: A meta-analysis, Emotion Review 13 (2021) 3–11.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] N. Gasteiger, M. Hellou, H. S. Ahn, Factors for personalization and localization to optimize human–robot interaction: A literature review, International Journal of Social Robotics 15 (2023) 689–701.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] N. Spatola, O. A. Wudarczyk, Ascribing emotions to robots: Explicit and implicit attribution of emotions and perceived robot anthropomorphism, Computers in Human Behavior 124 (2021) 106934.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] A. Kubota, R. Pei, E. Sun, D. Cruz-Sandoval, S. Kim, L. D. Riek, Get smart: Collaborative goal setting with cognitively assistive robots, in: Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023, pp. 44–53.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] H. Salam, O. Celiktutan, I. Hupont, H. Gunes, M. Chetouani, Fully automatic analysis of engagement and its relationship to personality in human-robot interactions, IEEE Access 5 (2016) 705–721.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] F. Del Duchetto, P. Baxter, M. Hanheide, Are you still with me? Continuous engagement assessment from a robot’s point of view, Frontiers in Robotics and AI 7 (2020) 116.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] T. Law, M. Scheutz, Trust: Recent concepts and evaluations in human-robot interaction, Trust in Human-Robot Interaction (2020) 27.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] T. Sanders, A. Kaplan, R. Koch, M. Schwartz, P. A. Hancock, The relationship between trust and use choice in human-robot interaction, Human Factors 61 (2019) 614–626.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] A. Kubota, M. Pourebadi, S. Banh, S. Kim, L. Riek, Somebody that I used to know: The risks of personalizing robots for dementia care, Proceedings of We Robot (2021).</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>