<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>MultiTTrust: 2nd Workshop on Multidisciplinary Perspectives on Human-AI Team</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>The Effect of Emoji Type on Trust in AI Teammates</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Morgan E. Bailey</string-name>
          <email>m.bailey.1@research.gla.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Benjamin Gancz</string-name>
          <email>benjamin.gancz@qumo.do</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Frank E. Pollick</string-name>
          <email>frank.pollick@glasgow.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>MultiTTrust: 2nd Workshop on Multidisciplinary Perspectives on Human-AI Team</institution>
          ,
          <addr-line>Dec 04, 2023, Gothenburg</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Qumodo Ltd</institution>
          ,
          <addr-line>7 Bell Yard, London, WC2A 2JR</addr-line>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Glasgow, School of Computer Science</institution>
          ,
          <addr-line>Sir Alwyn Williams Building, Glasgow, G12 8RZ, Scotland</addr-line>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Glasgow, School of Psychology &amp; Neuroscience</institution>
          ,
          <addr-line>62 Hillhead Street, Glasgow, G12 8QB, Scotland</addr-line>
        </aff>
      </contrib-group>
      <abstract>
<p>The rapid advancement of Artificial Intelligence (AI) has transformed various sectors, and the workplace is no exception. Collaborative efforts between humans and AI, known as Human-AI teams (HATs), have gained increasing attention. Trust plays a central role in shaping HAT dynamics, as excessive trust can lead to over-reliance, while insufficient trust can hinder AI utilization. This study explores the potential of emojis to enhance Social Intelligence (SI) within HATs and to influence trust calibration. Drawing on prior research indicating the role of emojis in conveying emotional states, the study implemented a mixed design: participants were divided into two groups based on a between-groups factor, with one group interacting with a highly reliable AI and the other with a less reliable AI. The within-groups factor was emoji type, with three conditions: Face Emojis (☹, ), Icon Emojis ( , ), or No Emojis. Participants also had a human teammate who never used emojis and performed at the same level across all conditions. The task involved determining geographic locations with the help of teammates' responses, with the AI and human teammates often providing conflicting answers. The analysis revealed that neither the use of emojis in AI responses nor the reliability of AI teammates had a significant impact on trust or influence ratings. Furthermore, the type of emojis used did not affect trust calibration. The Trust in Automation Questionnaire results indicated that reliability significantly affected trust and familiarity, while emoji type did not. Despite the limited influence of emojis on trust calibration in HATs, the study sheds light on the complex dynamics at play. The specific nature of tasks in HATs, requiring precision and cognitive effort, may overshadow the emotional cues conveyed by emojis.
Nevertheless, the study identified that participants perceived highly reliable AI as less familiar, possibly due to anthropomorphic priming, which aligns with past research. Trust calibration strategies should therefore consider the AI's human-like performance. In conclusion, this research underscores the intricate nature of trust calibration in HATs and suggests that while emojis hold potential for enhancing human-computer interactions, their impact on trust may be more restrained in some contexts. Future studies should delve deeper into trust complexities in HATs and explore strategies beyond emojis to foster trust.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-AI Teams</kwd>
        <kwd>Human-AI Dynamic Team Trust</kwd>
        <kwd>Trust-Calibration</kwd>
        <kwd>Trust</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent decades, the rapid advancement of Artificial Intelligence (AI) has profoundly transformed various aspects of
society. Specifically, within the workplace, AI has proven to excel in tasks involving extensive data analysis, high
precision, and sustained cognitive effort. Nevertheless, research consistently emphasizes the effectiveness of
human-AI collaboration, often referred to as hybrid intelligence, in achieving optimal results [
        <xref ref-type="bibr" rid="ref11 ref5">5,11</xref>
        ]. This has sparked a growing
interest in comprehending the dynamics of Human-AI teams (HATs) to implement AI effectively within the workforce.
      </p>
      <p>
        Trust emerges as a pivotal factor in shaping the dynamics of HATs, as it underlies critical team interactions. Striking
the right balance of trust is essential within HATs, where excessive trust can lead to an over-reliance on AI systems,
causing users to overlook mistakes and errors [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Conversely, insufficient trust may result in team members
underutilizing the capabilities of AI, ultimately leading to reduced team performance [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Calibrating trust within HATs
entails transitioning from black-box AI methods to explainable AI.
Presenting AI outputs in a more human-friendly manner, integrating elements of Social Intelligence (SI) [
        <xref ref-type="bibr" rid="ref11 ref9">9,11</xref>
        ], proves
to be a valuable approach for explaining AI and facilitating trust calibration [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        Previous research has indicated that emojis can play a significant role in enhancing SI within professional settings.
Emojis offer a valuable means for AI to convey emotional states, thereby allowing AI systems to better interpret and
respond to users' emotional cues and potentially calibrate trust successfully. Building on prior research, which has
effectively employed emojis on platforms like Twitter to develop models for inferring affect from emoji usage patterns
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the use of emojis can be extended to foster SI in HATs, allowing mutual understanding of affective state
between the human and AI teammates.
      </p>
      <p>
        Furthermore, in the domain of health-related applications, particularly those involving chatbots inquiring about
participants' mental well-being, studies have demonstrated the positive impact of emojis. Chatbots that incorporate
emojis have received higher ratings in terms of user enjoyment, attitude, and confidence [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Research has also
indicated messages from chatbots featuring emojis were rated on par with those from human senders [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Additionally,
both human and AI senders who utilized emojis were perceived as significantly more socially appealing, competent in
computer-mediated communication, and credible compared to senders who relied solely on verbal messages [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The
incorporation of emojis into AI-mediated communication not only enhances the ability to understand and express
affective states but also fosters positive user experiences and perceptions, aligning with the goals of social intelligence
within work environments. From the current literature we pose the following hypotheses:
      </p>
      <p>H1: Use of Emojis in AI responses will influence the decision-making process when determining which teammate to
trust.</p>
      <p>H2: Type of Emojis in AI responses will influence the decision-making process when determining which teammate to
trust.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
      <p>We used a mixed between-within subjects design (2×3) in which participants interacted with an AI
teammate of either high (90%) or low (60%) reliability and a human teammate with 30% reliability. Within each
group, participants then experienced three emoji conditions: face emojis, icon emojis, or no
emojis. We determined a sample size of N = 44 for 85% power to detect a medium effect in a two-way ANOVA (α =
.05).</p>
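<p>As a rough illustration of this style of a priori power analysis, the sketch below (our own, assuming Python with SciPy) searches for the total sample size at which a six-cell between-subjects approximation of the 2×3 design reaches the target power. It will not reproduce the paper's N = 44, which presumably reflects the extra power of the within-subjects factor (e.g., a G*Power repeated-measures calculation):</p>

```python
# Hypothetical sketch of an a priori power loop for a 2x3 ANOVA design,
# medium effect f = 0.25, alpha = .05, target power = .85.
# This purely between-subjects approximation will NOT reproduce the paper's
# N = 44, which presumably reflects the within-subjects factor's added power.
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, k_groups, f_effect, alpha=0.05):
    """Power of a one-way ANOVA with k_groups cells and n_total participants."""
    df1, df2 = k_groups - 1, n_total - k_groups
    lam = f_effect ** 2 * n_total        # noncentrality parameter
    crit = f_dist.isf(alpha, df1, df2)   # critical F under H0
    return ncf.sf(crit, df1, df2, lam)   # P(F > crit) under H1

n = 12                                   # start at 2 participants per cell
while anova_power(n, k_groups=6, f_effect=0.25) < 0.85:
    n += 6                               # keep the design balanced
print(n)
```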
      <p>The study employed a Wizard of Oz method for development convenience and optimal experimental
control. Participants were led to believe they were collaborating with an AI and a human teammate, when
in fact they were interacting with responses produced by ChatGPT. The task involved presenting participants
with random locations extracted from Google Earth. Participants were tasked with determining the continent, country,
and city associated with each location, with the final decision resting with the participant, who assumed the role of
'team leader'. A time constraint of 120 seconds per location was enforced, meaning participants had to rely on their
teammates' responses to submit the location in time. Notably, the AI and human teammates provided conflicting
answers 90% of the time, requiring participants to discern which teammate they trusted more.</p>
      <p>
        A total of 30 locations were identified by each participant across three blocks, comprising 10 trials per block. Each
block either used Face Emojis (☹, ), Icon Emojis ( , ) or No Emojis. Following each trial, the correct answer
was revealed, enabling participants to assess the performance of the human and AI teammates. On each trial,
participants rated which teammate influenced them most and which teammate they trusted most; at the end of each
block, participants completed the Trust in Automation Questionnaire [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which has six sub-sections (Trust,
Familiarity, Understanding, Intentions of developers, Reliability of AI, and Propensity to trust) to measure different
elements of trust in the AI interacted with in the previous block. The questionnaires were slightly altered to fit the
zero-embodiment scenario being explored. We collected the location responses from ChatGPT, a large language model, by
inputting location descriptions and requesting versions with emojis distributed throughout them; minimal editing was
needed to make the responses suitable. The AI's writing style mimicked that of humans, following previous successful
approaches [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. We conducted the experiment using PsychoPy and hosted it on Pavlovia.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>A total of 42 participants from the University of Glasgow were recruited. The group consisted of 24 males and 18
females and had a mix of students (n = 27) and professionals (n = 17).</p>
      <p>We conducted a two-way ANOVA with interactions to compare trust ratings of the AI. The analysis indicated that
neither the type of emojis used in AI responses nor the reliability level of AI teammates had a significant impact on
trust. Specifically, the main effects of emoji types (F(2, 4) = 0.647, p = 0.524) and reliability (F(1, 4) = 1.363, p = 0.243)
were non-significant, as well as the interaction effect between emoji type and reliability (F(2, 4) = 0.554, p = 0.575).</p>
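<p>For readers who want to reproduce this style of analysis, the following is a minimal from-scratch sketch of a balanced two-way ANOVA with interaction on synthetic trust ratings (our own illustration using NumPy and SciPy; it does not use the study's data, and the paper's mixed between-within model would additionally partition participant-level variance):</p>

```python
# Sketch: balanced two-way ANOVA (emoji type x reliability) with interaction,
# computed from sums of squares on synthetic trust ratings. Illustrative only.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n_emoji, n_rel, n_per_cell = 3, 2, 7   # 3 emoji conditions x 2 reliability groups
# ratings[i, j, k] = trust rating of participant k in cell (emoji i, reliability j)
ratings = rng.normal(loc=5.0, scale=1.0, size=(n_emoji, n_rel, n_per_cell))

grand = ratings.mean()
cell_means = ratings.mean(axis=2)
emoji_means = ratings.mean(axis=(1, 2))
rel_means = ratings.mean(axis=(0, 2))

# Partition the total sum of squares into effects and within-cell error
ss_emoji = n_rel * n_per_cell * ((emoji_means - grand) ** 2).sum()
ss_rel = n_emoji * n_per_cell * ((rel_means - grand) ** 2).sum()
ss_cells = n_per_cell * ((cell_means - grand) ** 2).sum()
ss_inter = ss_cells - ss_emoji - ss_rel
ss_error = ((ratings - cell_means[:, :, None]) ** 2).sum()

df_emoji, df_rel = n_emoji - 1, n_rel - 1
df_inter = df_emoji * df_rel
df_error = n_emoji * n_rel * (n_per_cell - 1)

def f_test(ss, df):
    """F statistic and p-value against the within-cell error term."""
    F = (ss / df) / (ss_error / df_error)
    return F, f_dist.sf(F, df, df_error)

for name, ss, df in [("emoji type", ss_emoji, df_emoji),
                     ("reliability", ss_rel, df_rel),
                     ("interaction", ss_inter, df_inter)]:
    F, p = f_test(ss, df)
    print(f"{name}: F({df}, {df_error}) = {F:.3f}, p = {p:.3f}")
```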
      <p>We also conducted a two-way ANOVA with interactions to compare influence ratings of the AI. The analysis indicated
that neither emoji type nor reliability had a statistically significant impact on influence. Specifically, the main
effects of emoji type (F(2, 4) = 0.368, p = 0.692) and reliability (F(1, 4) = 0.010, p = 0.921), along with the
interaction effect between emoji type and reliability (F(2, 4) = 1.493, p = 0.225), were non-significant.</p>
      <p>
        We also analyzed the Trust in Automation Questionnaire [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] by completing two ANOVAs on the various subsections
assessing different dimensions of trust. For the Trust subsection, the analysis demonstrated a statistically significant
effect of reliability (F(1, 2) = 69.133, p = 0.0142), while emoji type showed no significant impact. Post hoc comparisons
via the Tukey method revealed significant differences across emoji types, but only by reliability. For the Familiarity
subsection, reliability demonstrated a significant impact (F(1, 2) = 141.187, p = 0.007), while emoji type had no
significant effect. Post hoc tests indicated differences in emoji type only by reliability, not between
emoji types. In the Propensity to trust subsection, reliability showed a significant effect (F(1, 2) = 30.990, p = 0.0308), while
emoji type did not significantly influence trust. Tukey post hoc tests did not identify specific trust differences based on
emoji type and reliability. In the Reliability of AI, Understanding, and Intentions of developers subsections, neither
reliability nor emoji type had a significant effect on trust, with both showing p-values above 0.05.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>
        The aim of this study was to investigate how emojis influence trust calibration within Human-AI teams (HATs) and
what this means for team dynamics. While emojis have shown potential in improving human-computer interactions
[
        <xref ref-type="bibr" rid="ref3 ref4">3,4</xref>
        ], our research revealed that their impact on trust calibration within HATs was smaller than anticipated, and
neither of our research hypotheses was fully supported.
      </p>
      <p>
        Contrary to our expectations, integrating emojis into AI-mediated communication did not enhance trust calibration
between human team members and AI. Despite emojis offering a more human-friendly and emotionally expressive
interface, their effect on trust calibration in HATs seemed limited. Several factors may explain these outcomes. Trust
in HATs appears to be influenced by multifaceted dynamics that go beyond emotional cues. Transparency of AI systems
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], their past performance [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and the unique traits of human team members [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] likely play crucial roles in trust
development. Emojis, while enhancing emotional expressiveness, might not address these fundamental trust
determinants in HATs.
      </p>
      <p>
        Additionally, the specific nature of tasks in HATs, demanding precision, data analysis, and cognitive effort, might
overshadow the emotional cues conveyed by emojis. The experimental task did not require emotional engagement; in other
situations where emojis are found to be useful, such as health care, there is often a need for emotion [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Our research did find significant results concerning participants' trust in and familiarity with the AI, though these
effects were driven only by reliability. Participants rated highly reliable AI as significantly less familiar than less reliable AI. This
suggests that proficient AI that uses humanized behavior effectively might not be frequently encountered. These findings
could explain why the high-reliability AI received lower trust scores on the TIA: previous research has shown that highly
reliable AI with high humanness is rated less trustworthy than humanized low-reliability AI [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], possibly influenced by
anthropomorphic priming [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Although limited, because these trust results appeared only in the questionnaire and not in the experimental
trial data, they support past research indicating that users find AI more trustworthy when it
appears more human-like, especially when the AI's performance is not perfect [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>In conclusion, our research highlights the intricate nature of trust calibration within HATs and indicates that while
emojis hold potential for enhancing human-computer interactions, their impact on trust may be more restrained in
this specific context. Future studies should delve deeper into the complexities of trust in HATs and explore strategies
beyond emojis that can effectively foster trust in the evolving realm of human-AI collaboration.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgements</title>
      <p>Morgan Bailey is supported by the UKRI Centre for Doctoral Training in Socially Intelligent Artificial Agents, Grant
Number EP/S02266X/1.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kamar</surname>
          </string-name>
          ,
          <article-title>Directions in hybrid intelligence: complementing AI systems with human intelligence</article-title>
          .
          <source>In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI'16)</source>
          . AAAI Press, New York, NY, pp.
          <fpage>4070</fpage>
          -
          <lpage>4073</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Fiore</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Jentsch</surname>
          </string-name>
          ,
          <article-title>Supporting Artificial Social Intelligence With Theory of Mind</article-title>
          ,
          <source>Frontiers in Artificial Intelligence</source>
          <volume>5</volume>
          (
          <year>2022</year>
          )
          <fpage>750</fpage>
          -
          <lpage>763</lpage>
          . doi: 10.3389/frai.2022.750763.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>de Visser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M. M.</given-names>
            <surname>Peeters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Jung</surname>
          </string-name>
          , et al.
          <article-title>Towards a Theory of Longitudinal Trust Calibration in Human-Robot Teams</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          <volume>12</volume>
          (
          <year>2020</year>
          ), pp.
          <fpage>459</fpage>
          -
          <lpage>478</lpage>
          . doi: 10.1007/s12369-019-00596-x.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E. L.</given-names>
            <surname>Thorndike</surname>
          </string-name>
          ,
          <article-title>Intelligence and its uses</article-title>
          ,
          <source>Harper's Magazine</source>
          <volume>140</volume>
          (
          <year>1920</year>
          ), pp.
          <fpage>227</fpage>
          -
          <lpage>235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ahanin</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Ismail</surname>
          </string-name>
          ,
          <article-title>A multi-label emoji classification method using balanced pointwise mutual information-based feature selection</article-title>
          ,
          <source>Computer Speech and Language</source>
          <volume>73</volume>
          (
          <year>2022</year>
          ). doi: 10.1016/j.csl.2021.101330.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fadhil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Schiavo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Yilma</surname>
          </string-name>
          ,
          <article-title>The effect of emojis when interacting with conversational interface assisted health coaching system</article-title>
          ,
          <source>in Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare</source>
          , New York, NY,
          <year>2018</year>
          . doi: 10.1145/3240925.3240965.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Beattie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <article-title>A Bot and a Smile: Interpersonal Impressions of Chatbots and Humans Using Emoji in Computer-mediated Communication</article-title>
          ,
          <source>Communication Studies</source>
          <volume>71</volume>
          (
          <year>2020</year>
          ). doi: 10.1080/10510974.2020.1725082.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Körber</surname>
          </string-name>
          ,
          <article-title>Theoretical considerations and development of a questionnaire to measure trust in automation</article-title>
          ,
          <source>in Advances in Intelligent Systems and Computing</source>
          ,
          <year>2019</year>
          . doi: 10.1007/978-3-319-96074-6_2.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Bailey</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. E.</given-names>
            <surname>Pollick</surname>
          </string-name>
          ,
          <article-title>Social Intelligence towards Human-AI Teambuilding</article-title>
          ,
          <source>in Proceedings of the AAAI Conference on Artificial Intelligence</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>16160</fpage>
          -
          <lpage>16161</lpage>
          . https://doi.org/10.1609/aaai.v37i13.26940.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Biessmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Teubner</surname>
          </string-name>
          ,
          <article-title>Transparency and trust in artificial intelligence systems</article-title>
          ,
          <source>Journal of Decision Systems</source>
          <volume>29</volume>
          (
          <year>2020</year>
          ). doi: 10.1080/12460125.2020.1819094.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zanatto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Patacchiola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Goslin</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Cangelosi</surname>
          </string-name>
          ,
          <article-title>Priming anthropomorphism: Can the credibility of humanlike robots be transferred to non-humanlike robots?</article-title>
          ,
          <source>in ACM/IEEE International Conference on Human-Robot Interaction</source>
          ,
          <year>2016</year>
          . doi: 10.1109/HRI.2016.7451847.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N. N.</given-names>
            <surname>Sharan</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <article-title>The effects of personality and locus of control on trust in humans versus artificial intelligence</article-title>
          ,
          <source>Heliyon</source>
          <volume>6</volume>
          (
          <year>2020</year>
          ). doi: 10.1016/j.heliyon.2020.e04572.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zanatto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Patacchiola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Goslin</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Cangelosi</surname>
          </string-name>
          ,
          <article-title>Investigating cooperation with robotic peers</article-title>
          ,
          <source>PLoS One</source>
          <volume>14</volume>
          (
          <year>2019</year>
          ). doi: 10.1371/journal.pone.0225028.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>