<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Investigations and Evidence</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">0000-0002-8812-0968</contrib-id>
          <string-name>Francesco Semeraro</string-name>
          <email>francesco.semeraro@manchester.ac.uk</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">0000-0003-4438-0255</contrib-id>
          <string-name>Marta Romeo</string-name>
          <email>m.romeo@hw.ac.uk</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">0000-0002-4709-2243</contrib-id>
          <string-name>Angelo Cangelosi</string-name>
          <email>angelo.cangelosi@manchester.ac.uk</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Samuele Vinanzi</string-name>
          <email>s.vinanzi@shu.ac.uk</email>
          <uri xlink:href="https://www.shu.ac.uk/about-us/our-people/staff-profiles/samuele-vinanzi"/>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computing, Sheffield Hallam University</institution>
          ,
          <addr-line>Sheffield</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Manchester Centre for Robotics and AI, The University of Manchester</institution>
          ,
          <addr-line>Manchester</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>School of Mathematical and Computer Sciences, Heriot-Watt University</institution>
          ,
          <addr-line>Edinburgh</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>Trust plays a crucial role in the design of human-robot interaction. Most current research focuses on the human factors that affect trust towards the robot, while only a few works mathematically model the trust a robot can have towards the user, and vice versa. In this work, we term this line of research “Computational Trust” and provide empirical evidence of this trend through preliminary results from an ongoing systematic review. MultiTTrust: 3rd Workshop on Multidisciplinary Perspectives on Human-AI Team Trust, June 11, 2024, Malmö, Sweden.</p>
      </abstract>
      <kwd-group>
        <kwd>Computational Trust</kwd>
        <kwd>Artificial Trust</kwd>
        <kwd>Natural Trust</kwd>
        <kwd>Human-Robot Interaction</kwd>
        <kwd>Human-Robot Collaboration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and background</title>
      <p>
        Trust is essential in shaping human-human relationships [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As autonomous agents are starting to enter
our everyday environments, we see an increasing number of researchers in Human-Robot Interaction
(HRI) directing their efforts towards understanding how this factor influences the relationships between
humans and intelligent machines [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. For example, in a collaborative setting between humans and
robots, establishing a trust relationship enables the user to delegate a portion of the shared task to the
robot [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ]. As a result, the user can concentrate on their own responsibilities within the task, thus
enhancing the overall outcome [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Trust in HRI has been defined as the “attitude that an agent will
help achieve an individual’s goals in a situation characterised by uncertainty and vulnerability” [7],
and it is mainly studied from the point of view of the human. In fact, most efforts have been directed
towards understanding what robotic characteristics facilitate or hinder human partners’ trust [8, 9], or
what strategies lead to trust repair once the latter is lost [10]. Research on trust and trustworthiness
in automation has become even more critical with the integration of Artificial Intelligence (AI) in the
decision-making processes of autonomous agents.
      </p>
      <p>
        Within this research panorama, we are witnessing an increasing interest in trying to mathematically
model human trust towards robots in an attempt to understand and exploit the human partner’s internal
state [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. We term this modeling attempt “Natural Trust”; its focus is understanding the
human partner’s internal model of the robot to facilitate the establishment of trust. In a dual fashion,
only a few mathematical models address the trust that an artificial agent could have towards the user it is
interacting with, termed “Artificial Trust” [11, 12]. Both these forms of trust are rarely utilized during a
human-robot interaction to alter the behavioral policy of the artificial agent.
      </p>
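      <p>As a concrete illustration of what such a model can look like, the sketch below implements a minimal Bayesian trust estimator in Python. It is a generic beta-reputation scheme, offered only as an assumption-laden sketch and not as the specific formulations of [11, 12]: the agent maintains a Beta belief over the partner’s reliability and updates it after each observed interaction outcome.</p>

```python
# Minimal sketch (hypothetical; not the models of refs [11, 12]):
# an agent's "Artificial Trust" in a partner represented as a
# Beta(alpha, beta) belief over the partner's reliability.
from dataclasses import dataclass


@dataclass
class BetaTrust:
    alpha: float = 1.0  # prior + observed successful interactions
    beta: float = 1.0   # prior + observed failed interactions

    def update(self, success: bool) -> None:
        """Bayesian update after observing one interaction outcome."""
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        """Expected trustworthiness of the partner, in [0, 1]."""
        return self.alpha / (self.alpha + self.beta)


trust = BetaTrust()
for outcome in (True, True, False, True):  # a short interaction history
    trust.update(outcome)
print(round(trust.mean, 2))  # 0.67
```

      <p>The uniform Beta(1, 1) prior encodes initial indifference (expected trustworthiness 0.5); each observed success or failure then shifts the estimate, which the agent could consult before relying on its partner.</p>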
      <p>From these considerations, a gap in the current state-of-the-art emerges: robots are often considered
as passive receptacles of trust, rather than as active social agents that make use of such trust to improve
their own behaviour. Starting from the observation that trust is a bidirectional relationship needed
to successfully complete collaborative tasks [13], we are interested in investigating when and how robots
can model the trustworthiness of their human partners [14] and/or the trust put
in them by the users, so that they can exploit this knowledge and enhance their interactions with the
users. Attempts at regularizing the design of models of trust for robots towards their users or other
agents are still scarce, yet of great importance. Research in the current literature [15, 14] has shown
that robots that possess cognitive mechanisms to identify and anticipate mistakes or deceptions in their
human partner’s strategy (in other words, evaluating their trustworthiness) can increase the success
rate of joint collaborative tasks.</p>
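      <p>To make the idea of trust-aware behaviour concrete, the following minimal Python sketch shows how a trustworthiness estimate could modulate a robot’s collaboration strategy. All names and thresholds are illustrative assumptions, not mechanisms taken from the cited works.</p>

```python
# Illustrative sketch: a trust estimate in [0, 1] modulating the
# robot's behavioural policy. Thresholds and action names are
# arbitrary illustrative choices, not taken from refs [14, 15].
def choose_action(partner_trust: float) -> str:
    if partner_trust >= 0.8:
        return "delegate"   # hand the subtask entirely to the partner
    if partner_trust >= 0.4:
        return "monitor"    # delegate, but verify the partner's outcome
    return "take_over"      # perform the subtask autonomously


print(choose_action(0.9), choose_action(0.5), choose_action(0.1))
# delegate monitor take_over
```

      <p>The point of the sketch is the coupling, not the thresholds: once the robot maintains any computational trust estimate, its behavioural policy can branch on that estimate rather than treating trust as a passive, human-only quantity.</p>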
      <p>In an attempt to address this knowledge gap, we propose the term “Computational Trust” (CT) to
refer to the mathematical models that can be used by a robot or an artificial agent to perform trust
evaluations on other agents. This term incorporates both cases of Artificial Trust and Natural Trust.
We discuss initial results from a systematic review we are currently finalizing. This work aims to be the
first systematic dive into this new research domain.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Discussion and preliminary results</title>
      <p>From the analysis of the state-of-the-art in trust in HRI detailed in Section 1, we have identified a
gap in the current research landscape. Specifically, many works presented in the literature focus on
the human side of robotic trust, i.e., the trust that humans place in robots. As we have seen, this is
an important issue to consider during the worldwide effort to integrate social robots into our daily
lives and environments. Despite this, we argue that this does not draw a complete picture of trust
relationships between humanity and automation. In fact, there is evidence that trust is a bidirectional
relationship and that the dynamics of mutual trust should not be ignored [13].</p>
      <p>To address this gap in the current knowledge, we have performed a systematic review of the current
literature, searching for scientific publications that describe computational models of CT in HRI. Our
investigation across three major scientific databases (IEEE Xplore, Scopus, and Web of Science) has led
to the selection of 101 papers. By analysing them, we have generated a map of the co-occurrence of
their keywords, reported in Figure 1. In its upper branch, the keyword “robot trust” appears, which is
very closely linked to CT. Not only does it appear in very recent publications, but it also derives from topics
such as “decision making” and “cognitive model”. This is evidence of a recent trend in the literature to
embed trust within robotic agents. However, this term is ambiguous, as it is mainly used to depict the
classical perspective of human trust in robots. To avoid any confusion, we should instead refer to the
more objective term of CT, which unambiguously refers to any way of mathematically modeling trust
estimates in HRI. Furthermore, the keyword “human-robot collaboration” is closely linked to “trust”.</p>
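      <p>The map itself was generated with VOSviewer [16]; the underlying computation can be sketched as follows, where the short keyword lists are toy stand-ins for the 101 reviewed papers: count how often each unordered pair of keywords appears together in a paper.</p>

```python
# Sketch of keyword co-occurrence counting (the actual map was built
# with VOSviewer [16]); the paper lists below are toy stand-ins.
from collections import Counter
from itertools import combinations

papers = [
    ["robot trust", "decision making", "cognitive model"],
    ["robot trust", "human-robot collaboration", "trust"],
    ["human-robot collaboration", "trust"],
]

cooccurrence = Counter()
for keywords in papers:
    # every unordered keyword pair within one paper co-occurs once
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("human-robot collaboration", "trust")])  # 2
```

      <p>Pair counts of this kind are what a tool like VOSviewer clusters and lays out spatially, so that frequently co-occurring keywords, such as “human-robot collaboration” and “trust”, appear close together in the map.</p>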
      <p>Finally, it is worth noting that our search for CT models shows that these models are
increasingly being used to modulate the behaviour of robots during collaborative tasks with humans.
All this evidence underscores the importance of pursuing standards in designing CT models.</p>
    </sec>
    <sec id="sec-3">
      <title>Acknowledgments</title>
      <p>Francesco Semeraro’s work was supported by the UKRI DTP CASE-conversion “Human-Robot
Collaboration for Flexible Manufacturing” (Ref. 2480772), sponsored by UKRI Engineering and Physical
Sciences Research Council and BAE Systems plc.</p>
      <p>Marta Romeo’s work was supported by the UKRI Node on Trust (Ref. EP/V026682/1, https://trust.tas.
ac.uk).</p>
      <p>Angelo Cangelosi’s work was partially supported by the Horizon projects PRIMI, MUSAE and the
ERC Advanced eTALK (funded by UKRI) and the UKRI Trustworthy Autonomous Systems Node on
Trust (Ref. EP/V026682/1).</p>
      <p>Samuele Vinanzi’s work was partially supported by Sheffield Hallam University’s Early Career
Research and Innovation Fellowship. This material is based upon work supported by the Air Force
Office of Scientific Research, Air Force Materiel Command, USA. Funder award Ref. FA9550-19-1-7002.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license
to any Author Accepted Manuscript version arising from this submission.</p>
    </sec>
    <sec id="sec-4">
      <title>References</title>
      <p>[7] J. D. Lee, K. A. See, Trust in automation: Designing for appropriate reliance, Human Factors 46
(2004) 50–80. doi:10.1518/hfes.46.1.50_30392.
[8] P. A. Hancock, T. T. Kessler, A. D. Kaplan, J. C. Brill, J. L. Szalma, Evolving trust in robots:
Specification through sequential and comparative meta-analyses, Human Factors 63 (2021) 1196–1229.
[9] M. Romeo, I. Torre, S. L. Maguer, A. Cangelosi, I. Leite, Putting robots in context: Challenging the
influence of voice and empathic behaviour on trust, in: 32nd IEEE International Conference on
Robot and Human Interactive Communication, RO-MAN, 2023.
[10] S. S. Sebo, P. Krishnamurthi, B. Scassellati, “I don’t believe you”: Investigating the effects of
robot trust violation and repair, in: Proceedings of the 14th ACM/IEEE International Conference
on Human-Robot Interaction (HRI), Association for Computing Machinery, 2019, pp. 57–65.
doi:10.1109/HRI.2019.8673169.
[11] H. Azevedo-Sa, X. J. Yang, L. P. Robert, D. M. Tilbury, A unified bi-directional model for natural
and artificial trust in human-robot collaboration, IEEE Robotics and Automation Letters 6 (2021)
5913–5920. doi:10.1109/lra.2021.3088082.
[12] C. C. Jorge, M. L. Tielman, C. M. Jonker, Artificial trust as a tool in human-AI teams, in: 2022
17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, 2022, pp.
1155–1157.
[13] J. Zonca, A. Sciutti, Does human-robot trust need reciprocity?, RO-MAN 2021 Workshop on Robot
Behavior Adaptation to Human Social Norms (TSAR), 2021.
[14] S. Vinanzi, M. Patacchiola, A. Chella, A. Cangelosi, Would a robot trust you? Developmental
robotics model of trust and theory of mind, Philosophical Transactions of the Royal Society B 374
(2019) 20180032.
[15] S. Vinanzi, A. Cangelosi, C. Goerick, The collaborative mind: intention reading and trust in
human-robot interaction, iScience 24 (2021).
[16] N. Van Eck, L. Waltman, Software survey: VOSviewer, a computer program for bibliometric
mapping, Scientometrics 84 (2010) 523–538.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Rousseau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Sitkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Burt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. F.</given-names>
            <surname>Camerer</surname>
          </string-name>
          ,
          <article-title>Not so different after all: A cross-discipline view of trust</article-title>
          ,
          <source>Academy of Management Review</source>
          <volume>23</volume>
          (
          <year>1998</year>
          )
          <fpage>393</fpage>
          -
          <lpage>404</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Mahani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <article-title>Human trust in robots: A survey on trust models and their controls/robotics applications</article-title>
          ,
          <source>IEEE Open Journal of Control Systems</source>
          <volume>3</volume>
          (
          <year>2023</year>
          )
          <fpage>58</fpage>
          -
          <lpage>86</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Semeraro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Carberry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leadbetter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cangelosi</surname>
          </string-name>
          ,
          <article-title>Good things come in threes: The impact of robot responsiveness on workload and trust in multi-user human-robot collaboration</article-title>
          ,
          <source>in: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F.</given-names>
            <surname>Semeraro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Carberry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cangelosi</surname>
          </string-name>
          ,
          <article-title>Simpler rather than challenging: Design of non-dyadic human-robot collaboration to mediate human-human concurrent tasks</article-title>
          ,
          <source>in: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems</source>
          , AAMAS '23,
          <publisher-name>International Foundation for Autonomous Agents and Multiagent Systems</publisher-name>
          ,
          <publisher-loc>Richland, SC</publisher-loc>
          ,
          <year>2023</year>
          , pp.
          <fpage>2541</fpage>
          -
          <lpage>2543</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Semeraro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Carberry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cangelosi</surname>
          </string-name>
          ,
          <article-title>Towards multi-user activity recognition through facilitated training data and deep learning for human-robot collaboration applications</article-title>
          , in: 2023
          <source>International Joint Conference on Neural Networks (IJCNN)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>01</fpage>
          -
          <lpage>09</lpage>
          . doi:10.1109/IJCNN54540.2023.10191782
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Collins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Juvina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>Gluck</surname>
          </string-name>
          ,
          <article-title>Cognitive model of trust dynamics predicts human behavior within and between two games of strategic interaction with computerized confederate agents</article-title>
          ,
          <source>Frontiers in Psychology</source>
          <volume>7</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>