<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Robot Errors</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Raffaella Esposito</string-name>
          <email>raffaella.esposito3@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandra Rossi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Silvia Rossi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Naples Federico II</institution>, <addr-line>Via Cintia, 80126 Naples</addr-line>, <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Trusting a robot involves perceiving its actions as driven by a benevolent purpose, making intentionality attribution a psychological mechanism worthy of attention in Human-Robot Interaction (HRI). By integrating findings from studies on intentionality bias in HRI, we highlight a gap in the literature and discuss implications for user expectations and trust. In particular, we know that people often explain robot mistakes as deliberate choices, but we do not yet know whether this judgment hinges on how human‑like the robot appears. We argue that a humanoid appearance will amplify attributions of agency and purpose, whereas a mechanical guise will steer observers toward design‑based or accidental explanations. Demonstrating this effect would pinpoint when embodiment alone reshapes error interpretation, revealing when and how a robot's appearance alters the perceived intentionality behind its actions.</p>
      </abstract>
      <kwd-group>
        <kwd>robot errors</kwd>
        <kwd>intentionality attribution in HRI</kwd>
        <kwd>robot human‑likeness</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>People are driven by a spontaneous tendency to make sense of the world around them. For this reason,
when we observe someone act, we almost automatically ask ourselves, “Why did they do that?”. To
explain someone’s behavior, we assume the person had motives, goals, feelings, and other mental states,
and those assumptions help us turn a stream of actions into a coherent story.</p>
      <p>
        The human mind shows a strong intentionalistic bias, preferring explanations of others’ behavior as
intention‑driven even when information is ambiguous. Indeed, when context does not offer immediate
mechanical explanations, intention attribution reduces uncertainty and facilitates prediction of future
events [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Once an act is framed as intentional, the door opens to evaluating the actor’s motives, an
appraisal that, in turn, underpins moral judgement and social expectations. In this way, intentionality
attribution becomes the precondition for deciding whether another agent can be trusted [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        According to the trust literature, trust matters precisely because the trustee possesses the capacity to
choose actions that could either benefit or harm the trustor [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In fact, trust is defined as the willingness
of the trustor to expose themselves to vulnerability in relation to the trustee [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. For this reason, the
trustee must be regarded as capable of deliberate, goal‑directed behavior [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. In other words, a party
can only be trusted if they are viewed as having agency; without that perception, the very concept of
trust collapses [
        <xref ref-type="bibr" rid="ref3 ref7">3, 7</xref>
        ].
      </p>
      <p>
        Investigating whether humans extend their intentionality bias to robots is therefore worthy of
attention. Viewing robots as intentional agents can make users attribute benevolence to the robot’s
motives and can help trust repair [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. Understanding when and why intentionality attributions arise
will clarify one of the hinges on which human–robot trust turns.
      </p>
      <p>
        Here, we argue that a moment that may be particularly revealing of these psychological mechanisms
is the moment when a robot fails. An error forces observers to decide whether the slip stems from blind
mechanics or from the choices of an intentional agent, and such a claim is supported by experimental
evidence [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. We also argue that the robot’s human‑likeness could amplify the inference that an
inner decision‑making system lies behind a robot’s error.
      </p>
      <p>We believe that the degree of anthropomorphism may influence the extent to which a robot’s errors
are perceived as intentional or mere malfunctions. A humanoid form, with its familiar human‑like
features and expressive capabilities, may serve as a powerful cue for interpreting an error as a deliberate
act with underlying motives. Studying how people explain robotic errors across a continuum of
human‑likeness may offer a natural test‑bed for tracing when intentionality is conferred, and whether
trust is ultimately eroded or repaired.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Intentionality Attribution to Robots</title>
      <p>Understanding how the degree of anthropomorphism influences intentionality attributions for robot
errors raises a broader question: do people attribute intentionality to robots in the first place?</p>
      <p>
        Studies indicate that people use similar social‑cognitive tools to explain both human and robot
actions [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In particular, the correspondence bias (i.e., the tendency to explain behaviour in terms of
dispositional choice despite situational constraints) operates for both human and robotic agents. In
experiments with the humanoid robot Pepper [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], observers attributed volitional choice to the robot
even after the experimental script made clear that its actions were externally programmed; the bias
grew stronger when Pepper voiced a counter‑normative stance, signalling an opinion that clashed with
social expectations.
      </p>
      <p>
        Functional‑imaging work complements these behavioural findings: activity in classic Theory‑of‑Mind
regions (the medial prefrontal cortex, temporoparietal junction, and posterior superior temporal sulcus)
rises linearly with a robot's human‑likeness, from mechanical devices through zoomorphic platforms to
fully anthropomorphic embodiments [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This graded neural response suggests that perceived agency
is neurally encoded well before any explicit judgement is made.
      </p>
      <p>
        When people confront complex robot behaviours whose internal logic they cannot fully parse, they
seem to default to inferring intentions; if the behavioural pattern then breaks, they interpret the
deviation as a deliberate act of opposition [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Consistent with this, Short et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and Ullman et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]
showed that humanoid robots programmed to cheat during games drew markedly stronger attributions
of intent. Further, Ciardo et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] demonstrated that the type of error matters: observers framed
a clearly mechanical malfunction as a design glitch, whereas a more human‑like slip sustained their
mentalistic reading of the robot's behaviour. Taken together, these findings indicate that deviations
from normative scripts or user expectations may amplify intentionality attributions.
      </p>
      <p>
        Beyond errors and cheating, subtler non‑verbal cues also invite mental‑state inferences. Human
partners read intention into robots' gaze shifts [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and reactive micro‑movements [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] using the same
heuristics deployed in human–human interaction.
      </p>
      <p>Overall, while people may not view robots as fully equivalent to humans in terms of intentionality,
they do ascribe mental states and intentions to robots to varying degrees depending on the robot’s
design and behaviour. Whether, and in what way, these two factors interact remains an open question.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Robot Errors and Anthropomorphism: An Open Question</title>
      <p>
        Although numerous studies have shown that errors enacted by humanoid robots prompt attributions of
purpose and mental states [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ], no work to date has directly compared an anthropomorphic‑looking
robot with a mechanical one making the same error.
      </p>
      <p>Figuring out whether the anthropomorphic envelope alone is enough to cast an error as an intentional
act means deciding whether the robot body functions as a magnifying glass for any future moral and
relational assessment. If the blame falls on the plastic face rather than the robot’s programming, then
aesthetics becomes ethics. Thus, we would expect that the more deliberate an error appears to be, the
more our trust in the robot will swing—upward when we infer benevolent motives, downward when
we deem them malevolent or negligent.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>This paper set out to bridge two strands of research: (i) studies showing that humanoid appearance
alone can trigger an intentionality bias, and (ii) studies demonstrating that certain robot errors invite
mental‑state attributions. Bringing these lines together highlights an untested junction: does the very
same error elicit different intentional readings solely because of the body that performs it?</p>
      <p>Robots will disappoint us. The question is not if but how we will explain those disappointments.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This material is based upon work supported by the Air Force Office of Scientific Research under award
number FA8655‑23‑1‑7060. Any opinions, findings, conclusions, or recommendations expressed in
this material are those of the authors and do not necessarily reflect the views of the United States Air
Force.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Malle</surname>
          </string-name>
          ,
          <article-title>Attribution theories: How people make sense of behavior</article-title>
          , Wiley (
          <year>2022</year>
          )
          <fpage>93</fpage>
          -
          <lpage>120</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Vanneste</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Puranam</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence, trust, and perceptions of agency</article-title>
          ,
          <source>Acad. Manage. Rev.</source>
          (
          <year>2024</year>
          ). doi:10.5465/amr.2022.0041
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hardin</surname>
          </string-name>
          , Trust and Trustworthiness, Russell Sage Fdn., New York, NY,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Schilke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reimann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Cook</surname>
          </string-name>
          ,
          <article-title>Trust in social relations</article-title>
          ,
          <source>Annual Review of Sociology</source>
          <volume>47</volume>
          (
          <year>2021</year>
          )
          <fpage>239</fpage>
          -
          <lpage>259</lpage>
          . doi:10.1146/annurev-soc-082120-082850.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Taylor</surname>
          </string-name>
          ,
          <source>Human Agency and Language</source>
          , Cambridge Univ. Press, Cambridge, UK,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Menon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Ames</surname>
          </string-name>
          ,
          <article-title>Culturally conferred conceptions of agency: A key to social perception of persons, groups, and other actors</article-title>
          ,
          <source>in: Lay Theories and Their Role in the Perception of Social Groups</source>
          , Psychology Press,
          <year>2003</year>
          , pp.
          <fpage>169</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Rousseau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Sitkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Burt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Camerer</surname>
          </string-name>
          ,
          <article-title>Not so different after all: A cross-discipline view of trust</article-title>
          ,
          <source>Acad. Manage. Rev</source>
          .
          <volume>23</volume>
          (
          <year>1998</year>
          )
          <fpage>393</fpage>
          -
          <lpage>404</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Tolmeijer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hanheide</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lindner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Powers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dixon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Tielman</surname>
          </string-name>
          ,
          <article-title>Taxonomy of trust-relevant failures and mitigation strategies</article-title>
          ,
          <source>in: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>12</lpage>
          . doi:10.1145/3319502.3374793.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Rezaei Khavas</surname>
          </string-name>
          ,
          <article-title>A review on trust in human-robot interaction</article-title>
          ,
          <source>arXiv</source>
          (
          <year>2021</year>
          ) arXiv:
          <fpage>2105</fpage>
          .???
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Short</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Scassellati</surname>
          </string-name>
          ,
          <article-title>No fair!! An interaction with a cheating robot</article-title>
          ,
          <source>in: Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>226</lpage>
          . doi:10.1109/HRI.2010.5453193
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Ciardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>De Tommaso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wykowska</surname>
          </string-name>
          ,
          <article-title>Effects of erring behavior in a human-robot joint musical task on adopting intentional stance toward the iCub robot</article-title>
          ,
          <source>in: Proceedings of the 2021 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>698</fpage>
          -
          <lpage>703</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. M. A.</given-names>
            <surname>De Graaf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Malle</surname>
          </string-name>
          ,
          <article-title>People's explanations of robot behavior subtly reveal mental state inferences</article-title>
          ,
          <source>in: Proceedings of the 2019 ACM/IEEE International Conference on Human-Robot Interaction (HRI)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>239</fpage>
          -
          <lpage>248</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <article-title>Does the correspondence bias apply to social robots?: Dispositional and situational attributions of human versus robot behavior</article-title>
          ,
          <source>Front. Robot. AI</source>
          <volume>8</volume>
          (
          <year>2022</year>
          )
          <fpage>788242</fpage>
          . doi:10.3389/frobt.2021.788242.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Krach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wrede</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sagerer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Binkofski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kircher</surname>
          </string-name>
          ,
          <article-title>Can machines think? Interaction and perspective taking with robots investigated via fMRI</article-title>
          ,
          <source>PLoS ONE</source>
          <volume>3</volume>
          (
          <year>2008</year>
          )
          <fpage>e2597</fpage>
          . doi:10.1371/journal.pone.0002597.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Imamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Terada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Takahashi</surname>
          </string-name>
          ,
          <article-title>Effects of behavioral complexity on intention attribution to robots</article-title>
          ,
          <source>in: Proceedings of the 3rd International Conference on Human-Agent Interaction (HAI)</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ullman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Leite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim-Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Scassellati</surname>
          </string-name>
          ,
          <article-title>Smart human, smarter robot: How cheating affects perceptions of social agency</article-title>
          ,
          <source>in: Proceedings of the Annual Meeting of the Cognitive Science Society</source>
          , volume
          <volume>36</volume>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mutlu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Yamaoka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kanda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishiguro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hagita</surname>
          </string-name>
          ,
          <article-title>Nonverbal leakage in robots: Communication of intentions through seemingly unintentional behavior</article-title>
          ,
          <source>in: Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI)</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>76</lpage>
          . doi:10.1145/1514095.1514110.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K.</given-names>
            <surname>Terada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shamoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mei</surname>
          </string-name>
          ,
          <article-title>Reactive movements of non-humanoid robots cause intention attribution in humans</article-title>
          ,
          <source>in: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>3715</fpage>
          -
          <lpage>3720</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>