<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>take place</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kiran M. Sabu</string-name>
          <email>kiran.mini-sabu@oru.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jennifer Renoux</string-name>
          <email>Jennifer.Renoux@oru.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hermine J. Grosinger</string-name>
          <email>Hermine.Grosinger@oru.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Alessandro Saffiotti</string-name>
          <email>Alessandro.Saffiotti@oru.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Applied Autonomous Sensor Systems (AASS), Örebro University</institution>
          ,
          <addr-line>Örebro</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Deliberative Communication, Human-Robot Interaction, Human-Agent Communication</institution>
          ,
          <addr-line>Human-Agent Collabo-</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>HAIC 2025 - First International Workshop on Human-AI Collaborative Systems</institution>
          ,
          <addr-line>editors Michele Braccini, Allegra De Filippo</addr-line>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Michela Milano</institution>
          ,
          <addr-line>Alessandro Safiotti, Mauro Vallati</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this short paper, we consider scenarios in human-robot collaboration where the robot relies on deliberation to activate communicative actions. We claim that reasoning about the perception of these actions is a key, but often disregarded, ingredient for successful communication, and propose a pre-theoretical model that accounts for this. We also discuss the problems that may arise if perception is neglected when reasoning about communication.</p>
      </abstract>
      <kwd-group>
        <kwd>Deliberative Communication</kwd>
        <kwd>Human-Robot Interaction</kwd>
        <kwd>Human-Agent Communication</kwd>
        <kwd>Human-Agent Collaboration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In many situations, humans and artificial agents work together to perform joint tasks. Each of them
may have their own strengths and weaknesses related to their capabilities. Communication allows the
human and the agent to express their needs and to help each other. For example, in a human-robot
collaborative setting, the robot may have to act as a communicator, and autonomously communicate
with its human teammate when the need arises. Consider an assembly scenario where a robot and
a human are both placing various pieces on a board. In some cases, pieces may be defective, which
the robot can perceive. In this case, the robot has to inform the human about the defective pieces so
that the human can replace or discard them. Due to the nature of the environment, which is potentially
noisy, the robot may need to use communication forms other than speech signals. For instance, it may
instead display a visual cue on a monitor, or move the defective object to a dedicated space as a way
to inform the human. The robot therefore has to reason upon the communicative actions to perform.
In this paper, we discuss some perceptual aspects required for successful communication in such a
collaborative setting, and claim that these aspects ought to be taken into account when reasoning about
communication activities.</p>
      <p>
        Deliberation is often considered a necessary ingredient for communication in purposeful human-agent
interaction [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. In particular, Sabu et al [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] introduce the notion of deliberative communication for
agent communicators: a reasoning process before and during communication that addresses questions
relating to the efects of a communicative action. These questions include:
      </p>
      <sec id="sec-1-1">
        <title>What to communicate,</title>
        <p>namely, reasoning on the content of the message based on the communicator’s knowledge, interlocutor,
environment, and context so that the communicator can achieve the communicative goal; How to
communicate, which includes two dependent levels: inter-how and intra-how, based on the modality
used to represent the message (Inter-How) and the expressiveness of the message (Intra-How); and
When to communicate, namely, identifying the need to communicate (When-Need), and reasoning
behind acting (When-Act). As an example, a robot may use reasoning to decide to ask a human operator
to carry a given box (What), and to ask it now (When) using its voice interface (How).</p>
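      <p>To make these three questions concrete, the following Python sketch bundles possible answers to them into a single record. This is an illustration under our own assumptions: the paper prescribes no data structures, and every name below is hypothetical.</p>
      <preformat>
from dataclasses import dataclass

@dataclass
class CommDecision:
    """Answers to the What/How/When questions of deliberative communication."""
    what: str        # content of the message (What)
    inter_how: str   # modality used to represent the message (Inter-How)
    intra_how: str   # expressiveness of the message (Intra-How)
    when_need: bool  # has a need to communicate been identified? (When-Need)
    when_act: str    # when to act on that need (When-Act)

# The box-carrying example from the text:
decision = CommDecision(
    what="please carry this box",
    inter_how="voice interface",
    intra_how="short spoken request",
    when_need=True,
    when_act="now",
)
      </preformat>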
      <p>Just performing the action does not guarantee that communication will take place unless the action is
perceived. In this paper, we consider communication as the action being performed by the communicator
and being perceived by the interlocutor. In the context of deliberative communication considered here,
this means that when deciding to perform communicative actions, the agent may have to reason about
the perceiving capabilities of the human before and during communication to ensure that these actions
are perceivable. Interestingly, these capabilities may change dynamically: for example, they may be
affected by environmental factors, like noise, or by the cognitive and physical state of the human, like
fatigue or impairment. Depending on these capabilities, the human may best perceive actions done via
touch, auditory, or visual means, and the agent can choose what communicative action to perform
accordingly. In our assembly scenario, the robot may decide that the environment is too noisy for
a voice message to be perceived, and decide instead to post a visual message on a monitor (How) as
soon as the human turns toward it (When). After the action has been performed, the human may or
may not have perceived it. If the action is not perceived, the message will not be conveyed
to the interlocutor. Therefore, the agent also needs to monitor and reason about the uncertainty in
the action being perceived by the human after the action is performed. All these reasoning activities
require a model of communicative actions that includes perception explicitly: this paper reports some
initial reflections toward the development of such a model, where, before an artificial agent decides to
communicate with a human, the action being perceived must also be considered.</p>
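      <p>A minimal sketch of this kind of perceivability-driven choice, assuming a scalar perceivability estimate per candidate action (the estimates, the SHOW action name, and the selection rule are our assumptions, not the paper's):</p>
      <preformat>
def choose_action(noise_level, human_facing_monitor):
    """Pick the communicative action the human is most likely to perceive.

    Illustrative only: a real system would estimate perceivability from
    sensing and from a model of the human's cognitive and physical state.
    """
    # Estimated probability that each action would be perceived.
    perceivability = {
        "SAY_defective-obj-name": 1.0 - noise_level,  # speech degrades with noise
        "SHOW_defective-obj-msg": 1.0 if human_facing_monitor else 0.2,
        "MOVE_defective-obj": 0.8,  # motion is usually visible
    }
    # Choose the action believed most likely to be perceived.
    return max(perceivability, key=perceivability.get)

# Noisy environment, human facing the monitor: prefer the visual message.
print(choose_action(noise_level=0.9, human_facing_monitor=True))
      </preformat>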
        <p>
          In the field of Human-Robot Interaction (HRI), much attention has been devoted to the issues of
communicating with the human [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] and of perceiving the human [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], but these issues are rarely considered
in combination for deliberative communication. Some previous work in HRI [
          <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
          ] considered the
perceiving capabilities of a human interlocutor in the context of a robot deliberating about communication
as actions. Specifically, Cha et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] highlight the importance of perceiving the communicative signal,
but reasoning about the signal being perceived by the human was not considered in the planning of
communicative actions. Chen et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] focus on an assistive shared-control or tele-operation robot
setting, and consider the effect of having perceived a communicative action, but without considering
whether or not the action was actually perceived. Dadvar et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] propose a framework for joint
communication and motion planning for cobots (collaborative robots) that allows robots to communicate
with a human during navigation. The sensor model presented accounts for what the human observes and
for representing situations where the human may not perfectly understand or observe the robot’s
communicative actions. However, the robots always have perfect knowledge of whether the humans
perceived and understood the communicative actions, which is a simplifying assumption. In many
cases, the robot may not know whether the communicative action has been perceived or understood,
and this may result in false beliefs. Therefore this is an important aspect of deliberative communication
which must be considered.
        </p>
        <p>In the rest of this paper, we discuss the ingredients needed to reason about the perception of
communicative actions in a human-robot collaborative context, and discuss some of the challenges
that arise when there is a mismatch between performing a communicative action, perceiving it, and
knowing that it has been perceived.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background: Communicative Action</title>
      <p>
        In Speech Act theory [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], communication is considered as action, thus embracing the viewpoint that
speech acts are performed by agents like other actions. As an agent decides to communicate, it has to
perform actions for communication, or communicative actions. Building on this, agent communication
languages in Multi-Agent Systems (MAS) include communicative actions consisting of a message
and a performative [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These approaches typically do not consider the perceptual capabilities of the
interlocutor or the fact that the actions are perceived [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Existing works have defined communicative actions in various contexts as:
• Hellström et al. [<xref ref-type="bibr" rid="ref11">11</xref>]: “actions performed by an agent, with the intention of increasing another agent’s knowledge of
the first agent’s state of mind.”
• Frijns et al. [<xref ref-type="bibr" rid="ref12">12</xref>]: “actions performed by humans and robots with the aim of coordinating behaviors, reducing
uncertainty, and building a common understanding.”
• Sabu et al. [<xref ref-type="bibr" rid="ref3">3</xref>]: “perceivable actions representing a message that can be produced by the communicator and
that can produce a change in the mental state of the interlocutor when perceived.”
      </p>
      <p>The definition by Frijns et al. [<xref ref-type="bibr" rid="ref12">12</xref>] does not distinguish between actions and communicative actions.
Since we consider the need for deliberation in communication, with agents deciding to communicate,
such a distinction is necessary. The definitions by Hellström et al. [<xref ref-type="bibr" rid="ref11">11</xref>] and Frijns et al. [<xref ref-type="bibr" rid="ref12">12</xref>] consider the
actions being perceived only implicitly. Both definitions focus on the goal which the communication
aims to achieve, whereas the definition by Sabu et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] focuses on what the action constitutes and
what it does, by also considering the perceptual aspect.</p>
      <p>
        In this work, we adopt the definition of communicative actions by Sabu et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Communication as Perceived Actions</title>
      <p>We now present our preliminary model, which focuses on dyadic communication between a robot agent (r)
as the communicator and a human agent (h) as the interlocutor. We assume the agents are deliberating
and acting in a partially observable environment where actions can have non-deterministic effects. The
model below is pre-theoretical, in that we do not propose a full formalization but only present those
elements that are the focus of this paper.</p>
      <sec id="sec-3-1">
        <title>3.1. Basic ingredients</title>
        <p>Consider a pair of agents A = {r, h}, and a finite set of states S, where S ⊆ S<sub>E</sub> × S<sub>r</sub> × S<sub>h</sub>. S<sub>E</sub> are the ontic
components of states, representing the physical state of the environment and of the agents; S<sub>r</sub> and S<sub>h</sub>
are the epistemic components of states, representing the mental state of the robot and of the human,
respectively. A state s ∈ S is a triple ⟨s<sub>E</sub>, s<sub>r</sub>, s<sub>h</sub>⟩. Each agent i ∈ A can perform a finite set of actions
𝒜<sub>i</sub> = 𝒜<sub>i</sub><sup>O</sup> ∪ 𝒜<sub>i</sub><sup>E</sup> ∪ {a<sub>0</sub>}. 𝒜<sub>i</sub><sup>O</sup> is the set of ontic actions that i can perform, i.e., those that result in a change in
the ontic state, like the action MOVE_defective-obj. 𝒜<sub>i</sub><sup>E</sup> is the set of epistemic actions that i can perform,
i.e., those that result in a change in the epistemic state, like the action SAY_defective-obj-name.
Note that in cognitive science, the term “epistemic action” is often used as a synonym of information-gathering
action, that is, an action that only changes the epistemic state of the actor [<xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>]. In our
model, we use the terminology from Dynamic Epistemic Logic [<xref ref-type="bibr" rid="ref15">15</xref>] and postulate that an epistemic
action a<sup>E</sup> can change the epistemic state of the communicator (s<sub>r</sub>), of the interlocutor (s<sub>h</sub>), or both.
Lastly, a<sub>0</sub> is the null action that does not produce any change in state, introduced for convenience.</p>
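        <p>The paper does not commit to a concrete representation of these ingredients; as one possible reading, the following Python sketch represents states as triples and actions as plain labels (all names are our assumptions):</p>
        <preformat>
from dataclasses import dataclass

@dataclass
class State:
    """A state s = (s_E, s_r, s_h): one ontic and two epistemic components."""
    ontic: dict      # s_E: physical state of the environment and agents
    robot_mind: set  # s_r: propositions the robot believes
    human_mind: set  # s_h: propositions the human believes

# Action sets for an agent: ontic actions, epistemic actions, and the
# null action a_0 that changes nothing.
ONTIC_ACTIONS = {"MOVE_defective-obj"}
EPISTEMIC_ACTIONS = {"SAY_defective-obj-name", "SAY_ask-obj-name"}
NULL_ACTION = "a_0"
ACTIONS = ONTIC_ACTIONS | EPISTEMIC_ACTIONS | {NULL_ACTION}
        </preformat>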
        <p>An important feature of actions in our context is that actions may have communicative content. This
is obvious for epistemic actions that change the epistemic state of the other agent. However, ontic
actions can also have communicative content and therefore affect the epistemic state: e.g., the previous
action MOVE_defective-obj may convey the message that the object needs to be replaced. We refer to
any action with communicative content as a communicative action. The crucial observation here is
that an agreement has been established between the agents on a shared meaning of these actions, which
we informally refer to as their message. In our model, we assume that such an agreement exists, and
that communicative actions are interpreted accordingly by both r and h. In our example, the robot and
the human agree that moving an object to a certain dedicated space means that the object is defective.</p>
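        <p>Continuing the sketch above, one simple way to represent such an agreement is a message map shared by both agents. The representation is our assumption; the model only requires that the agreement exists and is interpreted in the same way by r and h.</p>
        <preformat>
# Shared agreement: the message conveyed by each communicative action.
# Both the robot and the human interpret actions through this same map.
MESSAGE_OF = {
    "MOVE_defective-obj": "the object is defective and needs replacing",
    "SAY_defective-obj-name": "the object is defective and needs replacing",
    "SAY_ask-obj-name": "please tell the robot the object's name",
}
        </preformat>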
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Perceiving actions</title>
        <p>We now come to the main point of this paper, that is, the claim that communicative actions must be
perceived in order for their effect on the epistemic state of the interlocutor to materialize, and that any
deliberative communication should take this fact into account. To model this, we introduce, for any
agent i ∈ A, the following predicate:</p>
        <p>• Perceive<sub>i</sub>(a<sub>j</sub>, s) if i perceives a<sub>j</sub> ∈ 𝒜<sub>j</sub> performed by j in state s, where s ∈ S, j ∈ A and j ≠ i.</p>
        <p>
          We can use this predicate to give a more precise definition of the robot communicative actions
introduced by Sabu et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]:
        </p>
        <p>Definition 1. A communicative action is an action a ∈ 𝒜<sub>r</sub> representing a message which, performed in a state
s ∈ S, produces a change in s<sub>h</sub> provided that Perceive<sub>h</sub>(a, s) is true.</p>
        <p>In the previous example, the robot’s executing the ontic action MOVE_defective-obj in s = ⟨s<sub>E</sub>, s<sub>r</sub>, s<sub>h</sub>⟩
could result in s′ = ⟨s′<sub>E</sub>, s′<sub>r</sub>, s′<sub>h</sub>⟩, where s′<sub>E</sub> is a new state of the environment and s′<sub>h</sub> is a new epistemic
state of the human, provided that the human has perceived the action.<sup>1</sup></p>
        <p><sup>1</sup> A possible variation is one where the message is conveyed by the effect of the action rather than by its mere performance: in
that case the Perceive predicate should use the resulting state as an argument, but the essence of our discussion would not
change.</p>
        <p>In the previous example, performing a also changed the epistemic state s′<sub>r</sub> of the performing agent
r. One obvious reason for this change is that r may know that it has performed the action. In this
paper, we are more interested in another reason why s′<sub>r</sub> may change: after performing a communicative
action, the communicator may have the belief that the action has been perceived, which in turn implies
the belief that its epistemic effects have been produced. To model this, we introduce, for any pair of
agents i, j ∈ A with i ≠ j, the following predicate:</p>
        <p>• BelievePerceive<sub>i,j</sub>(a<sub>i</sub>, s) if i believes that j perceives a<sub>i</sub> ∈ 𝒜<sub>i</sub> performed by i in s, where s ∈ S.</p>
        <p>In general, if a communicative action a<sub>r</sub> ∈ 𝒜<sub>r</sub> is applicable in a state s = ⟨s<sub>E</sub>, s<sub>r</sub>, s<sub>h</sub>⟩, then
performing a<sub>r</sub> in s changes s to a new state s′ = γ(s, a<sub>r</sub>), where γ is a transition function.</p>
        <p>How the epistemic components of s′ are affected by the action depends both on Perceive<sub>h</sub>(a<sub>r</sub>, s) and on
BelievePerceive<sub>r,h</sub>(a<sub>r</sub>, s), as summarized in Table 1. Note that how the ontic component s′<sub>E</sub> changes
only depends on a<sub>r</sub>, and is independent of whether the action is perceived or not. To illustrate, consider
our assembly scenario where the robot has to decide to communicate in state s by either performing
the action MOVE_defective-obj or SAY_defective-obj-name. If the environment is noisy, and
BelievePerceive<sub>r,h</sub>(SAY_defective-obj-name, s) is False, the robot can decide to perform the action
MOVE_defective-obj, provided that BelievePerceive<sub>r,h</sub>(MOVE_defective-obj, s) is True. Under
these conditions, the action will be perceived, which results in successful communication.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption><p>Possible combinations of Perceive<sub>h</sub>(a<sub>r</sub>, s) and BelievePerceive<sub>r,h</sub>(a<sub>r</sub>, s) for a communicative action a<sub>r</sub> performed by the robot.</p></caption>
          <table>
            <thead>
              <tr><th>Case</th><th>Perceive<sub>h</sub>(a<sub>r</sub>, s)</th><th>BelievePerceive<sub>r,h</sub>(a<sub>r</sub>, s)</th><th>Epistemic effects on s<sub>h</sub></th><th>Robot’s belief</th></tr>
            </thead>
            <tbody>
              <tr><td>1</td><td>True</td><td>True</td><td>Produced</td><td>Correct</td></tr>
              <tr><td>2</td><td>True</td><td>False</td><td>Produced</td><td>False belief (not perceived)</td></tr>
              <tr><td>3</td><td>False</td><td>True</td><td>Not produced</td><td>False belief (perceived)</td></tr>
              <tr><td>4</td><td>False</td><td>False</td><td>Not produced</td><td>Correct</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>A communicative action may entail a request to the interlocutor to perform an action. For instance, the robot’s (epistemic)
action SAY_defective-obj-name may entail the request to the human to perform the (ontic) action to
dispose of the object; the (ontic) action MOVE_defective-obj may entail the same request (assuming
the right agreement, as we discussed above); and the (epistemic) action SAY_ask-obj-name may entail
the request to perform the (epistemic) action to tell the object name to the robot. In our initial model, we
assume that requests to the human will always be complied with: under this assumption, a communicative
action a<sub>r</sub> done by the robot may result in additional, mediated effects,<sup>2</sup> namely, the effects produced
by the action (a<sub>h</sub>) requested to (and executed by) the human. We refer to these effects as after-effects.
Depending on the request contained in a<sub>r</sub>, the after-effects can be ontic, epistemic, or null. Also, these
effects can be produced in the state s″, resulting from the execution of a<sub>h</sub> in state s′, or in a later state
s‴, if the human postpones the execution of a<sub>h</sub>: we refer to these as immediate and delayed after-effects,
respectively.</p>
        <p><sup>2</sup> This relates to the ramification problem, which has been extensively discussed in the literature [<xref ref-type="bibr" rid="ref16">16</xref>].</p>
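        <p>To illustrate these semantics, the sketch below (reusing State, ONTIC_ACTIONS, and MESSAGE_OF from the earlier sketches) implements one possible transition function γ: the ontic component changes unconditionally, the human's epistemic state changes only if Perceive holds, and the robot's belief tracks BelievePerceive, so the two can diverge exactly as in cases 2 and 3 of Table 1.</p>
        <preformat>
def gamma(state, action, perceived, believed_perceived):
    """Transition for a communicative action a_r performed by the robot.

    perceived          -- Perceive_h(a_r, s): the human perceived a_r
    believed_perceived -- BelievePerceive_r,h(a_r, s): the robot believes so
    Sketch only: the ontic update and the message map are placeholders.
    """
    new_ontic = dict(state.ontic)
    if action in ONTIC_ACTIONS:
        new_ontic["last_ontic_action"] = action  # ontic change is unconditional

    new_human = set(state.human_mind)
    message = MESSAGE_OF.get(action)
    if perceived and message:
        new_human.add(message)  # epistemic effect on the human materializes

    new_robot = set(state.robot_mind)
    new_robot.add(f"performed({action})")
    if believed_perceived and message:
        # This belief may be false if the action was not actually perceived.
        new_robot.add(f"believes_human_knows({message})")

    return State(new_ontic, new_robot, new_human)
        </preformat>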
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Misperceiving actions</title>
        <p>An important observation regarding the after-effects of a communicative action a<sub>r</sub> is that these effects
can only materialize if Perceive<sub>h</sub>(a<sub>r</sub>, s) is true. Modeling this can be important when reasoning about
communication.</p>
        <p>Consider again the different cases in Table 1. Case 1 is the nominal case, in which the action a<sub>r</sub>
performed as communication by r results in the action being perceived by h, and r correctly believes
that the action has been perceived. Hence, r will expect that the states resulting from the execution
of a<sub>r</sub> will be affected both by the effects of a<sub>r</sub> and by its after-effects, as we discussed above. In our
example, the robot would expect that after SAY_defective-obj-name is executed, the object will
be disposed of.</p>
        <p>In case 4, the action performed by r is not perceived and, correctly, r does not believe that the action
has been perceived. Thus, the action performed by the communicator does not result in a change of
the mental state of h, nor in the related after-effects. In our example, the object will not be disposed of,
and the robot knows that it will not be disposed of: this would enable the robot to, e.g., decide to do the
communication again, possibly using a different action.</p>
        <p>Cases 2 and 3 result in false beliefs and may lead to problematic situations. These false beliefs may
arise due to the partial observability of the actions and epistemic states of other agents. In case 2,
although the action performed by r was perceived by h, r has the false belief that it was not perceived.
As a result, r may decide to perform a<sub>r</sub> again, which could annoy h. In case 3, the
action performed by r is not perceived by h, but r has the false belief that it was perceived. As a result,
r may move on with the execution of other actions that presume that the communication has taken place,
or it may wait to observe the effects expected to be produced by h. In our scenario, the robot might
have used SAY_defective-obj-name to communicate a defective object, unaware that the human
cannot hear it because of the noise level, and then wait forever for the human to replace the object.</p>
        <p>Note that the above four cases require different decisions by the robot after a<sub>r</sub> has been performed.
They may also lead to different decisions about which a<sub>r</sub> to perform in the first place, depending on the
context. These cases of course could not be distinguished if we did not explicitly represent and reason
about perception.</p>
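        <p>The four cases, together with the kind of follow-up decision each one suggests, can be enumerated in code. The response strings merely paraphrase the discussion above; they are our summary, not a prescription of the model.</p>
        <preformat>
def diagnose(perceived, believed_perceived):
    """Map (Perceive, BelievePerceive) to the four cases of Table 1
    and to a plausible follow-up decision for the robot."""
    if perceived and believed_perceived:
        return "case 1: nominal; expect the effects and after-effects"
    if perceived and not believed_perceived:
        return "case 2: false belief; redoing the action may annoy the human"
    if not perceived and believed_perceived:
        return "case 3: false belief; robot may wait forever for after-effects"
    return "case 4: failure correctly detected; retry, perhaps differently"

for p in (True, False):
    for bp in (True, False):
        print(p, bp, "->", diagnose(p, bp))
        </preformat>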
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Next steps</title>
      <p>In this work, we discussed the importance of reasoning about communicative actions by a robot
collaborating with a human, which, crucially, includes reasoning about whether the communicative
action was perceived by the human. Two situations were highlighted in which the robot can have
false beliefs about whether or not the action was perceived by the human. As a result, the robot may
unnecessarily perform the same action again, or erroneously move on with the execution of other
actions, or wait for effects that will not be produced. To address this, the robot has to reason about
the uncertainty in the action being perceived by the human. For simplicity, we considered the robot
agent (communicator) performing actions representing a message, with the human agent (interlocutor)
having to perceive the action. But communication can also be considered in terms of the message,
rather than the action, being perceived by the interlocutor. Also, while we focused our discussion on
human-robot communication, it is worth mentioning that similar considerations (and solutions) could
be made for any agent-agent communication, whenever the reliability of communication can be affected
by contextual conditions.</p>
      <p>The pre-theoretical model presented in this work is not formal. Future work will focus on formalizing
the states and actions within the scope of deliberative communication for human-robot collaboration,
and on using this formalization to address the challenges related to false beliefs about the perception of
actions. Future models will also address possible delays in perceiving the message. Another challenge
that will be addressed is the uncertainty in the human producing the after-effects of communicative
actions when the assumption that the human complies with requests is relaxed, as in the real world there is
no guarantee that this assumption will always hold.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was funded by the Swedish Research Council, grants number 2022-04676 and number
2021-05542, and by the European Commission, under the Horizon AI4Europe project, grant agreement
101070000.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <sec id="sec-6-1">
        <title>The authors have not employed any Generative AI tools.</title>
        <p>[11] T. Hellström, S. Bensch, Understandable robots - What, Why, and How, Paladyn, Journal of</p>
        <p>Behavioral Robotics 9 (2018) 110–123. doi:10.1515/pjbr-2018-0009.
[12] H. A. Frijns, O. Schürer, S. T. Koeszegi, Communication Models in Human–Robot Interaction:
An Asymmetric MODel of ALterity in Human–Robot Interaction (AMODAL-HRI), International
Journal of Social Robotics 15 (2023) 473–500. doi:10.1007/s12369-021-00785-7.
[13] D. Kirsh, P. Maglio, On distinguishing epistemic from pragmatic action, Cognitive science 18
(1994) 513–549. doi:https://doi.org/10.1016/0364-0213(94)90007-8.
[14] S. Croom, H. Zhou, C. Firestone, Seeing and understanding epistemic actions, Proceedings of
the National Academy of Sciences 120 (2023) e2303162120. doi:https://doi.org/10.1073/pnas.
2303162120.
[15] T. Bolander, M. B. Andersen, Epistemic planning for single-and multi-agent systems, Journal of</p>
        <p>Applied Non-Classical Logics 21 (2011) 9–34. doi:https://doi.org/10.3166/jancl.21.9-34.
[16] M. Shanahan, The ramification problem in the event calculus, in: Proceedings of the 16th
International Joint Conference on Artifical Intelligence - Volume 1, IJCAI’99, Morgan Kaufmann
Publishers Inc., San Francisco, CA, USA, 1999, p. 140–146. doi:10.5555/1624218.1624239.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Appelt</surname>
          </string-name>
          ,
          <article-title>Planning english referring expressions</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>26</volume>
          (
          <year>1985</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          . doi:https://doi.org/10.1016/
          <fpage>0004</fpage>
          -
          <lpage>3702</lpage>
          (
          <issue>85</issue>
          )
          <fpage>90011</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          , An Introduction to Multiagent Systems, 2. ed., repr ed., Wiley, Chichester,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>K. M. Sabu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Renoux</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Safiotti</surname>
          </string-name>
          ,
          <article-title>Deliberative Communication for Human-Agent Interaction: A Position Paper</article-title>
          ,
          <source>in: Proc of the 12th Int Conf on Human-Agent Interaction, ACM</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>16</lpage>
          . doi:
          <volume>10</volume>
          .1145/3687272.3688299.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bonarini</surname>
          </string-name>
          ,
          <article-title>Communication in human-robot interaction</article-title>
          ,
          <source>Current Robotics Reports</source>
          <volume>1</volume>
          (
          <year>2020</year>
          )
          <fpage>279</fpage>
          -
          <lpage>285</lpage>
          . doi:https://doi.org/10.1007/s43154-020-00026-1.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Duncan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alambeigi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Pryor</surname>
          </string-name>
          ,
          <article-title>A survey of multimodal perception methods for humanrobot interaction in social environments</article-title>
          ,
          <source>ACM Transactions on Human-Robot Interaction</source>
          <volume>13</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>50</lpage>
          . doi:https://doi.org/10.1145/3657030.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cha</surname>
          </string-name>
          , E. Meschke,
          <string-name>
            <given-names>T.</given-names>
            <surname>Fong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Mataric</surname>
          </string-name>
          ,
          <article-title>A Probabilistic Approach to Human-Robot Communication</article-title>
          ,
          <source>in: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
          , IEEE, Macau, China,
          <year>2019</year>
          , pp.
          <fpage>6217</fpage>
          -
          <lpage>6222</lpage>
          . doi:
          <volume>10</volume>
          .1109/IROS40897.
          <year>2019</year>
          .
          <volume>8968051</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Soh</surname>
          </string-name>
          , MIRROR:
          <article-title>Diferentiable Deep Social Projection for Assistive HumanRobot Communication</article-title>
          , in: Robotics: Science and
          <string-name>
            <surname>Systems</surname>
            <given-names>XVIII</given-names>
          </string-name>
          ,
          <source>Robotics: Science and Systems Foundation</source>
          ,
          <year>2022</year>
          . doi:
          <volume>10</volume>
          .15607/RSS.
          <year>2022</year>
          .XVIII.
          <volume>020</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dadvar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Majd</surname>
          </string-name>
          , E. Oikonomou, G. Fainekos,
          <string-name>
            <given-names>S.</given-names>
            <surname>Srivastava</surname>
          </string-name>
          ,
          <article-title>Joint communication and motion planning for cobots</article-title>
          ,
          <source>in: 2022 International Conference on Robotics and Automation (ICRA)</source>
          , IEEE Press,
          <year>2022</year>
          , p.
          <fpage>4771</fpage>
          -
          <lpage>4777</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICRA46639.
          <year>2022</year>
          .
          <volume>9812261</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Searle</surname>
          </string-name>
          ,
          <article-title>Speech acts: An essay in the philosophy of language</article-title>
          , Cambridge university press,
          <year>1969</year>
          . doi:https://doi.org/10.1017/CBO9781139173438.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. R.</given-names>
            <surname>Perrault</surname>
          </string-name>
          ,
          <article-title>Elements of a plan-based theory of speech acts</article-title>
          ,
          <source>Cognitive Science 3</source>
          (
          <year>1979</year>
          )
          <fpage>177</fpage>
          -
          <lpage>212</lpage>
          . doi:
          <volume>10</volume>
          .1016/S0364-
          <volume>0213</volume>
          (
          <issue>79</issue>
          )
          <fpage>80006</fpage>
          -
          <lpage>3</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>