<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Proactive Human-AI Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jasmin Grosinger</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Örebro University</institution>
          ,
          <addr-line>Fakultetsgatan 1, 70182 Örebro</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>With a growing number of AI systems and robots sharing the environment of humans, the need to define and investigate the particular topic of artificial proactivity is greater than ever. This position paper advocates the importance of this endeavor and starts the work by giving an initial definition of proactivity for artificial agents, analyzing the cognitive abilities necessary to create proactive agent behavior, and suggesting a categorization of approaches into different types of proactivity.</p>
      </abstract>
      <kwd-group>
        <kwd>Proactivity</kwd>
        <kwd>Proactive AI systems</kwd>
        <kwd>Proactive agents</kwd>
        <kwd>Proactive robots</kwd>
        <kwd>Hybrid Human-AI systems</kwd>
        <kwd>Human-Machine Interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        human-centered environments [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Humans prefer proactive AI systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and build trust in them
more easily [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>In this position paper, I highlight the topic of proactivity and proactive behavior in AI systems,
artificial agents and intelligent robots, and call for a general theory of proactivity. The literature
does not provide one distinct definition of proactivity; Section 2 proposes one. In Section 3, I investigate
some cognitive abilities that are necessary or useful when creating proactive behavior, and
how they interact. Depending on their focus, there can be different types of
proactivity, covered in Section 4. The paper finishes with concluding remarks, future directions
and challenges in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Definition of Proactivity</title>
      <p>
        Proactivity is characteristic of humans. Humans can predict and understand
what others will do. In the behavioral sciences it has been claimed that this ability gives humans an
evolutionary advantage over other species, enabling us to engage in collaborative and
proactive behavior [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In organizational psychology, the term proactive behavior refers to
• anticipatory, self-initiated action,
• meant to impact people and/or their environments.
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. This is opposed to reactive behavior, which merely responds to explicit requests or
external events. Most of today’s AI systems and robots are not proactive according to this
definition but reactive. However, there is an emerging tendency towards creating proactive
systems. Yet we lack a distinct common definition of what it means for an AI system to be
proactive, and we lack a clear scope of the field. Many current works on proactivity do not
define the term but rely on the reader’s intuitive understanding. Many sources [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ]
implicitly understand proactivity to be self-initiated acting, but neglect the predictive part of
the human proactivity definition. Some researchers (including ourselves) [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16">13, 14, 15, 16</xref>
        ] do
integrate prediction into their understanding of artificial proactivity, together with self-initiated
acting. I propose a definition of artificial proactivity that is based on the definition of human
proactivity:
      </p>
      <p>Proactivity is the ability to autonomously initiate anticipatory action
based on reasoning, meant to impact people and/or their environments.</p>
      <p>
        Note that reasoning goes beyond using "hard-wired" rules for acting that are based on some
external trigger; that would classify as a reactive, not a proactive, approach. Rather, one should be
able to take Dennett’s intentional stance [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and ascribe ’rationality’, ’intentions’, ’beliefs’, etc.,
to the reasoning proactive agent. Note also that the outcome of reasoning might be proactive
action but might also be deliberate inaction. Thus, the proactive agent decides not only
when and how to act but also when not to act.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Cognitive Abilities for Proactivity</title>
      <p>A number of cognitive abilities, interacting jointly, are required to achieve proactive behavior.
Here I discuss some of them; a complete list remains an open question.</p>
      <p>
        Context. To be able to reason and self-initiate actions, an AI system needs to understand
the world around it. Fields such as context-awareness and situation assessment perceive
the environment using sensors and infer the current state, which is one of the factors for
proactive action. A large number of works base proactivity on reasoning about the current context only,
neglecting prediction [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ]. I argue that context awareness is a necessary requirement
for proactivity, but it is not sufficient.
      </p>
      <p>
        Prediction. The definitions of human and artificial proactivity comprise anticipation (see
Section 2). The proactive agent is able to reason beyond the current state and can deliberate on
how the future might evolve. An agent that reasons only about the present takes actions that are
beneficial only for the present and misses alternative behaviors that may be better when
considering a wider time horizon. For example, a robot companion might decide to bring the
backpack to the human to assist in their current task of preparing for a hiking trip. If, on the other
hand, the robot also takes into account the future development of states, it can predict
that there is a high chance of a thunderstorm at the human’s hiking destination. The
robot may therefore choose a different action than bringing the backpack, namely a communicative
action to inform the human about the expected thunderstorm. Some works on proactivity
take prediction into account [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16">13, 14, 15, 16</xref>
        ].
      </p>
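<p>As a minimal illustration of the backpack example above, consider the following sketch. The task names, forecast probability, and threshold are invented for this example and do not come from any cited system:</p>

```python
# Hypothetical sketch: deciding on the current state only vs. also on a
# predicted future state. Task names and probabilities are invented.

def choose_action(current_task, p_thunderstorm):
    """Pick an assistive action given an assumed forecast probability."""
    if current_task != "pack_for_hiking":
        return "wait"
    # Reasoning only about the present would always return "bring_backpack".
    if p_thunderstorm > 0.5:
        # Prediction reveals a better alternative: a communicative action.
        return "warn_about_thunderstorm"
    return "bring_backpack"

print(choose_action("pack_for_hiking", p_thunderstorm=0.8))
# warn_about_thunderstorm
```

<p>The point of the sketch is only that widening the time horizon can change which action is selected, not the particular rule used to do so.</p>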
      <p>
        Mental simulation. To make a deliberate acting decision, the proactive agent may need
to compare the consequences of different acting alternatives. To do this, the agent
needs to simulate possible proactive behaviors and compute their effects. Inherent to such
computations is uncertainty, which the proactive agent needs to be able to handle. Note that
mental simulation is different from prediction; the latter forecasts the development
of the world by itself, that is, without the robot acting, while the former forecasts the
consequences of different robot actions. For example, a robot companion may consider acting
alternative 1, to bring the ringing phone to the human now, or alternative 2, to inform
the human later about the missed call. The effects of option 1 include that the human does not
miss the phone call, while in option 2 the human misses the phone call. Which of the options
is better depends on other factors of proactivity. Option 2 may be preferable if the human is
currently busy, whereas option 1 may be better otherwise. Examples of works that include
mental simulation are [
        <xref ref-type="bibr" rid="ref14 ref18">18, 14</xref>
        ].
      </p>
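<p>The phone example above can be sketched as a comparison of simulated outcomes. The utility numbers and the interruption cost are invented assumptions, not values from any cited work:</p>

```python
# Hypothetical sketch of mental simulation: the agent forecasts the
# consequences of each of its own acting alternatives and compares them.
# The options, effects, and utility values are invented for illustration.

def simulate(option, human_is_busy):
    """Return a utility for the simulated outcome of one acting alternative."""
    if option == "bring_phone_now":
        # The call is not missed, but interrupting a busy human is costly.
        return 1.0 - (0.8 if human_is_busy else 0.0)
    if option == "inform_later":
        # The call is missed, but the human is never interrupted.
        return 0.5
    raise ValueError(option)

def best_option(options, human_is_busy):
    return max(options, key=lambda o: simulate(o, human_is_busy))

options = ["bring_phone_now", "inform_later"]
print(best_option(options, human_is_busy=True))   # inform_later
print(best_option(options, human_is_busy=False))  # bring_phone_now
```

<p>A real system would replace the fixed utilities with distributions over outcomes to handle the uncertainty the text mentions.</p>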
      <p>
        Preference. The question of when and how a proactive agent should act may be informed
by human preference, of both single and multiple humans, short- or long-term. Russell [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]
calls for completely altruistic robots that base their actions solely on human preferences.
Human preferences are dynamic and uncertain. An intelligent agent should be aware of its own
uncertainty about the human’s preferences. This will prevent robots from behaving like Ms.
Heavy-handed, following a verbatim, single-minded pursuit without giving the human a chance to
confirm that this is what they actually want (see Section 1). Example works inferring proactive
behavior by reasoning about preferences or user needs are [
        <xref ref-type="bibr" rid="ref10 ref14">10, 14</xref>
        ].
      </p>
      <p>
        Epistemic reasoning. An AI agent may reason about the mental states of other agents
(the human) to make proactive acting decisions. In philosophy and psychology, Theory of Mind
(ToM) is the study of ascribing particular mental states (beliefs, intentions,
desires) to another individual [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. ToM and epistemic reasoning have also gained attention within AI. They can be
employed to initiate proactive action based on false beliefs, intentions, or desires of the human.
For example, the human’s belief that the weather will be nice at their hiking destination is
false; the robot companion can decide to proactively approach the human and inform them that their
belief is false and the weather will be bad. Based on the recognized intention that the
human wants to go hiking, the robot can either assist in packing or inform about an upcoming
thunderstorm, depending on the weather forecast. Several recent approaches have
taken up the work of creating agents that reason with ToM, thereby enabling proactive
behavior [
        <xref ref-type="bibr" rid="ref13 ref21 ref22 ref5">21, 22, 13, 5</xref>
        ].
      </p>
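<p>A minimal first-order false-belief check in the spirit of the hiking example might look as follows; the facts, the dictionary representation of beliefs, and the action tuples are all invented for illustration:</p>

```python
# Hypothetical Theory-of-Mind sketch: the agent compares its own world model
# with the beliefs it ascribes to the human and proactively communicates when
# it detects a false belief. Facts and action names are invented.

world = {"weather_at_destination": "thunderstorm"}   # agent's own knowledge
human_beliefs = {"weather_at_destination": "sunny"}  # ascribed beliefs
human_intention = "go_hiking"                        # recognized intention

def proactive_actions(world, beliefs, intention):
    actions = []
    for fact, believed in beliefs.items():
        if fact in world and world[fact] != believed:
            # Correct the false belief via a communicative action.
            actions.append(("inform", fact, world[fact]))
    if intention == "go_hiking" and not actions:
        # No false beliefs detected: assist with the recognized intention.
        actions.append(("assist", "pack_backpack", None))
    return actions

print(proactive_actions(world, human_beliefs, human_intention))
# [('inform', 'weather_at_destination', 'thunderstorm')]
```

<p>With matching beliefs the same routine would instead fall back to assisting with the recognized intention, mirroring the two alternatives in the text.</p>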
    </sec>
    <sec id="sec-4">
      <title>4. Types of Proactivity</title>
      <p>To the best of my knowledge, no one has made the effort to group approaches for creating
proactive agent behavior into different types. The attempt here is intended to start this work
but makes no claim of completeness.</p>
      <p>
        Proactivity to Support the Human to Achieve their Intention. Proactivity of this type
is seen as the problem of helping the human fulfill their intention through self-initiated anticipatory
acting. This makes it necessary to have the ability to perform intention recognition in order to
understand the human’s intention, which the artificial agent should help them achieve. Examples
of such an approach are [
        <xref ref-type="bibr" rid="ref13 ref5">5, 13</xref>
        ]. Harman and Simoens [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] employ action graphs, which enable
them to model action dependencies and predict the human’s next actions in a plan; they then
compute which of these the robot can take over in a domestic scenario. Liu et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] use a
probabilistic Markov model to perform both human intention inference and intention learning and
let a robotic arm proactively assist the human in a table-top task of assembling different cube
configurations.
      </p>
      <p>Summary: This type of proactivity is based on: human intention recognition; the ultimate
aim is to: support the human in achieving their intention/goal.</p>
      <p>
        Proactivity with a Goal Given. In this category we find approaches that create proactive
behavior only when an explicit goal is given first (by the human or by an external trigger).
One example in this category is Bremner et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. They propose
an architecture for a robot system that includes an ethical layer (using BDI) to ’moderate’ the
robot’s actions, simulate behavior alternatives, and perform anticipation. First, external goals are
provided to the robot controller. Then it computes behavioral alternatives and simulates their
outcomes. The ethical module evaluates them and proactively initiates a new cycle of computing
and simulating different, more ethical behavior alternatives. The ethical
module performs a final evaluation, and the ’most ethical’ behavior alternative is dispatched and
executed.
      </p>
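<p>The generate-simulate-evaluate cycle described above can be sketched schematically. This is not the actual architecture of Bremner et al. [18], which is BDI-based; the plan generator, the simulator, and the ethics score below are invented stand-ins:</p>

```python
# Schematic sketch of a generate-simulate-evaluate cycle. The plans, the
# simulator, and the ethics score are hypothetical stand-ins for the
# components of an ethical layer.

def ethical_cycle(generate, simulate, ethics_score, max_rounds=3):
    best_plan, best_score = None, float("-inf")
    for round_ in range(max_rounds):
        for plan in generate(round_):        # compute behavior alternatives
            outcome = simulate(plan)         # simulate their outcomes
            score = ethics_score(outcome)    # ethical evaluation
            if score > best_score:
                best_plan, best_score = plan, score
    return best_plan                         # dispatch the 'most ethical' plan

# Toy instantiation: later rounds propose more cautious alternatives.
plans = {0: ["rush"], 1: ["rush", "warn_then_act"], 2: ["warn_then_act", "wait"]}
scores = {"rush": -1.0, "warn_then_act": 2.0, "wait": 0.5}
chosen = ethical_cycle(plans.get, lambda p: p, scores.get)
print(chosen)  # warn_then_act
```

<p>The essential feature, as in the description above, is that evaluation proactively triggers further rounds of plan generation rather than merely filtering a fixed set.</p>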
      <p>Summary: This type of proactivity is based on: one or multiple given goal(s); the ultimate
aim is to: employ proactive behavior to achieve the given goal(s).</p>
      <p>
        Proactivity from First Principles. There exist approaches that attempt to create proactive
agent behavior by reasoning from first principles. Works in this category aim to understand what
the factors and cognitive abilities are that create proactive behavior, and how they interact.
One example of this type is Martins et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. The authors employ a variant of a POMDP to
inform the acting decisions of a user-adaptive social robot. They manage to keep the user in
positive states, encoded by value functions, while learning the robot’s actions’ impact
on the user ’on-the-fly’. My own work (together with colleagues) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] is part of this category.
We model change in the environment, including change induced by the human, and controllable
change (actions by a robot), which we set into different relations in formal concepts called
opportunity types. A desirability function modeling preference is also used in these opportunity
types, which enables us to evaluate different acting alternatives.
      </p>
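<p>A toy version of evaluating acting alternatives against a desirability function, loosely inspired by the opportunity-type idea in [14], could look as follows; the states, the transition function, and the desirability values are invented and much simpler than the formal concepts in that work:</p>

```python
# Hypothetical sketch: rank acting alternatives by the desirability of the
# states they lead to, and act only if acting beats deliberate inaction.
# States, dynamics, and desirability values are invented.

def desirability(state):
    return {"human_warned": 1.0, "backpack_ready": 0.6, "status_quo": 0.0}[state]

def predict(state, action=None):
    """Free-running change (action=None) vs. change controllable by the robot."""
    if action == "warn":
        return "human_warned"
    if action == "bring_backpack":
        return "backpack_ready"
    return "status_quo"  # how the world evolves by itself

def evaluate(state, actions):
    baseline = desirability(predict(state))  # outcome of deliberate inaction
    best = max(actions, key=lambda a: desirability(predict(state, a)))
    return best if desirability(predict(state, best)) > baseline else None

print(evaluate("status_quo", ["bring_backpack", "warn"]))  # warn
```

<p>Returning None encodes the deliberate-inaction case from Section 2: the agent decides not only how to act but also when not to act.</p>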
      <p>Summary: This type of proactivity is based on: first principles or fundamentals; the ultimate
aim is to: understand the factors and their interaction in proactive decision making; generate
proactive behavior from it.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Concluding Remarks, Future Directions and Challenges</title>
      <p>This paper emphasizes the need to define and study the field of proactivity of AI systems and
artificial agents. A definition and scope are suggested, derived from the human proactive process.
Cognitive abilities that are necessary (to varying degrees) are presented and put into the context
of examples. The author defines types of proactivity and what characterizes them.</p>
      <p>Proactivity is a promising emerging field of interest in the AI community. There is still a
long list of open issues, and we are just starting to define this field (which this paper intends to
contribute to). Many approaches call their work ’proactive’ while this ’proactivity’ depends on
hard-coded rules for when the artificial agent should act, and they may not take anticipation
into account. This does not correspond to the definition in the current paper, which calls for
anticipatory acting based on reasoning. Another problem with many works on proactivity is
that they often present domain-specific and/or ad-hoc solutions, meaning they lack an underlying
general theory. Finally, there are numerous aspects that are necessary or useful when trying to
create proactive agent behavior (see Section 3). It will be a future milestone to integrate most
(or all) of them to achieve artificial agent proactivity.</p>
      <p>
        Proactivity implies a high degree of autonomy, which demands a high degree of responsibility.
The work of Bremner et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] is one step in the direction of creating proactive robots that
conform to human ethical values. This in turn can create trust, which is a necessary basis
for human-robot interaction in social contexts. Bremner et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] further point out that the early
Laws of Robotics by Asimov [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] demand that a robot be proactive. The first law starts
with "A robot should not harm a human. . . "; for this, it is enough to have reactive robots. But
the law then resumes, ". . . or, through inaction, allow a human to come to harm", which, in fact,
demands robots that are proactive.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research was funded by the Swedish Research Council (Vetenskapsrådet), No. 2021-05542.
The content of this work benefited from discussions with (in alphabetical order) Thomas Bolander,
Sera Buyukgoz, Mohamed Chetouani, Federico Pecora and Alessandro Saffiotti.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lieto</surname>
          </string-name>
          ,
          <article-title>Cognitive design for artificial minds</article-title>
          ,
          <source>Routledge</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>U.</given-names>
            <surname>KC</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chodorowski</surname>
          </string-name>
          ,
          <article-title>A case study of adding proactivity in indoor social robots using belief-desire-intention (bdi) model</article-title>
          ,
          <source>Biomimetics</source>
          <volume>4</volume>
          (
          <year>2019</year>
          )
          <fpage>74</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <article-title>Socially intelligent robots, the next generation of consumer robots and the challenges</article-title>
          ,
          <source>in: Proc of the Int Conf on ICT Innovations</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] SPARC partnership,
          <article-title>Multi-Annual Roadmap for Robotics in Europe</article-title>
          , https://www.eu-robotics.net/sparc/about/roadmap,
          <year>2016</year>
          , release B 02/12/2016 (Sec 5.7 "Cognition"). Accessed: 2020-04-05.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Harman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Simoens</surname>
          </string-name>
          ,
          <article-title>Action graphs for proactive robot assistance in smart environments</article-title>
          ,
          <source>Journal of Ambient Intelligence and Smart Environments</source>
          <volume>12</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schiller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Behnke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bercher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dorna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dambier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Glimm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Biundo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Minker</surname>
          </string-name>
          ,
          <article-title>"Was that successful?" On integrating Proactive Meta-Dialogue in a DIYAssistant using Multimodal Cues</article-title>
          ,
          <source>in: Proceedings of the 2020 International Conference on Multimodal Interaction</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>585</fpage>
          -
          <lpage>594</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tomasello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Carpenter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Call</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Behne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Moll</surname>
          </string-name>
          ,
          <article-title>Understanding and sharing intentions: The origins of cultural cognition</article-title>
          ,
          <source>Behavioral and Brain Sciences</source>
          <volume>28</volume>
          (
          <year>2005</year>
          )
          <fpage>675</fpage>
          -
          <lpage>735</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Grant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Ashford</surname>
          </string-name>
          ,
          <article-title>The dynamics of proactivity at work</article-title>
          ,
          <source>Research in Organizational Behavior</source>
          <volume>28</volume>
          (
          <year>2008</year>
          )
          <fpage>3</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Nicora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ambrosetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Wiens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Fassi</surname>
          </string-name>
          ,
          <article-title>Human-robot collaboration in smart manufacturing: Robot reactive behavior intelligence</article-title>
          ,
          <source>Journal of Manufacturing Science and Engineering</source>
          <volume>143</volume>
          (
          <year>2021</year>
          )
          <fpage>031009</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Umbrico</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cesta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cortellessa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Orlandini</surname>
          </string-name>
          ,
          <article-title>A holistic approach to behavior adaptation for socially assistive robots</article-title>
          ,
          <source>International Journal of Social Robotics</source>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Sirithunge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B. P.</given-names>
            <surname>Jayasekara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chandima</surname>
          </string-name>
          ,
          <article-title>Proactive robots with the perception of nonverbal human behavior: A review</article-title>
          ,
          <source>IEEE Access</source>
          <volume>7</volume>
          (
          <year>2019</year>
          )
          <fpage>77308</fpage>
          -
          <lpage>77327</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pecora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cirillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dell'Osa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ullberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saffiotti</surname>
          </string-name>
          ,
          <article-title>A constraint-based approach for proactive, context-aware human support</article-title>
          ,
          <source>J. of Ambient Intelligence and Smart Environments</source>
          <volume>4</volume>
          (
          <year>2012</year>
          )
          <fpage>347</fpage>
          -
          <lpage>367</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Q.-H.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <article-title>Unified intention inference and learning for human-robot cooperative assembly</article-title>
          ,
          <source>IEEE Transactions on Automation Science and Engineering</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Grosinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pecora</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saffiotti</surname>
          </string-name>
          ,
          <article-title>Robots that maintain equilibrium: Proactivity by reasoning about user intentions and preferences</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>118</volume>
          (
          <year>2019</year>
          )
          <fpage>85</fpage>
          -
          <lpage>93</lpage>
          . Cooperative and Social Robots:
          <article-title>Understanding Human Activities and Intentions</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kwon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <article-title>Design and evaluation of service robot's proactivity in decision-making support process</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Baraglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cakmak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nagai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Asada</surname>
          </string-name>
          ,
          <article-title>Efficient human-robot collaboration: when should a robot take initiative?</article-title>
          ,
          <source>The International Journal of Robotics Research</source>
          <volume>36</volume>
          (
          <year>2017</year>
          )
          <fpage>563</fpage>
          -
          <lpage>579</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Dennett</surname>
          </string-name>
          ,
          <article-title>Intentional systems</article-title>
          ,
          <source>The Journal of Philosophy</source>
          <volume>68</volume>
          (
          <year>1971</year>
          )
          <fpage>87</fpage>
          -
          <lpage>106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bremner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Winfield</surname>
          </string-name>
          ,
          <article-title>On proactive, transparent, and verifiable ethical reasoning for robots</article-title>
          ,
          <source>Proceedings of the IEEE</source>
          <volume>107</volume>
          (
          <year>2019</year>
          )
          <fpage>541</fpage>
          -
          <lpage>561</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <article-title>Human compatible: Artificial intelligence and the problem of control</article-title>
          ,
          <source>Penguin</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>D.</given-names>
            <surname>Premack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Woodruff</surname>
          </string-name>
          ,
          <article-title>Does the chimpanzee have a theory of mind?</article-title>
          ,
          <source>Behavioral and Brain Sciences</source>
          <volume>1</volume>
          (
          <year>1978</year>
          )
          <fpage>515</fpage>
          -
          <lpage>526</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Buyukgoz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Grosinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chetouani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saffiotti</surname>
          </string-name>
          ,
          <article-title>Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures</article-title>
          ,
          <source>arXiv preprint arXiv:2205.05492</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shvo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. Q.</given-names>
            <surname>Klassen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>McIlraith</surname>
          </string-name>
          ,
          <article-title>Resolving misconceptions about the plans of agents via Theory of Mind</article-title>
          ,
          <source>in: Proceedings of the Thirty-Second International Conference on Automated Planning and Scheduling (ICAPS 2022)</source>
          ,
          <year>2022</year>
          . To appear.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Martins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Al Tair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dias</surname>
          </string-name>
          ,
          <article-title>αPOMDP: POMDP-based user-adaptive decision-making for social robots</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>118</volume>
          (
          <year>2019</year>
          )
          <fpage>94</fpage>
          -
          <lpage>103</lpage>
          . Cooperative and Social Robots: Understanding Human Activities and Intentions.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>I.</given-names>
            <surname>Asimov</surname>
          </string-name>
          ,
          <article-title>Runaround</article-title>
          ,
          <source>Astounding Science Fiction</source>
          <volume>29</volume>
          (
          <year>1942</year>
          )
          <fpage>94</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>