=Paper= {{Paper |id=Vol-2404/paper_14 |storemode=property |title=A Computational Model for Cognitive Human-Robot Interaction: An Approach Based on Theory of Delegation |pdfUrl=https://ceur-ws.org/Vol-2404/paper19.pdf |volume=Vol-2404 |authors=Filippo Cantucci,Rino Falcone |dblpUrl=https://dblp.org/rec/conf/woa/CantucciF19 }} ==A Computational Model for Cognitive Human-Robot Interaction: An Approach Based on Theory of Delegation== https://ceur-ws.org/Vol-2404/paper19.pdf
                                        Workshop "From Objects to Agents" (WOA 2019)


      A Computational Model for Cognitive
  Human-Robot Interaction: An Approach Based on
              Theory of Delegation
                           Filippo Cantucci                                                  Rino Falcone
           Institute of Cognitive Science and Technology,                   Institute of Cognitive Science and Technology,
          National Research Council of Italy, (ISTC-CNR)                   National Research Council of Italy, (ISTC-CNR)
                              Rome, Italy                                                      Rome, Italy
                      filippo.cantucci@istc.cnr.it                                       rino.falcone@istc.cnr.it




   Abstract—In this paper we present a cognitive model to support reasoning and decision making on socially adaptive task delegation and adoption. The designed model allows a robot to dynamically modulate its own level of collaborative autonomy, by restricting or expanding a received task delegation, on the basis of several contextual factors such as the needs of the other users involved in the interaction. We exploit principles underlying the theory of delegation, theory of mind and BDI agent modelling, in order to build a decision making system for real-world teaming between autonomous agents.
   The model has been developed using the JaCaMo framework, which provides support for implementing multi-agent systems and integrates different multi-agent programming dimensions.
   We tested our model in a specific domain on the humanoid robot Nao, widely adopted in human-robot interaction applications. The support study has established that the model provides the robot with the ability to modify its social autonomy and to handle possible collaborative conflicts due to the initiative to help the user beyond her/his request.

                     I. INTRODUCTION

   In everyday life, humans cooperate with other humans in order to gain knowledge, achieve and share goals, and follow social norms. These norms are sometimes encoded as laws, sometimes as expectations. A primary research topic in cognitive human-robot interaction is the design of autonomous systems that can interact and cooperate proficiently with humans. Indeed, social robots are becoming part of daily life and are present in a variety of environments, including hospitals [1], offices [2], schools [3], tourist facilities [4] and so on. In these contexts, robots have to coexist and collaborate with a wide spectrum of users not necessarily able (or willing) to adapt their interaction level to the kind requested by a machine: the users need to deal with artificial systems whose behavior must be understandable and effective. To be effective, the interaction between humans and robots should consider not only the abilities of the robots but also the human preferences [5]. Robots have to maintain as natural and intelligent an interaction with humans as possible: they should modulate their level of support by interpreting both the contextual situation and the needs of the other agents involved in the cooperation [6], just like humans typically do when they interact with each other.
   The integration of these kinds of social skills in autonomous robots would naturally lead to a deeper relationship of trust between them and humans. Several cognitive architectures have been proposed [7], [8], [9], each with the goal of simulating human cognitive and behavioral features at different levels of cognition: perception, learning, reasoning, planning, memory and so on. Along with the ability to autonomously process contextual information, react to changes in the environment, and make decisions about the tasks they are expected to carry out by showing some level of proactiveness, robots should integrate the conceptual instruments necessary to transform their autonomy into social autonomy [10].

A. Problem and contribution

   As claimed in [11], cooperation implies the definition of the two complementary mental attitudes of task delegation and task adoption linking collaborating agents. Delegation and adoption are two basic cognitive ingredients of any collaboration and organization. The notion of autonomy in artificial agents should integrate different levels of task adoption. Indeed, after receiving a task delegated from the outside, artificial agents should exploit their knowledge about the environment, including the other agents interacting with them, to adjust their own decision, for example by going beyond the delegated task, or (partially or completely) changing it, or again, adopting just a sub-part of it, because the context does not allow a complete task achievement. The theory of delegation should guide the design of the decision making process of every robot that has to collaborate with humans in daily life.
   In summary, the contribution of this research includes:
   • the development of a declarative, knowledge-oriented, plan-based computational model that relies on the principles defined in the theory of delegation. The proposed approach provides a robot with an internal representation of itself and of the actors involved in the interaction, each with their own beliefs, goals and plans. In particular, the model is a decision making system where the interaction between the robot and the user is reproduced. Once a user delegates a task to the robot, it can take its decision





     about the level of task adoption, on the basis of the environmental context and of the mental states attributed to the human it is interacting with. The presence, in the robot's mind, of a self-representation allows it to have a detailed description of its internal status and its technological limits, and to consider them in the decision process.
   • A support study where the computational model has been tested on a well-known robotic platform. The study has shown that the robot was able to adapt its level of collaborative autonomy in adopting a task delegated from the outside. The model has conferred on the robot the capability to go beyond simple task acceptance and to handle possible collaborative conflicts due to the initiative to help the user beyond its request.
   The paper is organized as follows: section 2 describes the theoretical models underlying our approach and the software framework used for its implementation; section 3 focuses on the description of the computational model; section 4 illustrates a support study where a real robot cooperated with humans in a specific domain; section 5 is dedicated to conclusions and future work.

                     II. BACKGROUND

   We briefly introduce the theory behind our computational model and the software framework used for its implementation.

A. BDI Agents

   BDI agents [12] are one of the most popular models in agent theory [13]. Originally inspired by the theory of human practical reasoning developed by Michael Bratman [14], the BDI model focuses on the role of intentions in reasoning and allows us to characterize agents from a human-like point of view. Very briefly, in the BDI model the agent has beliefs, i.e. information representing what it perceives in the environment and what it communicates with other agents, and desires, i.e. states of the world that the agent would like to accomplish. The agent deliberates on its desires and decides to commit to one of them: committed desires become intentions. To satisfy its intentions, it executes plans in the form of courses of actions or sub-goals to achieve. The behaviour of the agent is thus described or predicted by what it has committed to carry out. An important feature of BDI agents is their ability to react to changes in their environment as soon as possible while keeping their proactive behaviour.

B. Levels of adoption of the delegated task

   As mentioned above, delegation and adoption are two basic ingredients of any collaboration and organization. Typically, cooperation works through the allocation of some task τ (or sub-task), by a given agent A, the client, to another agent B, the contractor, via some "request" (offer, proposal, announcement, etc.) meeting some "commitment" (bid, help, contract, adoption and so on) [11]. The task τ, the object of delegation, can refer to an action α or to its resulting goal state g. By means of τ we will refer to the action/goal pair τ = (α, g). For a complete theoretical overview of the delegation theory we refer to [11]. Let us focus on a deep level of cooperation, where the contractor can adopt a task delegated by the client at different levels of effective help. In the theory of delegation, various levels of contractor's adoption are identified:
   • Sub help: the contractor satisfies just a sub-goal of the delegated task;
   • Literal help: the contractor adopts exactly what has been delegated by the client;
   • Over help: the contractor goes beyond what has been delegated by the client without changing the client's plan;
   • Critical help: the contractor satisfies the relevant results of the requested plan/action, but modifies that plan/action;
   • Critical-Over help: the contractor realizes an over help and in addition modifies the plan/action;
   • Critical-Sub help: the contractor realizes a sub help and in addition modifies the plan/action;
   • Hyper-critical help: the contractor adopts goals or interests of the client that the client itself did not take into account (at least, in that specific interaction with the contractor): by doing so, the contractor neither performs the action/plan nor satisfies the results that were delegated.
   It is important to underline that we are considering collaborative robots, i.e. robots that have as their main goal the positive collaboration with the user (client).

C. JaCaMo Framework

   JaCaMo [15] is a framework for multi-agent programming that integrates three different multi-agent programming levels: agent-oriented (AOP), environment-oriented (EOP) and organization-oriented programming (OOP). Each level is associated with one of three well-known platforms that have been developed separately for years:
   • Jason [16], a powerful AgentSpeak(L) [17] interpreter for BDI agent programming;
   • CArtAgO [18], for programming shared environment artifacts;
   • Moise [19], for programming multi-agent organizations.
   The JaCaMo framework provides a powerful tool for implementing our computational model, in terms of: (i) the capability to represent the mental states of the real actors involved in the interaction as BDI agents; (ii) the possibility for the agents of the computational model to exchange information; (iii) the possibility to implement a shared environment onto which the skills of the real robot can be mapped. Each of these features allowed us to reproduce the real interaction in the decision making system of the robot. The development of our computational model has been based mainly on the first two platforms, Jason and CArtAgO. We do not exclude, in the future, exploiting Moise in order to introduce organizational rules or constraints among the agents that populate the computational model.
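To make the levels of adoption described in Section II-B concrete, the following is a minimal illustrative sketch of our own (not part of the delegation theory in [11]): it classifies the contractor's level of help by comparing the set of goals delegated by the client with the set of goals the contractor actually adopts, plus a flag for whether the client's plan was modified. The set-based simplification and all names are assumptions for illustration only.

```python
from enum import Enum

class Help(Enum):
    SUB = "sub"
    LITERAL = "literal"
    OVER = "over"
    CRITICAL = "critical"
    CRITICAL_OVER = "critical-over"
    CRITICAL_SUB = "critical-sub"
    HYPER_CRITICAL = "hyper-critical"

def classify_help(delegated, adopted, plan_modified):
    """Compare the goals delegated by the client with the goals the
    contractor adopts; plan_modified flags a changed plan/action."""
    d, a = set(delegated), set(adopted)
    if d.isdisjoint(a):
        # The contractor adopts goals the client never delegated.
        return Help.HYPER_CRITICAL
    if a == d:
        return Help.CRITICAL if plan_modified else Help.LITERAL
    if a > d:  # strict superset: extra help beyond the delegation
        return Help.CRITICAL_OVER if plan_modified else Help.OVER
    if a < d:  # strict subset: only part of the delegation is adopted
        return Help.CRITICAL_SUB if plan_modified else Help.SUB
    # Partial overlap: treat as a plan-modifying (critical) adoption.
    return Help.CRITICAL
```

For instance, adopting {findRestaurant, findPlaceToVisit} when only findRestaurant was delegated, without changing the client's plan, would be classified as over help under this simplified mapping.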




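As a rough illustration of the BDI machinery introduced in Section II-A, the sketch below shows how a plan could be selected from a plan library: plans triggered by the goal are relevant, and relevant plans whose context holds in the current belief base are applicable. This is a simplification we introduce for illustration; the dictionary encoding and all names are assumptions, not Jason's actual API.

```python
# Illustrative sketch of relevant/applicable plan selection in a BDI agent.
# A plan is "relevant" if it is triggered by the goal, and "applicable" if
# every literal in its context is entailed by the belief base (entailment
# is simplified here to set membership).
def select_plan(goal, plan_library, beliefs):
    relevant = [p for p in plan_library if p["goal"] == goal]
    applicable = [p for p in relevant
                  if all(c in beliefs for c in p["context"])]
    # Jason-style default: take the first applicable plan, if any.
    return applicable[0] if applicable else None

# Two relevant plans for the same goal, with different contexts and bodies.
library = [
    {"goal": "findRestaurant", "context": {"open(laSoraLella)"},
     "body": ["showOnMap(laSoraLella)"]},
    {"goal": "findRestaurant", "context": set(),
     "body": ["suggestAlternative"]},
]
```

With the belief open(laSoraLella) in the belief base the first plan is selected; with an empty belief base only the context-free fallback plan is applicable.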




                Fig. 1. Computational model overview

       III. DESCRIPTION OF THE COMPUTATIONAL MODEL

   In this section we illustrate the conceptual ingredients of the implemented computational model. The main goal is to make an artificial agent able to autonomously adapt its level of collaborative autonomy when it adopts a task delegated by a human client. We refer to the real artificial agent as a robot that is interacting with humans. We exploit the formalism provided by JaCaMo, in particular by Jason for the agent programming and by CArtAgO for the environment programming.
   When a user delegates a task τi to the robot, the task τf that the robot decides to achieve can match the delegated one or not. The level of τi adoption depends on the robot's ability to map in its decision making system:
   • a high-level description of the perceived current state of the environment,
   • a self-representation in terms of an intentional system,
   • the mental states of the other real agent involved in the interaction.
The capability of an autonomous agent to meta-represent itself and other agents and to reason about their beliefs, goals, plans and intentions is known as Theory of Mind [20].

A. Conceptual ingredients of the model

   The computational model (Figure 1) can be considered a multi-agent system which provides the robot with a theory of mind. In particular, the model is populated by two categories of agent:
   • the Contractor,
   • the Client.
   Agents belonging to the first category define a self-representation of the robot, with their own mental attitudes, while agents belonging to the second one define a representation of the human clients involved in the interaction with the robot, with their mental states. Please note that when we refer to Client and Contractor, we always indicate the mental representations, in the model, of the interacting real agents. Notice that the system can potentially be equipped with several versions of the robot itself, with different mental states. These versions could correspond to different contractor agents in the robot's decision making system. We could define, for example, a "lazy" robot version, or a really proactive version, by giving different descriptions of their sets of cognitive ingredients. At this stage of the work we have considered just one self-representation of the robot, choosing a version in which it has the goal to provide more help than delegated every time the contextual factors allow it.
   Generally speaking, an agent's cognitive state can be described as a set of beliefs, goals and plans. A belief β is a grounded first-order logic formula encoding the information perceived from the environment, attributed to other agents, or provided by the communication with other agents. Further knowledge can be generated, in terms of new beliefs, by reasoning on simple beliefs through complex rules. A goal g is the state of affairs that an agent wants to achieve. An agent achieves a goal, matching the intention it commits to pursue, by implementing a plan π, defined as part of its own plan library Π, which establishes the know-how of the agent. According to practical reasoning principles, plans are courses of actions or sub-goals the agent has to carry out before achieving the "top-level goal".
   Formally, the plan library belonging to an agent in the computational model

                         Π = Πd ∪ Πa                         (1)

is a collection of Πd composed plans and Πa abstract plans.

                  Fig. 2. Composed plan example

Composed plans (Figure 2) represent complex hierarchical goals that decompose into other complex sub-goals gi or actions αi. This results in a graph representation in which edges denote plan decomposition and root nodes in the graph correspond to goals or complex actions. Typically the lowest decomposition level is formed by elementary actions which, in the case of a robot, match its elementary perception and action capabilities, for example object detection, face recognition, object grasping, moving toward a point in the








                Fig. 3. Jason agent reasoning cycle [16]

space and so on. Instead, abstract plans are plans which can be specialized.
   A plan for achieving gi can be written according to the Jason formalism:

                        +!gi : ci ← bi                        (2)

An agent operates by means of its own reasoning cycle (Figure 3); through it, the agent can update its belief base and achieve goals by selecting plans whose contexts ci match the current state of the interaction, described through the beliefs. The agent acts with respect to the body bi of the selected plan, which is the course of actions/sub-goals needed for achieving the goal gi. The reasoning cycle can be extended and customized for implementing a specific reasoning logic. Notice that it is possible to write several relevant plans with the same goal to achieve, but different contexts or bodies. Relevant plans become applicable plans if their context is a logical consequence of the agent's belief base.
   In addition to the plans for achieving goals, an agent can trigger plans for reacting to every change in its belief base, corresponding to a change in the current state of the world. Jason's formalism for plans used for reacting to environment changes is:

                        +βj : cj ← bj                         (3)

In this way an agent implements the two fundamental aspects of reactiveness and proactiveness: the agent has goals which it tries to achieve in the long term, while it can react to changes in the current state of the world. Finally, an important feature of the Jason platform is the capability to integrate speech-act based communication [21], which enables knowledge transfer between agents.
   The Client and the Contractor in the computational model can exploit a shared environment, programmed in CArtAgO, which is a collection of artifacts. Artifacts are entities modelling services and resources supporting agents' activities. Artifacts have the main property of linking the low-level control part of the robot with its high-level decision making system. Indeed, the robot is provided with its own APIs for collecting data from sensors and acting in the real world. APIs can be wrapped in specific artifact functionalities, which become an abstraction of the elementary actions the robot can perform in the real world. The contractor agent representing the robot can exploit elementary actions to update its belief base or to carry out complex goals or actions. The possibility to equip the robot with a self-representation and a model of the other agent involved in the interaction is really powerful and introduces a further important feature which can guide its decision process: a human-like description of itself.

B. Decision making strategy

   As analyzed above, the contractor represents a bridge between the real world and the computational model and allows the latter to have a high-level description of the perceived environment. Instead, the client has the main function of supporting the decision about the τi adoption level. The client is profiled by exploiting a classical approach to User Modelling [22] which can be applied to its cognitive ingredients: beliefs, goals and plans are mapped with respect to the domain in which the robot is operating. While the beliefs and goals of a client represent the mental state attributed to the user, its reasoning cycle implements a logic that makes the robot able to reason about the goals of the current interlocutor. In practice, modifying the reasoning cycle means adapting the architectural components shown in Figure 3. For the τf computation, we implemented a context-dependent plan recognition [23] strategy relying on:
   • representing the real agents of the interaction, including the robot itself, in the robot's mind;
   • the capability of the agents in the computational model to share their mental states with each other through speech-act communication functionalities;
   • the possibility to abstract real actions in a shared simulated environment available to the agents.

                  Fig. 4. Goal recognition strategy

Figure 4 shows the activity diagram of the strategy used by the robot to adapt its level of task adoption. Once the interaction starts and the user delegates τi to the robot, the first step of the τf calculation is to activate the contractor into the computational





model. This agent, with its own initial beliefs, triggers a plan for adopting the initial task τi:

        +!adoptTask(τi, U) : true ← send(U, τi, Rbb).         (4)

The contractor has the intention to adopt the task τi delegated by the user U. The plan's body allows the contractor to send to the agent U both τi and the beliefs stored in its belief base Rbb. At this point, the decision process is temporarily moved into the client U. The task τi could be completely specified by the user, or the user could delegate to the robot a task in which some entity is not declared. For example, he/she could delegate the goal "put the red object on the table" or "put an object on the table". In the latter case the robot has to reason about the task specification, on the basis of the user profile represented by the client's beliefs, goals and plans. Already at this stage, the robot shows the capability to provide more help than delegated, as required by the task specification. Once τi is completely specified, the client agent exploits its reasoning cycle to explore the plan library in order to find at least one plan of which τi represents a top-level goal, or a sub-goal to achieve before accomplishing a complex one. Once found, the plans related to τi are selected. Their context is checked with respect to the current state of the world (remember that the client agent can reason about beliefs sent by the contractor agent too) and the beliefs attributed to the client representation. Once an applicable plan among them is found, the client sends to the contractor the task τf associated with the selected plan. τf can match τi or not. This strategy allows the real robot to potentially extend its proactivity by realizing an over help, or at least a literal help. Notice that the "action" the client performs in the model is to send to the contractor the message carrying τf. The plan for sending τf is:

       +!finalTask(τf) : true ← send(Contractor, τf)          (5)

The final decision about the implementation of τf is up to the contractor again, which tries to execute a plan. On the basis of the current state of its belief base, the contractor chooses, among the relevant plans, the one applicable to the context. The context of every plan in the contractor's library takes into account the beliefs describing the capabilities of the robot itself and its internal status. If an applicable plan exists, then τf becomes the final task to pursue: the selected plan can match or not the one attributed to the client, and the robot can satisfy τf by modifying or not the plan of the user: in the first case it will implement a literal or an over help; in the second one it will implement a critical or critical-over help. If the robot does not have the resources to execute the calculated task, it will execute a sub-task of τf, implementing a sub help or a critical-sub help. If a plan for achieving τf does not exist, the robot starts an interaction with the user.
   In conclusion, by exploiting the plan recognition technique already described, the robot can identify possible goals/plans of the user, which do not necessarily match the delegated task. They can be goals going beyond the delegated task, because the real agent decided it can adopt the task at a different level of help. However, there is a trade-off between pros and cons in extending the level of task adoption; possible conflicts can emerge when the robot provides less or more help than delegated. Conflicts can arise for several reasons [11]. For now, we just start from the assumption that the user appreciates the collaborative initiative of the robot, but sometimes the robot can make a mistake in classifying the user it is interacting with, because of its limited perceptive skills. As we will see in the next section, the computational model stems this limitation without losing its ability to go beyond the task delegated by the user.

       IV. EXPERIMENTAL SETUP AND APPLICATION SCENARIO

   Our computational model has been tested on a well-known robotic platform: the humanoid robot Nao [24]. We devised a scenario where the Nao robot serves as an "infoPoint assistant" that helps people get information about restaurants, museums, historical monuments to visit and nightclubs in the city of Rome. We chose this domain for three main reasons: first of all, as mentioned in the introduction, tourism and hospitality companies have started to adopt robots and AI services in the form of chatbots, robot-concierges, self-service information/check-in/check-out systems and so on; second, this domain allowed us to make experiments with a real robot by overcoming the technological limitations related to the robotic platform (grasping issues, navigation issues); furthermore, a robot acting as a touristic assistant can figure in several possible scenarios, of which providing information is only a part.

                      Fig. 5. Interactive map

   Through the use of a simple interactive map (Figure 5), the robot shows the user where the requested point of interest (POI) is placed and indicates the path to the destination. It suggests the least busy way (dashed path), starting from the infoPoint (marked landmark) to the POI. The map is partitioned into zones, encoded by landmarks that Nao can easily recognize and associate with integers (e.g. 68, 80, 107). Every point of interest is associated with a particular area of the city populated by restaurants, museums and so on. The map is interfaced to a specific artifact exploited by the contractor agent to make it accessible. POIs are described in the belief base of the contractor through expressive annotations. For instance, to a restaurant can be associated a tuple of the form restaurant(name, category, location, capacity, target, state), where category describes the restaurant's typology, state indicates if it is open or closed, target the audience

                                                 τf if 0.0 ≤ Accs < 0.4
      Q1    enjoyTheCity : c1 ← findRestaurant(laSoraLella, 68, Typical); findPlaceToVisit(SantaCecilia, 68, church).
      Q2    enjoyTheCity : c3 ← findRestaurant(AngoloDelVino, 68, Typical); findMuseumToVisit(MuseoDal, 68, art).

                                                 τf if 0.4 ≤ Accs < 0.7
      Q1    enjoyTheCity : c1 ← findRestaurant(laSoraLella, 68, Typical); findPlaceToVisit(piazzaTrilussa, 68, square).
      Q2    enjoyTheCity : c1 ← findRestaurant(Otello, 68, Typical); findPlaceToVisit(piazzaTrilussa, 68, square).

                                                 τf if 0.7 ≤ Accs ≤ 1.0
      Q1    enjoyTheCity : c1 ← findRestaurant(laSoraLella, 68, Typical); findPlaceToVisit(araPacis, 68, historical).
      Q2    enjoyTheCity : c1 ← findRestaurant(laParolaccia, 68, Typical); findPlaceToVisit(araPacis, 68, historical).
                                                       TABLE I
                                               TASK ADOPTION RESULTS
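The threshold bands driving the choices in Table I can be illustrated with a small sketch. The following Python fragment is not the authors' implementation (the paper's agents are written in a BDI agent language); the Restaurant record mirrors the restaurant(name, category, location, capacity, target, state) annotation described in the text, while choose_target and find_restaurant are hypothetical helpers written only to make the band logic concrete:

```python
from dataclasses import dataclass

@dataclass
class Restaurant:
    # Mirrors the restaurant(name, category, location, capacity, target, state)
    # belief annotation; field values below are illustrative.
    name: str
    category: str   # e.g. "Typical"
    location: int   # landmark integer, e.g. 68
    capacity: str   # "small" | "medium" | "big"
    target: str     # "singles" | "couples" | "generic"
    state: str      # "open" | "closed"

def choose_target(marital_status: str, acc_s: float) -> str:
    """Map the classified marital status and its accuracy Accs to a
    restaurant target audience, following the bands of Table I."""
    if 0.7 <= acc_s <= 1.0:      # almost sure: trust the classification
        return marital_status    # e.g. "singles"
    if acc_s < 0.4:              # very unsure: fall back on the tourist
        return "couples"         # stereotype used in the paper's example
    return "generic"             # intermediate band: generic audience

def find_restaurant(pois, location, category, target):
    """Return the first open restaurant matching the (possibly vague)
    delegation, preferring the chosen target audience."""
    candidates = [r for r in pois
                  if r.state == "open" and r.location == location
                  and (not category or r.category == category)]
    for r in candidates:
        if r.target == target:
            return r
    return candidates[0] if candidates else None
```

With the vague delegation Q2, for instance, only the location, the category hint and the accuracy band drive the choice of target audience, as in the 0.0 ≤ Accs < 0.4 row of the table.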




for whom it is addressed (e.g. singles, couples, groups) and capacity whether it is small, medium or big.
   The robot can interact with different kinds of users: for instance, it can give information to tourists and citizens. Since our goal is to demonstrate the flexibility of the computational model, without loss of generality we rely on a simplified user encoding, based on colors and numbers. Tourists are encoded with a red shirt and citizens with a green one. Moreover, people can have different mental states, depending on their characteristics and attributes, i.e. age, marital status and so on. In our case study we exploited the marital status in order to classify the interlocutor as i) single, ii) in a couple, iii) with family and iv) in a group. The marital status is represented by a number on the shirt: 1 for singles, 2 for couples, 3 for families and 4 for groups. The robot can thus perceive the user as, for instance, a single citizen, or a tourist on holiday with his own family. The robot can make mistakes in perceiving the user. To map this perceptive process into the model, two beliefs in the contractor agent are updated when the robot detects the user:

      userCategory(Uc, Accc) and maritalStatus(S, Accs)

The first indicates whether the user is a tourist or a citizen, the second indicates the user's marital status. The robot classifies the user's attributes with a certain accuracy, expressed by Accc and Accs. We conducted a test in which the robot could interact with tourists or citizens with different marital statuses. Hereinafter we describe the scenario where the robot interacts with a tourist who is single and asks it to find a restaurant. Moreover, we considered the case where the robot was able to correctly recognize the user as a tourist, but could classify the marital status at different levels of accuracy Accs. The user asks the robot:

   • Q1: "I would like to go to La Sora Lella restaurant"
   • Q2: "I would like to go to eat something in Trastevere"

The questions imply two different τi delegations:

   •  Q1: findRestaurant("LaSoraLella", 68, "Typical")
   •  Q2: findRestaurant("", 68, "")

In the plan library of the agent representing the real user, there is a plan π1 whose result is to enjoy the city, eating in a restaurant and visiting a monument:

      π1 : enjoyTheCity : c1 ←
               findRestaurant(Name, Location, Category);
               findPlaceToVisit(Name, Location, Category).

This means that the robot attributes this plan to the user and maps it in the client agent. Notice that several plans with the same goal of enjoying the city, but with different contexts and bodies, can be attributed in the client's plan library. Last, the robot chooses the relevant plan to execute as depicted in Section III.
   Table I shows the level of τi adoption related to the situation described above. In all cases where the delegation is univocal (Q1), the robot can go beyond the delegation without changing the client's plan (over-help). When the delegation is vague (Q2) the robot is still able to extend its help: indeed, it can use the few task specifications to find a restaurant that better fits the user, taking into account the accuracy with which the user has been classified. For example, when 0.0 ≤ Accs < 0.4 the robot exploits the "stereotype" of a tourist representation in its decision-making system and chooses a typical restaurant (typically a tourist wants to eat in typical restaurants) targeted at couples instead of single people. Vice versa, it chooses a restaurant targeted at singles when it is almost sure that the user is effectively single (0.7 ≤ Accs ≤ 1.0). Finally, when it cannot distinguish singles from couples, it chooses a restaurant suitable for a generic target audience. Notice that, when the robot does not find any monument to visit, it still does more than delegated, by finding a museum to visit instead of a monument: it realizes an over-help and, in addition, modifies the plan attributed to the user (over-critical help).

                 V. CONCLUSIONS AND FUTURE WORKS

   In this paper we presented a cognitive model which integrates the concept of adjustable social autonomy as a basis for an effective human-robot interaction. Exploiting the notions of task delegation and adoption and the theory of mind, the computational model has proven to be truly adaptive and flexible, giving the robot the capability to adjust its level of help on the basis of several dimensions of the cooperation. The computational model is knowledge-dependent, but domain-independent: the agent's mental state can be extended, in order





to make it applicable across a number of domains and real situations.
   Since the computational model can be exploited to build robots whose main goal is positive collaboration with the user, the next step of our work will be to introduce the concept of trust in the model. The notion of trust is strictly related to delegation. More precisely, delegation is the result of a complex mental state, described as a set of beliefs, goals and decisions: in one word, trust. A possible strategy to integrate trust in the computational model could be to exploit the third multi-agent programming dimension, the organizational one, in order to define a set of behavioral constraints that the agents belonging to the computational model adopt when they reproduce the real interaction. Moreover, considering that specifying plans in the representation of the real actor can be a limitation, we aim at introducing a more dynamic approach to plan selection, better suited to complex and uncertain real scenarios. Finally, we aim at introducing some form of learning in order to improve the robot's ability to reason about other agents' behaviors, goals and beliefs, and to decide what level of task adoption is necessary and best suited to the entire context of the cooperation.

                            REFERENCES

 [1] D. Casey, O. Beyan, K. Murphy, and H. Felzmann, "Robot-assisted care for elderly with dementia: is there a potential for genuine end-user empowerment?" The Emerging Policy and Ethics of Human Robot Interaction, 2015.
 [2] M. M. Veloso, J. Biswas, B. Coltin, and S. Rosenthal, "CoBots: Robust symbiotic autonomous mobile service robots," p. 4423, 2015.
 [3] T. Belpaeme, J. Kennedy, A. Ramachandran, B. Scassellati, and F. Tanaka, "Social robots for education: A review," Science Robotics, vol. 3, no. 21, p. eaat5954, 2018.
 [4] S. Ivanov, C. Webster, and K. Berezina, "Adoption of robots and service automation by tourism and hospitality companies," Revista Turismo & Desenvolvimento, vol. 27, no. 28, pp. 1501–1517, 2017.
 [5] B. Lubars and C. Tan, "Ask not what AI can do, but what AI should do: Towards a framework of task delegability," arXiv preprint arXiv:1902.03245, 2019.
 [6] S. Rossi, F. Ferland, and A. Tapus, "User profiling and behavioral adaptation for HRI: a survey," Pattern Recognition Letters, vol. 99, pp. 3–12, 2017.
 [7] L. Kajdocsi and C. R. Pozna, "Review of the most successfully used cognitive architectures in robotics and a proposal for a new model of knowledge acquisition," in 2014 IEEE 12th International Symposium on Intelligent Systems and Informatics (SISY). IEEE, 2014, pp. 239–244.
 [8] P. Ye, T. Wang, and F.-Y. Wang, "A survey of cognitive architectures in the past 20 years," IEEE Transactions on Cybernetics, no. 99, pp. 1–11, 2018.
 [9] S. Lemaignan, M. Warnier, E. A. Sisbot, A. Clodic, and R. Alami, "Artificial cognition for social human–robot interaction: An implementation," Artificial Intelligence, vol. 247, pp. 45–69, 2017.
[10] R. Falcone and C. Castelfranchi, "The human in the loop of a delegated agent: The theory of adjustable social autonomy," IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 31, no. 5, pp. 406–418, 2001.
[11] C. Castelfranchi and R. Falcone, "Towards a theory of delegation for agent-based systems," Robotics and Autonomous Systems, vol. 24, no. 3-4, pp. 141–157, 1998.
[12] A. S. Rao, M. P. Georgeff, et al., "BDI agents: from theory to practice," in ICMAS, vol. 95, 1995, pp. 312–319.
[13] M. Wooldridge and N. R. Jennings, "Agent theories, architectures, and languages: a survey," in International Workshop on Agent Theories, Architectures, and Languages. Springer, 1994, pp. 1–39.
[14] M. Bratman, Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, MA, 1987, vol. 10.
[15] O. Boissier, R. H. Bordini, J. F. Hübner, A. Ricci, and A. Santi, "Multi-agent oriented programming with JaCaMo," Science of Computer Programming, vol. 78, no. 6, pp. 747–761, 2013.
[16] R. H. Bordini and J. F. Hübner, "BDI agent programming in AgentSpeak using Jason," in International Workshop on Computational Logic in Multi-Agent Systems. Springer, 2005, pp. 143–164.
[17] A. S. Rao, "AgentSpeak(L): BDI agents speak out in a logical computable language," in European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Springer, 1996, pp. 42–55.
[18] A. Ricci, M. Piunti, M. Viroli, and A. Omicini, "Environment programming in CArtAgO," in Multi-Agent Programming. Springer, 2009, pp. 259–288.
[19] J. F. Hübner, J. S. Sichman, and O. Boissier, "Developing organised multiagent systems using the Moise+ model: programming issues at the system and agent levels," International Journal of Agent-Oriented Software Engineering, vol. 1, no. 3/4, pp. 370–395, 2007.
[20] D. Premack and G. Woodruff, "Does the chimpanzee have a theory of mind?" Behavioral and Brain Sciences, vol. 1, no. 4, pp. 515–526, 1978.
[21] J. R. Searle, Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, 1969, vol. 626.
[22] E. Rich, "User modeling via stereotypes," Cognitive Science, vol. 3, no. 4, pp. 329–354, 1979.
[23] S. Carberry, "Techniques for plan recognition," User Modeling and User-Adapted Interaction, vol. 11, no. 1-2, pp. 31–48, 2001.
[24] D. Gouaillier, V. Hugel, P. Blazevic, C. Kilner, J. Monceaux, P. Lafourcade, B. Marnier, J. Serre, and B. Maisonnier, "The Nao humanoid: a combination of performance and affordability," CoRR abs/0807.3223, 2008.



