                                       Workshop "From Objects to Agents" (WOA 2019)


 Endowing Robots with Self-Modeling Abilities for
       Trustful Human-Robot Interactions
Cristiano Castelfranchi∗, Antonio Chella†, Rino Falcone∗, Francesco Lanza†, Valeria Seidita†
∗ Istituto di Scienze e Tecnologie della Cognizione (ISTC-CNR),
cristiano.castelfranchi@istc.cnr.it, rino.falcone@istc.cnr.it
† Università degli Studi di Palermo,
antonio.chella@unipa.it, francesco.lanza@unipa.it, valeria.seidita@unipa.it



Abstract—Robots involved in collaborative and cooperative tasks with humans cannot be programmed in all their functions: they are autonomous entities acting in dynamic and often partially known environments. How to interact with humans, and the decision process itself, are determined by the knowledge about the environment, about the other, and about oneself. Moreover, the level of trust that each member of the team places in the other is crucial to creating a fruitful collaborative relationship. We hypothesize that one of the main components of a trustful relationship resides in the self-modeling abilities of the robot. The paper illustrates how to employ the model of trust by Falcone and Castelfranchi to include self-modeling skills in the NAO humanoid robot involved in trustworthy interactions. Self-modeling skills are then implemented employing features of the BDI paradigm.

Index Terms—Human-Robot Interaction; Trust; Multi-agent systems; BDI; JASON

I. INTRODUCTION

Human-robot interaction (HRI) is the discipline investigating how to analyze and develop robots that interact with humans to pursue a common objective. Interaction is the process of working together to reach a goal; it can be viewed from different points of view and takes various forms, from direct command and clear response to the ability to autonomously decide how to pursue a goal. Every robot application presents some kind of interaction with humans through explicit or implicit communication. In the case of autonomous robots operating as teammates of humans, the humans provide the goal, and the robot has to be able to maintain knowledge about the environment and the tasks to perform, in order to decide whether to adopt or delegate a task or an action.

Autonomy, proactivity, and adaptivity are the features needed to decide, at each moment, which activity has to be fruitfully performed for efficiently pursuing an objective. From a cooperative and social point of view - human-robot team interaction - this means being able to decide which action to perform by oneself and which one to delegate to another component of the team.

This decision cannot be imposed during the design process, for many reasons ranging from the composition of the environment to the characteristics of the interacting entities. The environment is always strongly dynamic and often unknown.

In the case of a team composed only of humans, the interaction with a teammate is based on the level of knowledge owned about the environment and about the "other": especially, knowledge about the capabilities of the other, about the interpretation of the actions of the other with respect to the shared goals, and therefore also about the level of trust that is created towards the other. Trustworthiness is a parameter to be used for letting an entity decide which action to adopt and which to delegate.

In our work, we analyze the role of trust in human-robot interactions and the integrated function of self-modeling and theory of mind for implementing human-robot interactions based on trust. In this paper, we focus on how to implement self-modeling in the NAO robot employing the BDI (belief, desire, intention [15] [3]) agent paradigm and the JASON framework [2] [1].

The final goal of our work is to implement interactions in teams of humans and robots so that collaboration is as efficient and reliable as possible. To do this, both entities involved in the interaction need to have a certain level of confidence in each other. Measuring trust in the other is made easier if the other has full knowledge of its capabilities, or if it can understand its own limitations. The more one of the two entities is aware of its limitations and abilities, the more the other entity can establish a level of confidence and create a productive and fruitful interaction. That is the founding factor of our work.

The idea is to exploit practical reasoning in conjunction with a well-known model of trust [6] [10] to let the robot create a model of its actions and capabilities, hence some kind of self-modeling ability. We claim that self-modeling is one of the essential components of trust-based interactions. Starting from the BDI practical reasoning cycle, we extend the deliberation process and the belief base representation in a way that allows the robot to decompose a plan into a set of actions strictly associated with the knowledge useful for performing each action. In this way, the robot creates and maintains a model of the "self" and can justify the results of its actions.

Justification is an essential result of the application of self-modeling abilities and, at the same time, a useful means for improving trustful interactions.

The rest of the paper is organized as follows: in section II we illustrate the motivations of our work along with some basic concepts from the trust theory and multi-agent systems domains, useful for understanding the solution proposed in section III; in section IV we show how we employed our theory in a real case




study; in section V we compare our work with some related works; finally, in section VI we present our discussion and conclusions.
II. THE TRUST THEORY AND AGENTS

Trust is a general term to explain what a human has in mind about how to rely on others. In the literature we can retrieve more than one definition of trust; these definitions are often partially or entirely related to one another.

One of the most accepted definitions of trust is the one by Gambetta [12]: trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends.

Trust is strongly related to the knowledge one has of the environment and of the other; knowledge of the environment is often the result of some kind of measure of trust. Trust is seen both as a mental state and as a social attitude, and it is related to the mental process leading to delegation: the degree of trust is used to rationally decide whether or not to delegate an action to another entity, the classic "on behalf of". It is for this reason that we chose to use agent technology: a software agent [19] [20] is born to act in place of the human, and all the theories and technologies about agents were born and have evolved around this pivotal point.

We refer to the work of Falcone and Castelfranchi [6] [10] [11] [8]. In [6] the authors consider:
• trust as a mental attitude allowing to predict and evaluate other agents' behaviors;
• trust as a decision to rely on another agent's abilities;
• trust as a behaviour, or an intentional act of entrusting.
Moreover, in [6], trust is considered as composed of a set of different figures that take part in a trust model:
• the trustor - an "intentional entity", such as a cognitive agent based on the BDI agent model, that has to pursue a specific goal;
• the trustee - an agent that can operate in the environment;
• the context C - the context in which the trustee performs its actions;
• τ - a "causal process" performed by the trustee, composed of a pair of an act α and a result p; g_X is always included in p and sometimes coincides with p;
• the goal g_X - defined as Goal_X(g).

The trust function can be defined as the trust of a trustor agent in a trustee agent, for a specific context, to perform acts realizing the outcome result. The trust model is described as a five-part relation:

\[ TRUST(X, Y, C, \tau, g_X) \tag{1} \]

where X is the trustor agent and Y is the trustee agent. X's goal, briefly g_X, is the most important element of this model; in some cases, the outcome result can be identified with the goal. For more insights on the model of trust and the trust theory, refer to [6].
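To make relation (1) concrete, consider a possible instantiation, anticipating the case study of section IV (ours, for illustration): the human is the trustor, the NAO robot is the trustee, the room is the context, and the goal is the box carried to the right position:

\[ TRUST(\mathit{human},\ \mathit{NAO},\ \mathit{room},\ \tau,\ \mathit{BoxInTheRightPosition}) \]

where τ is the causal process of finding, grasping, and carrying the box, together with its result.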
In this theory, trust is the mental counterpart of delegation, in the sense that trust denotes a specific mental state mainly composed of beliefs and goals, though it may be realized only through actions. Delegation is the result of a decision taken by the trustor to achieve a result by involving the trustee.

Several different levels of delegation have been proposed in [7] and [9]; they range from the situation in which the trustor directly delegates the trustee to the case in which the trustee autonomously acts on behalf of the trustor.

In our work, we assume an interaction to be a continuous operation of adoptions and delegations, and we focus only on the literal help shown in Fig. 1.

Fig. 1. Level of Delegation/Adoption, Literal Help.

In literal help, a client (trustor) and a contractor (trustee) act together to solve a problem: the trustor asks the trustee to solve a sub-goal by communicating to the trustee the set of actions (the plan) and the related result. In the literal help approach, the trustee strictly adopts all the sub-goals the trustor assigns to it [7] [9]. This corresponds to the notion of behaving "on behalf of" that, as said, is one of the key ideas of the multi-agent systems paradigm. Agents' features such as autonomy, proactivity, and rationality are powerful means that make trust-based agents ideal candidates for applications such as human-robot interaction. By employing the multi-agent paradigm, we may design and develop a multi-agent system in which a certain number of agents is deployed in the robots involved in the application domain.

Our idea is to use the belief-desire-intention (BDI) paradigm [3]. The decision-making model underpinning the BDI paradigm is known as practical reasoning, a reasoning process directed towards actions in which the agent's desires and beliefs supply the relevant factors [4]. Practical reasoning, in human terms, consists of two activities:
• deliberation, i.e., deciding which intentions to commit to;
• means-ends reasoning, i.e., deciding how to achieve them.
Each activity can be expressed as the ability to fix a behavior related to some intentions and to decide how to behave.

All these features of a BDI agent faithfully reflect what we need to realize a system based on the trust theory. Fig. 2 shows the standard practical reasoning cycle of a BDI agent; in the following sections, we illustrate how we changed this cycle to include self-modeling.

Fig. 2. The practical reasoning cycle, taken from [2].
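To fix ideas, the three ingredients of this cycle can be written down in Jason's AgentSpeak. The following is a minimal sketch of ours, with illustrative names, not code from the system described later:

    // beliefs: what the agent currently holds true
    batteryLevel(80).
    batteryLimit(20).

    // an initial goal: a desire the agent commits to as an intention
    !reachTarget.

    // a plan: deliberation selects it when the context condition holds;
    // means-ends reasoning then executes the body step by step
    +!reachTarget : batteryLevel(Y) & batteryLimit(X) & Y >= X
        <- goAhead.   // external action executed in the environment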




III. SELF-MODELING USING BDI AGENTS

How can we design and implement a team of robots that possess a model of themselves, of their actions, behaviors, and abilities? And, moreover, how can we allow robots to reason about themselves and infer information about their activities, such as why an action has failed?

The idea we propose is to use the multi-agent paradigm and the BDI theories and techniques for analyzing trust-based interactions among robots and humans working in a partially unknown environment. We propose to employ the model discussed in [10] [6] and to integrate it with the traditional BDI working cycle [2] (see section II).

For employing this model of trust, we consider the robot as the trustee and the human as the trustor. Assuming that the human delegates a part of his goals to the robot, the level of trust the human has in the robot may be derived from the robot's ability to justify the outcome of its actions, especially in the case of failure. Indeed, self-modeling is the ability to create a model of the several features realizing the self: among them, the knowledge of one's own capabilities, in the sense that the agent is aware of what it is able to do, and the knowledge of which actions may be performed on every part of the environment. Justifying an action is the result of reasoning about actions; it is a real implementation of the self-modeling ability of an agent (human or robot). For doing this, we propose to represent the robot's knowledge through actions and beliefs about those actions.

In particular, we claim that the module containing the justification of an action, or of a behavior, should comprise components allowing the agent to reason about the portion of knowledge useful for performing that action. This has to be done for each action of a complete plan. If an action is coupled with all the concepts it needs for being completed, then the performer may know at each moment whether and why an action is going wrong, and it may then motivate any eventual fault.

This scenario is the result of the implementation of the self-modeling ability and contributes to improving the trustful interaction, in the sense that trust, and hence the attitude to adopt or delegate, may change accordingly. For instance, let us suppose a person sitting at his desk in a room, having the goal of going out of the room; this aim may be pursued by performing some simple actions such as standing up, heading to the door, opening the door with the key, and going out. For each action the performer uses the knowledge he owns about the external environment and about himself, i.e., about his own capabilities: he has to be able to stand up, he has to know that a key is necessary for opening the door, he has to possess that key, and so on. Before and during each action the person continuously and iteratively checks and monitors whether he can perform the action. This can be translated into: having the knowledge of all the conditions allowing an action to be undertaken and finished.
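This example can be rendered directly in AgentSpeak, which makes the coupling between each action and its slice of knowledge visible in the plan contexts. The following is an illustrative excerpt of ours (predicates such as canStandUp and hasKey are hypothetical; plans for the remaining sub-goals are omitted):

    !goOut.    // the goal of leaving the room

    +!goOut : true <- !standUp; !headToDoor; !openDoor; !exit.

    // each step is guarded by exactly the knowledge it needs
    +!standUp  : canStandUp      <- standUp.
    +!openDoor : hasKey & atDoor <- openDoorWithKey.
    -!openDoor : not hasKey
        <- .print("I cannot open the door: I do not have the key.").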
In section II, in the trust function, the mental state of trust is achieved through actions: agent beliefs are implicit and do not appear as direct variables in the trust function. For the purpose of this work, we make beliefs explicit, so that each action of the model corresponds to one belief. This choice allowed us to map the theory of trust onto the BDI cycle and to carry the new BDI cycle directly over to the implementation, including Jason.

We needed to introduce a new representation in the model of τ from [6]:

\[ TRUST(X, Y, C, \tau, g_X) \tag{2} \]
\[ \text{where } \tau = (\alpha, p) \ \text{ and } \ g_X \equiv p. \tag{3} \]

By combining the trust theory model and the self-modeling approach, τ becomes a pair of a set of plans π_i and the related results p_i. Indeed, the trust model may now implement the BDI paradigm, breaking down actions and results into a combination of various arrangements of plans and sub-results.

Fig. 3. Mapping actions onto beliefs (relation 4).

The model of τ is formalized as:

\[ \tau = (\alpha, p) \quad \text{where} \quad \alpha = \bigcup_{i=1}^{n} \pi_i \quad \text{and} \quad p = \bigcup_{i=1}^{n} p_i \tag{4} \]




Moreover, each atomic plan π_i is the composition of an action γ_i and the portion of the belief base B_i needed for pursuing it, formalized as:

\[ \pi_i = \gamma_i \circ B_i \ \Rightarrow\ \alpha = \bigcup_{i=1}^{n} (\gamma_i \circ B_i) \tag{5} \]

B_i is a portion of the initial belief base of the overall BDI system; the ∘ operator represents the composition of each action of a plan with a subset of the belief base (Fig. 3).
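As a worked instance of (5), anticipating the holdBox action of the case study in section IV (the belief names are those of Fig. 5 and Algorithm 2):

\[ \pi_2 = \gamma_2 \circ B_2, \qquad \gamma_2 = \mathit{holdBox}, \qquad B_2 = \{\mathit{dropped},\ \mathit{visionParameters}\} \]

so the plan carrying holdBox is applicable, and can be justified, only through the beliefs about the box being dropped and about its vision parameters.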
This theoretical framework has been implemented on a real robotic platform, the NAO robot, exploiting Jason [2] and CArtAgO [16] for representing the BDI agents and the virtual environment. The environment model is created through the implementation of a perception module using the NAO. Actions in the real world are performed using a CArtAgO artifact through @OPERATION functions.
What happens while executing actions can be explained by referring to the BDI reasoning cycle. Once the robotic system has, in a first stage, been analyzed, designed, and put into execution, all the agents involved in the system acquire knowledge: they explore the belief base and all the initial goals they are responsible for (points 1-4 in Fig. 2). Then, the module implementing deliberation and means-ends reasoning (points 5-7 in Fig. 2) is enriched with a new function. Commonly at this point, while executing the BDI cycle, the queue of actions of each plan is elaborated to let the agent decide which action to perform. Since we are interested in the queue of actions and in all the knowledge useful for each action, we add a new function:

\[ Ac \leftarrow action(B_{\alpha_i}, Cap) \tag{6} \]

where B_{α_i} and Cap are, respectively, the portion of the belief base related to the action α_i and the set of the agent's capabilities for that action.
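In Jason, function (6) can be approximated with context rules that make an action selectable only when its slice of the belief base and the required capability are both present. This is a sketch under our naming assumptions (can/1 and capability/1 are illustrative predicates, not part of the system described here):

    // an action is applicable only if B_alpha_i and Cap both hold
    can(goAhead) :- batteryLimit(X) & batteryLevel(Y) & Y >= X & capability(walk).
    can(holdBox) :- dropped(false) & visionParameters(_) & capability(grasp).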
Agent execution and monitoring involve points 8-12 of the BDI cycle, which we enriched with a new portion of the algorithm, able to identify impossible(I, B) and ¬succeeded(I, B) (ref. point 9). In this step the effective trust interaction takes place: here we may assume that the robot is endowed with the abilities of re-planning, of justifying itself, and of requesting supplementary information from the human being, thus making the robot fully and trustfully autonomous and adaptive to each kind of situation it might face or learn about, depending on its capabilities and knowledge. The newly added functions, only for the case of justification, are shown in the following algorithm:

Algorithm 1:
foreach α_i do
    evaluate(α_i);
    J ← justify(α_i, B_{α_i});
end
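One natural way to realize justify(α_i, B_{α_i}) in Jason is through its native goal-deletion (failure-handling) plans, whose contexts inspect exactly the belief portion coupled with the failed action. A sketch of ours, with belief names taken from Algorithm 2:

    // fires when the intention for !goAhead fails; the guard reads B_1
    -!goAhead : batteryLimit(X) & batteryLevel(Y) & Y < X
        <- .print("Justification: battery at ", Y, ", below the limit ", X, ".").

    // fires when !holdBox fails; the guard reads B_2
    -!holdBox : dropped(true)
        <- .print("Justification: the box was dropped during transport.").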
Fig. 4 details all the elements and the mapping process among beliefs, actions, and plans.

Fig. 4. A block-diagram representation of the causal process τ.

Summarizing, τ is the goal that a trustor decides to assign to a trustee; it means that a BDI agent is assigned the responsibility to perform all the actions γ_i included in τ. The BDI implementation using the Jason and CArtAgO environments natively owns the means for realizing the trust model, implying the following elements (a minimal skeleton is sketched after this list):
• Jason Agent - a BDI agent that allows managing the NAO robot through an AgentSpeak formalization and the related asl file [2], with the following sections:
  – ASL Beliefs - the portion of the asl file encoding the agent's knowledge base through a set of beliefs; this set includes all the knowledge about the external environment and the inner one (the capabilities) of an agent;
  – ASL Rules - the way we represent beliefs that include norms, constraints, and domain rules;
  – ASL Goals - the asl file section devoted to encoding the list of goals of the application domain (the list of desires in the BDI logic);
  – ASL Plans - the section devoted to encoding the high-level logical inference leading to actions;
  – ASL Actions - the part of the asl file that lets the agent commit to actions, hence to a plan;
• CArtAgO Artifact - lets the agent perform a set of actions in the environment; the environment is represented in the CArtAgO virtual environment through all the beliefs acquired by the NAO's perception module; moreover, in the init function, all the initial beliefs are imported from the Jason agent file;
• CArtAgO @OPERATION - used to implement the agent's actions in the environment.
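The skeleton below shows how these sections may be laid out in a single asl file. It is a sketch of ours, not the project's actual file: the artifact class name env.NaoArtifact is hypothetical, while makeArtifact and focus are the standard CArtAgO operations for creating and observing an artifact.

    /* ASL Beliefs: external and inner (capability) knowledge */
    batteryLimit(20).
    capability(grasp).

    /* ASL Rules: norms, constraints and domain rules */
    lowBattery :- batteryLevel(Y) & batteryLimit(X) & Y < X.

    /* ASL Goals: the desires of the application domain */
    !boxInTheRightPosition.

    /* ASL Plans and Actions: how goals are committed to */
    +!boxInTheRightPosition
        <- makeArtifact("naoBody", "env.NaoArtifact", [], Id);
           focus(Id);   // the artifact's observable properties become beliefs
           !foundBox; !boxGrasped; !reachedPosition.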




Therefore, starting from:
• a reference model of environment and agents, where the key point is to consider the agent (hence the robot);
• all the internal elements of agents as part of the environment;
• the BDI cycle;
• the theory of trust by Falcone and Castelfranchi [6];
we implemented the trust model that allows realizing self-modeling abilities in the agent.

In the following section, we validate this idea by developing a human-robot team employing the NAO robot and one human.

IV. VALIDATION - THE ROBOT IN ACTION USING JASON

The case study we show in this section focuses on a human-robot team whose goal is to carry a certain number of objects from one position to another in a room. The work to be done is intended to be collaborative and cooperative. Ideally, and this is part of the continuation of the present work, both the human and the robot know the overall goals of the system and communicate with each other in order to commit to or delegate some goals. In this setup, we decided to simplify the example and considered only the case in which the robot is assigned (by code, thus simulating the command of the human) to pursue a specific goal, therefore the first type of delegation shown in section II.
The environment is composed of a set of objects marked with the landmarks useful for the NAO to work¹; the set of capabilities is built upon the NAO features, for instance being able to grasp a little box: the NAO is endowed with the capability of discriminating the dimensions of the box, and so on.

In this simplified case there is only one agent, the one managing the robot, which has the responsibility of carrying a specific object to a given position. The human, ideally the other agent of the system, indicates the object and its position.

Let us decompose the main goal, BoxInTheRightPosition, into three sub-goals (as shown in Fig. 5), namely FoundBox, BoxGrasped and ReachedPosition. Let us consider the sub-goal ReachedPosition: two of the actions that allow pursuing this goal are goAhead and holdBox². The NAO has to go ahead towards the objective and simultaneously hold the box. The beliefs associated with these actions refer to the concepts of the knowledge base that these actions affect. In this case, one of the concepts is the box, which has attributes such as its dimension, its color, its weight, its initial position, and so on. The approach we use for describing the environment results in a model containing all the actions that can be performed on a box, for instance holdBox, and a set of predicates representing the beliefs for each object, for instance hasVisionParameters or isDropped. These lead to the beliefs visionParameters and dropped, which are associated with the action holdBox through relation (5).

Fig. 5. A portion of the assignment tree for the case study: the goal BoxInTheRightPosition is decomposed into the sub-goals FoundBox, BoxGrasped and ReachedPosition; plans/actions include detect, approachBox, statusBattery, holdBox and openArms; beliefs include batteryLimit, batteryLevel, visionParameters and dropped.

In the following, a portion of code related to this part of the example is shown.

Algorithm 2: Portion of code that implements the τ decomposition.
+!ReachedPosition : true ← goAhead; holdBox. [τ]
+!goAhead : batteryLimit(X) & batteryLevel(Y) & Y < X ← say("My battery is exhausted. Please let me charge."). [γ1+]
+!goAhead : batteryLimit(X) & batteryLevel(Y) & Y ≥ X ← execActions. [γ1−]
B1: batteryLimit, batteryLevel
+!holdBox : dropped(X) & visionParameters(Y) & X == false ← execAct(Y). [γ2+]
+!holdBox : dropped(X) & visionParameters(Y) & X == true ← say("The box is dropped."). [γ2−]
B2: dropped, visionParameters

It is worth noting that the model we developed does not change the way we implement the agent; it only adds a way to match knowledge to actions.
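For completeness, a directly loadable Jason rendering of Algorithm 2 might look as follows. This is a sketch under our assumptions: Jason requires plan triggers and belief names to start with a lowercase letter, and say, execActions and execAct are assumed to be operations exposed by the NAO's CArtAgO artifact.

    +!reachedPosition : true                                  // tau
        <- goAhead; holdBox.
    +!goAhead : batteryLimit(X) & batteryLevel(Y) & Y < X     // gamma1+
        <- say("My battery is exhausted. Please let me charge.").
    +!goAhead : batteryLimit(X) & batteryLevel(Y) & Y >= X    // gamma1-
        <- execActions.
    +!holdBox : dropped(false) & visionParameters(V)          // gamma2+
        <- execAct(V).
    +!holdBox : dropped(true)                                 // gamma2-
        <- say("The box is dropped.").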
In Fig. 6, some pictures show the execution of the case study with the NAO robot.

Fig. 6. The NAO working on the BoxInTheRightPosition goal and the justification.

¹ All the technological implications of using the NAO robot are out of the scope of this section.
² For space concerns, in this paper we show only an excerpt of the whole AssignmentTree diagram, so only a few explanatory beliefs for each action are reported.

V. RELATED WORK

Most of the work in the literature explores the concept of trust, how to implement it and how to use it, from the viewpoint of an agent society working in an open and dynamic environment. The literature thus mostly focuses on organizations in




which multiple agents must interact with each other and decide which action to take based on a certain level of trust in each other. In our case, while sharing the concept of an open and dynamic environment, we focus on the theme of human and robot, and we explore the two-way role, human-robot and robot-human, of trust in the interactions between them.

Among the approaches in the literature that focus on trust-based interactions in open and dynamic environments, we briefly present here and compare our approach with some existing ones that, in our opinion, embody the basic features of most trust approaches.

In [14] decision making is based on trust evaluation through a decision-theoretic model that allows controlling trust decision-making activities. The leading point of this work is to make agents able to evaluate trust; a reputation mechanism enables the trustor to make a better evaluation. Our work shares the same objectives but focuses, we may say, at a different level of abstraction: we endow the agent with self-modeling abilities to give the trustor a means for delegating or performing the action by himself. We propose this as a more highly autonomous form of interaction and cooperation.

In [18] the trust model is applied to virtual organizations and uses a probabilistic theory that considers parameters calculated from past interactions; if some information is lacking or inaccurate, the model relies on third parties. In our case, instead, we pose the basis for giving the trustee the ability to ask for help when it does not possess the knowledge to perform the delegated action, thus always leaving the trustor the possibility to evaluate. It is a kind of reverse logic: it is no longer the trustor who is concerned with assessing his trust in the trustee, but the trustee who provides the means to do so.

In [13] a trust model based on reputation is presented; there, FIRE allows creating a measure of trust that can be used in different circumstances. This model overcomes the problem of evaluating trust in a dynamic environment, where it is difficult to consolidate the knowledge the agent has of the environment. The model we propose, instead, is at this moment constrained by the fact that the trustor establishes a level of trust by observing the other agent. However, endowing the trustee with self-modeling abilities gives the trustor the possibility to better evaluate the work of the other, in the sense that the trustor must not only imagine and then evaluate what the trustee is doing through his own beliefs, but can be aided by the explanations given by the trustee.

A different approach is proposed in [17]; here the authors use a meta-analysis for establishing which features of the robot may affect a trustworthy relationship from the point of view of the human. The robot is considered a participant in the team but not an active part of it, some kind of resource. From this work we may outline the main difference of our approach against all the others: we consider the trustee (agent, robot or whatever else) an active autonomous entity in the interaction.

VI. DISCUSSIONS AND CONCLUSIONS

In this work, we employed the trust model by Falcone and Castelfranchi for human-robot interaction in unknown and highly dynamic environments.

The primary goal of our work is to equip the robot with the self-modeling ability that allows it to be aware of its skills and failures. In this work, we made self-modeling explicit as the ability to justify oneself in the case of failure. In the future, we will extend the model with the ability to ask for help when the trustor's requests do not fall within the trustee's knowledge, and with the ability to autonomously re-plan.

The trust model has been integrated with a BDI-based part of the deliberation process to include self-modeling. The self-modeling ability is obtained by joining the plan a BDI agent commits to activating with the portion of the knowledge base useful for it.

We chose and used JASON and CArtAgO because they natively support everything that is part of the BDI theory and besides allow us to implement, without significant changes to the agent language paradigm, all the elements of the new reference model for the environment we use.

The outcomes we use in the various phases are not binding; we are inspired by Tropos [5] for modeling goals, actions and capabilities. However, we might use whatever methodological approach gives a view of goals and their decomposition, and of the decomposition into plans, in a way useful to match them with the related knowledge base.

In the future, we are going to develop and implement the complete trust model, which also implies the ability of an entity to understand what the other one is going to do. In this way, we aim at implementing human-robot interaction where each involved entity delegates or commits to an action on the basis of a kind of theory of mind of the other.
REFERENCES

[1] Rafael H. Bordini and Jomi F. Hübner. BDI agent programming in AgentSpeak using Jason. In Proceedings of the 6th International Conference on Computational Logic in Multi-Agent Systems, pages 143-164. Springer-Verlag, 2005.
[2] Rafael H. Bordini, Jomi Fred Hübner, and Michael Wooldridge. Programming Multi-Agent Systems in AgentSpeak Using Jason, volume 8. John Wiley & Sons, 2007.




[3] Michael Bratman. Intention, Plans, and Practical Reason, volume 10. Harvard University Press, Cambridge, MA, 1987.
[4] Michael E. Bratman. What is intention? Intentions in Communication, pages 15-32, 1990.
[5] Paolo Bresciani, Anna Perini, Paolo Giorgini, Fausto Giunchiglia, and John Mylopoulos. Tropos: An agent-oriented software development methodology. Autonomous Agents and Multi-Agent Systems, 8(3):203-236, 2004.
[6] Cristiano Castelfranchi and Rino Falcone. Trust Theory: A Socio-Cognitive and Computational Model, volume 18. John Wiley & Sons, 2010.
[7] Cristiano Castelfranchi and Rino Falcone. Delegation conflicts. Multi-Agent Rationality, pages 234-254, 1997.
[8] Cristiano Castelfranchi and Rino Falcone. Towards a theory of delegation for agent-based systems. Robotics and Autonomous Systems, 24(3-4):141-157, 1998.
[9] Rino Falcone and Cristiano Castelfranchi. The human in the loop of a delegated agent: The theory of adjustable social autonomy. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 31(5):406-418, 2001.
[10] Rino Falcone and Cristiano Castelfranchi. Social trust: A cognitive approach. In Trust and Deception in Virtual Societies, pages 55-90. Springer, 2001.
[11] Rino Falcone and Cristiano Castelfranchi. Trust dynamics: How trust is influenced by direct experiences and by trust itself. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), pages 740-747. IEEE, 2004.
[12] Diego Gambetta. Can we trust trust? In Trust: Making and Breaking Cooperative Relations. Department of Sociology, University of Oxford, 2000.
[13] Trung Dong Huynh, Nicholas R. Jennings, and Nigel R. Shadbolt. An integrated trust and reputation model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems, 13(2):119-154, 2006.
[14] Chris Burnett, Timothy J. Norman, and Katia Sycara. Trust decision-making in multi-agent systems. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence, 2011.
[15] Anand S. Rao, Michael P. Georgeff, et al. BDI agents: From theory to practice. In ICMAS, volume 95, pages 312-319, 1995.
[16] Alessandro Ricci, Mirko Viroli, and Andrea Omicini. CArtAgO: A framework for prototyping artifact-based environments in MAS. E4MAS, 6:67-86, 2006.
[17] Tracy Sanders, Kristin E. Oleson, Deborah R. Billings, Jessie Y. C. Chen, and Peter A. Hancock. A model of human-robot trust: Theoretical model development. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, volume 55, pages 1432-1436. SAGE Publications, 2011.
[18] W. T. Luke Teacy, Jigar Patel, Nicholas R. Jennings, and Michael Luck. TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12(2):183-198, 2006.
[19] Michael Wooldridge. An Introduction to MultiAgent Systems. John Wiley & Sons, 2009.
[20] Michael Wooldridge and Nicholas R. Jennings. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115-152, 1995.



