<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Endowing Robots with Self-Modeling Abilities for Trustful Human-Robot Interactions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Cristiano Castelfranchi</string-name>
          <email>cristiano.castelfranchi@istc.cnr.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antonio Chella</string-name>
          <email>antonio.chella@unipa.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rino Falcone</string-name>
          <email>rino.falcone@istc.cnr.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Lanza</string-name>
          <email>francesco.lanza@unipa.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valeria Seidita</string-name>
          <email>valeria.seidita@unipa.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Palermo</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>22</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>Robots involved in collaborative and cooperative tasks with humans cannot be programmed in all their functions. They are autonomous entities acting in a dynamic and often partially known environment. How to interact with the humans and the decision processes are determined by the knowledge on the environment, on the other and on itself. Also, the level of trust that each member of the team places in the other is crucial to creating a fruitful collaborative relationship. We hypothesize that one of the main components of a trustful relationship resides in the self-modeling abilities of the robot. The paper illustrates how to employ the model of trust by Falcone and Castelfranchi to include self-modeling skills in the NAO humanoid robot involved in trustworthy interactions. Self-modeling skills are then implemented employing features of the BDI paradigm.</p>
        <p>Index Terms: Human-Robot Interaction; Trust; Multi-agent systems; BDI; JASON</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Human-robot interaction (HRI) is the discipline investigating how to analyze and develop robots that interact with humans to pursue a common objective. Interaction is the process of working together to reach a goal; it can be viewed from different points of view and takes various forms, from direct command and clear response to the ability to autonomously decide how to pursue a goal. Every robot application presents some kind of interaction with humans through explicit or implicit communication. In the case of autonomous robots operating as teammates of humans, humans provide the goal and the robot has to be able to maintain knowledge about the environment and the tasks to perform in order to decide whether to adopt or delegate a task or an action.</p>
      <p>Autonomy, proactivity, and adaptivity are the features needed to decide, at each moment, which activity has to be fruitfully performed for efficiently pursuing an objective. From a cooperative and social point of view, that of human-robot team interaction, this means being able to decide which action to perform by itself and which one to delegate to another component of the team.</p>
      <p>This decision cannot be imposed during the design process, for many reasons ranging from the composition of the environment to the characteristics of the interacting entities. The environment is always strongly dynamic and often unknown. In the case of a team composed of only humans, the interaction with a teammate is based on the level of knowledge owned about the environment and about the “other”: especially, knowledge about the capabilities of the other, about the interpretation of the actions of the other concerning the shared goals, and therefore also about the level of trust that is created towards the other. Trustworthiness is a parameter to be used for letting an entity decide which action to adopt or which to delegate. In our work, we analyze the role of trust in human-robot interactions and the integrated function of self-modeling and theory of mind for implementing human-robot interactions based on trust. In this paper, we focus on how to implement self-modeling in the NAO robot employing the BDI (belief, desire, intention [<xref ref-type="bibr" rid="ref15">15</xref>] [<xref ref-type="bibr" rid="ref3">3</xref>]) agent paradigm and the JASON framework [<xref ref-type="bibr" rid="ref2">2</xref>] [<xref ref-type="bibr" rid="ref1">1</xref>].</p>
      <p>The final goal of our work is to implement interactions in teams of humans and robots so that collaboration is as efficient and reliable as possible. To do this, both entities involved in the interaction need to have a certain level of confidence in each other. Measuring trust in the other is made easier if the other has full knowledge of its capabilities, or if it can understand its own limitations. The more one of the two entities is aware of its limitations and abilities, the more the other entity can establish a level of confidence and create a productive and fruitful interaction. That is the founding factor of our work.</p>
      <p>The idea is to exploit practical reasoning in conjunction with a well-known model of trust [<xref ref-type="bibr" rid="ref6">6</xref>] [<xref ref-type="bibr" rid="ref10">10</xref>] to let the robot create a model of its actions and capabilities, hence some kind of self-modeling ability. We claim that self-modeling is one of the essential components in trust-based interactions. Starting from the BDI practical reasoning cycle, we extend the deliberation process and the belief base representation in a way that allows the robot to decompose a plan into a set of actions strictly associated with the knowledge useful for performing each action. In this way, the robot creates and maintains a model of the “self” and can justify the results of its actions. Justification is an essential result of the application of self-modeling abilities and, at the same time, a useful means for improving trustful interactions.</p>
      <p>The rest of the paper is organized as follows: in section II we illustrate the motivations of our work along with some basic concepts from the trust theory and multi-agent systems domain, useful for understanding the solution proposed in section III; in section IV we show how we employed our theory in a real case study; in section V we compare our work with some related works; and finally in section VI we draw some discussions and conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>II. THE TRUST THEORY AND AGENTS</title>
      <p>Trust is a general term for what a human has in mind when deciding how to rely on others. In the literature, we can retrieve more than one definition of trust; these definitions are often partially or entirely related to one another.</p>
      <p>
        One of the most accepted definitions of trust is the one by
Gambetta [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]: Trust is the subjective probability by which an
individual, A, expects that another individual, B, performs a
given action on which its welfare depends.
      </p>
      <p>
        Trust is strongly related to the knowledge one has of the environment and of the other; knowledge of the environment is often the result of some kind of measure of trust. Trust is seen both as a mental state and as a social attitude, and it is related to the mental process leading to delegation. The degree of trust is used to rationally decide whether or not to delegate an action to another entity, the classic “on behalf of”. It is for this reason that we choose to use agent technology: a software agent [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] is born to act in place of the human, and all the theories and technologies about agents were born and have evolved around this pivotal point.
      </p>
      <p>
        We refer to the work of Falcone and Castelfranchi [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] the authors consider:
• trust as a mental attitude allowing an agent to predict and evaluate other agents’ behaviors;
• trust as a decision to rely on another agent’s abilities;
• trust as a behaviour, that is, an intentional act of entrusting.
Moreover, in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], trust is considered as composed of a set of different figures that take part in a trust model:
• the trustor - an “intentional entity”, such as a cognitive agent based on the BDI agent model, that has to pursue a specific goal;
• the trustee - an agent that can operate in the environment;
• the context C - the environment in which the trustee performs its actions;
• τ - a “causal process” performed by the trustee, composed of a couple of an act α and a result p; the goal gX is included in p and sometimes coincides with p;
• the goal gX - defined as GoalX(g).
      </p>
      <p>
        The trust function can be defined as the trust of a trustor agent in a trustee agent, in a specific context, to perform acts realizing the outcome result. The trust model is thus described as a relation among the five figures above:
TRUST(X, Y, C, τ, gX) (1)
where X is the trustor agent and Y is the trustee agent. X’s goal, or briefly gX, is the most important element of this model; in some cases, the outcome result can be identified with the goal. For more insights on the model of trust and the trust theory refer to [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>In this theory, trust is the mental counterpart of delegation, in the sense that trust denotes a specific mental state mainly composed of beliefs and goals, but it may be realized only through actions. Delegation is the result of a decision taken by the trustor to achieve a result by involving the trustee.</p>
      <p>Fig. 1. Level of Delegation/Adoption, Literal Help</p>
      <p>
        Several different levels of delegation have been proposed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]; they range from a situation in which the trustor directly delegates the trustee to the case in which the trustee autonomously acts on behalf of the trustor.
      </p>
      <p>In our work, we regard an interaction as a continuous sequence of adoptions and delegations, and we focus only on the literal help shown in Fig. 1.</p>
      <p>
        In the literal help, a client (trustor) and a contractor (trustee) act together to solve a problem: the trustor asks the trustee to solve a sub-goal by communicating to the trustee the set of actions (the plan) and the related result. In the literal help approach, the trustee strictly adopts all the sub-goals the trustor assigns to him [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This corresponds to the notion of behaving “on behalf of” that, as said, is one of the key ideas in the multi-agent systems paradigm. Agents’ features, such as autonomy, proactivity and rationality, are powerful means that make trust-based agents ideal candidates for applications such as human-robot interaction. By employing the multi-agent paradigm, we may design and develop a multi-agent system in which a certain number of agents are deployed in the robots involved in the application domain.
      </p>
      <p>
        Our idea is to use the belief-desire-intention (BDI) paradigm [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The decision-making model underpinning the BDI paradigm is known as practical reasoning, a reasoning process directed towards actions in which agents’ desires and agents’ beliefs supply the relevant factors [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Practical reasoning, in human terms, consists of two activities:
• deliberation, which fixes the intentions to pursue;
• means-ends reasoning, which decides how to achieve them.
      </p>
      <p>The first activity fixes a behavior related to some intentions; the second decides how to behave in order to realize them. These features of a BDI agent faithfully reflect what we need to realize a system based on the trust theory.</p>
      <p>Fig. 2 shows the standard practical reasoning cycle of a BDI agent. In the following sections, we illustrate how we changed this cycle to include self-modeling.</p>
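      <p>To make the cycle concrete, the following is a minimal Java sketch of the loop of Fig. 2; the names (brf, options, filter, plan) follow the standard BDI formulation and are illustrative placeholders, not Jason’s actual API.</p>
      <preformat>
import java.util.*;

// Illustrative sketch of the practical reasoning cycle of Fig. 2, following
// the standard BDI formulation; none of these names belong to Jason's API.
public class BdiCycleSketch {
    Set&lt;String&gt; beliefs = new HashSet&lt;&gt;();
    Set&lt;String&gt; desires = new HashSet&lt;&gt;();
    Set&lt;String&gt; intentions = new HashSet&lt;&gt;();

    public void run() {
        while (true) {
            String percept = sense();                          // observe the environment
            beliefs = brf(beliefs, percept);                   // belief revision (points 1.-4.)
            desires = options(beliefs, intentions);            // deliberation: candidate goals
            intentions = filter(beliefs, desires, intentions); // commitment (points 5.-7.)
            for (String action : plan(beliefs, intentions)) {  // means-ends reasoning
                execute(action);                               // execution, monitoring (8.-12.)
            }
        }
    }

    // Placeholder implementations standing in for the real reasoner and robot.
    String sense() { return "percept"; }
    Set&lt;String&gt; brf(Set&lt;String&gt; b, String p) { b.add(p); return b; }
    Set&lt;String&gt; options(Set&lt;String&gt; b, Set&lt;String&gt; i) { return new HashSet&lt;&gt;(i); }
    Set&lt;String&gt; filter(Set&lt;String&gt; b, Set&lt;String&gt; d, Set&lt;String&gt; i) { return d; }
    List&lt;String&gt; plan(Set&lt;String&gt; b, Set&lt;String&gt; i) { return new ArrayList&lt;&gt;(i); }
    void execute(String a) { System.out.println("executing " + a); }
}
      </preformat>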
    </sec>
    <sec id="sec-3">
      <title>III. SELF-MODELING USING BDI AGENTS</title>
      <p>How can we design and implement a team of robots that possess a model of themselves, of their actions, behaviors, and abilities? And further, how can we allow robots to reason about themselves and infer information about their activities, such as why an action has failed?</p>
      <p>
        The idea we propose is to use the multi-agent paradigm
and the BDI theories and techniques for analyzing trust-based
interactions among robots and humans working in a partially
unknown environment. We propose to employ the model
discussed in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and to integrate it with the traditional
BDI working cycle [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] (see section II).
      </p>
      <p>For employing this model of trust, we consider the robot as the trustee and the human as the trustor. Assuming that the human delegates a part of his goals to the robot, the level of trust the human has in the robot may be derived from the robot’s ability to justify the outcome of its actions, especially in the case of failure. Indeed, self-modeling is the ability to create a model of the several features realizing the self: among them, the knowledge of one’s own capabilities, in the sense that the agent is aware of what it is able to do, and the knowledge of which actions may be performed on every part of the environment. Justifying an action is the result of reasoning about actions; it is a concrete implementation of the self-modeling ability of an agent (human or robot). To this end, we propose to represent the robot’s knowledge through actions and beliefs about those actions.</p>
      <p>In particular, we claim that the module containing the justification of an action, or of a behavior, should comprise components allowing the agent to reason about the portion of knowledge useful for performing that action. This has to be done for each action of a complete plan. If an action is coupled with all the concepts it needs to be completed, then the performer may know at each moment whether and why an action is going wrong, and may then motivate any eventual faults.</p>
      <p>This scenario is the result of the implementation of the self-modeling ability and contributes to improving the trustful interaction, in the sense that trust, and hence the attitude to adopt or delegate, may change accordingly. For instance, let us suppose a person sitting at his desk in a room has the goal of going out of the room; this aim may be pursued by performing some simple actions such as standing up, heading to the door, opening the door with the key, and going out. For each action the performer uses the knowledge he owns about the external environment and about himself and his own capabilities: he has to be able to stand up, he has to know that a key is necessary for opening the door, he has to possess that key, and so on. Before and during each action the person continuously and iteratively checks and monitors whether he can perform the action. This can be translated into: having the knowledge of all the conditions allowing an action to be undertaken and finished.</p>
      <p>In the trust function of section II, the mental state of trust is realized through actions; agent beliefs are implicit and do not appear as direct variables in the trust function. For the purpose of this work, we made beliefs explicit so that each action of the model corresponds to one belief. This choice allowed us to map the theory of trust onto the BDI cycle and to carry the new BDI cycle over directly to the implementation in Jason.</p>
      <p>
        We needed to introduce a new representation of the model of τ from [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. By combining the trust theory model and the self-modeling approach, τ becomes a couple of a set of plans πi and the related results pi; the trust model may thus implement the BDI paradigm, breaking down the act α and the result p into a combination of plans and sub-results:
τ = (α, p) (2)
τ = ({π1, . . . , πn}, {p1, . . . , pn}) (3)
Moreover, each atomic plan πi is the composition of an action γi and the portion Bi of the belief base needed for pursuing it, formalized as:
πi = γi ◦ Bi ⇒ α = ∪i=1..n (γi ◦ Bi) (5)
Bi is a portion of the initial belief base of the overall BDI system. The ◦ operator represents the composition of each action of a plan with a subset of the belief base (Fig. 3).
      </p>
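      <p>As an illustration only, relation (5) could be encoded by pairing each action γi with the beliefs Bi it needs; the sketch below does so in Java, with action and belief names taken from Algorithm 2 of section IV (the AtomicPlan record is our placeholder, not part of Jason or CArtAgO).</p>
      <preformat>
import java.util.List;
import java.util.Set;

// pi_i = gamma_i ◦ B_i (relation (5)): an atomic plan couples an action
// with the portion of the belief base needed to pursue it.
record AtomicPlan(String gamma, Set&lt;String&gt; beliefs) {}

class PlanComposition {
    // alpha is the union of the composed atomic plans of the overall plan
    static final List&lt;AtomicPlan&gt; alpha = List.of(
        new AtomicPlan("goAhead", Set.of("batteryLimit", "batteryLevel")),
        new AtomicPlan("holdBox", Set.of("dropped", "visionParameters")));
}
      </preformat>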
      <p>
        This theoretical framework has been implemented on a real robotic platform (the NAO robot) exploiting Jason [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and CArtAgO [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] for representing the BDI agents and the virtual environment. The environment model is created through a perception module implemented on the NAO. Actions in the real world are performed by CArtAgO artifacts through their @Operation functions.
      </p>
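      <p>On the environment side, a CArtAgO artifact exposes operations as annotated methods; the following minimal artifact is an illustrative sketch (the motion and sensing calls are hypothetical placeholders, not the actual NAO API):</p>
      <preformat>
import cartago.*;

// Minimal CArtAgO artifact sketch: observable properties are perceived as
// beliefs by the Jason agent; @OPERATION methods implement its actions.
public class RobotBodyArtifact extends Artifact {

    void init() {
        defineObsProperty("batteryLevel", 100);
        defineObsProperty("dropped", false);
    }

    @OPERATION
    void goAhead() {
        // hypothetical call into the robot's motion layer would go here
        getObsProperty("batteryLevel").updateValue(readBattery());
    }

    @OPERATION
    void holdBox() {
        boolean lost = false; // would come from the robot's grasp sensors
        getObsProperty("dropped").updateValue(lost);
    }

    private int readBattery() { return 100; } // placeholder sensor reading
}
      </preformat>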
      <p>What happens while executing actions can be explained by referring to the BDI reasoning cycle. Once the robotic system has been analyzed, designed and put into execution, all the agents involved in the system acquire knowledge: they explore the belief base and all the initial goals they are responsible for (points 1. 2. 3. 4. of Fig. 2). Then, the module implementing deliberation and means-ends reasoning (points 5. 6. 7. of Fig. 2) is enriched with a new function. Commonly, at this point of the BDI cycle, the queue of actions for each plan is elaborated to let the agent decide which action to perform. Since we are interested in the queue of actions and in all the knowledge useful for each action, we add a new function:</p>
      <p>Ac ← action(Bαi, Cap) (6)
where Bαi and Cap are respectively the portion of the belief base related to the action αi and the set of the agent’s capabilities for that action.</p>
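      <p>An illustrative reading of function (6) in Java: an action αi is selected only if the related portion Bαi of the belief base holds and the agent owns the capability for it (all names here are our placeholders):</p>
      <preformat>
import java.util.Optional;
import java.util.Set;

// Sketch of relation (6), Ac &lt;- action(B_alpha_i, Cap): an action enters the
// executable queue only if its required beliefs hold and the capability exists.
final class ActionSelection {
    static Optional&lt;String&gt; action(String alpha, Set&lt;String&gt; bAlpha,
                                   Set&lt;String&gt; currentBeliefs, Set&lt;String&gt; cap) {
        boolean beliefsHold = currentBeliefs.containsAll(bAlpha);
        boolean capable = cap.contains(alpha);
        return (beliefsHold &amp;&amp; capable) ? Optional.of(alpha) : Optional.empty();
    }
}
      </preformat>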
      <p>Agent execution and monitoring involve points 8. 9. 10. 11. 12. of the BDI cycle, which we enriched with a new portion of the algorithm able to identify the conditions impossible(I,B) and ¬succeeded(I,B) (ref. point 9.).</p>
      <p>In this step the effective trust interaction takes place: here we may assume that the robot is endowed with the abilities to re-plan, to justify itself and to request supplementary information from the human being, thus making the robot fully and trustfully autonomous and adaptive to each kind of situation it might face or learn from, depending on its capabilities and knowledge. The newly added functions, only for the case of justification, are shown in the following algorithm:</p>
      <sec id="sec-3-1">
        <title>Algorithm 1:</title>
      </sec>
      <sec id="sec-3-2">
        <title>1 foreach αi do</title>
        <p>2 evaluate(αi);
3 J ← justify(αi,Bαi );
4 end</p>
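        <p>Under the same illustrative encoding, the justify step of Algorithm 1 can be sketched as building, for each action, a justification from the required beliefs that did not hold:</p>
        <preformat>
import java.util.HashSet;
import java.util.Set;

// Sketch of justify(alpha_i, B_alpha_i) from Algorithm 1: the justification
// names the action and the required beliefs missing from the belief base.
final class Justification {
    static String justify(String alpha, Set&lt;String&gt; bAlpha, Set&lt;String&gt; beliefs) {
        Set&lt;String&gt; missing = new HashSet&lt;&gt;(bAlpha);
        missing.removeAll(beliefs);            // required beliefs that failed to hold
        return missing.isEmpty()
            ? alpha + " succeeded"
            : alpha + " failed because " + missing + " did not hold";
    }
}
        </preformat>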
        <p>Fig. 4 details all the elements and the mapping process
among beliefs, actions and plans.</p>
        <p>
          Summarizing, τ is the goal that a trustor decides to assign to a trustee; it means that a BDI agent is assigned the responsibility to perform all the actions γi included in τ. The BDI implementation using the Jason and CArtAgO environments natively provides the means for realizing the trust model, involving:
• Jason Agent - a BDI agent that allows managing the NAO robot through an AgentSpeak formalization and the related asl file [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], with the following:
– ASL Beliefs - the portion of the asl file encoding the agent’s knowledge base through a set of beliefs; the set of beliefs includes all the knowledge about the external environment and the inner one (the capabilities) of an agent;
– ASL Rules - the way we use to represent beliefs that include norms, constraints and domain rules;
– ASL Goals - the asl file section devoted to encoding the list of goals of the application domain (the list of desires in the BDI logic);
– ASL Plans - the section devoted to encoding the high-level logic inference behind actions;
– ASL Actions - the part of the asl file that lets the agent commit to actions, hence to a plan;
• CArtAgO Artifact - lets the agent perform a set of actions in the environment; the environment is represented in the CArtAgO virtual environment through all the beliefs acquired by NAO’s perception module, and in the init function all the initial beliefs are imported from the Jason agent file;
• CArtAgO @Operation - used to implement the agent’s actions in the environment.
        </p>
        <p>
          Therefore, starting from:
• a reference model of environment and agents where the key point is to consider the agent (hence the robot);
• all the internal elements of agents as part of the environment;
• the BDI cycle;
• the theory of trust by Falcone and Castelfranchi [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ];
we implemented the trust model allowing to realize self-modeling abilities in the agent.
        </p>
        <p>In the following section, we validate this idea by developing
a human-robot team employing the NAO robot and one human.</p>
      </sec>
    </sec>
    <sec id="sec-3b">
      <title>IV. VALIDATION - THE ROBOT IN ACTION USING JASON</title>
      <p>The case study we show in this section focuses on a human-robot team whose goal is to carry a certain number of objects from one position to another in the room. The work to be done is intended to be collaborative and cooperative. Ideally, and this is part of the continuation of the present work, both the human and the robot know the overall goals of the system and communicate with each other in order to commit to or delegate some goals. In this setup, we decided to simplify the example and considered only the case in which the robot is assigned (by code, thus simulating the command of the human) to pursue a specific goal, therefore the first type of delegation shown in section II.</p>
      <p>The environment is composed of a set of objects marked with the landmarks useful for the NAO to work1. The set of capabilities is built upon the NAO features: for instance, being able to grasp a little box; the NAO is endowed with the capability of discriminating the dimensions of the box, and so on.</p>
        <p>In this simplified case there is only one agent, the one
managing the robot, which has the responsibility of carrying
a specific object to a given position. The human, ideally the
other agent of the system, indicates the object and its position.</p>
      <p>Let us suppose to decompose the main goal (as shown in Fig. 5) BoxInTheRigthPosition into three sub-goals, namely FoundBox, BoxGrasped and ReachedPosition. Let us consider the sub-goal ReachedPosition; two of the actions that allow pursuing this goal are goAhead and holdBox2: the NAO has to go ahead towards the objective and contemporarily hold the box. The beliefs associated with these actions refer to the concepts of the knowledge base these actions affect. In this case, one of the concepts is the box, which has attributes like its dimension, its color, its weight, its initial position and so on. The approach we use for describing the environment results in a model containing all the actions that can be made on a box, for instance holdBox, and a set of predicates representing the beliefs for each object, for instance hasVisionParameters or isDropped. They lead to the beliefs visionParameters and dropped that are associated with the action holdBox through relation (5).</p>
      <p>In the following, a portion of code related to this part of the example is reported.</p>
      <p>Algorithm 2: Portion of code that implements the τ decomposition.
1 +!ReachedPosition : true ← goAhead; holdBox. [τ];
2 +!goAhead : batteryLimit(X) &amp; batteryLevel(Y) &amp; Y &lt; X ← say(“My battery is exhausted. Please let me charge.”). [γ1+];
3 +!goAhead : batteryLimit(X) &amp; batteryLevel(Y) &amp; Y ≥ X ← execActions. [γ1−];
4 B1: batteryLimit, batteryLevel;
5 +!holdBox : dropped(X) &amp; visionParameters(Y) &amp; X == false ← execAct(Y). [γ2+];
6 +!holdBox : dropped(X) &amp; visionParameters(Y) &amp; X == true ← say(“The box is dropped.”). [γ2−];
7 B2: dropped, visionParameters;</p>
      <p>It is worth noting that the model we developed does not change the way we implement the agent; it only adds a way to match knowledge to actions.</p>
      <p>Fig. 6 shows some pictures of the execution of the case study with the NAO robot.</p>
      <p>1 All the technological implications of using the NAO robot are out of the scope of this section.</p>
      <p>2 For space concerns, in this paper we show only an excerpt of the whole AssignmentTree diagram, so only a few explanatory beliefs for each action are reported.</p>
    </sec>
    <sec id="sec-4">
      <title>V. RELATED WORK</title>
      <p>Most of the work in the literature explores the concept of trust, how to implement it and how to use it, from the viewpoint of an agent society working in an open and dynamic environment. The literature thus mostly focuses on organizations in which multiple agents must interact with each other and decide which action to take based on a certain level of trust in each other. In our case, while sharing the setting of an open and dynamic environment, we focus on the human-robot theme and explore the two-way role, human-to-robot and robot-to-human, of trust in the interactions between them.</p>
      <p>Among the approaches in the literature that focus on trust-based interactions in open and dynamic environments, we briefly present and compare our approach with some existing ones that, in our opinion, embody the basic features of most trust approaches.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] decision making is based on trust evaluation through a decision-theoretic model that allows controlling trust decision-making activities. The leading point of this work is to make agents able to evaluate trust; some reputation mechanism enables the trustor to make a better evaluation. Our work shares the same objectives but focuses, we may say, at a different level of abstraction: we endow the agent with self-modeling abilities to give the trustor a means for delegating the action or performing it by himself. We propose this as a more autonomous form of interaction and cooperation.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] the trust model is applied to virtual organizations and uses a probabilistic theory that considers parameters calculated from past interactions; if some information is lacking or inaccurate, the model relies on third parties. In our case, instead, we lay the basis for giving the trustee the ability to ask for help when it does not possess the knowledge to perform the delegated action, thus always leaving the trustor the possibility to evaluate. It is a kind of reverse logic: it is no longer the trustor who is concerned with assessing his trust in the trustee, but the trustee who provides the means to do so.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] a trust model based on reputation is presented; there, FIRE allows creating a measure of trust that can be used in different circumstances. This model overcomes the problem of evaluating trust in a dynamic environment, where it is difficult to consolidate the knowledge the agent has of the environment. The model we propose, instead, is at the moment constrained by the fact that the trustor establishes a level of trust by observing the other agent. However, endowing the trustee with self-modeling abilities gives the trustor the possibility to better evaluate the work of the other: the trustor must not only imagine and then evaluate what the trustee is doing on the basis of his own beliefs, but can also rely on the explanations given by the trustee.
      </p>
      <p>
        A different approach is proposed in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], where the authors use a meta-analysis to establish which features of the robot may affect a trustworthy relationship from the point of view of the human. The robot is considered a participant in the team but not an active part of it, some kind of resource. From this work we may outline the main difference of our approach from all the others: we consider the trustee (agent, robot or whatever else) an active autonomous entity in the interaction.
      </p>
    </sec>
    <sec id="sec-5">
      <title>VI. DISCUSSIONS AND CONCLUSIONS</title>
      <p>In this work, we employed the trust model by Falcone
and Castelfranchi for human-robot interaction in unknown and
highly dynamic environments.</p>
      <p>The primary goal of our work is to equip the robot with the self-modeling ability that allows it to be aware of its skills and failures. In this work, we made self-modeling explicit as the ability to justify oneself in the case of failure. In the future, we will extend the model with the ability to ask for help when the trustor’s requests do not fall within the trustee’s knowledge, and with the ability to re-plan autonomously.</p>
      <p>The trust model has been integrated with a BDI-based deliberation process to include self-modeling. The self-modeling ability is obtained by joining the plan a BDI agent commits to activating with the portion of the knowledge base useful for it.</p>
      <p>We chose and used JASON and CArtAgO because they natively support everything that is part of the BDI theory and, in addition, allow us to implement, without significant changes to the agent language paradigm, all the elements of the new reference model for the environment we use.</p>
      <p>
        The notations we use in the various phases are not binding; we are inspired by Tropos [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] for modeling goals, actions and capabilities. However, we might use whatever methodological approach gives a view of goals and of their decomposition into plans, in a way useful to match them with the related knowledge base.
      </p>
      <p>In the future, we are going to develop and implement the complete trust model, which also implies the ability of an entity to understand what the other one is going to do. In this way, we aim at implementing human-robot interactions where each involved entity delegates or commits to an action on the basis of a kind of theory of mind of the other.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Rafael H</given-names>
            <surname>Bordini</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jomi F</given-names>
            <surname>Hübner</surname>
          </string-name>
          .
          <article-title>BDI agent programming in AgentSpeak using Jason</article-title>
          .
          <source>In Proceedings of the 6th international conference on Computational Logic in Multi-Agent Systems</source>
          , pages
          <fpage>143</fpage>
          -
          <lpage>164</lpage>
          . Springer-Verlag,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Rafael H</given-names>
            <surname>Bordini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jomi Fred</given-names>
            <surname>Hübner</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <source>Programming multi-agent systems in AgentSpeak using Jason</source>
          , volume
          <volume>8</volume>
          . John Wiley &amp; Sons,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Bratman</surname>
          </string-name>
          . Intention, plans, and practical reason, volume
          <volume>10</volume>
          . Harvard University Press Cambridge, MA,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Michael E</given-names>
            <surname>Bratman</surname>
          </string-name>
          .
          <article-title>What is intention</article-title>
          . Intentions in communication, pages
          <fpage>15</fpage>
          -
          <lpage>32</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Bresciani</surname>
          </string-name>
          , Anna Perini, Paolo Giorgini, Fausto Giunchiglia, and
          <string-name>
            <given-names>John</given-names>
            <surname>Mylopoulos</surname>
          </string-name>
          .
          <article-title>Tropos: An agent-oriented software development methodology</article-title>
          .
          <source>Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>8</volume>
          (
          <issue>3</issue>
          ):
          <fpage>203</fpage>
          -
          <lpage>236</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Cristiano</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rino</given-names>
            <surname>Falcone</surname>
          </string-name>
          .
          <article-title>Trust theory: A sociocognitive and computational model</article-title>
          , volume
          <volume>18</volume>
          . John Wiley &amp; Sons,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Cristiano</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rino</given-names>
            <surname>Falcone</surname>
          </string-name>
          .
          <article-title>Delegation conflicts</article-title>
          .
          <source>Multiagent rationality</source>
          , pages
          <fpage>234</fpage>
          -
          <lpage>254</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Cristiano</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rino</given-names>
            <surname>Falcone</surname>
          </string-name>
          .
          <article-title>Towards a theory of delegation for agent-based systems</article-title>
          .
          <source>Robotics and Autonomous Systems</source>
          ,
          <volume>24</volume>
          (
          <issue>3-4</issue>
          ):
          <fpage>141</fpage>
          -
          <lpage>157</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Rino</given-names>
            <surname>Falcone</surname>
          </string-name>
          and
          <string-name>
            <given-names>Cristiano</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          .
          <article-title>The human in the loop of a delegated agent: The theory of adjustable social autonomy</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans</source>
          ,
          <volume>31</volume>
          (
          <issue>5</issue>
          ):
          <fpage>406</fpage>
          -
          <lpage>418</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Rino</given-names>
            <surname>Falcone</surname>
          </string-name>
          and
          <string-name>
            <given-names>Cristiano</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          .
          <article-title>Social trust: A cognitive approach</article-title>
          .
          <source>In Trust and deception in virtual societies</source>
          , pages
          <fpage>55</fpage>
          -
          <lpage>90</lpage>
          . Springer,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Rino</given-names>
            <surname>Falcone</surname>
          </string-name>
          and
          <string-name>
            <given-names>Cristiano</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          .
          <article-title>Trust dynamics: How trust is influenced by direct experiences and by trust itself</article-title>
          .
          <source>In Autonomous Agents and Multiagent Systems</source>
          ,
          <year>2004</year>
          .
          <article-title>AAMAS 2004</article-title>
          . Proceedings of the Third International Joint Conference on, pages
          <fpage>740</fpage>
          -
          <lpage>747</lpage>
          . IEEE,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Diego</given-names>
            <surname>Gambetta</surname>
          </string-name>
          .
          <article-title>Can we trust trust?</article-title>
          In Trust: Making and Breaking Cooperative Relations, Department of Sociology, University of Oxford,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Trung Dong</given-names>
            <surname>Huynh</surname>
          </string-name>
          , Nicholas R Jennings, and Nigel R Shadbolt.
          <article-title>An integrated trust and reputation model for open multi-agent systems</article-title>
          .
          <source>Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>13</volume>
          (
          <issue>2</issue>
          ):
          <fpage>119</fpage>
          -
          <lpage>154</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Chris</given-names>
            <surname>Burnett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Timothy J</given-names>
            <surname>Norman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Katia</given-names>
            <surname>Sycara</surname>
          </string-name>
          .
          <article-title>Trust decision-making in multi-agent systems</article-title>
          .
          <source>In Proceedings of the 22nd International Joint Conference on Artificial Intelligence</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Anand S</given-names>
            <surname>Rao</surname>
          </string-name>
          , Michael P Georgeff, et al.
          <article-title>BDI agents: from theory to practice</article-title>
          .
          <source>In ICMAS</source>
          , volume
          <volume>95</volume>
          , pages
          <fpage>312</fpage>
          -
          <lpage>319</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Alessandro</given-names>
            <surname>Ricci</surname>
          </string-name>
          , Mirko Viroli, and
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Omicini</surname>
          </string-name>
          .
          <article-title>CArtAgO: A framework for prototyping artifact-based environments in MAS</article-title>
          .
          <source>E4MAS</source>
          ,
          <volume>6</volume>
          :
          <fpage>67</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Tracy</given-names>
            <surname>Sanders</surname>
          </string-name>
          , Kristin E Oleson, Deborah R Billings, Jessie YC Chen, and
          <string-name>
            <surname>Peter A Hancock</surname>
          </string-name>
          .
          <article-title>A model of human-robot trust: Theoretical model development</article-title>
          .
          <source>In Proceedings of the human factors and ergonomics society annual meeting</source>
          , volume
          <volume>55</volume>
          , pages
          <fpage>1432</fpage>
          -
          <lpage>1436</lpage>
          . SAGE Publications Sage CA: Los Angeles, CA,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>W T Luke</given-names>
            <surname>Teacy</surname>
          </string-name>
          , Jigar Patel, Nicholas R Jennings, and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Luck</surname>
          </string-name>
          .
          <article-title>Travos: Trust and reputation in the context of inaccurate information sources</article-title>
          .
          <source>Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>12</volume>
          (
          <issue>2</issue>
          ):
          <fpage>183</fpage>
          -
          <lpage>198</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <article-title>An introduction to multiagent systems</article-title>
          . John Wiley &amp; Sons,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          and
          <string-name>
            <given-names>Nicholas R</given-names>
            <surname>Jennings</surname>
          </string-name>
          .
          <article-title>Intelligent agents: Theory and practice</article-title>
          .
          <source>The knowledge engineering review</source>
          ,
          <volume>10</volume>
          (
          <issue>2</issue>
          ):
          <fpage>115</fpage>
          -
          <lpage>152</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>