<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Complete Characterization of Epistemic Reasoning: the Notion of Trust</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics</institution>
          ,
          <addr-line>Computer Science and Physics</addr-line>
          ,
          <institution>University of Udine</institution>
          ,
          <addr-line>Via delle Scienze 206, 33100 Udine</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Designing autonomous agents that interact with others to perform complex tasks has always been one of the main objectives of the Artificial Intelligence community. For such systems to be employed in complex scenarios, where the information about others is key (e.g., self-driving cars), it is necessary to define robust formalisms that allow each agent to act considering her beliefs on both: i) the state of the world; and ii) the other agents' perspective of it. The branch of AI that studies such formalisms is known in the literature as Multi-Agent Epistemic Planning (MEP). The epistemic action-based language mAρ is, to the best of our knowledge, the most comprehensive tool to model MEP domains, but it still lacks concepts that are necessary to reason on real-world scenarios. In this paper we introduce the actions (un)trustworthy announcement and (mis)trustworthy announcement for mAρ. These actions increase the language's expressiveness by introducing the notion of trust, therefore allowing for a more profound representation of real-world scenarios. In particular, we will provide the characterization, along with some desired properties, of the aforementioned actions' transition functions. Finally, we will discuss the importance of formalizing the concept of trust in the MEP problem.</p>
      </abstract>
      <kwd-group>
        <kwd>Epistemic Action Languages</kwd>
        <kwd>Planning</kwd>
        <kwd>Multi-agent</kwd>
        <kwd>Knowledge/Belief Representation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Recently, techniques derived from the fields of automated reasoning and knowledge representation have been heavily exploited both in our daily life and in industry. The natural evolution of such applications, i.e., systems that involve hundreds of agents, each acting upon her beliefs to achieve her own goals (e.g., self-driving cars), is going to be widely deployed in just a few years. The branch of AI interested in studying and modeling such agent-based technologies is referred to as automated planning. (Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).)</p>
      <p>In particular, multi-agent planning [1, 6-8, 13] provides a powerful tool to model scenarios comprised of agents that interact with each other. To maximize the potential of such autonomous systems, each agent should be able to reason on both: i) her perspective of the "concrete" world; and ii) her beliefs about the other agents' perspective of the environment, that is, their viewpoint of the "concrete" world and of the others' perspective of it. The planning problem in this new setting is referred to as multi-agent epistemic planning in the literature.</p>
      <p>
        Nevertheless, as said in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], `reasoning about knowledge and beliefs is not as direct as reasoning on the "physical" state of the world'. Already existing epistemic action languages [
        <xref ref-type="bibr" rid="ref14 ref2 ref3 ref8">2, 3, 8, 14</xref>
        ] are able to model several families of problems and to study their information flows, but they cannot comprehensively reason on aspects like trust, dishonesty, deception, and incomplete knowledge. In order to exploit epistemic reasoning in complex real-world scenarios, e.g., economy, security, justice and politics, it is then necessary to increase the expressiveness of the aforementioned languages.
      </p>
      <p>
        In this paper we expand the language mAρ [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], to the best of our knowledge the most comprehensive epistemic language, with a formalization of the concept of trust. We do so by introducing two different actions that formalize information sharing when the idea of trust is involved: i) (un)trustworthy announcement; and ii) (mis)trustworthy announcement.
      </p>
      <p>In particular: i) the (un)trustworthy announcement formalizes the situation where the untrusty agents will not change their beliefs about the world no matter what the announcer says; and ii) the (mis)trustworthy announcement captures the scenarios where the announcer, when not trusted, is believed to have a systematically faulty perception of the announced environment's properties. This leads the untrusty agents to believe the opposite of what has been announced.</p>
      <p>The paper is organized as follows: Section 2 will present the field of epistemic reasoning. The background will then be concluded with Section 3, where we will introduce the epistemic action language mAρ. In Section 4 we will present the semantics of the newly formalized actions along with some desired properties, formally demonstrated in the Supplementary Documentation (available at http://clp.dimi.uniud.it/sw/). Finally, in Section 5 we will discuss the impact of the new actions and some possible future developments.</p>
      <p>
        Moreover, in the Supplementary Documentation we also provide the formalization of the (un/mis)trustworthy announcement actions for mA* [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the language on which mAρ is based.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Epistemic reasoning</title>
      <p>
        The research on autonomous reasoners has led, among other things, to the formalization of the well-known planning problem [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and to the introduction of several modal logics [
        <xref ref-type="bibr" rid="ref16 ref17 ref5">5, 16, 17</xref>
        ] used to describe different properties of the world. Different logics allow diverse types of reasoning and bring with them different implications in terms of expressiveness and complexity.
      </p>
      <p>
        In particular, Dynamic Epistemic Logic (DEL), the foundation of multi-agent epistemic planning (MEP), is used to reason not only on the state of the world but also on information change. As said in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], `information is something relative to a subject who has a certain perspective on the world, called an agent, and that is meaningful as a whole, not just loose bits and pieces. This makes us call it knowledge and, to a lesser extent, belief'. Concretely, DEL provides a formalization that allows us to model and reason about the agents' perspective of the world and of the other agents' viewpoint (on both the world and the others' perspective). Therefore, DEL and MEP are tools that can be exploited when (possibly nested) knowledge/belief needs to be taken into consideration. Some examples of such domains can be ethical reasoning, economical or political strategies, and juridical scenarios.
      </p>
      <p>
        In what follows, we will provide a short description of the basic concepts that define DEL and MEP. As it is beyond the scope of this work to give an exhaustive introduction, the interested reader can refer to [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] for a complete characterization of such concepts.
      </p>
      <p>
        Let AG be a finite set of agents s.t. |AG| = n with n ≥ 1 and let F be a set of propositional variables, called fluents. Each world is described by a subset of elements of F (intuitively, those that are "true"). Moreover, in epistemic logic each agent ag ∈ AG is associated to an epistemic modal operator B_ag that represents the knowledge/belief of ag herself. Finally, the epistemic group operators E_α and C_α are also introduced in epistemic logic. Intuitively, E_α and C_α represent the knowledge/belief of a group of agents α and the common knowledge/belief of α, respectively. To be more precise, as in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], we have that:</p>
      <p>Definition 1 (Fluent formula). A fluent formula is a propositional formula built using fluents in F as propositional variables and the propositional operators ∧, ∨, ⇒, ¬. A fluent atom is a formula composed of just an element f ∈ F; a fluent literal is either a fluent atom f ∈ F or its negation ¬f.
      </p>
      <p>With a slight abuse of notation, we will refer to fluent literals simply as fluents.</p>
      <p>Definition 2 (Belief formula). A belief formula is defined as follows:
– A fluent formula is a belief formula;
– If φ is a belief formula and ag ∈ AG, then B_ag φ is a belief formula;
– If φ1, φ2 and φ3 are belief formulae, then ¬φ3 and φ1 op φ2 are belief formulae, where op ∈ {∧, ∨, ⇒};
– If φ is a belief formula and ∅ ≠ α ⊆ AG, then E_α φ and C_α φ are belief formulae.</p>
      <p>Example 1. Let us consider the formula B_ag1 B_ag2 φ. This formula expresses that agent ag1 believes that agent ag2 believes that φ is true. The formula B_ag1 ¬φ expresses that agent ag1 believes that φ is false.</p>
      <p>Let us also introduce the notion of multi-agent epistemic planning domain. Intuitively, an epistemic planning domain contains all the necessary information to define a planning problem in a multi-agent epistemic scenario.</p>
      <p>Definition 3 (Multi-agent epistemic planning domain). We define a multi-agent epistemic domain as the tuple D = ⟨F, AG, A, φ_i, φ_g⟩ where:
– F is the set of all the fluents of D;
– AG is the set of the agents of D;
– A represents the set of all the actions of D;
– φ_i is the belief formula that describes the initial conditions of the planning process; and
– φ_g is the belief formula that represents the goal condition.</p>
      <p>Moreover, from now on, with the term action instance we will indicate an element of the set AI = A × AG. Intuitively, an action instance a⟨ag⟩ identifies the execution of the action a by the agent ag.</p>
      <p>Given a domain D, we will refer to its components through the parenthesis operator. For instance, to access the elements F and AG of D we will use the more compact notations D(F) and D(AG), respectively.</p>
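<p>As an illustration, Definition 3 and the parenthesis-operator notation can be mirrored in code. The following is a small sketch of ours (the class name, the string encoding of belief formulae, and the example fluent/action names are our own choices, not part of mAρ):</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicDomain:
    """A multi-agent epistemic planning domain
    D = <F, AG, A, phi_i, phi_g> (Definition 3)."""
    F: frozenset    # fluents
    AG: frozenset   # agents
    A: frozenset    # actions
    phi_i: str      # initial belief formula (kept opaque here)
    phi_g: str      # goal belief formula (kept opaque here)

    def action_instances(self):
        """AI = A x AG: action a executed by agent ag, written a<ag>."""
        return {(a, ag) for a in self.A for ag in self.AG}

# Attribute access plays the role of the parenthesis operator:
# D.F for D(F), D.AG for D(AG), and so on.
D = EpistemicDomain(frozenset({"secret_a"}), frozenset({"A", "B"}),
                    frozenset({"ann_a"}), "B_A secret_a", "C_{A,B} secret_a")
```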
      <p>
        Furthermore, we will indicate a state of an epistemic planning domain as e-state. Intuitively, an e-state contains all the information needed to encode both the concrete properties of the world and the knowledge/belief relations. The language mAρ, derived from the language mA* [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] (based on the "classical" Kripke structures), expresses the idea of e-state through possibilities [
        <xref ref-type="bibr" rid="ref11 ref8">8, 11</xref>
        ]. In the following section, we will provide a short introduction to mAρ.
      </p>
      <p>
        The epistemic action language mAρ. Let us briefly introduce the epistemic action language mAρ [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Let us note that the fundamental concepts of the language are inherited from mA*, the action language first introduced in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], on which mAρ is based.
      </p>
      <p>First, we need to define the three different types of action used by mAρ to model the e-state updates:
– World-altering action (also called ontic): used to modify certain properties (i.e., fluents) of the world;
– Sensing action: used by an agent to refine her beliefs about the world;
– Announcement action: used by an agent to affect the beliefs of other agents.</p>
      <p>The action language also allows one to specify, for each action instance a⟨ag⟩, the observability relation of each agent. Namely, an agent x may be fully observant (x ∈ F), partially observant (x ∈ P), or oblivious (x ∈ O) w.r.t. a⟨ag⟩. If an agent is fully observant, then she is aware of both the execution of the action instance and its effects; she is partially observant if she is only aware of the action execution but not of its outcomes; she is oblivious if she is ignorant of the execution of the action. More precisely, given an action instance a⟨ag⟩, a fluent literal f, a fluent formula ψ and the belief formula φ, the syntax of mAρ is defined as follows:
– executable a if φ: captures the executability conditions;
– a causes f if φ: captures the effects of ontic actions;
– a determines f if φ: captures the effects of sensing actions;
– a announces ψ if φ: captures the effects of announcement actions;
– ag observes a if φ: captures the fully observant agents for an action; and
– ag aware of a if φ: captures the partially observant agents for a given action.
Notice that, if we do not state otherwise, an agent will be considered oblivious. Finally, statements of the form initially φ and goal φ capture the initial and goal conditions, respectively.</p>
      <p>
        The language mAρ, introduced in [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ], bases the e-state representation on the idea of possibility, first defined in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Possibilities are non-well-founded objects and, therefore, exploit concepts such as recursion and bisimulation. In particular, the former is used to define the idea of e-state update while the latter is needed to capture the idea of e-state equality. Due to space constraints we will illustrate only the main ideas and intuitions behind the semantics of mAρ, addressing the reader to [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for a complete introduction to possibilities and mAρ, respectively. Let us now introduce more formally the concept of possibility.
      </p>
      <sec id="sec-2-1">
        <title>Definition 4 (Possibility [11]).</title>
        <p>– A possibility u is a function that assigns to each fluent f ∈ F a truth value u(f) ∈ {0, 1} and to each agent ag ∈ AG an information state u(ag) = σ;
– An information state σ is a set of possibilities.</p>
        <p>Intuitively, a possibility u allows us to capture the concept of e-state by: i) encoding a possible world through the truth values u(f) ∀f ∈ F; and ii) capturing the beliefs of an agent ag ∈ AG through the assignment of information states u(ag). Since possibilities are non-well-founded objects, the concepts of state and possible world collapse. In fact, a possibility contains both the information of a possible world and the information about the agents' beliefs (represented by other possibilities).</p>
        <p>
          Definition 5 (Entailment w.r.t. possibilities [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]). Let the belief formulae φ, φ1, φ2, a fluent f, an agent ag, a (non-empty) group of agents α, and a possibility u be given.
1. u ⊨ f if u(f) = 1;
2. u ⊨ ¬φ if u ⊭ φ;
3. u ⊨ φ1 ∨ φ2 if u ⊨ φ1 or u ⊨ φ2;
4. u ⊨ φ1 ∧ φ2 if u ⊨ φ1 and u ⊨ φ2;
5. u ⊨ B_ag φ if for each v ∈ u(ag) it holds that v ⊨ φ;
6. u ⊨ E_α φ if for all ag ∈ α it holds that u ⊨ B_ag φ;
7. u ⊨ C_α φ if u ⊨ E_α^k φ for every k ≥ 0, where E_α^0 φ = φ and E_α^(k+1) φ = E_α(E_α^k φ).</p>
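<p>To make Definition 5 concrete, the following sketch of ours (mAρ itself is not implemented this way; we approximate non-well-founded possibilities with a finite, possibly cyclic graph of objects) evaluates entailment, computing C_α via reachability over belief edges, which on finite structures coincides with checking E_α^k for every k ≥ 0:</p>

```python
class Possibility:
    """A finitely represented possibility: a truth assignment over
    fluents plus, for each agent, an information state (a set of
    possibilities).  Cycles between objects emulate non-well-foundedness."""
    def __init__(self, valuation):
        self.val = dict(valuation)   # fluent name -> 0/1
        self.info = {}               # agent name  -> set of Possibility

def entails(u, phi):
    """Definition 5 over a tiny formula AST:
    ('f', f), ('not', p), ('or', p, q), ('and', p, q),
    ('B', ag, p), ('E', agents, p), ('C', agents, p)."""
    op = phi[0]
    if op == 'f':
        return u.val.get(phi[1], 0) == 1
    if op == 'not':
        return not entails(u, phi[1])
    if op == 'or':
        return entails(u, phi[1]) or entails(u, phi[2])
    if op == 'and':
        return entails(u, phi[1]) and entails(u, phi[2])
    if op == 'B':
        return all(entails(v, phi[2]) for v in u.info.get(phi[1], set()))
    if op == 'E':
        return all(entails(u, ('B', ag, phi[2])) for ag in phi[1])
    if op == 'C':
        # E^0 phi = phi, so u itself is checked first; then every
        # possibility reachable through belief edges of agents in alpha.
        stack, seen = [u], set()
        while stack:
            v = stack.pop()
            if id(v) in seen:
                continue
            seen.add(id(v))
            if not entails(v, phi[2]):
                return False
            for ag in phi[1]:
                stack.extend(v.info.get(ag, set()))
        return True
    raise ValueError(f'unknown operator {op!r}')
```

<p>For instance, if w1 makes fluent a true, w2 makes it false, and agent B considers both possible while A considers only the actual one, then B_A a holds at w1 while B_B a does not.</p>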
        <p>
          For the sake of readability we will omit the complete specification of the ontic, sensing and announcement transition functions. The interested reader is referred to [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].</p>
        <p>(un/mis)Trustworthy announcements. In what follows we will provide a formal definition of the actions (un)trustworthy announcement and (mis)trustworthy announcement, which capture two scenarios where the concept of trust influences the communication between agents. That is, an agent can or cannot trust what another agent is telling her, and she acts accordingly. We will provide a formal definition of the e-state update for these actions for mAρ. The expressions `ag t announces ψ if φ' and `ag m announces ψ if φ' are the syntax to indicate that the agent ag is executing an (un/mis)trustworthy announcement.</p>
        <p>In defining the actions we consider only the case of a static and globally visible version of `trust', which can be formalized with a simple function T : AG × AG → {0, 1}. Let us notice that making T dynamic is easily achievable: we just need to define how T may vary, e.g., making the function depend on the value of some fluents. For the sake of simplicity, let us imagine T to be fixed and not dependent on the plan execution. On the other hand, making T not globally visible, i.e., letting each agent know her own version of the trust function, is not straightforward. The problem arises when two agents have different views of the same trust relation, leading to the generation of non-consistent beliefs, an open problem in the MEP community. We leave the investigation of this scenario as future work.</p>
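<p>Under this static, globally visible reading, T is just a lookup. A minimal sketch of ours of T and of the split of the fully observant agents into trusty and untrusty ones, as used in Section 4 (the trust pairs are purely illustrative):</p>

```python
# Static, globally visible trust: T(x, y) = 1 iff agent x trusts agent y.
# These pairs are illustrative; we also let the announcer trust herself.
TRUST_PAIRS = {('A', 'A'), ('B', 'A'), ('D', 'A')}

def T(x, y):
    return 1 if (x, y) in TRUST_PAIRS else 0

def split_fully_observant(Fa, announcer):
    """Partition the fully observant agents w.r.t. an announcement by
    `announcer` into the trusty ones (F_a in Section 4's notation) and
    the untrusty ones (U_a)."""
    trusty = {x for x in Fa if T(x, announcer) == 1}
    return trusty, set(Fa) - trusty
```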
        <p>To clarify the e-state update after the execution of the new actions we will
also present a graphical representation of the transition function application.</p>
        <p>
          The examples of execution will be based on a variation of the Grapevine domain [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Let us now present this new domain, referred to as Trust Grapevine.</p>
        <p>Domain 1 (Trust Grapevine). n ≥ 2 agents are located in k ≥ 2 rooms. Each agent knows j ≥ 0 secrets. An agent can move freely to each other room, and she can share a "secret" with the agents that are in the room with her. Moreover, the agents will be aware of the execution of announcements made in adjacent rooms without actually knowing the truth value of the announced fluent. Each agent can or cannot trust (or mistrust) what another agent shares.</p>
        <p>Let us notice that, since the idea of trust is involved, each agent, in order to learn a secret, needs to witness an announcement of agents that she trusts, making the newly presented domain slightly more intricate than the original Grapevine.</p>
      </sec>
      <sec id="sec-2-2">
        <title>4.1 (un)Trustworthy announcement</title>
        <p>We can now introduce the transition function of the action (un)trustworthy announcement for mAρ. Intuitively, this action models an announcement where the listening agents may or may not trust the announcer. That is: i) the trusty agents will update their beliefs consistently with what has been announced; and ii) the untrusty ones (i.e., the agents that are fully observant w.r.t. the announcement but that do not trust the announcer) will maintain their beliefs about the world and will update their perspective on the beliefs of the trusty agents. Let us recall that the sets Fa, Pa, Oa represent the sets of fully observant, partially observant and oblivious agents w.r.t. the execution of an action instance a⟨ag⟩, respectively.</p>
        <p>Let a domain D, its set of action instances D(AI), and the set S of all the possibilities reachable from D(φ_i) with a finite sequence of action instances be given. The transition function Φ : D(AI) × S → S ∪ {∅} for the (un)trustworthy announcement relative to D is defined as follows.</p>
        <p>Definition 6 (mAρ (un)trustworthy announcement transition function). Allow us to use the compact notation u(F) = {f | f ∈ D(F) ∧ u ⊨ f} ∪ {¬f | f ∈ D(F) ∧ u ⊭ f} for the sake of readability. Let an action instance a⟨ag⟩ ∈ D(AI), where agent ag ∈ D(AG) announces the fluent formula ψ, and a possibility u ∈ S be given.</p>
        <p>If a is not executable in u, then Φ(a, u) = ∅; otherwise Φ(a, u) = u′, where u′ is specified, together with the auxiliary updates Φ(a, w) = w′, by the displayed update equations.</p>
        <p>Intuitively, this transition function allows, through its auxiliary updates, to model the idea that the untrusty agents maintain their beliefs while knowing that the trusty ones updated their point of view of the "physical world" (and vice versa).</p>
        <p>An example of execution. As mentioned above, we will provide a graphical representation of the newly introduced transition function. Following [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], we will represent a possibility as a graph where the nodes correspond to the possible worlds while the edges encode the beliefs of the agents. The thicker node represents the pointed possibility. To extract the point of view of the agents from a graph, we need to follow the entailment rules (Definition 5) starting from the pointed possibility. Let us now briefly describe the example initial state (based on Domain 1). Since we are only interested in showing how the e-state update works, we will omit the actions and goal description.</p>
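<p>The intuition just described (not Definition 6 itself, which updates possibilities) can be sketched on a flat belief base; in this toy of ours, beliefs maps each agent to a set of formula strings, and all names are our own:</p>

```python
def untrustworthy_announce(beliefs, announcer, psi, Fa, trusty):
    """Toy sketch of the (un)trustworthy announcement intuition:
    trusty full observers adopt psi; untrusty full observers keep their
    own world beliefs but record that the trusty ones now believe psi."""
    new = {ag: set(fs) for ag, fs in beliefs.items()}
    for x in Fa:
        if x in trusty:
            new[x].add(psi)                  # believes the announcement
        else:
            for y in trusty & Fa:            # world beliefs untouched,
                new[x].add(f'B_{y}({psi})')  # nested beliefs updated
    return new
```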
        <p>Example 2 (Five Agents Trust Grapevine).</p>
        <p>The example has five agents: A, B, C, D and E;</p>
        <p>A, B, C are located in the same room (room 1) while D is in a room (room 2)
adjacent to room 1 and E is located in room 3, not adjacent to room 1;
Agents B and D trust A while C and E do not;
Agent A knows secret a;
The value of secret a is true;</p>
        <p>Initially everyone knows the position of each agent and that only A knows
the value of secret a.</p>
        <p>Let us now present, in Figure 1, a graphical representation of the initial state described above.</p>
        <p>In Figure 2, instead, we represent the e-state generated after the execution of the (un)trustworthy announcement action instance announce_secret_a⟨A⟩ (ann_a for brevity). In ann_a, A announces the value of secret a. Let us note that from the position of the agents we know that A, B, C ∈ F_ann_a, D ∈ P_ann_a and E ∈ O_ann_a.</p>
        <p>In Definition 6 we assumed that an agent that does not trust the announcer will not change her beliefs about what has been announced. That is, an untrusty agent will not change her perspective on the "physical" state of the world. Let us notice that this type of trust captures the idea that, for the untrusty agents, the announcer is not reliable and the information she is providing is not worth taking into consideration, as it may be inaccurate.</p>
        <p>Depending on the scenario, it could be necessary to model a stronger concept of untrust. In particular, it could be required to design an (un)trustworthy announcement such that the untrusty agents will believe the contrary of what has been announced (while still believing that the announcer believes what she announced). We will call this type of action (mis)trustworthy announcement. The formalization of this variation of the action presented in Definition 6 is as follows.</p>
        <p>Definition 7 (mAρ (mis)trustworthy announcement transition function). Let an action instance a⟨ag⟩ ∈ D(AI), where agent ag ∈ D(AG) announces the fluent formula ψ, and a possibility u ∈ S be given. If a is not executable in u, then Φ(a, u) = ∅; otherwise Φ(a, u) = u′, where u′ is specified, together with the auxiliary updates Φ(a, w) = w′, by the displayed update equations.</p>
        <p>Let us note that the transition functions introduced in Definitions 6 and 7 only differ in their specification for the untrusty fully observant agents. This difference is needed to represent the fact that, in the case of the (un)trustworthy announcement, the untrusty agents maintain their beliefs, while in the (mis)trustworthy one they will believe the opposite of what has been announced.</p>
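<p>The (mis)trustworthy intuition can likewise be sketched on a flat belief base (again a toy of ours, not Definition 7's possibility-based update; beliefs maps each agent to a set of formula strings, and all names are our own): the untrusty full observers now adopt the negation of ψ while still attributing belief in ψ to the trusty ones.</p>

```python
def mistrustworthy_announce(beliefs, announcer, psi, Fa, trusty):
    """Toy sketch of the (mis)trustworthy announcement intuition:
    as in the (un)trustworthy case, except that the untrusty full
    observers come to believe the opposite of what was announced."""
    new = {ag: set(fs) for ag, fs in beliefs.items()}
    for x in Fa:
        if x in trusty:
            new[x].add(psi)
        else:
            new[x].add(f'not({psi})')        # believes the contrary
            for y in trusty & Fa:            # trusty agents still
                new[x].add(f'B_{y}({psi})')  # believed to believe psi
    return new
```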
        <p>An example of execution. As for the (un)trustworthy announcement, we will provide an example of (mis)trustworthy announcement execution. The initial state is identical to the one introduced in Example 2. The only difference is that now the action announce_secret_a⟨A⟩ (or ann_a for brevity) is a (mis)trustworthy announcement instead of an (un)trustworthy announcement. The initial state is, therefore, represented in Figure 1, while the e-state obtained after the execution of the (mis)trustworthy announcement is shown in Figure 3.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], some useful properties are listed that correctly capture certain intuitions concerning the effects of the various types of actions in mAρ. Similarly, in what follows, we will provide some properties that the e-state update meets after executing the (un/mis)trustworthy announcement. Due to space constraints, we provide the formal demonstrations that these properties hold in the Supplementary Documentation (available at http://clp.dimi.uniud.it/sw/). As usual, we will indicate the sets of partially observant and oblivious agents (w.r.t. the action instance a⟨ag⟩) with Pa and Oa, respectively. Moreover, we will indicate the set of trusty fully observant agents with Fa, while Ua will indicate the set of untrusty fully observant agents.</p>
        <p>Proposition 1 ((un)Trustworthy announcement properties). Let a⟨ag⟩ be an (un)trustworthy announcement action instance where ag t announces ψ. Let e be an e-state and let e′ be its updated version (that is, Φ(a, e) = e′); then in mAρ it holds that:
1. e′ ⊨ C_Fa ψ;
2. e′ ⊨ C_Ua (C_Fa ψ);
3. e′ ⊨ C_Pa (C_Fa ψ ∨ C_Fa ¬ψ);
4. e′ ⊨ C_Fa∪Ua (C_Pa (C_Fa ψ ∨ C_Fa ¬ψ));
5. for every agent y ∈ Ua, e′ ⊨ B_y ψ / B_y ¬ψ / (¬B_y ψ ∧ ¬B_y ¬ψ) iff e ⊨ B_y ψ / B_y ¬ψ / (¬B_y ψ ∧ ¬B_y ¬ψ);
6. for every agent y ∈ Oa and a belief formula φ, e′ ⊨ B_y φ iff e ⊨ B_y φ; and
7. for every pair of agents x ∈ Fa ∪ Ua ∪ Pa and y ∈ Oa and a belief formula φ, if e ⊨ B_x B_y φ then e′ ⊨ B_x B_y φ.</p>
        <p>The properties presented in Proposition 1 try to capture some fundamental aspects of an (un)trustworthy announcement action. Intuitively:
1. Captures the idea that all the trusty fully observant agents should believe: i) what has been announced; and ii) that all the other trusty fully observant agents believe what has been announced, and so on ad infinitum (that is why we use the C operator).
2. Models the fact that all the untrusty agents believe that all the trusty ones have common belief on what has been announced.
3. Captures that the partially observant agents believe that the trusty fully observant ones have common knowledge on what has been announced, while the partially observant agents themselves do not know the announced value.
4. States that the fully observant agents have common knowledge on the previous property.
5. Models the idea that all the untrusty agents do not modify their beliefs about the announced values.
6. Captures the fact that the oblivious agents do not change their beliefs.
7. States that the observant agents (trusty, untrusty and partial) believe that the oblivious agents did not change their beliefs.</p>
        <p>As we did for the (un)trustworthy announcement , let us identify some properties
also for the (mis)trustworthy announcement action.</p>
        <p>Proposition 2 ((mis)Trustworthy announcement properties). Let a⟨ag⟩ be a (mis)trustworthy announcement action instance where ag m announces ψ. Let e be an e-state and let e′ be its updated version (that is, Φ(a, e) = e′); then in mAρ properties 1, 2, 3, 4, 6 and 7 of Proposition 1 hold. In addition:
a. e′ ⊨ C_Ua ¬ψ;
b. e′ ⊨ C_Fa (C_Ua ¬ψ);
c. e′ ⊨ C_Pa (C_Ua ψ ∨ C_Ua ¬ψ).</p>
        <p>Proposition 2 describes the core ideas behind a (mis)trustworthy announcement action. While properties 1, 2, 3, 4 and 6 of Proposition 1 have already been described, the intuitive meaning of the remaining ones is as follows.
a. Captures the idea that all the untrusty fully observant agents should believe: i) the contrary of what has been announced; and ii) that all the other untrusty fully observant agents believe the negation of what has been announced, and so on ad infinitum (that is why we use the C operator).
b. Models the fact that all the trusty agents believe that all the untrusty ones have common belief on the negation of what has been announced.
c. Captures that the partially observant agents believe that the untrusty fully observant ones have common knowledge on what has been announced, while the partially observant agents themselves do not know the announced value.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conclusions and Future Works</title>
      <p>In this paper we introduced the notion of trust in the field of multi-agent epistemic planning. In particular, we provided a formalization for two actions, i.e., (un)trustworthy announcement and (mis)trustworthy announcement, that model two different scenarios of information sharing when the concept of trust is involved. The former action captures the idea that, whenever an agent does not trust another, she considers the announcer as an unreliable source of information and therefore does not change her beliefs about the world. The latter, on the other hand, describes the situation where the untrusty agents will believe the contrary of what has been announced while still believing that the announcer believes what she announced. Both of the newly presented actions have been formalized for, to the best of our knowledge, the most comprehensive epistemic action-based language: mAρ. In particular, in Section 4 we presented the transition functions of the actions along with their desired properties (formally demonstrated in the Supplementary Documentation).</p>
      <p>
        As already mentioned, the idea of trust is presented as static and globally visible. While making it dynamic would not increase the "difficulty" of the transition functions, allowing each agent to have her own point of view on the trust relations would require a redesign of the e-state updates. In particular, to formalize this type of trust, the idea of non-consistent belief is necessary. Since such a concept is still an open issue in the MEP community, we leave the formalization of the e-state update when trust depends on the agents' point of view as future work. Finally, another concept that arises when trust is taken into consideration is the idea of lies. Modeling this concept would require major modifications of Definition 6 and, as for the dynamic version of trust, the idea of non-consistent belief. Capturing subtle concepts such as lies and misconception is not straightforward and would provide a contribution on its own. The difficulty of characterizing such ideas derives from the complexity of devising a transition function that correctly captures all the possible nested beliefs of the domain's agents. We, therefore, leave the investigation of lies as future work. A more immediate future work is the introduction of the new actions in EFP 2.0 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and PLATO [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], a C++ solver (based on mA* and mAρ) and an ASP solver (based on mAρ), respectively. PLATO in particular, given its nature of logical reasoner, may provide a more suitable environment to implement and test the newly introduced actions.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The author wishes to thank Agostino Dovier, Enrico Pontelli and Alessandro Burigana for the illuminating discussions on epistemology, and the anonymous Reviewers for their comments that allowed us to improve the presentation.</p>
      <p>This research is partially supported by the Università di Udine PRID ENCASE project, and by GNCS-INdAM 2017-2020 projects.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Allen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zilberstein</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Complexity of decentralized control: Special cases</article-title>
          .
          <source>In: Advances in Neural Information Processing Systems</source>
          . pp.
          <fpage>19</fpage>
          –
          <lpage>27</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baral</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Son</surname>
            ,
            <given-names>T.C.:</given-names>
          </string-name>
          <article-title>An action language for multi-agent domains: Foundations</article-title>
          .
          <source>CoRR abs/1511.01960</source>
          (
          <year>2015</year>
          ), http://arxiv.org/abs/1511.01960
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bolander</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andersen</surname>
            ,
            <given-names>M.B.</given-names>
          </string-name>
          :
          <article-title>Epistemic planning for single-and multiagent systems</article-title>
          .
          <source>Journal of Applied Non-Classical Logics</source>
          <volume>21</volume>
          (
          <issue>1</issue>
          ),
          <fpage>9</fpage>
          –
          <lpage>34</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Burigana</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fabiano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dovier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Modelling multiagent epistemic planning in ASP</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          <volume>20</volume>
          (
          <issue>5</issue>
          ),
          <fpage>593</fpage>
          –
          <lpage>608</lpage>
          (
          <year>2020</year>
          ). https://doi.org/10.1017/S1471068420000289
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Chagrov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakharyaschev</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Modal Logic</article-title>
          . Oxford University Press (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>De Weerdt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clement</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Introduction to planning in multiagent systems</article-title>
          .
          <source>Multiagent and Grid Systems</source>
          <volume>5</volume>
          (
          <issue>4</issue>
          ),
          <fpage>345</fpage>
          –
          <lpage>355</lpage>
          (
          <year>2009</year>
          ). https://doi.org/10.3233/MGS-2009-0133
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dovier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Formisano</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Autonomous agents coordination: Action languages meet CLP(FD) and Linda</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          <volume>13</volume>
          (
          <issue>2</issue>
          ),
          <fpage>149</fpage>
          –
          <lpage>173</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Fabiano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burigana</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dovier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>EFP 2.0: A multiagent epistemic solver with multiple e-state representations</article-title>
          .
          <source>In: Proceedings of the Thirtieth International Conference on Automated Planning and Scheduling</source>
          , Nancy, France, October 26-30,
          <year>2020</year>
          . pp.
          <fpage>101</fpage>
          –
          <lpage>109</lpage>
          . AAAI Press (
          <year>2020</year>
          ), https://aaai.org/ojs/index.php/ICAPS/article/view/6650
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Fabiano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riouak</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dovier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Non-well-founded set based multiagent epistemic action language</article-title>
          .
          <source>In: Proceedings of the 34th Italian Conference on Computational Logic. CEUR Workshop Proceedings</source>
          , vol.
          <volume>2396</volume>
          , pp.
          <fpage>242</fpage>
          –
          <lpage>259</lpage>
          . Trieste, Italy (June 19-21,
          <year>2019</year>
          ), http://ceur-ws.org/Vol-2396/paper38.pdf
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Fagin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halpern</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          :
          <article-title>Reasoning about knowledge and probability</article-title>
          .
          <source>Journal of the ACM (JACM)</source>
          <volume>41</volume>
          (
          <issue>2</issue>
          ),
          <fpage>340</fpage>
          –
          <lpage>367</lpage>
          (
          <year>1994</year>
          ). https://doi.org/10.1145/174652.174658
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gerbrandy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Groeneveld</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Reasoning about information change</article-title>
          .
          <source>Journal of Logic, Language and Information</source>
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <fpage>147</fpage>
          –
          <lpage>169</lpage>
          (
          <year>1997</year>
          ). https://doi.org/10.1023/A:1008222603071
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Kominis</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geffner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Beliefs in multiagent planning: From one agent to many</article-title>
          .
          <source>In: Proceedings of the International Conference on Automated Planning and Scheduling</source>
          , ICAPS. pp.
          <fpage>147</fpage>
          –
          <lpage>155</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Lipovetzky</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geffner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Best-first width search: Exploration and exploitation in classical planning</article-title>
          .
          <source>In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence</source>
          . pp.
          <fpage>3590</fpage>
          –
          <lpage>3596</lpage>
          . San Francisco, California, USA (February 4-9,
          <year>2017</year>
          ), http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14862
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Muise</surname>
            ,
            <given-names>C.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Belle</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Felli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McIlraith</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pearce</surname>
            ,
            <given-names>A.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sonenberg</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Planning over multi-agent epistemic states: A classical planning approach</article-title>
          .
          <source>In: Proc. of AAAI</source>
          . pp.
          <fpage>3327</fpage>
          –
          <lpage>3334</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norvig</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Artificial Intelligence: A Modern Approach</article-title>
          . Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edn. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Smullyan</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          :
          <article-title>First-order logic</article-title>
          , vol.
          <volume>43</volume>
          . Springer Science &amp; Business Media (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Van Ditmarsch</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van der Hoek</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kooi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Dynamic epistemic logic</article-title>
          , vol.
          <volume>337</volume>
          . Springer Science &amp; Business Media (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>