<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Non-Well-Founded Set Based Multi-Agent Action Language</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Francesco Fabiano</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Idriss Riouak</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Agostino Dovier</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enrico Pontelli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, New Mexico State University</institution>
          ,
          <addr-line>Las Cruces NM</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Dipartimento DMIF, Università di Udine</institution>
          ,
          <addr-line>Udine</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>As the research in multi-agent domains continues to grow, it is becoming more and more important to investigate the agents' relations in such systems: not only to reason about the agents' perception of the world, but also about each agent's knowledge of her own and others' knowledge. This type of study is referred to as epistemic reasoning. In certain domains, e.g., economics, security, justice and politics, reasoning about others' beliefs could lead to winning strategies or help in changing a group of agents' view of the world. In this work we formalize the epistemic planning problem where the state description is based on non-well-founded set theory. The introduction of such semantics permits characterizing the planning problem in terms of set operations and not in terms of reachability, as the state of the art suggests. In doing so, we hope to introduce a clearer semantics and to establish the basis to exploit properties of set-based operations inside multi-agent epistemic planning.</p>
      </abstract>
      <kwd-group>
        <kwd>Epistemic reasoning</kwd>
        <kwd>Planning</kwd>
        <kwd>Multi-agent</kwd>
        <kwd>Action languages</kwd>
        <kwd>Non-well-founded sets</kwd>
        <kwd>Possibilities</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Multi-agent planning and epistemic knowledge have recently gained attention
from several research communities. Efficient autonomous systems that can
reason in these domains could lead to winning strategies in various fields such as
economics [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], security [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], justice [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], politics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and can be exploited by
self-driving cars and other autonomous devices that can control several aspects of
our daily life.
      </p>
      <p>
        Epistemic planners are not only interested in the state of the world but also
in the knowledge (or beliefs) of the agents. Some problems can be expressed
through less expressive languages and need less powerful, and usually faster,
planners. For example [
        <xref ref-type="bibr" rid="ref18 ref24">18, 24</xref>
        ] deal with problems where dynamic common
knowledge and unbounded nested knowledge, respectively, are not needed. On the
other hand, to the best of our knowledge, only few systems [
        <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
        ] can reason
about epistemic knowledge in multi-agent domains without these limitations, i.e.,
considering the full language L^C_AG, presented in Section 2. Such systems base
their concept of state on Kripke structures. (This research is partially supported
by the Università di Udine PRID ENCASE project and by the GNCS-INdAM 2017
and 2019 projects.)
      </p>
      <p>
Using a Kripke structure as a state has a negative impact on the performance
of the planner. First of all, to store all the necessary states, solvers require a
large amount of memory. Moreover, to perform operations such as entailment
or the application of the transition function, the states have to be represented
explicitly. That is why, as the research on epistemic reasoning advances [
        <xref ref-type="bibr" rid="ref2 ref5 ref7">2, 5, 7</xref>
        ],
it is interesting to analyze alternative representations for the states that could
lead to more efficient operations on the search space.
      </p>
      <p>In this work we formalize the epistemic planning problem where the states
are represented by possibilities (introduced in Section 4), which are based on
non-well-founded set theory. This representation will allow us to describe the
language through set-based operations and also to exploit some of the results
from this field, such as the concept of bisimulation, to bring important features
to the multi-agent epistemic planning (MEP) community.</p>
      <p>The paper is organized as follows: Section 2 will present the concept of
epistemic planning. In Section 3 we will introduce what, in our opinion, is the most
complete action language for MEP that bases its states on the concept of Kripke
structure. The background will then be concluded with Section 4, where we will
describe possibilities, an interesting approach that combines non-well-founded
set theory with epistemic logic. In Section 5 we will introduce our semantics,
based on possibilities, of mAρ (an action language for MEP) and we will show
a comparison with the state-of-the-art action language mA*. We will finally
conclude in Section 6 with future works.</p>
    </sec>
    <sec id="sec-2">
      <title>Epistemic reasoning</title>
      <p>
        In this section we will briefly introduce the background that is necessary to
understand multi-agent epistemic reasoning. As defined in [
        <xref ref-type="bibr" rid="ref16 ref27">16, 27</xref>
        ], where the
concept of multi-agent propositional epistemic logic is fully explored, epistemic
logic is the logic of knowledge and belief that different agents have about
the world and about the beliefs of each other.
      </p>
      <p>
Let AG be a set of agents and let F be a set of propositional variables,
called fluents. Each world is described by a subset of the elements of
F (intuitively, those that are "true" in that world); we also refer to each world
as an interpretation. With each agent ag ∈ AG we associate a modal operator
Bag (where B stands for belief) and we represent the beliefs of an agent as belief
formulae in a logic extended with these operators. Moreover, group operators are
also introduced in epistemic logic, such as Eα and Cα, that intuitively represent
the beliefs of a group of agents α and the common knowledge of α, respectively.
To be more precise, as in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we have that
Definition 1 (fluent formulae, atoms and literals). A fluent formula is a
propositional formula built using the propositional variables in F and the
traditional propositional operators ∧, ∨, ⇒, ¬, etc. We will use ⊤ and ⊥ to indicate
True and False, respectively. A fluent atom is a formula composed by just an
element f ∈ F; a fluent literal, instead, is either a fluent atom f ∈ F or its negation
¬f.
      </p>
      <p>Definition 2 (belief formula). A belief formula is defined as follows:
{ a fluent formula is a belief formula;
{ let φ be a belief formula and ag ∈ AG; then Bag φ is a belief formula;
{ let φ1, φ2 and φ3 be belief formulae; then ¬φ3 and φ1 op φ2 are belief
formulae, where op ∈ {∧, ∨, ⇒};
{ all the formulae of the form Eα φ or Cα φ are belief formulae, where φ is
itself a belief formula and ∅ ≠ α ⊆ AG.</p>
      <p>
        Let us denote with L^C_AG the language of the belief formulae over the sets F and
AG. On the other hand, we define L_AG as the language over belief formulae that
does not allow the use of the C operator. In [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], it is pointed out how these two languages
differ in expressiveness and in complexity.
      </p>
      <p>Example 1. Let us consider the formula Bag1 Bag2 φ. This formula expresses that
the agent ag1 believes that the agent ag2 believes that φ is true; instead, Bag1 ¬φ
expresses that the agent ag1 believes that φ is false.</p>
      <p>
        The classical way of providing a semantics for the language of epistemic logic is
in terms of Kripke models [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]:
De nition 3 (Kripke structure). A Kripke structure is a tuple ⟨S, π, B1, ..., Bn⟩,
such that:
{ S is a set of worlds;
{ π : S → 2^F is a function that associates an interpretation of F to each
element of S;
{ for 1 ≤ i ≤ n, Bi ⊆ S × S is a binary relation over S.
      </p>
      <p>
        A pointed Kripke structure is a pair (M, s) where M is a Kripke structure as
defined above and s ∈ S represents the real world. As in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we will
refer to a pointed Kripke structure (M, s) as a state.
      </p>
      <p>
        Following the notation of [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we will indicate with M[S], M[π], and M[i] the
components S, π, and Bi of M, respectively.
      </p>
      <p>Definition 4 (entailment w.r.t. a Kripke structure). Given the belief
formulae φ, φ1, φ2, an agent agi, a group of agents α, a Kripke structure M =
⟨S, π, B1, ..., Bn⟩, and the worlds s, t ∈ S:
(i) (M, s) ⊨ φ if φ is a fluent formula and π(s) ⊨ φ;
(ii) (M, s) ⊨ Bagi φ if for each t such that (s, t) ∈ Bi it holds that (M, t) ⊨ φ;
(iii) (M, s) ⊨ ¬φ if (M, s) ⊭ φ;
(iv) (M, s) ⊨ φ1 ∨ φ2 if (M, s) ⊨ φ1 or (M, s) ⊨ φ2;
(v) (M, s) ⊨ φ1 ∧ φ2 if (M, s) ⊨ φ1 and (M, s) ⊨ φ2;
(vi) (M, s) ⊨ Eα φ if (M, s) ⊨ Bagi φ for all agi ∈ α;
(vii) (M, s) ⊨ Cα φ if (M, s) ⊨ Eα^k φ for every k ≥ 0, where Eα^0 φ = φ and
Eα^(k+1) φ = Eα(Eα^k φ).</p>
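      <p>To make Definition 4 concrete, the following is a small sketch of ours (not from the paper) that encodes a pointed Kripke structure as plain Python data and checks cases (i)-(vi); the common-knowledge case (vii) is omitted, since on a finite structure it requires a fixpoint computation. All names in the snippet are our own.

```python
# Hedged sketch (ours, not the paper's): a pointed Kripke structure as plain
# Python data, with entailment cases (i)-(vi) of Definition 4. Formulas are
# nested tuples; case (vii), common knowledge, would need a fixpoint and is
# omitted for brevity.

def entails(M, s, phi):
    S, pi, B = M  # worlds, interpretation function, accessibility relations
    op = phi[0]
    if op == "fluent":                     # (i) atomic fluent
        return phi[1] in pi[s]
    if op == "B":                          # (ii) B_ag phi
        ag, sub = phi[1], phi[2]
        return all(entails(M, t, sub) for (x, t) in B[ag] if x == s)
    if op == "not":                        # (iii)
        return not entails(M, s, phi[1])
    if op == "or":                         # (iv)
        return entails(M, s, phi[1]) or entails(M, s, phi[2])
    if op == "and":                        # (v)
        return entails(M, s, phi[1]) and entails(M, s, phi[2])
    if op == "E":                          # (vi) everyone in the group believes
        return all(entails(M, s, ("B", ag, phi[2])) for ag in phi[1])
    raise ValueError(op)

# Two worlds: in w the coin lies heads up, in v it does not; agent A cannot
# distinguish them, while agent B only considers the actual world possible.
S = {"w", "v"}
pi = {"w": {"heads"}, "v": set()}
B = {"A": {("w", "w"), ("w", "v"), ("v", "w"), ("v", "v")},
     "B": {("w", "w"), ("v", "v")}}
M = (S, pi, B)
```

In this toy structure, B believes heads at the pointed world w, while A does not.</p>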
      <p>
        We will not further describe the properties of Kripke structures since those
are not strictly needed to describe the contribution of this paper. The reader
interested in a more detailed description can refer to [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>The action language mA*</title>
        <p>
          The mA* [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] action language is a generalization of the single-agent action
languages, extensively studied in the literature [
          <xref ref-type="bibr" rid="ref26 ref8">8, 26</xref>
          ], to the case of multi-agent
domains for epistemic planning.
        </p>
        <p>The language has a declarative, English-like syntax and an event-model-based
semantics which permits reasoning about beliefs.</p>
        <p>The semantics of mA* is based on the assumption that agents are truthful.
The language is built over a signature (AG, F, A), where AG is a finite set of
agent identifiers, F is a set of fluents, and A is a set of actions.</p>
        <p>
          We will introduce only the basic features of mA*. The remaining details
about the language can be found in [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The following will be used as a working
example throughout the paper:
Example 2 (Three Agents and the Coin Box). Three agents, A, B, and C, are
in a room where in the middle there is a box containing a coin. It is common
knowledge that:
{ none of the agents knows whether the coin lies heads or tails up;
{ the box is locked and one needs a key to open it;
{ agent A has the key of the box;
{ in order to learn whether the coin lies heads or tails up, an agent can peek
into the box, but this requires the box to be open;
{ if one agent is looking at the box and a second agent peeks into the box,
then the first agent will observe this fact and will be able to conclude that
the second agent knows the status of the coin. On the other hand, the first
agent's knowledge about which face of the coin is up does not change;
{ distracting an agent causes her to not look at the box;
{ signaling an agent to look at the box causes such agent to look at the box;
{ announcing that the coin lies heads or tails up will make this common
knowledge among the agents that are listening.
        </p>
        <p>Agent A would like to know whether the coin lies heads or tails up. She would
also like to let agent B know that she knows this fact. However, she would
like to keep this information secret from C.</p>
        <p>This can be achieved by: i) Distracting C from looking at the box; ii) Signaling
B to look at the box if B is not looking at it; iii) Opening the box; and iv) Peeking
into the box.</p>
        <p>For this domain, we have that AG = {A, B, C}, while the set of fluents F
consists of:
{ heads: the coin lies heads up;
{ key(ag): agent ag has the key of the box;
{ opened: the box is open; and
{ look(ag): agent ag is looking at the box.</p>
        <p>Let ag ∈ AG; the set of actions A comprises:
{ open: an agent opens the box;
{ peek: an agent peeks into the box;
{ signal(ag): an agent signals to agent ag to look at the box;
{ distract(ag): an agent distracts agent ag;
{ shout tails: an agent announces that the coin lies tails up.</p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], the authors distinguished between three types of actions in the following
way (some examples of action execution can be found in the Appendix):
{ World-altering action (also called ontic): used to modify certain properties
(i.e., fluents) of the world, e.g., the actions open or distract(ag) of
Example 2.
{ Sensing action: used by an agent to refine her beliefs about the world, e.g.,
the action peek.
{ Announcement action: used by an agent to affect the beliefs of other agents,
e.g., in Example 2 the action shout tails.
        </p>
        <p>Given an action instance a ∈ AI, where AI is the set of all the possible
action instances A × AG, a fluent literal f ∈ F, a fluent formula ψ, and a belief
formula φ, we can quickly introduce the syntax adopted in mA*.</p>
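        <p>As a minimal illustration (ours, not the paper's), the signature (AG, F, A) of Example 2 and the set AI of action instances can be spelled out directly; the final set comprehension mirrors the product A × AG described above.

```python
# Hedged sketch: the signature of Example 2 and the set AI of action
# instances, built as the product of A and AG described in the text.
AG = {"A", "B", "C"}
F = {"heads", "opened"} | {k + "(" + ag + ")" for k in ("key", "look") for ag in AG}
A = {"open", "peek", "shout_tails"} | {k + "(" + ag + ")" for k in ("signal", "distract") for ag in AG}
AI = {(a, ag) for a in A for ag in AG}
```
</p>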
        <p>Executability conditions are captured by statements of the form:
executable a if φ
For ontic actions we have:
a causes f if φ
Sensing action statements have the form expressed by:
a determines f
Finally, announcement actions are expressed as follows:
a announces ψ</p>
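        <p>For illustration only, here is a sketch of ours (not the paper's Appendix) of how Example 2 could be written with these four statement forms; the executability conditions after "if" are guesses made up for the example, and the statements are simply held as strings and grouped by their keyword.

```python
# Hedged sketch: Example 2 statements in the four forms above, stored as
# strings; the conditions after "if" are illustrative guesses of ours.
domain = [
    "executable open if B_ag(key(ag))",
    "open causes opened",
    "executable peek if opened and B_ag(look(ag))",
    "peek determines heads",
    "distract(ag) causes not look(ag)",
    "signal(ag) causes look(ag)",
    "shout_tails announces not heads",
]

def group_by_keyword(stmts):
    # bucket each statement by the first recognized keyword it contains
    groups = {"causes": [], "determines": [], "announces": [], "executable": []}
    for s in stmts:
        for kw in groups:
            if kw in s.split():
                groups[kw].append(s)
                break
    return groups
```
</p>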
        <p>In multi-agent domains the execution of an action may or may not change the
beliefs of an agent. This is because, in such domains, each action instance associates
an observability relation to each agent. For example, agent C, who becomes
oblivious as she is distracted by agent A, is not able to see the execution of the
action open⟨A⟩. On the other hand, watching an agent executing a sensing or an
announcement action can change the beliefs of the watcher, e.g., agent
B, who is watching agent A sensing the status of the coin, will know that A</p>
          <p>knows the status of the coin without knowing the status herself. Table 1
summarizes the possible observability relations (full observers, partial observers,
oblivious) for each action type (world-altering, sensing, announcement). Partial
observability for world-altering actions is not admitted since, whenever an agent is
aware of the execution of an ontic action, she must know its effects on the world
as well.</p>
          <p>
            For brevity, we address the reader to [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] for the definition of the transition
function in mA*.
          </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Non-well-founded sets and Possibilities</title>
      <p>
        This section defines the concept of possibility (originally introduced in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]),
based on non-well-founded set theory. This section aims to provide the reader
with enough information to understand what possibilities are.
      </p>
      <p>
        For a more informative introduction the reader is addressed to [
        <xref ref-type="bibr" rid="ref1 ref16 ref6">1, 6, 16</xref>
        ] and
to [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for a logic programming point of view on non-well-founded sets and their
equivalence.
      </p>
      <p>
        We start by giving some fundamental definitions of non-well-founded set theory.
First of all, a well-founded set is described in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] as follows:
Definition 5 (well-founded set). Let E be a set, E′ one of its elements, E″
any element of E′, and so on. A descent is the sequence of steps from E to E′,
E′ to E″, etc. A set is well-founded (or ordinary) when it only gives rise to
finite descents.
      </p>
      <p>Well-founded set theory states that all the sets in the sense of Definition 5 can
be represented in the form of graphs, called pictures (as shown in Figure 1). To
formalize this concept of `picture of a set', however, it is necessary to introduce
the concept of decoration:
Definition 6 (decoration and picture).</p>
      <p>{ A decoration of a graph G=(V, E) is a function δ that assigns to each node
n ∈ V a set δn in such a way that the elements of δn are exactly the sets
assigned to the successors of n, i.e., δn = {δn′ | (n, n′) ∈ E}.
{ If δ is a decoration of a pointed graph (G, n), then (G, n) is a picture of the
set δn.</p>
      <p>Fig. 1. (a) Pictures of von Neumann ordinals, where 0 = ∅,
1 = {∅}, 2 = {∅, {∅}}, 3 = {∅, {∅}, {∅, {∅}}}. (b) Alternative pictures of von
Neumann ordinals 2 and 3.</p>
      <sec id="sec-3-1">
        <title>Non-well-founded sets</title>
        <p>Moreover, in well-founded set theory, Mostowski's lemma holds: "each
well-founded graph3 is a picture of exactly one set".</p>
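        <p>Definition 6 turns into a short computation for well-founded (acyclic) pictures; the following sketch of ours represents sets as Python frozensets and recovers the von Neumann ordinals of Figure 1a from their picture.

```python
# Hedged sketch: computing the decoration of a well-founded (acyclic) graph,
# following Definition 6: delta(n) is the set of decorations of n's successors.
def decoration(succ, n, memo=None):
    if memo is None:
        memo = {}
    if n not in memo:
        memo[n] = frozenset(decoration(succ, m, memo) for m in succ[n])
    return memo[n]

# Picture of the von Neumann ordinal 3 = {0, 1, 2}, as in Figure 1a.
succ = {3: [0, 1, 2], 2: [0, 1], 1: [0], 0: []}
```

Here decoration(succ, 3) yields the frozenset {∅, {∅}, {∅, {∅}}}, i.e., the ordinal 3 itself.</p>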
        <p>
          On the other hand in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] a non-well-founded, or extraordinary set in the sense
of Mirimanoff, is a set that respects Definition 7.
        </p>
        <p>
          Definition 7 (non-well-founded set). A set is non-well-founded (or
extraordinary) when among its descents there are some which are infinite.
In fact, when the Foundation Axiom4 is substituted by the Anti-Foundation
Axiom (AFA), expressed by Aczel in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] as "Every graph has a unique
decoration", the following consequences become true:
{ Every graph is a picture of exactly one set (AFA as is formulated in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]);
{ non-well-founded sets exist, given that a non-well-founded pointed graph has
to be a picture of a non-well-founded set.
Fig. 2. (a) Standard picture of Ω. (b) Unfolding of the picture of Ω.
3 A well-founded graph is a graph that doesn't contain an infinite path n → n′ →
n″ → ... of successors.
4 Expressed in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] as "Only well-founded graphs have decorations".
        </p>
        <p>
          A quick example of this representation can be derived from the set Ω = {Ω}
(Figure 2). We can, in fact, informally define this set by the (singleton) system
of equations x = {x}. Systems of equations and their solutions are described
more formally as follows in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]:
Definition 8 (system of equations). For each class of atoms5 X, a system of
equations in X is a class τ of equations x = X, where x ∈ X and X ⊆ X, such that
τ contains exactly one equation x = X for each x ∈ X. A solution to a system
of equations is a function δ that assigns to each x ∈ (X)6 a set δx such that
δx = {δy | y ∈ X}, where x = X is an equation of τ. If δ is the solution to a
system of equations τ, then the set {δx | x ∈ (X)} is called the solution set of
that system.
        </p>
        <p>
          Since both graphs and systems of equations are representations for
non-well-founded sets, it is natural to investigate their relationships. In particular, it is
interesting to point out how from a graph G=(V, E) it is possible to construct a
system of equations and vice versa. The nodes in G, in fact, can be the set of
atoms (X) and, for each node v ∈ V, an equation is represented by v = {v′ |
(v, v′) ∈ E}. Since each graph has a unique decoration, each system of equations
has a unique solution. This is also true when we consider bisimilar systems of
equations. In fact, we can collapse them into their minimal representation thanks
to the concept of maximum bisimulation as introduced in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Bisimilar labeled
graphs (or Kripke structures) therefore have a unique solution as well, since we
collapse their representations into the minimal one. This idea will be further
expanded in Section 5.2.
        </p>
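        <p>The passage from a graph to a system of equations, and the collapse of bisimilar systems, can be sketched as follows (our own naive partition-refinement check; edge labels are ignored for brevity).

```python
# Hedged sketch: from a graph to a system of equations (each node equals the
# set of its successors) and a naive bisimilarity check that refines one
# initial block by successor-block signatures until the partition stabilizes.
def equations(succ):
    return {v: frozenset(ws) for v, ws in succ.items()}

def bisimilar(succ, a, b):
    nodes = list(succ)
    block = {n: 0 for n in nodes}          # start with a single block
    while True:
        sig = {n: (block[n], frozenset(block[m] for m in succ[n])) for n in nodes}
        labels = {}
        for n in nodes:
            labels.setdefault(sig[n], len(labels))
        new = {n: labels[sig[n]] for n in nodes}
        if new == block:                   # partition is stable
            return block[a] == block[b]
        block = new

# x = {x} and the two-atom unfolding y = {z}, z = {y} describe the same
# non-well-founded set: the Omega of Figure 2.
succ = {"x": ["x"], "y": ["z"], "z": ["y"]}
```

Here x, y and z all collapse into the same block, so the three atoms denote one possibility/set, matching the Ω example of Figure 2.</p>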
        <p>
          Possibilities
Let us introduce the notion of possibility as in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]:
Definition 9 (possibilities). Let AG be a set of agents and F a set of
propositional variables:
{ a possibility u is a function that assigns to each propositional variable f ∈ F
a truth value u(f) ∈ {0, 1} and to each agent ag ∈ AG an information state
u(ag) = σ;
{ an information state σ is a set of possibilities.
        </p>
        <p>In Section 5.1 we will use this concept to describe a `state' of the planning
problem. The intuition behind this idea is that a possibility u is a possible
interpretation of the world and of the agents' beliefs; in fact, u(f) specifies the
truth value of the fluent f in u and u(A) is the set of all the interpretations that
agent A considers possible in u.</p>
        <p>Moreover, a possibility can be pictured as a decoration of a labeled graph
and therefore as a unique solution to a system of equations for possibilities. A
possibility represents the solution to the minimal system of equations in which
all bisimilar systems of equations are collapsed; that is, the possibilities that
represent decorations of bisimilar labeled graphs are bisimilar and can be
represented by the minimal one. This shows that a class of bisimilar labeled graphs
and, therefore, of bisimilar Kripke structures, used by mA* as states, can be
represented by a single possibility.
5 Objects that are not sets and have no further set-theoretic structure.
6 (X) denotes the class of atoms X in which τ is described.</p>
        <p>Definition 10 (equations for possibilities). Given a set of agents AG and
a set of propositional variables F, a system of equations for possibilities in a
class of possibilities X is a set of equations such that for each x ∈ X there exists
exactly one equation of the form x(f) = i, where i ∈ {0, 1}, for each f ∈ F, and
of the form x(ag) = X, where X ⊆ X, for each ag ∈ AG.</p>
        <p>A solution to a system of equations for possibilities is a function δ that assigns
to each atom x a possibility δx in such a way that if x(f) = i is an equation then
δx(f) = i, and if x(ag) = σ is an equation, then δx(ag) = {δy | y ∈ σ}.</p>
        <sec id="sec-3-1-1">
          <title>The action language mAρ</title>
          <p>
            The research on multi-agent epistemic domains, both in logic and planning,
already comprises several theoretical studies [
            <xref ref-type="bibr" rid="ref13 ref14 ref16 ref17 ref2 ref23 ref27 ref28 ref5 ref7">2, 5, 7, 13, 14, 16, 17, 23, 27, 28</xref>
            ] and
also a variety of solvers [
            <xref ref-type="bibr" rid="ref18 ref19 ref21 ref22 ref24 ref29">18,19,21,22,24,29</xref>
            ] even if, to the best of our knowledge,
only [
            <xref ref-type="bibr" rid="ref13 ref21">13, 21</xref>
            ] can reason without limitations on domains described by L^C_AG.
          </p>
          <p>Nevertheless, multi-agent epistemic solvers still have to reason on domains where
the number of fluents and/or agents is limited. This is to reduce the length
of the planning problem's solution that otherwise would require an excessive
quantity of resources (i.e., time and memory). For these reasons, the demand for
computational resources (for example, with respect to classical planning) is
one of the central problems in MEP.</p>
          <p>
            To reduce this gap, several approaches can be used: i) as in [
            <xref ref-type="bibr" rid="ref18 ref19 ref24 ref29">18, 19, 24, 29</xref>
            ] the
planning domain can be limited to a less expressive class of problems. On the
other hand, when generality is required, ii) heuristics can effectively reduce the
resolution process, as shown in [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ]. Finally, another approach to follow could
be iii) to consider alternative representations to Kripke structures; this is what
mAρ tries to do.
          </p>
          <p>Changing the structure is especially important because a different state
representation can lead to a better use of the resources, and to exploiting properties of
the new structure to introduce important functionalities; e.g., mAρ could rely
on the concept of bisimulation to introduce the notion of visited states,
bisimulation being an equality criterion for non-well-founded sets.</p>
          <p>State.
As our main contribution we introduce a modified version of mA*, called mAρ. The
difference is in how a state is defined: in mA* a state is represented as a Kripke
structure, while in mAρ a state is a possibility (Section 4.2). For simplicity, we
maintain the syntax used in mA*.</p>
          <p>
            The strict connection of these two structures is highlighted in [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ] by the
fact that a solution to a system of equations for possibilities (Definition 10)
represents a decoration for a labeled graph, which is essentially a Kripke model.
In [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ], it is also stated that a "possibility corresponds with a whole class of
bisimilar, but structurally different, Kripke models".
          </p>
          <p>Let us define the usage of possibilities as states in a MEP domain where the
set of agents is AG and the set of fluents is F. A possibility u is a function that
assigns to each propositional variable f ∈ F a truth value u(f) ∈ {0, 1} and to
each agent ag ∈ AG a set of possibilities u(ag).</p>
          <p>A state in MEP has to encode the truth value of the fluents, as in classical
planning, and also the beliefs of the agents about fluents and about beliefs themselves.
We described in Section 3 how Kripke structures represent this information. In
Figure 3 we use a possibility, given as a system of equations, to encode a state (of the
domain in Example 2).</p>
          <p>An equation is represented in the form u = {(ag1, σ), (ag2, σ′), ..., f, f′, ...}
where ag1, ag2 ∈ AG, σ, σ′ are sets of possibilities and f, f′ ∈ F. When we write
(ag, σ) we mean that, in u, the agent ag believes that the possibilities in σ are
plausible. On the other hand, only if a fluent f is present in the equation does this
mean that the fluent itself is true in u.
w = {(ag, {w, w′}), (C, {v, v′}), look(ag), key(A), opened}
w′ = {(ag, {w, w′}), (C, {v, v′}), look(ag), key(A), opened, heads}
v = {(A, {v, v′}), (B, {v, v′}), (C, {v, v′}), look(ag), key(A)}
v′ = {(A, {v, v′}), (B, {v, v′}), (C, {v, v′}), look(ag), key(A), heads}
where ag ∈ {A, B}.
(a) System of equations for possibilities.</p>
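          <p>The system of equations above can be transcribed directly into data; in this sketch of ours, primes are written with a 1 (w′ becomes w1) and each possibility maps fluents to booleans and agents to sets of possibility names.

```python
# Hedged sketch: the system of equations of Figure 3 as Python data.
# Each possibility maps every fluent to a truth value and every agent to the
# set of names of the possibilities that the agent considers plausible.
F = ["heads", "opened", "key(A)", "look(A)", "look(B)", "look(C)"]

def poss(fluents, beliefs):
    p = {f: (f in fluents) for f in F}   # fluent part of the possibility
    p.update(beliefs)                    # belief part: agent -> set of names
    return p

state = {
    "w":  poss({"look(A)", "look(B)", "key(A)", "opened"},
               {"A": {"w", "w1"}, "B": {"w", "w1"}, "C": {"v", "v1"}}),
    "w1": poss({"look(A)", "look(B)", "key(A)", "opened", "heads"},
               {"A": {"w", "w1"}, "B": {"w", "w1"}, "C": {"v", "v1"}}),
    "v":  poss({"look(A)", "look(B)", "key(A)"},
               {"A": {"v", "v1"}, "B": {"v", "v1"}, "C": {"v", "v1"}}),
    "v1": poss({"look(A)", "look(B)", "key(A)", "heads"},
               {"A": {"v", "v1"}, "B": {"v", "v1"}, "C": {"v", "v1"}}),
}
```

For instance, opened is true in w, while every possibility that C considers plausible in w has opened false: C does not believe the box is open.</p>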
          <p>(b) Decoration of the pointed labeled graph (G, w).</p>
          <p>It is clear that a possibility corresponds to a decoration of a pointed labeled
graph and therefore to a unique Kripke model up to bisimulation. The
representation through possibilities allows, in our opinion, a clearer and more concise view
of the state. That is because each state is represented by a single possibility; e.g.,
Figure 3 is represented by w = {(ag, {w, w′}), (C, {v, v′}), look(ag), key(A), opened}
where ag ∈ {A, B}.
One of the reasons we chose to use possibilities as states is to exploit this new
structure to introduce new functionalities in MEP. In fact, having the states
described as possibilities, which are strongly related to non-well-founded sets,
helps us to introduce the concept of visited states, a core idea in planning.</p>
          <p>As said before, a possibility, in the sense of a decoration, represents all the
Kripke structures bisimilar to the decoration itself. This means that with
possibilities we can exploit bisimulation to capture the idea of equality between
states. In fact, given two bisimilar decorations (or labeled graphs), these, even
with structural differences, are represented by the same possibility. On the other
hand, this is not true when it comes to Kripke structures. This idea is best
described through graphical representation; therefore we will use Figure 4 to
explain this concept.</p>
          <p>As an example, let us introduce two new actions: flip and tell(ag). The
first one is an ontic action, where an agent inverts the position of the coin; the
observability of this action depends on the fluents look(ag). On the other hand,
tell(ag) means that an agent announces to ag the position in which she thinks
the coin lies while all the other agents are oblivious.</p>
          <p>Assuming that the coin lies tails up and given a sequence of action instances
Δ = peek⟨B⟩; distract(B)⟨C⟩; flip⟨C⟩; tell(B)⟨C⟩,7 we show in Figure 4a the
result of applying Δ in a slightly modified initial state of Example 2 where
C{A,B,C}(opened ∧ ¬look(A)) holds. Figure 4b shows a Kripke structure
that has structural differences with respect to the one in Figure 4a. This means
that these two Kripke structures represent two different states in mA*, even if
they are intuitively the same. On the contrary, if we think in terms of possibilities,
both the Kripke structures of Figure 4 are represented by the possibility
r1 = {(A, {s0, s1}), (B, {r1}), (C, {r1}), look(C), key(A), opened, heads}.</p>
          <p>(a) The resulting Kripke structure after the execution of Δ.</p>
          <p>(b) A Kripke structure bisimilar to the one in Figure 4a.</p>
          <p>
            Thanks to the use of possibilities we can capture bisimilar decorations, which
were considered different in mA*, and, therefore, check whether a state has
already been visited. Next we define the concept of entailment and finally the
transition function in mAρ.
7 We recall that in [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ] an action instance is represented as action⟨ag⟩ where ag is the
agent that executes the action.
          </p>
          <p>We now introduce the concept of entailment w.r.t. possibilities.</p>
          <p>
            Definition 11 (entailment w.r.t. possibilities). Given the fluent formulae
ψ, ψ1, ψ2, the belief formulae φ, φ1, φ2, a fluent f, an agent ag, a group of
agents α, and a possibility u:
(i) u ⊨ f if u(f) = 1;
(ii) u ⊨ ¬ψ if u ⊭ ψ;
(iii) u ⊨ ψ1 ∨ ψ2 if u ⊨ ψ1 or u ⊨ ψ2;
(iv) u ⊨ ψ1 ∧ ψ2 if u ⊨ ψ1 and u ⊨ ψ2;
(v) u ⊨ Bag φ if for each v ∈ u(ag), v ⊨ φ;
(vi) u ⊨ ¬φ if u ⊭ φ;
(vii) u ⊨ φ1 ∨ φ2 if u ⊨ φ1 or u ⊨ φ2;
(viii) u ⊨ φ1 ∧ φ2 if u ⊨ φ1 and u ⊨ φ2;
(ix) u ⊨ Eα φ if u ⊨ Bag φ for all ag ∈ α;
(x) u ⊨ Cα φ if u ⊨ Eα^k φ for every k ≥ 0, where Eα^0 φ = φ and
Eα^(k+1) φ = Eα(Eα^k φ).
Finally we introduce the transition function for mA . In de ning this transition
function we made some assumptions: i) we consider that the given initial state
also speci es which world is the pointed one (as said in [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]), this allow us to relax
the description of in the case of announcements or sensing actions. Moreover
ii) we do not take into consideration the case in which an agent can have false
beliefs8 because is still an open question how to deal with them in MEP; we will
try to address this problem in future works. The transition function relative to
the domain D is D : AI ! [f;g where is the set of all the possibilities.
          </p>
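Clauses (i)–(viii) of Definition 11 can be read operationally: evaluating a formula against a (possibly circular) possibility terminates because each recursive call consumes the formula, never the possibility. A minimal Python sketch, under a hypothetical encoding of possibilities as dicts and formulae as nested tuples (the group clauses (ix)–(x), which need a fixpoint over E_α^k for common knowledge, are omitted):

```python
def entails(u, phi):
    """Entailment in the spirit of Definition 11. u is a dict with
    u["fluents"] (set of true fluent names) and u["beliefs"][ag]
    (list of possibilities, possibly containing u itself); phi is a
    nested tuple. Hypothetical encoding, for illustration only."""
    op = phi[0]
    if op == "f":                     # (i): u |= f iff u(f) = 1
        return phi[1] in u["fluents"]
    if op == "not":                   # (ii)/(vi)
        return not entails(u, phi[1])
    if op == "or":                    # (iii)/(vii)
        return entails(u, phi[1]) or entails(u, phi[2])
    if op == "and":                   # (iv)/(viii)
        return entails(u, phi[1]) and entails(u, phi[2])
    if op == "B":                     # (v): every v in u(ag) satisfies phi
        return all(entails(v, phi[2]) for v in u["beliefs"][phi[1]])
    raise ValueError("unknown operator: %s" % op)

# Usage: a circular possibility where agent A correctly believes heads.
u = {"fluents": {"heads"}, "beliefs": {}}
u["beliefs"]["A"] = [u]
assert entails(u, ("B", "A", ("f", "heads")))
```

Note that the recursion in clause (v) descends into u(ag), which may contain u itself; termination is nevertheless guaranteed since the formula argument strictly shrinks.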
          <p>For the sake of readability, the abbreviations used in the following definition
are explained at the end of it.</p>
          <p>Definition 12 (transition function for mAρ). Let a ∈ AI and a possibility
u be given. If a is not executable in u then Φ_D(a, u) = ∅; otherwise, if a is executable
in u:</p>
          <p>Φ_D(a, u) = v if a is an ontic action instance, where
v(f) = u(f) if f ≠ caused(a);
v(f) = caused(a) if f = caused(a);
v(ag) = u(ag) if ag ∈ O_D;
v(ag) = ⋃_{w ∈ u(ag)} Φ_D(a, w) if ag ∈ F_D.</p>
          <p>Φ_D(a, u) = v if a senses the fluent f, where
v = ∅ if sensed(a)[f] ≠ u(f);
v(ag) = u(ag) if ag ∈ O_D and sensed(a)[f] = u(f);
v(ag) = ⋃_{w ∈ u(ag)} Φ_D(a, w) if ag ∈ F_D and sensed(a)[f] = u(f);
v(ag) = ⋃_{w ∈ u(ag)} (Φ_D(a, w) ∪ Φ_D(¬a, w)) if ag ∈ P_D and sensed(a)[f] = u(f).</p>
          <p>Φ_D(a, u) = v if a announces the fluent formula ψ, where
v = ∅ if u ⊭ ψ;
v(ag) = u(ag) if ag ∈ O_D and u ⊨ ψ;
v(ag) = ⋃_{w ∈ u(ag)} Φ_D(a, w) if ag ∈ F_D and u ⊨ ψ;
v(ag) = ⋃_{w ∈ u(ag)} (Φ_D(a, w) ∪ Φ_D(¬a, w)) if ag ∈ P_D and u ⊨ ψ.</p>
          <p>where:
– caused(a) is the fluent modified by the action instance a;
– F_D, P_D, O_D identify the fully observant, partially observant and oblivious agents
respectively;
– sensed(a)[f] represents the truth value of the fluent f determined by a, i.e., u(f);
– ¬a describes the action that senses the opposite of a; namely sensed(¬a)[f] = ¬sensed(a)[f].</p>
          <p>⁸ An agent ag has a false belief about φ in state s if s ⊨ φ and s ⊨ B_ag ¬φ.</p>
          <p>Note that sensing and announcement actions generate the empty set when the
action effects do not respect the fluents' truth values of the possibility where
the action is executed. That is because an epistemic action (i.e., announcement or
sensing) cannot change the state of the world but only the beliefs of the agents.</p>
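To see how the recursion in Definition 12 behaves on circular possibilities, consider the ontic case. A naive recursive implementation would loop forever, since a fully observant agent's possibilities may contain the very possibility being updated; memoising on the identity of the input makes the recursion terminate and makes the result circular in the same way. A sketch under a hypothetical dict encoding (`fully`/`oblivious` stand for F_D and O_D; this is our illustration, not the paper's implementation):

```python
def ontic(caused, value, fully, oblivious, u, memo=None):
    """Apply an ontic action that sets fluent `caused` to `value` in the
    possibility u. Memoisation on id(u) handles non-well-founded
    (circular) structures."""
    if memo is None:
        memo = {}
    if id(u) in memo:                 # already being rewritten: reuse it
        return memo[id(u)]
    v = {"fluents": set(u["fluents"]), "beliefs": {}}
    memo[id(u)] = v
    if value:
        v["fluents"].add(caused)      # f = caused(a): new truth value
    else:
        v["fluents"].discard(caused)
    for ag in fully:                  # ag in F_D: beliefs updated recursively
        v["beliefs"][ag] = [ontic(caused, value, fully, oblivious, w, memo)
                            for w in u["beliefs"].get(ag, [])]
    for ag in oblivious:              # ag in O_D: beliefs unchanged
        v["beliefs"][ag] = u["beliefs"].get(ag, [])
    return v

# Usage: agent A is fully observant of an action that sets "opened".
u = {"fluents": {"key(A)"}, "beliefs": {}}
u["beliefs"]["A"] = [u]
v = ontic("opened", True, fully=["A"], oblivious=[], u=u)
assert "opened" in v["fluents"]
assert v["beliefs"]["A"][0] is v      # the result is circular, like the input
```

The memo table plays the role of the unique solution of the system of equations: every occurrence of the old possibility is replaced by the same updated one.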
          <p>As an example we present the execution of the sequence distract(C)⟨A⟩; open⟨A⟩
on the initial state of Example 2, encoded by the possibility u in Figure 5a.</p>
          <p>u = {(ag, {u, u′}), look(ag), key(A)}
u′ = {(ag, {u, u′}), look(ag), key(A), heads}
(a) The initial state u.</p>
          <p>v = {(ag, {v, v′}), look(A), look(B), key(A)}
v′ = {(ag, {v, v′}), look(A), look(B), key(A), heads}
(b) The state v after executing distract(C)⟨A⟩.</p>
          <p>w = {(A, {w, w′}), (B, {w, w′}), (C, {v, v′}), look(A), look(B), key(A), opened}
w′ = {(A, {w, w′}), (B, {w, w′}), (C, {v, v′}), look(A), look(B), key(A), opened, heads}
(c) The state w after executing distract(C)⟨A⟩; open⟨A⟩.</p>
          <p>In this paper we investigated an alternative to Kripke structures as a
representation for multi-agent epistemic planning states. In doing so we presented mAρ, an
action language for MEP based on possibilities, thus grounding MEP in
non-well-founded set theory. Moreover, we exploited possibilities to define a stronger
concept of equality on states, collapsing all the bisimilar states into the same
possibility. Finally, since with mAρ it is more direct to have an implicit state representation,
using possibilities helps in reducing the search-space dimension.</p>
          <p>
            In the near future we intend to implement a planner for mAρ and to study
alternative representations. In particular we plan to: i) exploit more set-based
operations, especially for the entailment of group operators; ii) formalize the
concept of non-consistent belief for mAρ; iii) investigate more thoroughly the
connection between Kripke structures and non-well-founded sets; iv) examine
the concept of bisimulation as equality between epistemic states; and v) finally
consider other alternatives to Kripke structures, e.g., OBDDs [
            <xref ref-type="bibr" rid="ref11 ref9">9, 11</xref>
            ].
          </p>
          <p>mA* and mAρ comparison</p>
          <p>For a clearer comparison we show the execution, on both mA* and mAρ,
of the sequence of action instances Δc = distract(C)⟨A⟩; open⟨A⟩; peek⟨A⟩, which
leads to the desired goal, in the domain expressed in Example 2. With this we
want to give a graphical explanation of both the transition functions and the
state spaces defined by the two languages. Each state in mA* will be represented by a
Kripke structure, while in mAρ it will be a possibility (expanded to its respective
system of equations for clarity).</p>
          <p>The observability relations of each action instance in Δc are expressed in the
following table:
distract(C)⟨A⟩: F_D = {A, B, C}; P_D = ∅; O_D = ∅
open⟨A⟩: F_D = {A, B}; P_D = ∅; O_D = {C}
peek⟨A⟩: F_D = {A}; P_D = {B}; O_D = {C}</p>
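The same relations can be written down directly as data; the encoding below is only illustrative (the names and structure are ours, not the language's syntax):

```python
# Illustrative encoding of the observability table above: each action
# instance of the sequence maps to its fully observant (FD), partially
# observant (PD) and oblivious (OD) agents.
observability = {
    "distract(C)<A>": {"FD": {"A", "B", "C"}, "PD": set(), "OD": set()},
    "open<A>":        {"FD": {"A", "B"},      "PD": set(), "OD": {"C"}},
    "peek<A>":        {"FD": {"A"},           "PD": {"B"}, "OD": {"C"}},
}

# Sanity check: the three classes never overlap for an instance.
for groups in observability.values():
    assert not (groups["FD"] & groups["PD"])
    assert not (groups["FD"] & groups["OD"])
    assert not (groups["PD"] & groups["OD"])
```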
          <p>The initial state, based on Example 2, is defined by the conditions:
initially C(key(A))
initially C(¬key(B))
initially C(¬key(C))
initially C(¬opened)
initially C(¬B_ag heads ∧ ¬B_ag ¬heads) for ag ∈ {A, B, C}
initially C(look(ag)) for ag ∈ {A, B, C}
initially ¬heads</p>
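Since the only uncertainty left by these conditions concerns heads, they induce exactly two possibilities, each accessible to every agent from both. A sketch under a hypothetical dict encoding (for illustration only):

```python
# The "initially" conditions leave only the fluent heads undetermined,
# yielding two possibilities; every agent considers both possible.
common = {"key(A)", "look(A)", "look(B)", "look(C)"}
u = {"fluents": set(common), "beliefs": {}}               # heads false
u_prime = {"fluents": common | {"heads"}, "beliefs": {}}  # heads true
for ag in ("A", "B", "C"):
    u["beliefs"][ag] = [u, u_prime]        # no agent knows the coin position
    u_prime["beliefs"][ag] = [u, u_prime]

# The pointed possibility is u, since the coin initially lies tails up.
assert "heads" not in u["fluents"]
```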
          <p>Finally, the goal that both the states in Figure 9 entail is expressed by the
following formula:</p>
          <p>B_A ¬heads ∧ B_A(B_B(B_A heads ∨ B_A ¬heads)) ∧
B_C[B_B(B_A heads ∨ B_A ¬heads) ∧ (¬B_B heads ∧ ¬B_B ¬heads) ∧ (¬B_ag heads ∧ ¬B_ag ¬heads)]
where ag ∈ {A, B, C}.</p>
          <p>M0[π](s0) = {look(ag), key(A)}
M0[π](s1) = {look(ag), key(A), heads}
where ag ∈ {A, B, C}.
(a) The initial Kripke structure (M0, s0).</p>
          <p>u = {(ag, {u, u′}), look(ag), key(A)}
u′ = {(ag, {u, u′}), look(ag), key(A), heads}
where ag ∈ {A, B, C}.
(b) The initial possibility u.</p>
          <p>M1[π](p0) = {look(A), look(B), key(A)}
M1[π](p1) = {look(A), look(B), key(A), heads}
(a) The Kripke structure (M1, p0), obtained after the execution
of distract(C)⟨A⟩ in (M0, s0) (Figure 6a).</p>
          <p>v = {(ag, {v, v′}), look(A), look(B), key(A)}
v′ = {(ag, {v, v′}), look(A), look(B), key(A), heads}
where ag ∈ {A, B, C}.
(b) Possibility v, obtained after the execution of
distract(C)⟨A⟩ in u (Figure 6b).</p>
          <p>M2[π](q0) = {look(ag), key(A), opened}
M2[π](q1) = {look(ag), key(A), opened, heads}
M2[π](p0) = M1[π](p0)
M2[π](p1) = M1[π](p1)
where ag ∈ {A, B}.
(a) The Kripke structure (M2, q0), obtained after the execution of
open⟨A⟩ in (M1, p0) (Figure 7a).</p>
          <p>w = {(A, {w, w′}), (B, {w, w′}), (C, {v, v′}), look(ag), key(A), opened}
w′ = {(A, {w, w′}), (B, {w, w′}), (C, {v, v′}), look(ag), key(A), opened, heads}
where ag ∈ {A, B} and v, v′ are the possibilities represented in Figure 7b.
(b) Possibility w, obtained after the execution of open⟨A⟩ in v (Figure 7b).</p>
          <p>M3[π](r0) = {look(ag), key(A), opened}
M3[π](r1) = {look(ag), key(A), opened, heads}
M3[π](p0) = M1[π](p0)
M3[π](p1) = M1[π](p1)
where ag ∈ {A, B}.
(a) The Kripke structure (M3, q0), obtained after the execution of
peek⟨A⟩ in (M2, q0) (Figure 8a).</p>
          <p>z = {(A, {z}), (B, {z, z′}), (C, {v, v′}), look(ag), key(A), opened}
z′ = {(A, {z′}), (B, {z, z′}), (C, {v, v′}), look(ag), key(A), opened, heads}
where ag ∈ {A, B} and the possibilities v, v′ are represented in Figure 7b.
(b) Possibility z, obtained after the execution of peek⟨A⟩ in w (Figure 8b).</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aczel</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Non-well-founded sets</article-title>
          .
          <source>CSLI Lecture Notes</source>
          ,
          <volume>14</volume>
          (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Aucher</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bolander</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Undecidability in epistemic planning</article-title>
          . pp.
          <volume>27</volume>
          {
          <issue>33</issue>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Aumann</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brandenburger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.:
          <article-title>Epistemic conditions for Nash equilibrium</article-title>
          .
          <string-name>
            <surname>ECONOMETRICA-EVANSTON</surname>
            <given-names>ILL</given-names>
          </string-name>
          -
          <volume>63</volume>
          ,
          <issue>1161</issue>
          {
          <fpage>1161</fpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Balliu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dam</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Le</given-names>
            <surname>Guernic</surname>
          </string-name>
          , G.:
          <article-title>Epistemic temporal logic for information flow security</article-title>
          .
          <source>In: Proceedings of the ACM SIGPLAN 6th Workshop on Programming Languages and Analysis for Security</source>
          . pp.
          <volume>6</volume>
          :
          <issue>1</issue>
          {6:
          <fpage>12</fpage>
          . PLAS '11,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2011</year>
          ). https://doi.org/10.1145/2166956.2166962
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Baral</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Son</surname>
            ,
            <given-names>T.C.:</given-names>
          </string-name>
          <article-title>An action language for multi-agent domains: Foundations</article-title>
          .
          <source>CoRR abs/1511</source>
          .
          <year>01960</year>
          (
          <year>2015</year>
          ), http://arxiv.org/abs/1511.01960
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Barwise</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Etchemendy</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The liar: An essay on truth and circularity</article-title>
          . Oxford University Press (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Bolander</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jensen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwarzentruber</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Complexity results in epistemic planning</article-title>
          .
          <source>vol. 2015-January</source>
          , pp.
          <volume>2791</volume>
          {
          <issue>2797</issue>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Bolander</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andersen</surname>
            ,
            <given-names>M.B.</given-names>
          </string-name>
          :
          <article-title>Epistemic planning for single-and multi-agent systems</article-title>
          .
          <source>Journal of Applied Non-Classical Logics</source>
          <volume>21</volume>
          (
          <issue>1</issue>
          ),
          <volume>9</volume>
          {
          <fpage>34</fpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Bryant</surname>
          </string-name>
          , R.E.:
          <article-title>Symbolic boolean manipulation with ordered binary-decision diagrams</article-title>
          .
          <source>ACM Computing Surveys (CSUR) 24(3)</source>
          ,
          <volume>293</volume>
          {
          <fpage>318</fpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. Carbonell Jr, J.G.:
          <article-title>Politics: Automated ideological reasoning</article-title>
          .
          <source>Cognitive Science</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <volume>27</volume>
          {
          <fpage>51</fpage>
          (
          <year>1978</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Cimatti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roveri</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Conformant planning via symbolic model checking</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          <volume>13</volume>
          ,
          <volume>305</volume>
          {
          <fpage>338</fpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Dovier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Logic programming and bisimulation</article-title>
          . In: Vos,
          <string-name>
            <given-names>M.D.</given-names>
            ,
            <surname>Eiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Lierler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Toni</surname>
          </string-name>
          ,
          <string-name>
            <surname>F</surname>
          </string-name>
          . (eds.)
          <source>ICLP. CEUR Workshop Proceedings</source>
          , vol.
          <volume>1433</volume>
          .
          <string-name>
            <surname>CEUR-WS.org</surname>
          </string-name>
          (
          <year>2015</year>
          ), http://ceur-ws.
          <source>org/</source>
          Vol-1433
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. van Eijck,
          <string-name>
            <surname>J.</surname>
          </string-name>
          :
          <article-title>Dynamic epistemic modelling</article-title>
          .
          <source>CWI. Software Engineering [SEN]</source>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Fagin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halpern</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          :
          <article-title>Reasoning about knowledge and probability</article-title>
          .
          <source>Journal of the ACM (JACM) 41(2)</source>
          ,
          <volume>340</volume>
          {
          <fpage>367</fpage>
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gerbrandy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Groeneveld</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Reasoning about information change</article-title>
          .
          <source>Journal of Logic, Language and Information</source>
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <volume>147</volume>
          {
          <fpage>169</fpage>
          (
          <year>1997</year>
          ). https://doi.org/10.1023/A:1008222603071
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Gerbrandy</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          : Bisimulations on planet Kripke.
          <source>Inst. for Logic</source>
          , Language and Computation, Univ. van Amsterdam (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Van der Hoek</surname>
          </string-name>
          , W., Meyer, J.:
          <article-title>A complete epistemic logic for multiple agents</article-title>
          .
          <source>In: Epistemic Logic and the Theory of Games and Decisions</source>
          , pp.
          <volume>35</volume>
          {
          <fpage>68</fpage>
          . Springer (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , Liu,
          <string-name>
            <surname>Y.</surname>
          </string-name>
          :
          <article-title>A general multi-agent epistemic planner based on higher-order belief change</article-title>
          . pp.
          <volume>1093</volume>
          {
          <issue>1101</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Kominis</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geffner</surname>
          </string-name>
          , H.:
          <article-title>Beliefs in multiagent planning: From one agent to many</article-title>
          .
          <source>In: Proc. ICAPS</source>
          . pp.
          <volume>147</volume>
          {
          <issue>155</issue>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Kripke</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          :
          <article-title>Semantical considerations on modal logic</article-title>
          .
          <source>Acta Philosophica Fennica</source>
          <volume>16</volume>
          (
          <year>1963</year>
          ),
          <volume>83</volume>
          {
          <fpage>94</fpage>
          (
          <year>1963</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fabiano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Son</surname>
            ,
            <given-names>T.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pontelli</surname>
          </string-name>
          , E.:
          <article-title>EFP and PG-EFP: Epistemic forward search planners in multi-agent domains</article-title>
          . In: Twenty-Eighth International Conference on
          <source>Automated Planning and Scheduling</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <article-title>: Multi-agent epistemic planning with common knowledge</article-title>
          .
          <source>In: IJCAI</source>
          . pp.
          <year>1912</year>
          {
          <year>1920</year>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Moore</surname>
            ,
            <given-names>R.C.</given-names>
          </string-name>
          :
          <article-title>Reasoning about knowledge and action</article-title>
          .
          <source>In: Readings in Artificial Intelligence</source>
          , pp.
          <volume>473</volume>
          {
          <fpage>477</fpage>
          .
          <string-name>
            <surname>Elsevier</surname>
          </string-name>
          (
          <year>1981</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Muise</surname>
            ,
            <given-names>C.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Belle</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Felli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McIlraith</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pearce</surname>
            ,
            <given-names>A.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sonenberg</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Planning over multi-agent epistemic states: A classical planning approach</article-title>
          .
          <source>In: Proc. of AAAI</source>
          . pp.
          <volume>3327</volume>
          {
          <issue>3334</issue>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Prakken</surname>
          </string-name>
          , H.:
          <article-title>Logical tools for modelling legal argument: a study of defeasible reasoning in law</article-title>
          , vol.
          <volume>32</volume>
          . Springer Science &amp; Business
          <string-name>
            <surname>Media</surname>
          </string-name>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norvig</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Artificial Intelligence: A Modern Approach</article-title>
          . Prentice Hall Press, Upper Saddle River, NJ, USA, 3rd edn. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Van Ditmarsch</surname>
            , H., van Der Hoek,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kooi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Dynamic epistemic logic</article-title>
          , vol.
          <volume>337</volume>
          . Springer Science &amp; Business
          <string-name>
            <surname>Media</surname>
          </string-name>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Van Harmelen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Porter</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Handbook of knowledge representation</article-title>
          , vol.
          <volume>1</volume>
          .
          <string-name>
            <surname>Elsevier</surname>
          </string-name>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Wan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>A complete epistemic planner without the epistemic closed world assumption</article-title>
          . In: IJCAI (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>