<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Dynamic Epistemic Logic for Implicit and Explicit Beliefs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Fernando R. Velázquez-Quesada</string-name>
        </contrib>
      </contrib-group>
      <fpage>65</fpage>
      <lpage>83</lpage>
      <abstract>
<p>The dynamic turn in Epistemic Logic is based on the idea that notions of information should be studied together with the actions that modify them. Dynamic epistemic logics have explored how knowledge and beliefs change as a consequence of, among others, acts of observation and upgrade. Nevertheless, the omniscient nature of the represented agents has kept finer actions outside the picture, the most important being the action of inference. Following proposals for representing non-omniscient agents, recent works have explored how implicit and explicit knowledge change as a consequence of acts of observation, inference, consideration and even forgetting. The present work proposes a further step towards a common framework for representing finer notions of information and their dynamics. We propose a combination of existing works in order to represent implicit and explicit beliefs. Then, after adapting definitions for the actions of upgrade and retraction, we discuss the action of inference on beliefs, analyzing its differences with respect to inference on knowledge and proposing a rich system for its representation.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Epistemic Logic [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and its possible worlds semantics is a powerful and
compact framework for representing an agent’s information. Its dynamic
versions [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] have emerged to analyze not only information in its knowledge
and belief versions, but also the actions that modify them. Nevertheless,
agents represented in this framework are logically omniscient, that is, their
information is closed under logical consequence. This property, useful in
some applications, hides finer reasoning actions that are crucial in some
others, the most important being that of inference.
      </p>
      <p>
        Based on the awareness approach of [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], several works have explored
dynamics of information for non-omniscient agents. In a propositional
dynamic logic (PDL) style, some of them have explored how the act of inference
modifies an agent’s explicit knowledge [15; 22]. In a dynamic epistemic style,
some others have explored how the acts of observation, inference,
consideration and forgetting affect implicit and explicit knowledge [5; 18; 10; 13].
      </p>
      <p>The present work follows the previous ones, now focussing on the notion
of beliefs. We combine approaches of the existing literature, proposing a
setting for representing the notions of implicit and explicit belief (Section
2). Then we look into the dynamics of these notions; first, by adapting
existing proposals to define the actions of explicit upgrade (explicit revision)
and retraction (Section 3), and second, by discussing the action of inference
on beliefs and its differences with inference on knowledge, and by proposing a
rich system for its representation (Section 4).</p>
      <p><bold>2 Modelling implicit and explicit beliefs</bold></p>
      <p>This section recalls a framework for implicit and explicit information and
a framework for beliefs. By combining them, we will get our model for
representing implicit/explicit beliefs. But before going into their details, we
recall the framework on which all the others are based.</p>
      <p>
        <bold>Epistemic Logic.</bold> The frameworks of this section are based on that of
Epistemic Logic (EL; [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]). Given a set of atomic propositions P, the EL
language extends the propositional one with formulas of the form □ϕ: “the
agent is informed about ϕ”. Though there are several possibilities, the classical
semantic models for EL-formulas are Kripke models: tuples M = ⟨W, R, V⟩ with
W a non-empty set of possible worlds, V : W → ℘(P) an atomic valuation
function indicating which atomic propositions are true at each world, and
R ⊆ (W × W) an accessibility relation indicating which worlds the agent
considers possible from each one of them.
      </p>
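<p>The Kripke semantics just described can be prototyped in a few lines. The following Python sketch is ours, not part of the paper's formal apparatus: a model is a triple (W, R, V), and formulas are represented as truth functions on pointed models.</p>
<preformat>
```python
def box(M, w, phi):
    """(M, w) satisfies box-phi: phi holds at every world the agent
    considers possible from w, i.e. at every R-successor of w."""
    W, R, V = M
    return all(phi(M, u) for u in W if (w, u) in R)

def atom(p):
    """Truth function of an atomic proposition p, read off the valuation V."""
    return lambda M, w: p in M[2][w]

# Two worlds; from w1 the agent cannot rule out either of them.
W = {"w1", "w2"}
R = {("w1", "w1"), ("w1", "w2")}
V = {"w1": {"p", "q"}, "w2": {"p"}}
M = (W, R, V)

print(box(M, "w1", atom("p")))   # True: p holds at every accessible world
print(box(M, "w1", atom("q")))   # False: q fails at w2
```
</preformat>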
      <p>Formulas are evaluated on pointed models (M, w), with M a Kripke model
and w ∈ W a given evaluation point. Boolean connectives are interpreted as
usual; the key clause is the one for □ϕ, indicating that the agent is informed
about ϕ at w iff ϕ is true in all the worlds the agent considers possible from w:
(M, w) ⊩ □ϕ iff for all u ∈ W, Rwu implies (M, u) ⊩ ϕ</p>
      <p><bold>2.1 Implicit and explicit information</bold></p>
      <p>
        <bold>Non-omniscient agents.</bold> The formula □(ϕ → ψ) → (□ϕ → □ψ) is valid in
Kripke models: the agent’s information is closed under logical consequence.
This becomes obvious when we realize that each possible world stands for
a maximally consistent set of formulas. So if both □(ϕ → ψ) and □ϕ hold at
world w, both ϕ → ψ and ϕ are true in all worlds R-reachable from w. But
then ψ also holds in all such worlds, and therefore □ψ holds at w. Usually
the discussion revolves around whether this is a reasonable assumption
for ‘real’ agents. Even computational agents may not have this property,
since they may lack the resources (space and/or time) to derive all the logical
consequences of their information [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        One of the most influential solutions to this omniscience problem is
awareness logic [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. This approach follows the idea of making a difference
between implicit (potential) information, what the agent can eventually get, and
explicit information, what the agent actually has [23; 25; 24; 27]. The main
observation is that, in order to have explicit information about some formula
ϕ, besides having it as implicit information, the agent should be aware of ϕ.
      </p>
      <p>Syntactically, awareness logic extends the EL language with formulas
of the form A ϕ: “the agent is aware of ϕ”. Semantically, it extends Kripke
models with a function A that assigns a set of formulas to the agent in each
possible world. The new formulas are evaluated in the following way:
(M, w) ⊩ A ϕ iff ϕ ∈ A(w)</p>
      <p>Implicit information about ϕ is defined as □ϕ, while explicit information
is defined as □ϕ ∧ A ϕ. Although implicit information is still closed under
logical consequence, explicit information is not. This follows from the fact
that, unlike the possible worlds, the A-sets do not need to have any
closure property; in particular, {ϕ → ψ, ϕ} ⊆ A(w) does not imply ψ ∈ A(w).</p>
      <p><bold>Agents with reasoning abilities.</bold> Still, though a ‘real’ agent’s information
does not need to be closed under logical consequence, it does not need to
be static either. The more interesting approach for us is the one in which the agent
can extend her explicit information by performing adequate actions. But which are
these actions, and what does the agent need in order to perform them?</p>
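<p>The failure of closure for explicit information is easy to reproduce concretely. In the Python sketch below (our own naming, not the paper's), the agent implicitly has p, p → q and q, and is aware of the first two, yet lacks explicit information about q:</p>
<preformat>
```python
def implicit(M, w, phi):
    """Box-phi: phi holds at every R-successor of w."""
    W, R, V, A = M
    return all(phi(M, u) for u in W if (w, u) in R)

def explicit(M, w, name, phi):
    """Implicit information plus awareness of the formula (by its name)."""
    return implicit(M, w, phi) and name in M[3][w]

W = {"w"}
R = {("w", "w")}
V = {"w": {"p", "q"}}              # p and q are both true
A = {"w": {"p", "p -> q"}}         # ...but q is missing from the A-set
M = (W, R, V, A)

p = lambda M, u: "p" in M[2][u]
q = lambda M, u: "q" in M[2][u]
imp = lambda M, u: (not p(M, u)) or q(M, u)    # the formula p -> q

print(explicit(M, "w", "p", p))            # True
print(explicit(M, "w", "p -> q", imp))     # True
print(implicit(M, "w", q))                 # True
print(explicit(M, "w", "q", q))            # False: implicit, not explicit
```
</preformat>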
      <p>
        In [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the author proposes a framework in which the actions available
to the agent are different rules (e.g., modus ponens, conjunction
elimination), each one of them represented by a relation between worlds that should
be faithful to the rule’s spirit (e.g., the modus ponens relation should connect
worlds with an implication and its antecedent with worlds augmented with
the consequent). This yields an agent that does not need to be omniscient,
but still is able to perform inferences.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] the author goes one step further: a rule cannot be used by an
agent unless the rule itself is also part of her explicit information. For
example, for two worlds to be connected by the modus ponens relation, the
initial one should have not only an implication and its antecedent, but also
the modus ponens rule itself.
      </p>
      <p>The combination of the mentioned ideas has produced models for
representing implicit and explicit knowledge ([5; 28; 13; 18; 10] among others).
But the notion of belief is different, as we discuss in the next subsection.</p>
      <p><bold>2.2 Modelling beliefs</bold></p>
      <p>
        <bold>The KD45 approach.</bold> For modelling knowledge in EL, the accessibility
relation R is usually asked to be at least reflexive (making □ϕ → ϕ valid:
if the agent knows ϕ, then ϕ is true), and often to be also transitive and
euclidean (giving the agent full positive and negative introspection). Beliefs
can be represented in a similar way, now asking for R to satisfy weaker
properties, the crucial one following the idea that, though beliefs do not
need to be true, we can expect them to be consistent. This is achieved by
asking for the relation to be serial, making the D axiom ¬□⊥ valid. Full
introspection is usually assumed, yielding the classical KD45 approach.</p>
      <p><bold>Belief as what is most plausible.</bold> But beliefs are different from knowledge.
Intuitively, we do not believe something because it is true in all possible
situations; we believe it because it is true in those we consider most likely
to be the case [19; 26]. This idea has led to the development of variants of
Kripke models [12; 4; 3]. Here we recall the plausibility models of [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>A plausibility model is a Kripke model in which the accessibility
relation, denoted now by ≤, is interpreted as a plausibility relation ordering
possible worlds. This relation is assumed to be a preorder (a reflexive
and transitive relation). Moreover, since the idea is to define the agent’s
beliefs as what is true in the most plausible worlds from the evaluation
point, ≤ should satisfy an important extra property: for any possible world
w, the set of worlds that are better than w among those comparable to
it should have maximal worlds. In order to state this property formally,
denote by Vw the set of worlds comparable to w (its comparability class:
Vw := {u | w ≤ u or u ≤ w }) and by Max≤(U) the set of ≤-maximal worlds of
U (Max≤(U) := {v ∈ U | for all u ∈ U, u ≤ v }). Then, in a plausibility model,
the accessibility relation ≤ is asked to be a locally well-preorder: a reflexive
and transitive relation such that, for each comparability class Vw and for
every non-empty U ⊆ Vw, Max≤(U) ≠ ∅. Note how the existence of
maximal elements in every U ⊆ Vw implies the already required reflexivity, but
also connectedness inside Vw. In particular, if two worlds w2 and w3 are more
plausible than a given w1 (w1 ≤ w2 and w1 ≤ w3), then these two worlds
should be ≤-related (w2 ≤ w3 or w3 ≤ w2 or both).</p>
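<p>On finite models these defining conditions can be checked by brute force. The sketch below is our code (the paper gives only the mathematical definitions): it computes Max of a subset and tests whether a finite relation is a locally well-preorder by enumerating the subsets of each comparability class.</p>
<preformat>
```python
from itertools import combinations

def max_leq(leq, U):
    """Max(U): the worlds v in U such that u is at most as plausible as v
    for every u in U (the definition given in the text)."""
    return {v for v in U if all((u, v) in leq for u in U)}

def comparability_class(leq, W, w):
    """V_w: the worlds comparable to w."""
    return {u for u in W if (w, u) in leq or (u, w) in leq}

def is_locally_well_preorder(leq, W):
    """Reflexive, transitive, and every non-empty subset of every
    comparability class has maximal elements (W is finite here)."""
    if any((w, w) not in leq for w in W):
        return False
    if any((a, c) not in leq
           for (a, b) in leq for (b2, c) in leq if b == b2):
        return False
    for w in W:
        Vw = sorted(comparability_class(leq, W, w))
        for r in range(1, len(Vw) + 1):
            for U in combinations(Vw, r):
                if not max_leq(leq, set(U)):
                    return False
    return True

W = {"w1", "w2", "w3"}
# w1 is the least plausible world; w3 the most plausible one.
leq = {(a, a) for a in W} | {("w1", "w2"), ("w2", "w3"), ("w1", "w3")}
print(is_locally_well_preorder(leq, W))   # True
print(max_leq(leq, W))                    # {'w3'}
```
</preformat>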
      <p>Interestingly, the agent’s indistinguishability relation can be derived
from the plausibility one. If two worlds are ≤-related, then even though the
agent considers one of them more plausible than the other, she cannot
discard either of them when the other one is given. In other words, worlds
that are ≤-related are in fact epistemically indistinguishable.</p>
      <p>For the language we have two options. (In fact, the mentioned works
[12; 4; 3] use the notion of conditional belief as the primitive one, rather
than plain belief. We have chosen to stick with the notion of plain belief
throughout the present notes, leaving an analysis of the notions of
implicit/explicit conditional beliefs for further work.) The first option is to extend
the propositional language with formulas of the form Bϕ, semantically
interpreted as
(M, w) ⊩ Bϕ iff for all u ∈ W, u ∈ Max≤(R≤(w)) implies (M, u) ⊩ ϕ,
where R≤(w) := {u ∈ W | w ≤ u}.
The second option is to use a standard modal language with [≤] standing
for the relation ≤, and then define beliefs in terms of it. Given the properties
of ≤ (in particular, reflexivity, transitivity and connectedness), it is not hard
to see that ϕ is true in the most plausible worlds from w iff w can see a better
world from which all successors are ϕ-worlds. This yields the following
definition for “the agent believes ϕ”:</p>
      <p>Bϕ := ⟨≤⟩[≤]ϕ</p>
      <p>Our framework for representing implicit and explicit beliefs combines the
mentioned ideas. The language has two components: formulas and rules.
Formulas are given by a propositional language extended, first, with the
modalities ⟨≤⟩ and ⟨∼⟩, and second, with formulas of the form A ϕ and R ρ, where
ϕ is a formula and ρ a rule. Rules, on the other hand, are pairs consisting
of a set of formulas, the rule’s premises, and a single formula, the rule’s
conclusion. The formal definition of our language is as follows.</p>
      <p><bold>Definition 2.1</bold> (Language L). Given a set of atomic propositions P, formulas
ϕ and rules ρ of the plausibility-access language L are given, respectively, by
ϕ ::= p | A ϕ | R ρ | ¬ϕ | ϕ ∨ ψ | ⟨∼⟩ϕ | ⟨≤⟩ϕ
ρ ::= ({ϕ1, . . . , ϕnρ}, ψ)
where p ∈ P. Formulas of the form A ϕ are read as “the agent has
acknowledged that formula ϕ is true”, and formulas of the form R ρ as “the agent has
acknowledged that rule ρ is truth-preserving”. For the modalities, ⟨≤⟩ϕ is read
as “there is a more plausible world where ϕ holds”, and ⟨∼⟩ϕ as “there is an
epistemically indistinguishable world where ϕ holds”. Other boolean connectives
as well as the box modalities [∼] and [≤] are defined as usual. We denote
by L f the set of formulas of L, and by Lr its set of rules.</p>
      <p>Though rules are usually presented as schemas, our rules are defined
as particular instantiations (e.g., the rule ({p ∧ q}, p) is different from the
rule ({q ∧ r}, q)). Since they will be applied in a generalized modus ponens
form (if the agent has all the premises, she can derive the conclusion), using
concrete formulas avoids details of instantiation, therefore facilitating the
definition. When dealing with them, the following definitions will be useful.
Definition 2.2. Given a rule ρ, we will denote its set of premises by pm(ρ), its
conclusion by cn(ρ), and its translation (an implication whose antecedent is
the finite conjunction of ρ’s premises and whose consequent is ρ’s
conclusion) by tr(ρ).</p>
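<p>Since rules are concrete pairs rather than schemas, they are directly representable as data. A possible Python rendering of Definition 2.2 follows; the function names pm, cn and tr come from the text, while the string encoding of formulas is our simplification.</p>
<preformat>
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A rule is a concrete pair (premises, conclusion), not a schema."""
    premises: frozenset
    conclusion: str

def pm(rho):
    """The rule's set of premises."""
    return rho.premises

def cn(rho):
    """The rule's conclusion."""
    return rho.conclusion

def tr(rho):
    """Translation: an implication whose antecedent is the conjunction of
    the premises and whose consequent is the conclusion."""
    conj = " ∧ ".join(sorted(rho.premises))
    return f"({conj}) -> {rho.conclusion}"

# A concrete modus ponens instance over formulas-as-strings.
mp = Rule(frozenset({"p", "p -> q"}), "q")
print(cn(mp))   # q
print(tr(mp))   # (p ∧ p -> q) -> q
```
</preformat>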
      <p>For the semantic model, we will extend the described plausibility
models with two functions.
<bold>Definition 2.3</bold> (Plausibility-access model). With P the set of atomic
propositions, a plausibility-access (PA) model is a tuple M = ⟨W, ≤, V, A, R⟩ where
⟨W, ≤, V⟩ is a plausibility model over P and
• A : W → ℘(L f ) is the access set function, assigning to the agent a set of
formulas of L in each possible world,
• R : W → ℘(Lr) is the rule set function, assigning to the agent a set of
rules of L in each possible world.</p>
      <p>Functions A and R can be seen as valuations with a particular range,
assigning to the agent a set of formulas and a set of rules at each possible
world, respectively. Moreover, recall that two worlds that are ≤-related are
epistemically indistinguishable, so we define ∼ as the union of ≤ and its
converse (∼ := ≤ ∪ ≥): the agent cannot distinguish between two worlds if
she considers one of them more plausible than the other.</p>
      <p>A pointed plausibility-access model (M, w) is a plausibility-access model
with a distinguished world w ∈ W.</p>
      <p>
        Here it is important to emphasize our interpretation of the A-sets.
Different from [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], we do not interpret them as “the formulas the agent
is aware of at world w”, but rather as “the formulas the agent has acknowledged
as true at world w”, closer to the ideas in [15; 22; 18].
      </p>
      <p>Now the semantic evaluation. The modalities ⟨≤⟩ and ⟨∼⟩ are
interpreted via their corresponding relation in the usual way, and formulas of
the form A ϕ and R ρ are interpreted with our two new functions.</p>
      <p><bold>Definition 2.4</bold> (Semantic interpretation). Let (M, w) be a pointed PA model
with M = ⟨W, ≤, V, A, R⟩. Atomic propositions and boolean operators are
interpreted as usual. For the remaining cases,
(M, w) ⊩ A ϕ iff ϕ ∈ A(w)
(M, w) ⊩ R ρ iff ρ ∈ R(w)
(M, w) ⊩ ⟨≤⟩ϕ iff there is a u ∈ W such that w ≤ u and (M, u) ⊩ ϕ
(M, w) ⊩ ⟨∼⟩ϕ iff there is a u ∈ W such that w ∼ u and (M, u) ⊩ ϕ</p>
      <p>For characterizing valid formulas, an important observation is that a
locally well-preorder is a locally connected and conversely well-founded
preorder [<xref ref-type="bibr" rid="ref3">3</xref>]. Then, by standard results on canonicity and modal
correspondence (Chapter 4 of [<xref ref-type="bibr" rid="ref11">11</xref>]), the axiom system of Section 2.6 of
[<xref ref-type="bibr" rid="ref3">3</xref>] (Table 1) is also sound and (weakly) complete for our language L with
respect to ‘nonstandard’ plausibility-access models: those in which ≤ is reflexive, transitive
and locally connected (axioms T≤, 4≤ and LC) and ∼ is the symmetric
extension of ≤ (axioms T∼, 4∼, B∼ and Inc). But such models also have the finite
model property (with respect to formulas in our language), so completeness
with respect to plausibility-access models follows from the fact that every finite
strict preorder is conversely well-founded.</p>
      <p>Table 1 contains, among others, the following axioms and rules:
Prop: ⊢ ϕ for ϕ a propositional tautology
MP: if ⊢ ϕ → ψ and ⊢ ϕ, then ⊢ ψ
K≤: ⊢ [≤](ϕ → ψ) → ([≤]ϕ → [≤]ψ)
Dual≤: ⊢ ⟨≤⟩ϕ ↔ ¬[≤]¬ϕ
Nec≤: if ⊢ ϕ, then ⊢ [≤]ϕ
T≤: ⊢ [≤]ϕ → ϕ
4≤: ⊢ [≤]ϕ → [≤][≤]ϕ
together with their counterparts for [∼], the axiom LC expressing local
connectedness of ≤ inside comparability classes, and the axiom Inc relating ≤ and ∼.</p>
      <p>Note how the axiom system does not have axioms for formulas of the
form A ϕ and R ρ. This is because, as mentioned before, such formulas are
simply special atoms for the dedicated valuation functions A and R.
Moreover, we have not asked for the A- and R-sets to have any special closure
property, and there is no restriction on the way they interact with each other.2
Just like axiom systems for Epistemic Logic do not require special axioms
describing the behaviour of atomic propositions (unless, of course, they
have special properties, like q being true every time p is, characterized by
p → q), our system does not require special axioms for these special atoms.
More precisely, in the canonical model construction, we only need to define
access and rule sets in the proper way:</p>
      <p>A(w) := {ϕ ∈ L f | A ϕ ∈ w}</p>
      <p>
        R(w) := {ρ ∈ Lr | R ρ ∈ w}
Then, formulas of the form A ϕ and R ρ also satisfy the crucial Truth Lemma,
and completeness follows. Again, see Chapter 4 of [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] for details.
      </p>
      <sec id="sec-1-1">
        <title>2.3.1 Implicit and explicit beliefs</title>
        <p>
          It is time to define the notions of implicit and explicit beliefs. Our
definitions, shown in Table 2, combine ideas from [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] and [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Note how the
agent believes the formula ϕ (the rule ρ) implicitly iff ϕ (tr(ρ)) is true in the
most plausible worlds, but in order to believe it explicitly, the agent should
also acknowledge ϕ (ρ) as true (truth-preserving) in these ‘best’ worlds.
        </p>
        <p>Explicit beliefs are implicit beliefs, witness the following validities:
BExϕ → BImϕ and BExρ → BImρ</p>
        <p>2In [<xref ref-type="bibr" rid="ref17">17</xref>], the authors explore and characterize several closure properties of A-sets.</p>
      </sec>
      <sec id="sec-1-2">
        <title>Table 2: Implicit and explicit beliefs</title>
        <p>BImϕ := ⟨≤⟩[≤]ϕ (the agent implicitly believes formula ϕ)
BExϕ := ⟨≤⟩[≤](ϕ ∧ A ϕ) (the agent explicitly believes formula ϕ)
BImρ := ⟨≤⟩[≤]tr(ρ) (the agent implicitly believes rule ρ)
BExρ := ⟨≤⟩[≤](tr(ρ) ∧ R ρ) (the agent explicitly believes rule ρ)</p>
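<p>On a finite PA model these clauses can be evaluated mechanically. A Python sketch of the formula cases (our encoding: a model is a dictionary holding the plausibility relation, valuation and access sets; the acknowledgement is read inside the scope of the plausibility box, as in the text):</p>
<preformat>
```python
def leq_succ(leq, w):
    """The set of worlds at least as plausible as w."""
    return {u for (v, u) in leq if v == w}

def box_leq(M, w, phi):
    return all(phi(M, u) for u in leq_succ(M["leq"], w))

def dia_leq(M, w, phi):
    return any(phi(M, u) for u in leq_succ(M["leq"], w))

def b_im(M, w, phi):
    """Implicit belief: some better world from which only phi-worlds follow."""
    return dia_leq(M, w, lambda M, u: box_leq(M, u, phi))

def b_ex(M, w, name, phi):
    """Explicit belief: phi must also be acknowledged in those best worlds."""
    return dia_leq(M, w, lambda M, u: box_leq(
        M, u, lambda M, v: phi(M, v) and name in M["A"][v]))

M = {
    "leq": {("w1", "w1"), ("w1", "w2"), ("w2", "w2")},
    "V":   {"w1": set(), "w2": {"p"}},
    "A":   {"w1": set(), "w2": set()},   # p is nowhere acknowledged
}
p = lambda M, u: "p" in M["V"][u]

print(b_im(M, "w1", p))         # True: p holds in the most plausible world
print(b_ex(M, "w1", "p", p))    # False: p is not acknowledged there

M["A"]["w2"].add("p")           # the agent acknowledges p in the best world
print(b_ex(M, "w1", "p", p))    # True
```
</preformat>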
        <p>
          A possibly more interesting point is the following. An agent in [17; 10]
is non-omniscient due to lack of attention; she does not need to be aware
of every formula. On the other hand, our agent is aware of all formulas,
but still she is non-omniscient because she does not need to be aware that a
formula is true. This may seem a small difference, but the interpretation of
the A-sets determines the reasonable operations over them. An agent can
become aware of any formula at any time, so any formula can be added
to the A-sets without further requirement [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. On the other hand, it is a
stretch to assume that an agent can recognize as true any formula at any
moment; it is more reasonable to ask for some derivation device, which in
this work will be a rule application [15; 22; 18].
        </p>
        <p>We finish this section by mentioning some properties of implicit and
explicit beliefs about formulas. (Rules behave in a similar way.) Implicit
beliefs are closed under logical consequence: if the most plausible worlds
satisfy both ϕ → ψ and ϕ, then they also satisfy ψ. But explicit beliefs do
not need to have this property because the A-sets do not need to have any
closure property: having ϕ and ϕ → ψ does not guarantee to have ψ.</p>
        <p>Though ≤ is reflexive, neither implicit nor explicit beliefs have to be
true because the real world does not need to be among the most plausible
ones. Nevertheless, reflexivity makes implicit (and therefore explicit) beliefs
consistent. Every world has at least one ≤-successor, so ¬BIm⊥ is valid.</p>
        <p>Implicit beliefs are positively and negatively introspective. This is the
case because the notion of ‘most plausible worlds’ is global inside the same
comparability class. For positive introspection, if the set of maximal worlds
contains only ϕ-worlds (BImϕ), so does the set of worlds maximal among the maximal ones
(BImBImϕ). And for negative introspection, if there is a ¬ϕ-world u among the
maximal worlds (¬BImϕ), then u is also among those maximal among the maximal
ones (BIm¬BImϕ). But this does not extend to explicit beliefs, again because
the A-sets do not need to have any closure property. Having ϕ in the A-sets does not
guarantee having BExϕ in them (so BExϕ → BExBExϕ is not valid), and not having ϕ
does not guarantee having ¬BExϕ (so ¬BExϕ → BEx¬BExϕ is not valid).</p>
        <p><bold>3 Dynamics part one: upgrade and retraction</bold></p>
        <p>We have a framework for representing implicit and explicit beliefs. We now
look at their dynamics by introducing two actions that modify them.</p>
        <p>The χ-upgrade operation [4; 3] modifies the plausibility relation ≤ to put
the χ-worlds at the top, therefore revising the agent’s beliefs. Here we have
two possibilities, depending on whether it also adds χ to the A-sets (explicit
upgrade) or not (implicit upgrade). Here is the definition of the first case.</p>
        <p><bold>Definition 3.1</bold> (Explicit upgrade). Let M = ⟨W, ≤, V, A, R⟩ be a PA model and
χ a formula in L. The PA model Mχ⇑+ = ⟨W, ≤′, V, A′, R⟩ differs from M in
the plausibility relation and in the access set function:
≤′ := (≤ ; χ?) ∪ (¬χ? ; ≤) ∪ (¬χ? ; ∼ ; χ?)
A′(w) := A(w) ∪ {χ} for every w ∈ W
Note how the upgrade operation is functional: for every model M it returns
one and only one model Mχ⇑+.</p>
        <p>The new plausibility relation is given in a PDL style: we have w ≤′ u iff
(1) w ≤ u and u is a χ-world, or (2) w is a ¬χ-world and w ≤ u, or (3) w ∼ u,
w is a ¬χ-world and u is a χ-world. There are other possible definitions for
≤′ [4; 3], and the chosen one, so-called radical upgrade, is just an example of
what can be defined.</p>
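<p>The three PDL clauses translate directly into code. The following sketch (ours) computes the radically upgraded order for a finite relation, with χ given as a predicate on worlds:</p>
<preformat>
```python
def radical_upgrade(leq, chi):
    """New plausibility order of Definition 3.1, clause by clause:
    (1) keep old arrows that end in a chi-world,
    (2) keep old arrows that start in a non-chi-world,
    (3) add sim-arrows from non-chi-worlds to chi-worlds,
    where sim is the union of the old order and its converse."""
    sim = leq | {(u, w) for (w, u) in leq}
    new = set()
    for (w, u) in leq:
        if chi(u):                 # (leq ; chi?)
            new.add((w, u))
        if not chi(w):             # (non-chi? ; leq)
            new.add((w, u))
    for (w, u) in sim:             # (non-chi? ; sim ; chi?)
        if (not chi(w)) and chi(u):
            new.add((w, u))
    return new

# chi holds only at w1; before the upgrade w2 is the most plausible world.
leq = {("w1", "w1"), ("w2", "w2"), ("w1", "w2")}
new = radical_upgrade(leq, lambda w: w == "w1")

print(("w2", "w1") in new)   # True: the chi-world moved above the other one
print(("w1", "w2") in new)   # False: the arrow into the non-chi-world is gone
```
</preformat>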
        <p>The operation preserves models in the intended class.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Proposition 1. If M is a PA model, then so is Mχ⇑+ .</title>
      <p>We extend the language to express the effect of an explicit upgrade;
formulas of the form ⟨χ⇑+⟩ϕ are read as “it is possible to perform an explicit
χ-upgrade after which ϕ holds”. There is no precondition for this action (the
agent can perform an explicit upgrade whenever she wants), so the semantic
interpretation is as follows.</p>
      <sec id="sec-2-1">
        <title>Definition 3.2. Let (M, w) be a pointed PA model: (M, w) ⊩ ⟨χ⇑+⟩ϕ iff (Mχ⇑+, w) ⊩ ϕ</title>
        <p>Note how the operation puts on top those worlds that are χ-worlds in
the original model, but they do not need to be χ-ones after the upgrade. The
plausibility relation changes, therefore changing the truth-value of formulas
containing the modalities for ≤ and/or and, in particular, changing the
agent’s beliefs. This is not strange at all, and in fact it corresponds to the
well-known Moore-like sentences (“p is the case and you do not know it”) in
Public Announcement Logic that become false after being announced, and
therefore cannot be known.</p>
        <p>Nevertheless, the operation behaves as expected for propositional
formulas. The operation does not change valuations, so if χ is purely
propositional, the operation will put current χ-worlds on top, and they will still be
χ-worlds after the operation so the agent will believe χ.</p>
        <p>
          The validities in our new language can be axiomatized by using reduction
axioms, valid formulas that indicate how to translate a formula with the
new modality χ ⇑+ into a provably equivalent one without them. Then,
completeness follows from the completeness of the basic system. We refer
to [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] for an extensive explanation of this technique.
        </p>
        <p><bold>Theorem 1.</bold> The axiom system of Table 1 together with the axioms and rules of Table
3 (with ⊤ the always true formula) provides a sound and (weakly) complete axiom
system for formulas in the language L plus the explicit upgrade modality with
respect to plausibility-access models.</p>
        <p>Table 3 (reduction axioms and rule for explicit upgrade):
⟨χ⇑+⟩p ↔ p
⟨χ⇑+⟩¬ϕ ↔ ¬⟨χ⇑+⟩ϕ
⟨χ⇑+⟩(ϕ ∨ ψ) ↔ ⟨χ⇑+⟩ϕ ∨ ⟨χ⇑+⟩ψ
⟨χ⇑+⟩⟨≤⟩ϕ ↔ ⟨≤⟩(χ ∧ ⟨χ⇑+⟩ϕ) ∨ (¬χ ∧ ⟨≤⟩⟨χ⇑+⟩ϕ) ∨ (¬χ ∧ ⟨∼⟩(χ ∧ ⟨χ⇑+⟩ϕ))
⟨χ⇑+⟩⟨∼⟩ϕ ↔ ⟨∼⟩⟨χ⇑+⟩ϕ
⟨χ⇑+⟩A χ ↔ ⊤
⟨χ⇑+⟩A ϕ ↔ A ϕ for ϕ ≠ χ
⟨χ⇑+⟩R ρ ↔ R ρ
From ⊢ ϕ infer ⊢ ⟨χ⇑+⟩ϕ</p>
        <p>
          The reduction axioms simply indicate how each kind of formula is
affected by the explicit upgrade operation. For example, ⟨χ⇑+⟩p ↔ p states
that atomic propositions are not affected, and both ⟨χ⇑+⟩A χ ↔ ⊤ and
⟨χ⇑+⟩A ϕ ↔ A ϕ for ϕ ≠ χ together state that χ and only χ is added to the
A-sets. The interesting axiom is the one for the plausibility modality ⟨≤⟩. It
is obtained with techniques from [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], and simply translates the three-cases
PDL definition of the new plausibility relation: after an upgrade with χ
there is a ≤-reachable world where ϕ holds iff before the operation (1) there
is a ≤-reachable χ-world that will become ϕ after the upgrade, or (2) the
current is a ¬χ-world that can ≤-reach another that will turn into a ϕ-one
after the operation, or (3) the current is a ¬χ-world that can ∼-reach another
that is χ and will become ϕ after the upgrade. Similar reduction axioms
have been presented in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] in the context of preference upgrade.
        </p>
        <p><bold>3.2 Retraction</bold></p>
        <p>But there are also situations in which the agent simply retracts some explicit
belief, that is, she does not acknowledge it as true anymore. This is achieved
simply by removing the formula from the A-sets.</p>
        <p><bold>Definition 3.3</bold> (Retraction). Let M = ⟨W, ≤, V, A, R⟩ be a PA model and χ a
formula in L. The PA model M−χ = ⟨W, ≤, V, A′, R⟩ differs from M just in the
access set function, given for every w ∈ W as
A′(w) := A(w) \ {χ}</p>
        <p>Again, the retraction operation is functional. Moreover, it does not
modify ≤, so it preserves plausibility models.</p>
        <p>This operation is represented in the language by formulas of the form
⟨−χ⟩ϕ, read as “it is possible to retract χ and after it ϕ holds”. Just like an
upgrade, no precondition is needed.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Definition 3.4. Let (M, w) be a pointed PA model: (M, w) ⊩ ⟨−χ⟩ϕ iff (M−χ, w) ⊩ ϕ</title>
        <p><bold>Theorem 2.</bold> The axiom system of Table 1 together with the axioms and rules of Table
4 (with ⊥ the always false formula) provides a sound and (weakly) complete axiom
system for formulas in the language L plus the retraction modality with respect to
plausibility-access models.</p>
        <p>Table 4 (reduction axioms and rule for retraction):
⟨−χ⟩p ↔ p
⟨−χ⟩¬ϕ ↔ ¬⟨−χ⟩ϕ
⟨−χ⟩(ϕ ∨ ψ) ↔ ⟨−χ⟩ϕ ∨ ⟨−χ⟩ψ
⟨−χ⟩⟨≤⟩ϕ ↔ ⟨≤⟩⟨−χ⟩ϕ
⟨−χ⟩⟨∼⟩ϕ ↔ ⟨∼⟩⟨−χ⟩ϕ
⟨−χ⟩A χ ↔ ⊥
⟨−χ⟩A ϕ ↔ A ϕ for ϕ ≠ χ
⟨−χ⟩R ρ ↔ R ρ
From ⊢ ϕ infer ⊢ ⟨−χ⟩ϕ</p>
        <p>
          Here the key axioms are ⟨−χ⟩A χ ↔ ⊥ and ⟨−χ⟩A ϕ ↔ A ϕ for ϕ ≠ χ,
stating that χ and only χ is removed from the A-sets. Similar reduction
axioms have been presented in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] in the context of dynamics of awareness.
        </p>
        <p><bold>4 Dynamics part two: inference on beliefs</bold></p>
        <p>
          We now turn to the main part of our work. In this section we analyze
rule-based inference on beliefs. We start by recalling the case of rule-based
inference on knowledge [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>The definitions of implicit and explicit knowledge are simpler than those
for beliefs, since they depend directly on all the worlds the agent considers
possible. The agent knows ϕ implicitly when it holds in all the worlds she
considers possible, and she knows ϕ explicitly when she also recognizes it
as true in all such worlds. The definitions of implicit and explicit knowledge about a
rule ρ are given in a similar way.</p>
        <p>KImϕ := [∼]ϕ
KExϕ := [∼](ϕ ∧ A ϕ)
KImρ := [∼]tr(ρ)
KExρ := [∼](tr(ρ) ∧ R ρ)</p>
        <p>The action of inference on knowledge with rule σ is defined as an operation
that adds σ’s conclusion to the A-set of those worlds where the agent knows
explicitly σ and its premises (KExσ ∧ KExpm(σ)). More precisely, if M is a
plausibility-access model with access set function A, then the operation of
σ-inference on knowledge produces the model M →σK , differing from M just
in the access set function A , which is given by</p>
        <p>A′(w) := A(w) ∪ {cn(σ)} if (M, w) ⊩ KExσ ∧ KExpm(σ), and A′(w) := A(w) otherwise</p>
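<p>Operationally, σ-inference on knowledge is a pointwise update of the access sets. A Python sketch follows (our encoding; the two explicit-knowledge conditions are passed in as callables rather than re-evaluated from a full model, and the toy run simply stipulates that they hold):</p>
<preformat>
```python
def infer_on_knowledge(M, sigma, kex_rule, kex_premises):
    """Sigma-inference on knowledge: add cn(sigma) to the A-set of exactly
    those worlds where the agent explicitly knows the rule and all of its
    premises. In the text the conditions are KEx(sigma) and KEx(pm(sigma))."""
    new_A = {}
    for w in M["W"]:
        if kex_rule(M, w) and kex_premises(M, w):
            new_A[w] = M["A"][w] | {sigma["cn"]}
        else:
            new_A[w] = set(M["A"][w])
    return {**M, "A": new_A}

# Toy run: a single world where both conditions are stipulated to hold.
sigma = {"pm": {"p", "p -> q"}, "cn": "q"}
M = {"W": {"w"}, "A": {"w": {"p", "p -> q"}}}
holds = lambda M, w: True
M2 = infer_on_knowledge(M, sigma, holds, holds)
print(sorted(M2["A"]["w"]))   # ['p', 'p -> q', 'q']
```
</preformat>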
      </sec>
    </sec>
    <sec id="sec-3">
      <title>KExσ ∧ KExpm(σ)</title>
      <p>A new modality ⟨→σK⟩ is introduced to express the effects of this
operation, and its semantic definition is given by (M, w) ⊩ ⟨→σK⟩ϕ iff (M→σK, w) ⊩ ϕ.</p>
      <p>But take a closer look at the inference on knowledge operation. What
it actually does is to discard all worlds where KExσ ∧ KExpm(σ) holds, and
replace them with copies that are almost identical, the only difference being
their A-sets that, after the operation, will have cn(σ). And this is
reasonable because, under the assumption that knowledge is true information,
inference based on a known (therefore truth-preserving) rule with known
(therefore true) premises is simply deductive reasoning: the premises are
true and the rule preserves the truth, so the conclusion should be true. In
fact, inference based on a known rule with known premises is the act of
recognizing two things. First, since the applied rule is truth-preserving and
its premises are true, its conclusion must be true; and second, situations
where the premises are true but the conclusion is not are not possible.</p>
      <p>
        The case of beliefs is different, as suggested in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. An inference on
beliefs is based on a rule that is believed to be truth-preserving, but that is
not necessarily so. Even though it is reasonable to consider a situation in
which the premises and the conclusion hold, the agent should not discard
a situation where the premises hold but the conclusion does not.
      </p>
      <p>Our proposal is the following. An inference on beliefs should create
two copies of each world where the rule and the premises are believed:
an exact copy of the original one, and another extending it by adding the
rule’s conclusion to it. But not only that. The agent believes that the rule is
truth-preserving and the premises are true, so the extended world should
be more plausible than the ‘conclusionless’ one.</p>
      <p>
        But how can we create copies of a possible world? We can use the action
models and product update of the so-called BMS approach [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        4.1 Plausibility-access action models
The main idea behind action models [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is that actions can be represented
with a model similar to that used for representing the static situation. In
other words, just as the agent can be uncertain about which one is the real
world, she can also be uncertain about which action has taken place. Then,
the uncertainty of the agent after an action is a combination of her uncertainty
about the situation before the action and her uncertainty about the action itself.
      </p>
      <p>
        This idea has been extended in two different directions: in order to
deal with plausibility models [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and in order to deal with non-omniscient
multi-agent situations [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Our proposal combines and extends these two
ideas, now with the aim of dealing with single-agent inference on beliefs.
We start by defining the structures that will represent this kind of action.
Definition 4.1 (Plausibility-access action model). A plausibility-access action
model is a tuple A = ⟨S, ⪯, Pre, PosA, PosR⟩ where
      </p>
      <p>
        • ⟨S, ⪯, Pre⟩ is a plausibility action model [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] with S a finite non-empty
set of events, ⪯ a plausibility relation on S (with the same requirements
as those for a plausibility-access model) and Pre : S → Lf a
precondition function indicating the requirement for each event to be executed.
• PosA : (S × ℘(Lf)) → ℘(Lf) is the new access set function, which will
allow us to define the access set of the agent in the model that will
result from applying this action.
• PosR : (S × ℘(Lr)) → ℘(Lr) is the new rule set function, which will
allow us to define the rule set of the agent in the model that will result
from applying this action.
      </p>
      <p>Just as before, the plausibility relation defines an equivalence relation by
putting it together with its converse: ≈ := ⪯ ∪ ⪰. A pointed
plausibility-access action model (A, s) has a distinguished event s ∈ S.</p>
      <p>Examples of plausibility-access action models will be shown in Section
4.2. But first we will define the plausibility-access model that results from
an action model application, as well as the formula that will represent this
operation and its semantic interpretation.
Definition 4.2 (Product update). Given a plausibility-access model
M = ⟨W, ≤, V, A, R⟩ and a plausibility-access action model
A = ⟨S, ⪯, Pre, PosA, PosR⟩, their product update is the model
M ⊗ A = ⟨W′, ≤′, V′, A′, R′⟩, where
• W′ := {(w, s) ∈ (W × S) | (M, w) |= Pre(s)}
• (w1, s1) ≤′ (w2, s2) iff (s1 ≺ s2 and w1 ≈ w2) or (s1 ≈ s2 and w1 ≤ w2)
• V′(w, s) := V(w)
• A′(w, s) := PosA(s, A(w))
• R′(w, s) := PosR(s, R(w))</p>
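<p>The clauses above can be sketched in executable form (an illustration over assumed simplified encodings, not the paper's formal objects: orders are lists of pairs, preconditions are Python predicates).</p>

```python
# Sketch of the product update M ⊗ A over simplified encodings: a model is
# a dict with worlds 'W', a plausibility order 'ord' (list of pairs),
# valuation 'V', access sets 'A' and rule sets 'R'; an action model has
# events 'S', an order 'ord', preconditions 'Pre' (predicates on pointed
# models), and the functions 'PosA' and 'PosR'.

def product_update(M, Act):
    # W' : restricted Cartesian product -- keep (w, s) iff Pre(s) holds at w
    W = [(w, s) for w in M['W'] for s in Act['S'] if Act['Pre'][s](M, w)]

    def strict(R, a, b):           # a strictly below b in the order
        return (a, b) in R and (b, a) not in R

    def indist(R, a, b):           # indistinguishable: comparable either way
        return (a, b) in R or (b, a) in R

    # 'Action-priority' rule for the new plausibility order
    ord_new = [(p1, p2)
               for p1 in W for p2 in W
               if (strict(Act['ord'], p1[1], p2[1]) and indist(M['ord'], p1[0], p2[0]))
               or (indist(Act['ord'], p1[1], p2[1]) and (p1[0], p2[0]) in M['ord'])]

    return {'W': W, 'ord': ord_new,
            'V': {(w, s): M['V'][w] for (w, s) in W},
            'A': {(w, s): Act['PosA'](s, M['A'][w]) for (w, s) in W},
            'R': {(w, s): Act['PosR'](s, M['R'][w]) for (w, s) in W}}

# Tiny demo: one world satisfying the precondition, one event adding 'q'.
M = {'W': ['w'], 'ord': [('w', 'w')], 'V': {'w': {'p'}},
     'A': {'w': set()}, 'R': {'w': set()}}
Act = {'S': ['s'], 'ord': [('s', 's')],
       'Pre': {'s': lambda model, world: True},
       'PosA': lambda s, X: X | {'q'},
       'PosR': lambda s, Y: set(Y)}
print(product_update(M, Act)['A'])   # {('w', 's'): {'q'}}
```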
      <p>
        Note how the set of worlds of the new plausibility-access model is given
by the restricted cartesian product of W and S; a pair (w, s) will be a world in
the new model iff event s can be executed at world w. The new plausibility
order follows the so-called ‘Action-priority’ rule [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], making (w2, s2) more
plausible than (w1, s1) iff either s2 is strictly more plausible than s1 and
w1, w2 are indistinguishable, or else s1, s2 are indistinguishable and w2 is
more plausible than w1.
      </p>
      <p>
        Now, for the valuations of the new worlds. First, a new world inherits
the atomic valuation of its static component, that is, an atom p holds at (w, s)
iff p holds at w. The case for access sets gives us full generality: the access
set of world (w, s) is given by the function PosA with the event s and the
access set of w as parameters [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The case for rule sets is similar.
      </p>
      <p>It is not hard to verify that the product update operation preserves
plausibility-access models.</p>
      <p>Proposition 2. If M is a plausibility-access model and A a plausibility-access
action model, then M ⊗ A is a plausibility-access model.</p>
      <p>In order to express how product updates affect the agent's information,
we extend our language with modalities for each pointed
plausibility-access action model (A, s), allowing us to build formulas of the form ⟨A, s⟩ϕ,
whose semantic interpretation is given below.</p>
      <p>Definition 4.3. Let (M, w) be a pointed PA model and let (A, s) be a pointed
PA action model with Pre its precondition function. Then
(M, w) |= ⟨A, s⟩ϕ   iff   (M, w) |= Pre(s) and (M ⊗ A, (w, s)) |= ϕ</p>
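<p>This truth clause can be phrased as a one-line executable sketch; here sat and update are hypothetical stand-ins, supplied by the caller, for the satisfaction relation and the product update.</p>

```python
# The truth clause of Definition 4.3 as an executable sketch.  'sat' and
# 'update' are hypothetical stand-ins supplied by the caller: sat(M, pt, f)
# plays the role of (M, pt) |= f, and update(M, Act) plays the role of M ⊗ A.

def holds_after(M, w, Act, s, phi, sat, update):
    # (M, w) |= ⟨A, s⟩ϕ  iff  (M, w) |= Pre(s)  and  (M ⊗ A, (w, s)) |= ϕ
    return sat(M, w, Act['Pre'][s]) and sat(update(M, Act), (w, s), phi)
```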
      <p>4.2 Plausibility-access action models for basic inference
The action of inference on knowledge can be represented with
plausibility-access action models.</p>
      <p>Definition 4.4 (Inference on knowledge). Let σ be a rule. The action of
inference on knowledge is given by the pointed PA action model (A→σK, s)
whose definition is given by
• S := {s}
• ⪯ := {(s, s)}
• Pre(s) := KExσ ∧ KExpm(σ)
• PosA(s, X) := X ∪ {cn(σ)}
• PosR(s, Y) := Y
(The omitted diagram shows the single event s with new access set X ∪ {cn(σ)}.)</p>
      <p>But now we can represent more. Following our previous discussion,
here is the action model for basic inference on beliefs.</p>
      <p>Definition 4.5 (Basic inference on beliefs). Let σ be a rule. The action of
basic inference on beliefs is given by the pointed PA action model (A→σB, s1)
whose definition is
• S := {s1, s2}
• ⪯ := {(s1, s1), (s1, s2), (s2, s2)}
• Pre(s1) := PreBσ and Pre(s2) := PreBσ
• PosA(s1, X) := X and PosA(s2, X) := X ∪ {cn(σ)}
• PosR(s1, Y) := Y and PosR(s2, Y) := Y
The precondition states that the agent believes explicitly the rule and its
premises, that is, PreBσ := BExσ ∧ BExpm(σ). (In the omitted diagram, event s1
keeps the access set X while the more plausible event s2 extends it to
X ∪ {cn(σ)}.)</p>
      <sec id="sec-3-2">
        <title>4.3 Extended inference: an exploration</title>
        <p>Plausibility-access action models allow us to represent more than what
we have discussed. As observed in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], a plausibility relation generates a
Grove system of spheres, that is, several layers of possible events ordered
according to their plausibility. The action models for basic inference on
beliefs presented above are just those models with two layers, each of them
containing one event, and with the most plausible one being the extended
one. But we do not have to restrict ourselves to such kinds of inference.
Action models with more than two layers allow us to represent inference
based on rules with more than one conclusion. (The omitted three-layer
diagram has one event per layer: s1 preserves access sets (X), s2 extends
them with the first conclusion (X ∪ {cn1(σ)}), and s3, the most plausible,
extends them with both conclusions (X ∪ {cn1(σ), cn2(σ)}).)</p>
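<p>The world-copying effect of basic inference on beliefs can be sketched directly (an illustration under assumed simplified encodings; only the access sets of the copies and the new strict plausibility pairs are computed, and the precondition BExσ ∧ BExpm(σ) is a caller-supplied test).</p>

```python
# Sketch of the copying effect of basic inference on beliefs: every world
# satisfying the precondition (pre, a caller-supplied stand-in for
# BEx(sigma) and BEx(pm(sigma))) yields two copies, and the copy extended
# with the conclusion is the strictly more plausible one.

def basic_belief_inference(worlds, access, pre, conclusion):
    new_access, strictly_more_plausible = {}, []
    for w in worlds:
        if pre(w):
            new_access[(w, 's1')] = set(access[w])            # plain copy
            new_access[(w, 's2')] = access[w] | {conclusion}  # extended copy
            strictly_more_plausible.append(((w, 's1'), (w, 's2')))
    return new_access, strictly_more_plausible

# One qualifying world splits into two copies, the extended one on top:
A2, order = basic_belief_inference(['w'], {'w': {'p'}}, lambda w: True, 'q')
print(A2)      # access sets of the two copies of w
```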
        <p>(Omitted diagram: a four-event action model in which s1 keeps the access
set X, the equally plausible events s2 and s3 extend it with cn1(σ) and cn2(σ)
respectively, and the most plausible event s4 extends it with both,
X ∪ {cn1(σ), cn2(σ)}.)</p>
      </sec>
      <sec id="sec-3-3">
        <title>Layers with more than one event</title>
        <p>And we can do more by using layers with more than one event, like the
action model above, which allows the agent to have cn2(σ) without having
cn1(σ).</p>
        <p>So far our examples have one characteristic in common. The new access
set function is monotone, reflecting the optimism of the agent with respect
to the conclusion: events that extend A-sets are always more plausible.
Definition 4.6 (Plausibility-access action models for optimistic inference).
Plausibility-access action models in which, for every pair of events s1, s2,
s1 ⪯ s2 implies PosA(s1, X) ⊆ PosA(s2, X),
are called action models for optimistic inference.</p>
        <p>But then we can also consider the opposite case. Models with an
antimonotone new access set function reflect the pessimism of the agent with
respect to the conclusion: events that extend A-sets are always less plausible.
Definition 4.7 (Plausibility-access action models for pessimistic inference).
Plausibility-access action models in which, for every pair of events s1, s2,
s1 ⪯ s2 implies PosA(s1, X) ⊇ PosA(s2, X),
are called action models for pessimistic inference.</p>
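<p>The two conditions can be checked mechanically on finitely many sample A-sets (a sketch over an assumed encoding: testing PosA on a few sample sets gives evidence for the classification, not a proof over all sets of formulas).</p>

```python
# Checking the conditions of Definitions 4.6 and 4.7 on sample access sets
# (a finite-sample sketch: we test PosA on a few given X's, not on every
# set of formulas, so a 'True' here is only evidence, not a proof).

def classify(order, pos_a, samples):
    """order is a list of pairs (s1, s2) meaning s1 is at most as
    plausible as s2; returns (optimistic, pessimistic)."""
    opt = all(pos_a(s1, X).issubset(pos_a(s2, X))
              for (s1, s2) in order for X in samples)
    pess = all(pos_a(s1, X).issuperset(pos_a(s2, X))
               for (s1, s2) in order for X in samples)
    return opt, pess

# The basic inference on beliefs model: s1 below s2, s2 adds cn(sigma) = 'q'.
pos_a = lambda s, X: X | {'q'} if s == 's2' else set(X)
print(classify([('s1', 's1'), ('s1', 's2'), ('s2', 's2')],
               pos_a, [set(), {'p'}]))    # optimistic, not pessimistic
```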
        <p>Of course these two classes do not cover all possibilities.
Plausibility-access action models allow us to represent many different and complex
inferences whose detailed study has to be left for further work.</p>
        <p>
          4.4 Brief discussion on completeness
The reduction axioms of [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] are inherited by our system. In particular, the
following one states the way the plausibility relation changes:
⟨A, s⟩⟨≤⟩ϕ ↔ Pre(s) ∧ ( ⋁_{s ≺ s′} ⟨≈⟩⟨A, s′⟩ϕ ∨ ⋁_{s ≈ s′} ⟨≤⟩⟨A, s′⟩ϕ )
But when looking for reduction axioms for access and rule set formulas,
PosA and PosR pose a problem. The reason is that they allow the new
access and rule sets to be arbitrary sets. Compare this with other product
update definitions. The one of [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] can change the atomic valuation, but
the set of worlds in which a given atomic proposition will be true should
be given by a formula of the language; the one of [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] can change the
relation in a point-wise way, but the new relation is given in terms of the
previous ones by using only regular operations. Our current efforts focus on
particular definitions expressive enough to describe our desired inferences
and restricted enough to get the needed reduction axioms.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5 Conclusions and further work</title>
      <p>We have presented a framework for representing implicit and explicit
beliefs. We have also provided representations of three actions that modify
them, starting with those of explicit upgrade and retraction but, more
importantly, discussing intuitive ideas and proposing a rich framework for
representing the action of inference on beliefs.</p>
      <p>
        There are parts of this work that deserve further exploration, the most
appealing being the study of the different kinds of inference that we can
represent with plausibility-access action models. We have defined those
for inference on knowledge and basic inference on beliefs, and we have
briefly explored some others, but our structures can represent much more.
Another interesting extension is to look at the dynamics of rules, that is, to
look for reasonable actions that extend not only the rules the agent knows
[
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], but also the rules she believes. These two studies will not be complete
without the appropriate axiom system for our product update definition.
Finally we mention a third direction: the study of a multi-agent setting,
including not only the addition of more agents to the picture, but also the
analysis of implicit/explicit versions of multi-agent notions, like common
knowledge and common beliefs.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Ågotnes</surname>
          </string-name>
          and N. Alechina, editors.
          <source>Special issue on Logics for Resource Bounded Agents</source>
          ,
          <year>2009</year>
          .
          <source>Journal of Logic, Language and Information</source>
          ,
          <volume>18</volume>
          (
          <issue>1</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Moss</surname>
          </string-name>
          , and
          <string-name>
            <surname>S. Solecki.</surname>
          </string-name>
          <article-title>The logic of public announcements, common knowledge and private suspicions</article-title>
          .
          <source>Technical Report SEN-R9922</source>
          , CWI, Amsterdam,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          .
          <article-title>A qualitative theory of dynamic interactive belief revision</article-title>
          . In G. Bonanno, W. van der Hoek, and M. Wooldridge, editors,
          <source>Logic and the Foundations of Game and Decision Theory (LOFT7)</source>
          , volume
          <volume>3</volume>
          of Texts in Logic and Games, pages
          <fpage>13</fpage>
          -
          <lpage>60</lpage>
          . AUP,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>J. van Benthem.</surname>
          </string-name>
          <article-title>Dynamic logic for belief revision</article-title>
          .
          <source>Journal of Applied NonClassical Logics</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>129</fpage>
          -
          <lpage>155</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>J. van Benthem.</surname>
          </string-name>
          <article-title>Merging observation and access in dynamic logic</article-title>
          .
          <source>Journal of Logic Studies</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>J. van Benthem</surname>
          </string-name>
          .
          <article-title>Logic, mathematics, and general agency</article-title>
          . In P. Bour,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rebuschi</surname>
          </string-name>
          , and L. Rollet, editors,
          <source>Festschrift for Gerhard Heinzmann</source>
          . Laboratoire d'histoire des sciences et de la philosophie, Nancy,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>J. van Benthem</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. van Eijck</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kooi</surname>
          </string-name>
          .
          <article-title>Logics of communication and change</article-title>
          .
          <source>Information and Computation</source>
          ,
          <volume>204</volume>
          (
          <issue>11</issue>
          ):
          <fpage>1620</fpage>
          -
          <lpage>1662</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>J. van Benthem</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kooi</surname>
          </string-name>
          .
          <article-title>Reduction axioms for epistemic actions</article-title>
          . In R. Schmidt,
          <string-name>
            <given-names>I.</given-names>
            <surname>Pratt-Hartmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          , and H. Wansing, editors,
          <source>Advances in Modal Logic (Technical Report UMCS-04-09-01)</source>
          , pages
          <fpage>197</fpage>
          -
          <lpage>211</lpage>
          . University of Manchester,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J. van Benthem and F.</given-names>
            <surname>Liu</surname>
          </string-name>
          .
          <article-title>Dynamic logic of preference upgrade</article-title>
          .
          <source>Journal of Applied Non-Classical Logics</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>157</fpage>
          -
          <lpage>182</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>J. van Benthem</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Velázquez-Quesada</surname>
          </string-name>
          .
          <article-title>Inference, promotion, and the dynamics of awareness</article-title>
          .
          <source>PP-2009-43</source>
          , ILLC, Universiteit van Amsterdam,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Blackburn</surname>
          </string-name>
          , M. de Rijke, and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Venema</surname>
          </string-name>
          .
          <source>Modal Logic</source>
          . Cambridge University Press,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>O.</given-names>
            <surname>Board</surname>
          </string-name>
          .
          <article-title>Dynamic interactive epistemology</article-title>
          .
          <source>Games and Economic Behavior</source>
          ,
          <volume>49</volume>
          (
          <issue>1</issue>
          ):
          <fpage>49</fpage>
          -
          <lpage>80</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>H. van Ditmarsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Herzig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Marquis</surname>
          </string-name>
          .
          <article-title>Introspective forgetting</article-title>
          .
          <source>Synthese (KRA)</source>
          ,
          <volume>169</volume>
          (
          <issue>2</issue>
          ):
          <fpage>405</fpage>
          -
          <lpage>423</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>H. van Ditmarsch</surname>
            , W. van der Hoek, and
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Kooi</surname>
          </string-name>
          .
          <source>Dynamic Epistemic Logic</source>
          , volume
          <volume>337</volume>
          of Synthese Library Series. Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H. N.</given-names>
            <surname>Duc</surname>
          </string-name>
          .
          <article-title>Resource-Bounded Reasoning about Knowledge</article-title>
          .
          <source>PhD thesis</source>
          , Institut für Informatik, Universität Leipzig, Leipzig, Germany,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>J. van Eijck</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          .
          <article-title>Propositional dynamic logic as a logic of belief revision</article-title>
          . In W. Hodges and R. J. G. B. de Queiroz, editors,
          <source>WoLLIC</source>
          , volume
          <volume>5110</volume>
          <source>of LNCS</source>
          , pages
          <fpage>136</fpage>
          -
          <lpage>148</lpage>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Fagin</surname>
          </string-name>
          and
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          . Belief, awareness, and
          <article-title>limited reasoning</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>34</volume>
          (
          <issue>1</issue>
          ):
          <fpage>39</fpage>
          -
          <lpage>76</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Grossi</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Velázquez-Quesada</surname>
          </string-name>
          .
          <article-title>Twelve Angry Men: A study on the fine-grain of announcements</article-title>
          . In X. He,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Horty</surname>
          </string-name>
          , and E. Pacuit, editors,
          <source>LORI</source>
          , volume
          <volume>5834</volume>
          <source>of LNCS</source>
          , pages
          <fpage>147</fpage>
          -
          <lpage>160</lpage>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Grove</surname>
          </string-name>
          .
          <article-title>Two modellings for theory change</article-title>
          .
          <source>Journal of Philosophical Logic</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>157</fpage>
          -
          <lpage>170</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          , editor.
          <source>Proceedings of the 1st Conference on Theoretical Aspects of Reasoning about Knowledge</source>
          , Monterey, CA,
          <year>March 1986</year>
          , San Francisco, CA, USA,
          <year>1986</year>
          . Morgan Kaufmann Publishers Inc.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hintikka</surname>
          </string-name>
          .
          <article-title>Knowledge and Belief: An Introduction to the Logic of the Two Notions</article-title>
          . Cornell University Press, Ithaca,
          <string-name>
            <surname>N.Y.</surname>
          </string-name>
          ,
          <year>1962</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jago</surname>
          </string-name>
          .
          <article-title>Rule-based and resource-bounded: A new look at epistemic logic</article-title>
          . In T. Ågotnes and N. Alechina, editors,
          <source>Proceedings of the Workshop on Logics for Resource-Bounded Agents</source>
          , pages
          <fpage>63</fpage>
          -
          <lpage>77</lpage>
          , Malaga, Spain,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>K.</given-names>
            <surname>Konolige</surname>
          </string-name>
          .
          <article-title>Belief and incompleteness</article-title>
          .
          <source>Technical Report 319, SRI</source>
          ,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lakemeyer</surname>
          </string-name>
          .
          <article-title>Steps towards a first-order logic of explicit and implicit belief</article-title>
          .
          <source>In Halpern [20]</source>
          , pages
          <fpage>325</fpage>
          -
          <lpage>340</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Levesque</surname>
          </string-name>
          .
          <article-title>A logic of implicit and explicit belief</article-title>
          .
          <source>In Proc. of AAAI-84</source>
          , pages
          <fpage>198</fpage>
          -
          <lpage>202</lpage>
          , Austin, TX,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>K.</given-names>
            <surname>Segerberg</surname>
          </string-name>
          .
          <article-title>The basic dynamic doxastic logic of AGM</article-title>
          . In M.-
          <string-name>
            <given-names>A.</given-names>
            <surname>Williams</surname>
          </string-name>
          and H. Rott, editors,
          <source>Frontiers in Belief Revision, number 22 in Applied Logic Series</source>
          , pages
          <fpage>57</fpage>
          -
          <lpage>84</lpage>
          . Kluwer Academic Publishers,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>On epistemic logic and logical omniscience</article-title>
          .
          <source>In Halpern [20]</source>
          , pages
          <fpage>293</fpage>
          -
          <lpage>305</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Velázquez-Quesada</surname>
          </string-name>
          .
          <article-title>Inference and update</article-title>
          .
          <source>Synthese (KRA)</source>
          ,
          <volume>169</volume>
          (
          <issue>2</issue>
          ):
          <fpage>283</fpage>
          -
          <lpage>300</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>