<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Talking Your Way into Agreement: Belief Merge by Persuasive Communication</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexandru Baltag</string-name>
          <email>Alexandru.Baltag@comlab.ox.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sonja Smets</string-name>
          <email>S.J.L.Smets@rug.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computing Laboratory, Oxford University</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Artificial Intelligence, and Dept. of Philosophy, University of Groningen, &amp;, IEG, Oxford University</institution>
        </aff>
      </contrib-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
    </sec>
    <sec id="sec-2">
      <title>We investigate the issue of reaching doxastic</title>
      <p>agreement among the agents of a group by
“sharing” information via successive acts of sincere,
persuasive public communication within the group.</p>
      <p>
        As usually considered in Social Choice theory,
the problem of preference aggregation is to find
a natural and fair “merge” operation (subject to
various naturalness or fairness conditions), for
aggregating the agents’ preferences into a single group
preference. Depending on the stringency of the
required fairness conditions, one can obtain either
an Impossibility theorem (e.g Arrows theorem [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ])
or a classification of the possible types of reasonable
merge operations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>In this paper we propose a more “dynamic”
approach to this issue. Dynamically speaking,
“merging” preference relations means finding an action
or a sequence of actions (a protocol) that, when
applied to any arbitrary multi-agent preference model,
produces a new model in which all the agents’
preference relations are the same. When the new
relations are the result of a specific merge operation,
we say that we have “realized” this operation via
the given (sequence of) action(s). One would like to
know what types of merges are realizable by using
only specific types of preference-changing actions.</p>
      <p>In a doxastic/epistemic setting, the agents
preference relations are interpreted as “doxastic
preferences” or “doxastic plausibility” orders. These
encode the agents beliefs, but in fact they capture
all their doxastic-epistemic attitudes: their
“knowledge” (in the sense of absolutely certain,
unrevisable, irrevocable knowledge, i.e. the epistemic
concept mostly used in Logic, Computer Science
and Economics), their “strong beliefs” and “safe
beliefs” (also known as “defeasible knowledge”, i.e.
the epistemic concept used mostly by philosophers
and researchers in Belief Revision theory), as well
as their “conditional beliefs” (encoding their
“beliefrevision strategy”, i.e. their contingency plans for
belief change). In other words, an agent’s doxastic
preference structure capture all her “information”:
both her “hard” (absolutely certain, infallible)
information and her “soft” (potentially fallible)
information. In this context, a preference merge operation
corresponds to a way of combining the agents
information into a single “group information”.</p>
      <p>Similarly, preference-changing actions can be
interpreted in a doxastic setting as acts of
communication or persuasion. But not every
preferencechanging action can be understood in this way: there
has to be a specific relation between one agent’s (the
speaker’s) prior preferences before the action and
the whole’s group’s posterior preferences. Actions
in which this relation holds will be instances of
sincere and persuasive public communication.</p>
      <p>
        An announcement of some information P is said
to be “public” when it is common knowledge that
this particular message P is announced and that all
the agents are adopting the same attitude towards
the (plausibility of the) announcement: they all
adopt the same opinion about the reliability of this
information. Depending on the specific common
attitude, there are three main possibilities that have after a persuasive communication, all agents reach
been discussed in the literature: 1) the informa- a partial agreement, namely with respect to the
tion P is certainly true: it is common knowledge specific information that has been communicated.
that the message is necessarily truthful; (2) the In a cooperative setting, the goal of “sharing”
announcement is strongly believed by all agents to doxastic information is reaching “agreement” with
be true: it is common knowledge that everybody respect to all the (relevant) issues. Indeed, the
strongly believes that the speaker tells the truth; natural stopping point of iterated sharing is when
(3) the announcement is (simply) believed: it is nothing is left for further sharing or persuading; i.e.
common knowledge that everybody believes (in the complete agreement. Any further sincere persuasive
simple, “weak” sense) that the speaker tells the communication is redundant at that stage: it can no
truth. These three alternatives correspond to three longer change any agent’s doxastic structure. This
forms of “learning” a public announcement, forms happens exactly when all the agents’ relevant
doxasdiscussed in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] in a Dynamic Epistemic tic attitudes towards all issues are exactly the same.
Logic context: “update” 1 !P , “radical upgrade” (Which attitude is relevant depends again of the
⇑ P and “conservative upgrade” ↑ P . Under various type of communication: “knowledge” for updates,
names, they have been previously proposed in the “belief” for conservative upgrades, “strong belief”
literature on Belief Revision, e.g. by Rott [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and for radical upgrades). This means that the agents’
Boutilier [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] , and in the literature on dynamic (relevant) accessibility relations (i.e. respectively,
semantics for natural language by Veltman [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. The the knowledge relations, the belief structure or the
first operation (update) models a “truthful public strong belief structure) became identical: we say
announcements” of “hard” information; the other that these structures have “merged” into one.
two are models of “soft” public announcements. So we arrive in a natural way at the main issue
“Sincerity” of a communication act can be de- addressed in this paper: the “dynamic merge” of
fined as sharing of information that was already doxastic structures by sincere persuasive public
“accepted” by the speaker (before the act). The communication. In particular, we investigate the
meaning of “acceptance” depends on the form realizability of merge operations via (1) updates,
of communication: as we’ll see, for updates with (2) radical upgrades and (3) conservative upgrades.
“hard” information, acceptance means “knowledge” We show that, in the first case, only the epistemic
(in the irrevocable sense), while for upgrades with structures (given by the “hard” knowledge relations)
“soft” information, acceptance just means some type can be merged; and moreover, the only form of
of “belief” or “strong belief” (depending on whether realizable merge is in this case the so-called
“parthe upgrade is “conservative” or “radical”). But, as allel merge” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], given by the intersection of all
a general concept, prior acceptance requires that preference relations. Epistemically, this corresponds
the speaker’s own doxastic structure should not be to the familiar concept of “distributed knowledge”.
changed by her sincere communication. The realizability result is constructive, it comes
“Persuasiveness” requires that the communicated with a specific announcement-based protocol for
information becomes commonly “accepted” by all realizing this merge. This is essentially the
algothe agents (in the same sense of “acceptance” that rithm in van Benthem’s paper “One is a Lonely
the speaker has adopted): this means that, after the Number” [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]: the agents announce “all they know”,
act, everybody commonly exhibits the same doxastic in no particular order. In the second case (radical
attitude as the speaker (knowledge, belief or strong upgrade), the “defeasible knowledge” structures are
belief) towards the communicated information. So, merged, but in fact this implies that all the other
doxastic attitudes become the same: the agents’
1Note that in Belief Revision, the term “belief update” is used whole “doxastic preference” structures are merged.
for a totally different operation (the Katzuno-Mendelzon update[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]), The natural analogue of the above-mentioned
prowhile what we call “update” is known as “conditioning”. We choose tocol for radical upgrades realizes now a different
tboutfowlelowwanhterteo twhaerntetrhmeinreoalodgery augsaeidnstinanDyypnoasmsiibcleEcpoisntfeumsiiocnLsowgiitch, type of merge (“priority merge”, itself a natural
the KM update. epistemic modification of the other basic type of
merge considered in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the “lexicographic merge”). (relative) priority merge. In section V we present
Finally, in the case of conservative upgrades, only the protocols for dynamic realizations of parallel
the (simple) belief structures (given by the doxastic merge and priority merge, giving counterexamples
relations) can be merged. Moreover, priority merge that point out the differences between them. We
is realizable via the natural analogue of the same end with a short note and an open question in our
protocol above for conservative upgrades. Conclusions section.
      </p>
      <p>
        This surface similarity between the three cases is
pleasing, but in fact it hides deeper dissimilarities. II. PLAUSIBILITY STRUCTURES AND DOXASTIC
As we mentioned, the realizable merge is unique in ATTITUDES
the first case. This is not true in the other cases: a In this section, we review some basic notions and
whole class of merge operations can be realized by results from [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We use finite “plausibility” frames,
radical or conservative upgrades. Moreover, in the in the sense of our papers [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
first case, the order in which the announcements is These kind of semantic structures are the natural
irrelevant, while in the other cases the order matters: multi-agent generalizations of structures that are
if the upgrades are performed in a different order standard, in one form or another, in Belief
Revithan the one prescribed in the protocol, then dif- sion: Halpern’s “preferential models” [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], Spohn’s
ferent merge operations may be realized! Finally, in ordinal-ranked models [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], Board’s “belief-revision
the first case, the merge may be realized by allowing structures” [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], Grove’s “sphere” models [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
Unonly one announcement by each agent (of “all she like the settings in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], we restrict here to the
knows”). But this is not true in the other cases: the finite case, for reasons of simplicity.
agents may have to make many soft announcements,
including announcing facts that may already be
entailed by their previous announcements!
      </p>
      <p>For a given set A of labels called “agents”, a
(finite, multi-agent) plausibility frame is just a finite,
multi-agent Kripke frame (S, Ra)a∈A in which the</p>
      <p>
        Some of the questions we address in this paper accessibility relations Ra ⊆ S × S are usually
came to our attention after hearing a presentation denoted by ≤a, are called “plausibility orders” or
by J. van Benthem on “The Social Choice Behind “doxastic preference” relations, and are assumed
Belief Revision” at the workshop “Dynamic Logic to be locally connected preorders. Here, a “locally
Montreal” in 2007 [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Van Benthem’s view was connected preorder” ≤⊆ S × S is a reflexive and
that belief dynamics in itself can be captured as transitive relation such that: if s ≤ t and s ≤ w
a form of preference merge (between the prior then either t ≤ w or w ≤ t; and if t ≤ s and
doxastic preferences and the on-going doxastic pref- w ≤ s then either t ≤ w or w ≤ t. See [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] for
erences about the new information). One can see a justification and motivation for these conditions.2
that our approach here is actually the dual of the per- We use the notation s ∼a t for the comparability
spective adopted in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]: implementing preference relation with respect to ≤a (i.e. s ∼a t iff either
merge dynamically by successive belief revisions, s ≤a t or t ≤a s), s &lt;a t for the corresponding strict
instead of understanding belief revision in terms of order relation (i.e. s &lt;a t iff s ≤a t but t 6≤a s), and
preference merge. s ∼=a t for the corresponding indifference relation
(i.e. s ∼=a t iff both s ≤a t and t ≤a s). When
      </p>
      <p>
        In the next section we introduce the necessary using the Ra notation for the preference relations
background on different notions of knowledge, be- ≤a, we also use the notations R&lt;, Ra∼ and Ra=∼ to
lief and other doxastic attitudes. The main focus denote the corresponding strict orader, comparability
will be on the semantics, which is given via pref- and indifference relations &lt;a, ∼a and ∼=a.
erence models. In section III, we introduce the In a plausibility frame, the comparability relations
main concepts of belief dynamics, following the ∼a are equivalence relations, hence they induce
parwork in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] on joint upgrades, titions. We denote by s(a) := {t ∈ S : s ∼a t} the
as models for “sincere, persuasive public
announcements”. In section IV we present three natural merge 2In the infinite case, one has to add a well-foundedness condition,
operations: parallel merge, lexicographic merge and obtaining “locally well-preordered” relations.
∼a-partition cell of s, comprising all a’s epistemic so that the agent only compares the plausibility of
alternatives for s. Finally, we use →a to denote the states that are epistemically indistinguishable: so we
“best alternative” or “most preferred” relation →a, are not concerned here with counterfactual beliefs
given by: s →a t iff t ∈ s(a) and t ≥a t0 for all (going against the agent’s knowledge), but only with
t0 ∈ s(a). conditional beliefs (if given new evidence that must
Plausibility Models A (finite, multi-agent, pointed) be compatible with prior knowledge). So BaQP is
plausibility model is a structure S = (S, ≤a read “agent a believes P conditional on Q ” and
, k·k, s0)a∈A, consisting of a plausibility frame means that, if a would receive some further (certain)
(S, ≤a)a∈A together with a valuation map k·k : Φ → information Q (to be added to what she already
P (S), mapping every element p of some given set Φ knows) then she would believe that P was the case.
of “atomic sentences” into a set of states kpk ⊆ S, So conditional beliefs BaQ give descriptions of the
and together with a designated state s0 ∈ S, called agent’s plan (or commitments) about what would
the “actual state”. she believe (about the current state) if she would
(Common) Knowledge and (Conditional) Belief learn some new information Q. To quote J. van
Given a plausibility model S, sets P, Q ⊆ S of Benthem in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], conditional beliefs are “static
prestates, an agent a ∈ A and some group G ⊆ A, encodings” of the agent’s potential belief changes
swBeafoPdrea:fi=llnet{:s∈be∈sPtS}a,P: Kb=easPtMas:a(=ax)≤{a⊆Ps P∈:}=,SB{:saQsP∈(aP):=⊆:{tPs≤}∈a, i(snia.yet.hseththfeaacmteBoosaQft Ppnleawhuosilindbfsloer)imffQaP-stitoathneo.slTd(htsheaiantbaaorlevlectohdneefisi“nsbitteeisnottn”
(SEwb:hGebPreestEa:=(ksG0(PaT)a∩:∈=GQBP)aP⊆an,PdC}Ek,GkEnPk+G1 P::==:=ETTknG∈a∈N(EGEKkkGnaGnPPP), cbwoeisnthtdisattia’ostneasklnt)hobwaetlleaiedrfegeBe)p.aiPsIntehmpoailcrdtasilcliuyfflapPro,shasoibsldilmes pfionleraa(lnl.
othne), EbP := EbAP , and CkP :=G CkAP . lKatriiopnkRe M⊆oSda×liStieasnFdosretanPy
⊆binSa,rtyheacccoersrseisbpilointydirnegKripke modality is given by:
Interpretation. The elements of S represent the
“possible worlds”, or possible states of a system:
possible descriptions of the real world. The correct [R]P := {s ∈ S : ∀t (sRt ⇒ t ∈ P )}
description of the real world is given by the “actual We think of sets P ⊆ S as propositions and write
state” s0. The atomic sentences p ∈ Φ represent s |= P instead of s ∈ P .
“ontic” (non-doxastic) facts , that might hold or It is easy to see that belief is the Kripke modality
not in a given state. The valuation tells us which Ba = [→a] for the “best alternative” relation →a
facts hold at which worlds. For each agent a, the defined above. Similarly, knowledge is the Kripke
equivalence relation ∼a represents the agent a’s modality for the epistemic relation Ka = [ ∼a].
epistemic indistinguishability relation, inducing a’s Safe belief as “defeasible knowledge” The Kripke
information partition; s(a) is the state s’s infor- modality for the plausibility relation 2a := [≤a]
mation cell with respect to a’s partition: if s were was called “safe belief ” in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and “the
preferthe real state, then agent a would consider all the ence modality” in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. It was also considered by
states t ∈ s(a) as “epistemically possible”. KaP Stalnaker in [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], as a formalization of Lehrer’s
is the proposition “agent a knows P ”: observe that notion of “defeasible knowledge”. According to
this is indeed the same as Aumann’s partition-based this so-called defeasibility theory of knowledge, a
definition of knowledge. The plausibility relation belief counts as “knowledge” if it is stable under
≤a is agent a’s “doxastic preference” relation: belief revision with any true information. Indeed,
her plausibility order between her “epistemically the safe belief modality has the property that it is
possible” states. So we read s ≤a t as “agent a conditionally believed under any true condition:
considers t at least as plausible as s (though the two
are epistemically indistinguishable)”. This is meant s |= 2aQ iff: s |= BaP Q for all P such that s |= P.
to capture the agent’s (conditional) beliefs about the For this reason, we’ll refer to 2 using either of
state of the system. Note that s ≤a t implies s ∼a t, the terms “safe belief” and “defeasible knowledge”.
In contrast, the knowledge concept captured by the that you know” is the same as “believing”, by the
K modality can be called “irrevocable knowledge”, identity Ba2aP = BaP .
since it is a belief that is stable under revision with
any information (including false ones):
s |= KaQ iff: s |= BaP Q for all P.
      </p>
      <p>“Strong Belief” Another important doxastic
attitude, called strong belief, is given by:
SbaP = {s ∈ P : s(a) ∩ P 6= ∅ and w &gt;a
t for all t ∈ s(a) ∩ P and all w ∈ s(a) \ P }.</p>
      <p>
        So P is strong belief at a state s iff P is
epistemically possible and moreover all epistemically
possible P -states at s are more plausible than all
epistemically possible non- P states. This notion was
called “strong belief” by Battigalli and Siniscalchi
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], while Stalnaker [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] calls it “robust belief”. It is
easy to see that we have the following equivalence:
S |= SbaP iff S |= BaP and S |= BaQP for every
Q such that S |= ¬Ka(Q → ¬P ). In other words:
something is strong belief iff it is believed and if this
belief can only be defeated by evidence (truthful or
not) that is known to contradict it. An example is
the “presumption of innocence” in a trial: requiring
the members of the jury to hold the accused as
“innocent until proven guilty” means asking them
to start the trial with a “strong belief” in innocence.
      </p>
    </sec>
    <sec id="sec-3">
      <title>There are other differences: irrevocable knowledge</title>
      <p>K satisfies the axioms of the modal system S5,
so it is fully introspective; in contrast, defeasible
knowledge 2 is only positively introspective, but
not necessarily positively introspective. (In fact, the
complete logic of 2 is the modal logic S4.3.) An
agent’s belief can be safe without him necessarily
“knowing” this (in the “strong” sense of the
irrevocable knowledge K): “safety” (similarly to “truth”)
is an external property of the agent’s beliefs, that
can be ascertained only by comparing his
beliefrevision system with reality. Indeed, the only way
for an agent to know a belief to be safe is to actually
know it to be truthful. This is captured by the valid
identity: Ka2aP = KaP . In other words: knowing
that something is safe to believe is the same as just
knowing it to be true. In fact, all beliefs held by Example 1: Consider the situation of Professor
an agent “appear safe” to him: in order to believe Albert Winestein. Albert feels that he is a genius. He
them, he has to believe that they are safe. This is knows that there are only two possible explanations
expressed by the valid identity: Ba2aP = BaP , for this feeling: either he is a genius or he’s drunk.
saying that: believing that something is safe to He doesn’t feel drunk, so he believes that he is
believe is the same as just believing it3. Contrast a sober genius. However, if he realized that he’s
this with the situation concerning “knowledge”: in drunk, he’d think that his genius feeling was just
our logic (as in most standard doxastic-epistemic the effect of the drink; i.e. after learning he is drunk
logics), we have the identity: BaKaP = KaP . So he’d come to believe that he was just a drunk
nonbelieving that something is known is the same as genius. In reality though, Albert is both drunk and
knowing it! a genius.</p>
      <p>
        The difference between K and 2 and their dif- We can represent Albert’s information and
(conferent properties, expressed by the above identities, ditional) by the following plausibility relation:
are enough to solve the so-called “Paradox of the
Perfect Believer” in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ], [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]: ¬D, ¬G D, G a / D, ¬G a / ¬D, G
when we say that somebody “only believes that
she knows something (without really knowing it)”,
we’re using the word “knowledge” in a different
sense than the fully introspective K modality. A
natural reading reading is to interpret it as the
defeasible knowledge 2, in which case “believing
      </p>
      <p>3The proof is an easy semantic exercise, which can be rendered
in English as: saying that “the best worlds have the property that all
the worlds at least as good as them are P -worlds” is equivalent to
simply saying that “the best worlds are P -worlds”.</p>
      <p>Here, as in all other drawings, we use labeled
arrows for plausibility relations ≤a (not for the
“best alternative” relations →a !), going from less
plausible to more plausible worlds, but we skip
loops and composed arrows (since ≤a are
reflexive and transitive). The real world is (D, G).
Albert considers (D, ¬G) as being more plausible
than (D, G), and (¬D, G) as more plausible than
(D, ¬G). Albert can distinguish all these worlds
from (¬D, ¬G), since (in the real world) he knows she has a strong opinion about his drunkness: she
(“Ka”) that either D or G holds. can see him, so judging by his behavior she either</p>
      <p>Consider another agent, Professor Mary Curry. strongly believes he’s drunk or she strongly believes
She is pretty sure that Albert is drunk: she can see he’s not drunk. However, her actual opinion about
this with her very own eyes. But Marry is completely this is unknown to Albert, who thus considers both
indifferent with respect to Albert’s genius: so she opinions as equally plausible.
considers the possibility of genius and the one of The resulting model is:
non-genius as equally plausible. However, having a
tphhaitlotshoephteisctailmmoinnyd,oMfhaeryr iesyaewsamreayofinthperpinocsispibleilibtye ¬D,O ¬G o m / ¬DO , G q ma 1 D, ¬OG qm ma 1 D,O G
wrong: it is in principle possible that Albert is not a a a a
drunk, despite the presence of the usual symptoms. a
Marry’s beliefs are captured by her plausibility ¬D, ¬G o m / ¬D, G mq ma D, ¬G qm m 1 D, G
order:
¬D, ¬G o m
/ ¬D, G
m / D, G o</p>
      <p>m / D, ¬G
We can see from the drawing that Mary strongly
believes D, and in fact her belief is safe: so she
“knows” that Albert is drunk, in the sense of
defeasible knowledge (although she doesn’t know it, in
the sense of K). But she is completely indifferent
with respect to G: hence she considers the
possibility of G and ¬G as equally plausible.</p>
      <p>To put together the agents’ plausibility orders, we
need to be told what do they know about each other.</p>
      <p>Suppose all their opinions as described above (i.e.
all their conditional beliefs) are common knowledge:
essentially, this means their doxastic preferences are
common knowledge. We thus obtain the following
multi-agent plausibility model:
¬D, ¬G o m
/ ¬D, G q a</p>
      <p>a
m 1 D, ¬G mq m</p>
      <p>1 D, G</p>
    </sec>
    <sec id="sec-4">
      <title>At the real world (D, G), one can check that</title>
      <p>BaG is true. Further, Albert does not know G,
hence (D, G) |= ¬KaG ∧ ¬2aG while (D, G) |=
Ka(D ∨ G). Moreover, he doesn’t “know” G in the
defeasible sense either: his belief in G is not safe,
since BD</p>
      <p>a ¬G holds in the real world: so if Albert
would learn (correctly) that he was drunk, he’d lose
his (true) belief in being a genius.</p>
    </sec>
    <sec id="sec-5">
      <title>Example 2 Let us now relax our assumptions about</title>
      <p>the agents’ mutual knowledge: suppose that only
Albert’s opinions are common knowledge; in addition,
suppose that it is common knowledge that Mary
has no opinion on Albert’s genius (so she considers
genius and non-genius as equi-plausible), but that</p>
    </sec>
    <sec id="sec-6">
      <title>The real world is represented by the upper (D, G)</title>
      <p>state. One can check that, in the real world, Mary
still strongly believes Albert he’s drunk; but he does
not know this: Mary’s plausibility relation between
D and ¬D is unknown to Albert. However, he
knows that either she strongly believes D or she
strongly believes ¬D.</p>
      <p>We can go on and modify the example further, by
allowing that Albert’s plausibility is not commonly
known either etc. But, for simplicity of drawing,
we stop here: when less common knowledge is
assumed, more worlds are possible, and hence the
drawings get more and more complex.</p>
      <p>G-Bisimulation For a group G ⊆ A of agents,
we say the pointed models S = (S, ≥a, k k, s0)a∈A
and S0 = (S0, ≥0a, k k0, s00)a∈A are G-bisimilar , and
write S 'G S0, if the pointed Kripke models (S, ≥a
, k k, s0)a∈G and (S0, ≥0a, k k0, s00)a∈G (having as
accessibility relations only the G-labeled relations) are
bisimilar in the usual sense from Modal Logic [?].
When G = A, we simply write S ' S0, and say
S and S0 are bisimilar. Bisimilar models differ only
formally: they encode precisely the same
doxasticepistemic information, and they satisfy the same
modal sentences.</p>
    </sec>
    <sec id="sec-7">
      <title>III. BELIEF DYNAMICS: SINCERE, PERSUASIVE</title>
      <p>PUBLIC COMMUNICATION</p>
      <p>
        We move on now to belief dynamics: what
happens when some proposition P is publicly
announced? According to Dynamic Epistemic Logic,
this induces, not only a revision of beliefs, but a
change of model: a “revision” of the whole
relational structure, changing the agents’ plausibility
orders. However, the specific change depends on is S0 = {s ∈ S : s |= P }; (c) s ≤0a t iff
the agents’ attitudes to the plausibility of the an- s ≤a t and s, t ∈ S0.
nouncement: how certain is the new information? (2) Learning from a Strongly Trusted Source:
Three main possibilities have been discussed in the (Joint) “Radical” Upgrade. The “radical upgrade”
literature: (1) the announcement P is certainly true: (or “lexicographic upgrade”) ⇑ P , as an operation
it is common knowledge that the speaker tells the on pointed plausibility models, can be described as
truth; (2) the announcement is strongly believed to “promoting” all the P -worlds within each
informabe true by everybody: it is common knowledge that tion cell so that they become “better” (more
plaueverybody strongly believes that the speaker tells the sible) than all ¬P -worlds in the same information
truth; (3) the announcement is (simply) believed: it cell, while keeping everything else the same: the
is common knowledge that everybody believes (in valuation, the actual world and the relative ordering
the simple, “weak” sense) that the speaker tells between worlds within either of the two zones (P
the truth. These three alternatives correspond to and ¬P ) stay the same. Formally, a radical upgrade
three forms of “joint learning”, forms discussed in ⇑ P is (a) a total upgrade (taking as input any model
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] in a Dynamic Epistemic Logic context: S), such that (b) S0 = S, and (c): s ≤0a t iff either
“update” 4 !P , “radical upgrade” ⇑ P and “con- s 6∈ PS and t ∈ s(a) ∩ PS, or s ≤a t.
servative upgrade” ↑ P . Under various names, the
single-agent versions of these doxastic transformers (3) “Barely believing” what you hear: (Joint)
have been previously proposed by e.g. Rott [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], “Conservative” Upgrade. The so-called
“conserBoutilier [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and Veltman [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. vative upgrade” ↑ P (called “minimal conditional
      </p>
      <p>
        We will use “joint upgrades” as a general term revision” by Boutilier [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]) performs in a sense
for all these three model transformers, and denote the minimal possible revision of a model that is
them in general by †P , where † ∈ {!, ⇑, ↑}. For- forced by believing the new information P . As an
mally, each of our joint upgrades is a (possibly operation on pointed models, it can be described
partial) function taking as inputs pointed models as “promoting” only the “best” (most plausible)
S = (S, ≤a, k k, s0) and returning new (“upgraded”) P -worlds, so that they become the most plausible
pointed models †P (S) = (S0, ≤0a, k k0, s00), with in their information cell, while keeping everything
S0 ⊆ S. Since upgrades are purely doxastic, they else the same. Formally, ↑ P is (a) a total upgrade,
won’t affect the real world or the “ontic facts” such that (b) S0 = S, and (c): s ≤0a t iff either
of each world: i.e. they all satisfy s00 = s0 and t ∈ besta( s(a) ∩ PS ) or s ≤a t.
kpk0 = kpk ∩ S0 , for atomic p. So, in order to Redundancy, Informativity and Sincerity A joint
completely describe a given upgrade, we only have upgrade †P is redundant on a model S with respect
to specify (a) its possible inputs S, (b) the new set to a group of agents G ⊆ A if the upgraded model
of states S0; (c) the new relations ≤0a. is G-bisimilar to the original one: †P (S) 'G S.
(1) Learning Certain information: Joint “Up- This means that, as far as the group G is concerned,
date”. The update !P is an operation on pointed †P doesn’t change anything: all the group G’s
models which is executable (on a pointed model S) doxastic attitudes stay the same after the upgrade.
iff P is true (at S) and which deletes all the non- P - An upgrade †P is informative (on S) to group G if
worlds from the pointed model, leaving everything it is not redundant with respect to G. An upgrade
else the same. Formally, an update !P is an upgrade †P is redundant with respect to an agent a if it is
such that: (a) it takes as inputs only pointed models redundant with respect to the singleton {a}.
S, such that S |= P ; (b) the new set of states Redundancy is especially important if we want
to capture the “sincerity” of an announcement
4Note that in Belief Revision, the term “belief update” is used made by a speaker. Intuitively, an announcement
for a totally different operation (the Katzuno-Mendelzon update[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]), is “sincere” when it agrees with the speaker’s prior
while what we call “update” is known as “conditioning”. We choose epistemic state: accepting the announcement doesn’t
tboutfowlelowwanhterteo twhaerntetrhmeinreoalodgery augsaeidnstinanDyypnoasmsiibcleEcpoisntfeumsiiocnLsowgiitch, change the speaker’s own state.
the KM update. Definition: A (public) announcement †ϕ made
by an agent a is said to be sincere if it leaves
unchanged agent a’s own plausibility structure; i.e.
it’s non-informative to agent a.
lently: iff all G-agents’ plausibility relations
coincide: ≤a=≤b for all a, b ∈ G.
3) A pointed model S is invariant under
↑communication within G iff all (simple)
beliefs are common beliefs within G, i.e. for all
propositions P and all agents a, b ∈ A, BaP
holds in S iff BbP holds in S; equivalently:
iff all G-agents’ “best alternative” relations
coincide: →a=→b for all a, b ∈ G.
      </p>
      <p>Proposition 1
1) In a pointed model S, !P is redundant with
respect to a group G iff P is common knowledge
in S among the group G; i.e.: S 'G!P (S) iff
S |= CkGP . Special case: an announcement
!P made by an agent a is sincere iff a knows
P , i.e. if KaP holds in the original model Example 3 Suppose that in the situation in
Ex(before the announcement). ample 1 above, a trusted, infallible source publicly
2) ⇑ P is redundant with respect to a group G iff announces that Albert is drunk: this is “hard”,
init is common knowledge in the group G that controvertible information, corresponding to a joint
P is strongly believed (by all G-agents); i.e. update !D. The updated model is
S 'G⇑ P (S) iff S |= CkG(ESbG). Special a
case: an announcement ⇑ P made by an agent D, ¬G mq m 1 D, G
a is sincere iff a strongly believes P (before
the announcement). After the update, Albert starts to wrongly believe
3) ↑ P is redundant with respect to a group that ¬G is the case! This is an example of true but
G iff it is common knowledge in the group un-safe belief : it can be lost after acquiring (new)
G that P is believed (by all G-agents); i.e. true information.</p>
      <p>S 'G↑ P (S) iff S |= CkG(EbGP ). Special Example 4 Consider again the situation in example
case: an announcement ↑ P made by an 3, but instead of Albert receiving the information
agent s is sincere iff a believes P (before the from an infallible source, he receives the
informaannouncement). tion from Mary. Mary announces publicly (to
AlInvariance under communication: For a given bert) that D is the case and we assume that Mary’s
upgrade type † ∈ {!, ⇑, ↑}, we say that a pointed announcement is both sincere and persuasive: she
model S is invariant under †-communication within tells what she thinks and she convinces Albert.
group G iff, for all propositions P , any sincere Since Mary is a fallible agent (and not an infallible
announcement of the form †P made by any agent source), this announcement is soft: in principle, she
in G is redundant in S. could be wrong, or she could lie, or she could
Proposition 2 simply guess and be right only by chance. So we
1) A pointed model S is invariant under !- cannot interpret Mary’s announcement as a “hard”
communication within G iff all (irrevocable) update !D, since such an announcement wouldn’t be
knowledge is common knowledge within G, sincere: the update !D would automatically change
i.e. for all propositions P and all agents Mary’s order (making her irrevocably know D,
ian, bS;∈eqAuiv,aKleanPtly:hioflfdasllinG-aSgeinffts’KebpPistehmolidcs iwt haesnash“esodfitd”na’tnknnoouwncietmbeenfot r⇑e!)D. ;Biu.et.waeftcearnhemaoridnegl
relations coincide: ∼a=∼b for all a, b ∈ G. it, all agents upgrade with D: they start to prefer any
2) A pointed model S is invariant under ⇑- D-world to any ¬D-world . The upgraded model is
communication within G iff all “defeasible a a
knowledge” is common defeasible knowledge ¬D, ¬G o m / ¬D, G m 1- D, G m m -1 D, ¬G
within G, i.e. for all propositions P and all
agents a, b ∈ A, 2aP holds in S iff 2bP Note that Mary’s order is left unchanged, so the
holds in S; equivalently: iff all strong be- announcement was indeed sincere.
liefs (conditional beliefs) are common strong Example 5 What if instead Mary announces that
beliefs (common conditional beliefs); equiva- she “knows” that Albert is drunk? If we take this in
Counterexample 6 Note that simply announcing
that she believes D, or even that she strongly
believes D, won’t do: this will not be persuasive,
since it will not change Albert’s beliefs about the
facts of the matter (D or ¬D), although it may
change his beliefs about her beliefs. Being informed
of another’s beliefs is not enough to convince you
of their truth. Indeed, Mary’s beliefs are already
common knowledge in the initial model of Example
1: so an upgrade ⇑ (BmD) would be superfluous!</p>
    </sec>
    <sec id="sec-8">
      <title>Persuasiveness So what is needed for persuasive</title>
      <p>communication is that the speaker (Mary)
“converts” the others to her own beliefs. For this, she
should not simply announce that she believes them.
Instated, she can either announce that something is
the case (when in fact she just strongly believes
that it is the case), or else announce that she
defeasibly “knows” it (when she only believes that
she “knows” it, and in fact this implies that she
strongly believes that she “knows”).</p>
    </sec>
    <sec id="sec-9">
      <title>IV. MERGE OPERATIONS</title>
      <p>
        the sense of irrevocable knowledge K, then such only two basic merge operators: “parallel merge”
an announcement would not be sincere: indeed, and “lexicographic merge”.
in the original situation of Example 1, KmD was Parallel Merge The merge operation we consider
false. However, she did “know” it in the sense of first can be thought of as the most “democratic”
defeasible knowledge 2mD: she correctly believed form of aggregation: everybody has a veto, so
D, and this belief was safe. This “knowledge” was that group preferences are unanimous preferences.
fallible, and she was aware of this: she didn’t believe Following [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], we call it parallel merge. It simply
that she knows irrevocably (¬BmKmD), but she takes the merged relation to be the intersection
believed that she “knows” defeasibly (Bm2mD). Ja Ra∈G := Ta∈G Ra of all the preference relations
Hence, she is sincere if she announces that she of the agents in a given group G ⊆ A.5
“knows” in this sense. Assuming that Albert is also Parallel merge is particularly well suited for
agaware of the fallibility of her knowledge, but that gregating the agents’ “hard information”
(irrevocahe still highly trusts her to be right, we can interpret ble knowledge) K, i.e. for merging the epistemic
this as a sincere and persuasive announcement of the relations {∼a}a∈G. Since if we consider absolutely
form ⇑ (2D). Its effect is the same as in Example certain and fully introspective knowledge, there is
4: the upgraded model is the same. no danger of introducing an inconsistency. The
agents can pool their information in a completely
symmetric manner, accepting the other’s bits
without reservations. In fact, parallel merge of the
agents’ irrevocable knowledge gives us the standard
concept of “distributed knowledge” DK:
      </p>
      <p>
        A merge operation, or “aggregation procedure”,
is an operator taking any sequence {Ri}1≤i≤n of
preference relations into a “group preference”
relation Ji Ri = R1 J R2 J · · · Rn. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] the authors Ra/b := Ra&lt; ∪ (Ra=∼ ∩ Rb) = Ra&lt; ∪ (Ra ∩ Rb) =
give a general classification of types of preference Ra ∩ (Ra&lt; ∪ Rb).
merge, in a very general context, subject to some
minimal “fairness” and rationality conditions. They 5From a purely formal perspective, parallel merge resembles the
show that all the merge operations satisfying these sSo.-cHal.leHdan“snsoonn-p,riHo.riRtizoettd, Hbe.lvieafnrDeviitsmioanrs”chk.nBowutnnforotemthtahte“wmoerrkgeo”fis
conditions can be represented as compositions of not “revision”!
DKGP = [ \
      </p>
      <p>Ra]P.</p>
      <p>a∈G
Lexicographic Merge When the group is
hierarchically structured according to some total order (on
agents), called a “priority order”, then the agents
with higher priority are thought of as having a
higher “epistemic expertise” than the agents with
lower priority. For a group G = {a, b} of two
agents, in which a has higher priority, we can think
of a as the “expert” (or the professor) and of b
as the “layman” (or the student). In this context,
the natural doxastic merge operation is the
socalled lexicographic merge. For two agents a, b, the
“lexicographic merge” Ra/b gives priority to agent
a’s strong (i.e. strict) preferences over b’s: first, the
strict preference order of a is adopted by the group;
and when a is indifferent between two options, then
b’s preference is adopted; finally, a-incomparability
gives group incomparability. Formally:
believing ¬G; while if we give priority to Albert,
they both end up believing ¬D. In fact, no type of
hierarchic belief merge is a warranty of veracity.</p>
    </sec>
    <sec id="sec-10">
      <title>V. “REALIZING” MERGE DYNAMICALLY</title>
      <p>The lexicographic merge is particularly suited for
aggregating “soft information” (strong beliefs, safe
beliefs, conditional beliefs) in the absence of any
hard information: since soft information is not fully
reliable (because of lack of negative introspection
for 2, and because of potential falsity for belief,
conditional belief and strong belief), some
“screening” must be applied to some agents’ information
(and so some hierarchy must be enforced), in order
to ensure consistency of the merge.</p>
      <p>Intuitively, the purpose of sharing hard
knowledge, defeasible knowledge or beliefs is to achieve a
state in which there is nothing else to share, i.e. one
in which any further sharing is redundant: all hard
knowledge, or defeasible knowledge, or beliefs, are
(Relative) Priority Merge Note that, in lexico- already shared in common. For sharing via a specific
graphic merge, the first agent’s priority is “abso- type of public communication † ∈ {!, ⇑, ↑}, this
lute”. But in the presence of “hard” information, happens precisely when the model-changing process
the lexicographic merge of soft information must induced by †-type sharing reaches a fixed point of
†be modified, by first pooling together all the hard communication: a model that is invariant under that
information and then using it to restrict the lexico- particular type of announcements.
graphic merge of soft information. This leads us to For every specific type of public
communicaa “more democratic” combination of Parallel Merge tion † ∈ {!, ⇑, ↑}, agent a’s “relevant structure”
and Lexicographic Merge, called “(relative) priority ian a model S is given by: a’s epistemic relation
merge” Ra⊗b: ∼⊆ S × S in the case of updates !; a’s plausibility
relation ≤a in the case of radical upgrade ⇑; and
Ra⊗b := (Ra&lt; ∩ Rb∼) ∪ (Ra=∼ ∩ Rb) = a’s doxastic “best alternative” relations →a in the
case of conservative upgrade.</p>
      <p>Ra ∩ Rb∼ ∩ (Ra&lt; ∪ Rb). A (finite) †-upgrade sequence is a finite sequence
In a Relative Priority Merge, both agents have a †P~ = (†P 1, . . . , †P n) of upgrades †P i of the
“veto” with respect to group incomparability. Here given type † ∈ {!, ⇑, ↑}. Any †-upgrade sequence
the group can only compare options that both agents induces a (partial) function, mapping every pointed
can compare; and whenever the group can compare model S into a finite sequence †P~ (S) = (Si)i of
two options, everything goes on as in the lexico- pointed models, defined inductively by: S0 := S;
graphic merge. Agent a’s order gets priority, while and Si+1 := †P i(Si), if this is defined (and
unb’s order is adopted only when a is indifferent. defined otherwise). A †-upgrade sequence †P~ is a</p>
      <p>Since our plausibility structures they encode both †-communication sequence within group G if all its
the “hard” and the “soft” information possessed by upgrades are sincere for at least one G-agent at the
the agent, it seems that Priority Merge is best suited moment of speaking: i.e. for every i ≤ n there exists
for aggregating the agents’ plausibility relations. ai ∈ G such that †P i is sincere for ai on Si.
Example 7: If in Example 1, we give priority to A †-communication sequence †P~ within a group
Mary, the relative priority merge Rm⊗a of Mary’s G is exhaustive on a model S if the last model
and Albert’s original plausibility orders amounts to: Sn of the induced sequence †P~ (S) is invariant
under (sincere) †-communication; equivalently, iff
¬D, ¬G ¬D, G / D, G / D, ¬G it is maximal: it cannot be extended to any longer
†-communication sequence. By Proposition 2, the
If instead we give priority to Albert, we simply last model Sn generated by an exhaustive
†obtain Albert’s order as our “merge”: communication sequence is one in which all the
G-agents’ “relevant structures” Ran coincide.</p>
      <p>Ra⊗m = Ra. An exhaustive †-communication sequence within
It is important to note that in both cases of Example G realizes a given preference merge operation N
7, some of the resulting joint beliefs are wrong: on a given pointed model S if, for any agent
when giving priority to Mary, both agents end up b ∈ G, the relevant structures Rbn in the last
generated model is the N-merge of the initial to make only one announcement: instead of
succesrelevant structures {Ra0}a∈G: i.e. Rbn = Na∈G Ra0, sively announcing everything he knows, he can just
for all b ∈ G. A merge operation N is realizable announce the conjunction !(V{P : S |= KaP }) of
by †-communication (within a group G) if there all the things he knows.
exists some exhaustive †-communication sequence Proposition 4 For every given priority order
(within G) that realizes N. The merge opera- (a1, . . . , an) on agents, the corresponding
priortion is said to be constructively realizable by †- ity merge (of plausibility relations) is
construccommunication if there exists a protocol such that tively realizable by radical upgrades (i.e. by
⇑every †-communication sequence that complies with communication), but is not the only such realizable
the protocol is exhaustive and realizes N. operation. The protocol is a natural modification</p>
      <p>For each of the above types of public communica- of the previous one: following the priority order,
tion (!, ⇑, ↑), we can ask which merge operations are the agents have to publicly announce “all that
realizable, or constructively realizable. The answer they strongly believe”. More precisely, for each set
depends on the constraints (transitivity, connected- P ⊆ S such that P is strongly believed by the given
ness etc.) satisfied by the agents’ relevant structures agent a, a joint radical upgrade ⇑ P is performed.
(epistemic, doxastic or plausibility relations). Formally, the protocol consists again of n steps,
each step being a sequence of announcements by
the same agent: first, the first agent according to the
priority order, say a, announces all that he strongly
believes. This is the sequence of radical upgrades:
ρa := Y{⇑ P : P ⊆ S such that S |= SbaP }.</p>
      <p>Proposition 3 Parallel merge is the only merge
operation that is realizable by updates (i.e. by
!communication). Moreover, parallel merge is
constructively realizable by updates. The protocol is as
follows: in no particular order, the agents have to
publicly announce “all that they know” (in the sense
of irrevocable knowledge K). More precisely, for
each set of states P ⊆ S such that P is known to a
given agent a, a public announcement !P is made.</p>
      <p>
        This essentially is the protocol in van Benthem’s
paper “One is a Lonely Number” [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Formally,
the protocol consists of n steps, each step being a
sequence of announcements by the same agent: first,
one of the agents, say a, announces all he knows.
      </p>
      <p>This is the sequence of announcements:</p>
    </sec>
    <sec id="sec-11">
      <title>Then, the next agent in the hierarchy, say b, per</title>
      <p>forms a similar step (announcing all she strongly
believes after the first step), etc.</p>
      <p>Important Observations: (1) Now, the order of the
announcements matters: the agents have to respect
the priority order. Moreover, no interruptions are
allowed: agents with lower priority can speak only
after the agents with higher priority finished
announcing all their strong beliefs. Any interruptions
may lead to the realization of complete different
σa := Y{!P : P ⊆ S such that s |= KaP } merge operations (see the Counterexample below)!
(2) This protocol can also be simplified by
restrict(where Q is sequential composition of actions). ing it only to “defeasible knowledge”
announceThen, another agent b performs a similar step (an- ments, i.e. announcements of the form !(2aP ). But
nouncing all she knows after the first step), etc. recall that, unlike irrevocable knowledge, defeasible
is not negatively introspective: so the agents don’t
Important Observations: (1) The order in which know for sure what things they “know” and what
the agents make the announcements doesn’t actually not, and hence the best they can do is to announce
matter. The may even “interrupt” each other: any all the things they believe they “know”. But, since
exhaustive !-communication sequence produces the believing to (indefeasibly) “know” is the same as
same result. (2) The protocol can be simplified by believing, they have to announce that they “know”
restricting it only to knowledge announcements, i.e. P , for each proposition P which they believe. So
of the form !(KaP ) (for each P such KaP holds): the simplified protocol replaces e.g. the first step by
instead of announcing all they know, the agents the following sequence of radical upgrades
announce that they know all that they know. (3) The
protocol can be simplified by allowing each agent ρ0a := Y{⇑ (2aP ) : P ⊆ S such that S |= BaP }.
(3) Unlike the case of upgrades and parallel merge, the first announcement; nevertheless, b should have
in general the above protocol actually requires mul- first announced this second strong belief before a
tiple announcements by the same agents, includ- would have been allowed to speak!) And, indeed,
ing announcing facts that may already be entailed the resulting model, though a fixed point of
⇑by their previous announcements! A sequence of communication (since all the plausibility relations
radical upgrades is in general not equivalent to a come to coincide), realizes a different merge
operradical upgrade, so there is no way to compress the ation than either of the two priority merges:
sequences ρa or ρ0a into a single upgrade! a,b
Example 8 Recall the initial order of Marry and #
Albert in Example 1. Consider the protocol: 8?&gt;9s=:&lt;; a,b 3 @GFAwEBDC a,b 3 8?&gt;9u=:&lt;;
The Power of Agendas This order-dependence
⇑ 2mD; ⇑ Ka(D ∨ G); ⇑ 2a¬G illustrates a phenomenon well-known in Social
The first is a sincere announcement by Mary, the rest Choice Theory: the important role of the person
are sincere announcements by Albert. The second who “sets the agenda”: the “Judge” who assigns
announcement, though not in “defeasible knowl- priorities to witnesses’ stands; the chairman or
edge” form (as required by the simplified protocol moderator who determines the order of the speakers
in observation 2 above), is equivalent to one in this in a meeting, as well as the the issues to be discussed
form, because of the identity: KaP = 2aKaP . This and the relative priority of each issue.
communication sequence yields the model presented Proposition 5 For every given priority order
in Example 7, as the result of the priority merge (a1, . . . , an) on agents, the corresponding
priorRm⊗a of the two plausibility orders. ity merge of doxastic “best alternatives” relations
Counterexample 9 To show the non-uniqueness of {→a}a is constructively realizable by conservative
priority merge among ⇑-realizable merge operations upgrades (i.e. by ↑-communication). The protocol
and the order-dependency of the above protocol, is the natural modification of the previous one:
note first that the priority merge of the ordering following the priority order, the agents have to
publicly announce “all that they (simply) believe”.
a More precisely, for each set of states P ⊆ S such</p>
      <p>$ that P is believed by the given agent a, a joint
8?&gt;9s=:&lt;; a 3 8?&gt;9u=:&lt;; a 3 @GFAwEBDC conservative upgrade ↑ P is performed.
with the ordering Similar observations as the ones following
Proposition 4 apply to the case of doxastic upgrades:
b priority merge is not the only realizable merge</p>
      <p># operation; the order of announcements does
mat@GAFwBEDC b 4 8?&gt;9s=:&lt;; b 3 8?&gt;9u=:&lt;; ter; in general, the protocol may require multiple
announcements by the same agents.
is equal to either of the two orders (depending on
which agent has priority). But consider now the
following public dialogue</p>
    </sec>
    <sec id="sec-12">
      <title>In this paper, we focused on dynamically re</title>
      <p>⇑ 2bu · ⇑ 2a(u ∨ w) alizing two specific merge operations by public
communication. But, as we saw, depending on the
This first is a sincere announcement by b, the second “agenda”, soft announcements can realize a whole
is sincere announcement by a. This is an exhaustive plethora of merge operations. Nevertheless, not
⇑-communication sequence, but note that the strict everything goes: the requirements imposed on the
priority order required by the above protocol is not plausibility relations generally pose restrictions to
respected here: the first speaker b is “interrupted” by which kinds of merge are realizable. This raises an
the second speaker a before she finished announcing important open question: characterize the class of
all his strong beliefs. (Indeed, s ∨ u is also a strong merge operations realizable by radical (or
conserbelief of agent b, though one that is entailed by vative) upgrades.</p>
    </sec>
    <sec id="sec-13">
      <title>VI. CONCLUSION</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Andreka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ryan and P-Y. Schobbens</surname>
          </string-name>
          <article-title>“Operators and Laws for Combining Preference Relations”</article-title>
          ,
          <source>Journal of Logic and Computation</source>
          ,
          <volume>12</volume>
          (
          <issue>1</issue>
          ),
          <fpage>13</fpage>
          -
          <lpage>53</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>K.J.</given-names>
            <surname>Arrow</surname>
          </string-name>
          , “
          <article-title>A Difficulty in the Concept of Social Welfare”</article-title>
          ,
          <source>Journal of Political Economy</source>
          ,
          <volume>58</volume>
          (
          <issue>4</issue>
          ),
          <fpage>328</fpage>
          -
          <lpage>346</lpage>
          ,
          <year>1950</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          , “
          <article-title>Conditional doxastic models: a qualitative approach to dynamic belief revision”</article-title>
          ,
          <source>Electronic Notes in Theoretical Computer Science</source>
          ,
          <volume>165</volume>
          ,
          <fpage>5</fpage>
          -
          <lpage>21</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          , “
          <article-title>The Logic of Conditional Doxastic Actions: A Theory of dynamic multi-agent belief revision”</article-title>
          , in S. Artemov and R. Parikh (eds.),
          <source>Proceedings of the Workshop on Rationality and Knowledge</source>
          ,
          <volume>13</volume>
          -
          <fpage>30</fpage>
          ,
          <string-name>
            <surname>ESSLLI</surname>
          </string-name>
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          , “
          <article-title>Dynamic Belief Revision over MultiAgent Plausibility Models”</article-title>
          , in G. Bonanno, W. van der Hoek, M. Woolridge (eds.),
          <source>Proceedings of the 7th Conference on Logic and the Foundations of Game and Decision (LOFT</source>
          <year>2006</year>
          ),
          <fpage>11</fpage>
          -
          <lpage>24</lpage>
          , University of Liverpool,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          ,
          <article-title>Probabilistic Dynamic Belief Revision</article-title>
          , in J. van Benthem and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ju</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Veltman</surname>
          </string-name>
          (eds.),
          <source>Proceedings of LORI'07</source>
          , College Publications London,
          <fpage>21</fpage>
          -
          <lpage>39</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          , “
          <article-title>A Qualitative Theory of Dynamic Interactive Belief Revision”</article-title>
          , in G. Bonanno, W. van der Hoek, M. Wooldridge (eds.),
          <article-title>Logic and the Foundations of Game and Decision Theory, Texts in Logic and</article-title>
          Games,
          <volume>3</volume>
          ,
          <fpage>9</fpage>
          -
          <lpage>58</lpage>
          , Amsterdam University Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baltag</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Smets</surname>
          </string-name>
          , “
          <article-title>The Logic of Conditional Doxastic Actions”</article-title>
          , in R. van Rooij and
          <string-name>
            <surname>K.</surname>
          </string-name>
          Apt (eds.),
          <source>New Perspectives on Games and Interaction, Texts in Logic and Games</source>
          ,
          <volume>4</volume>
          ,
          <fpage>9</fpage>
          -
          <lpage>31</lpage>
          , Amsterdam University Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Battigalli</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Siniscalchi</surname>
          </string-name>
          , “
          <article-title>Strong Belief and Forward Induction Reasoning”</article-title>
          ,
          <source>Journal of Econonomic Theory</source>
          ,
          <volume>105</volume>
          ,
          <fpage>356</fpage>
          -
          <lpage>391</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Boutilier</surname>
          </string-name>
          , “
          <article-title>Iterated Revision and Minimal Change of Conditional Beliefs”</article-title>
          , JPL,
          <volume>25</volume>
          (
          <issue>3</issue>
          ),
          <fpage>262</fpage>
          -
          <lpage>305</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>J.F.A.K. van Benthem</surname>
          </string-name>
          , “
          <article-title>One is a lonely number”</article-title>
          . In P. Koepke
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chatzidakis</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Pohlers</surname>
          </string-name>
          , (eds.)
          <source>Logic Colloquium</source>
          <year>2002</year>
          ,
          <fpage>96</fpage>
          -
          <lpage>129</lpage>
          , ASL and
          <string-name>
            <given-names>A.K.</given-names>
            <surname>Peters</surname>
          </string-name>
          ,
          <string-name>
            <surname>Wellesley</surname>
            <given-names>MA</given-names>
          </string-name>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>J.F.A.K. van Benthem</surname>
          </string-name>
          , “
          <article-title>Dynamic logic of belief revision”</article-title>
          ,
          <source>JANCL</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ),
          <fpage>129</fpage>
          -
          <lpage>155</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>J.F.A.K. van Benthem</surname>
          </string-name>
          ,
          <source>“Priority Product Update as Social Choice” (Expanded version)</source>
          ,
          <source>Unpublished Manuscript, November 2007</source>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>J.F.A.K. van Benthem</surname>
          </string-name>
          ,
          <source>Logical Dynamics of Information and Interaction</source>
          , Manuscript, To appear,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>J.F.A.K. van Benthem</surname>
            and
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Liu</surname>
          </string-name>
          , “
          <article-title>Dynamic logic of preference upgrade”</article-title>
          ,
          <source>Journal of Applied Non-Classical Logics</source>
          , University of Amsterdam,
          <volume>17</volume>
          (
          <issue>2</issue>
          ),
          <fpage>157</fpage>
          -
          <lpage>182</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>O.</given-names>
            <surname>Board</surname>
          </string-name>
          , “
          <article-title>Dynamic interactive epistemology”</article-title>
          ,
          <source>Games and Economic Behaviour</source>
          ,
          <volume>49</volume>
          ,
          <fpage>49</fpage>
          -
          <lpage>80</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>N.</given-names>
            <surname>Friedmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          , “
          <article-title>Conditional logics of belief revision”</article-title>
          ,
          <source>Proc. of 12th National Conference in Artificial Intelligence</source>
          , AAAI Press, Menlo Park, CA,
          <fpage>915</fpage>
          -
          <lpage>921</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P.</given-names>
            <surname>Gochet</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Gribomont</surname>
          </string-name>
          , “Epistemic Logic”,
          <string-name>
            <given-names>D.M.</given-names>
            <surname>Gabbay</surname>
          </string-name>
          and J. Woods (eds.),
          <source>Handbook of the History of Logic, Elsevier</source>
          ,
          <volume>7</volume>
          ,
          <fpage>99</fpage>
          -
          <lpage>195</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Grove</surname>
          </string-name>
          , “
          <article-title>Two modellings for theory change”</article-title>
          ,
          <source>Journal of Philosophical Logic</source>
          ,
          <year>v17</year>
          ,
          <fpage>157</fpage>
          -
          <lpage>170</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <source>Reasoning about Uncertainty</source>
          , MIT Press, Cambridge MA,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>H.</given-names>
            <surname>Katsuno</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Mendelzon</surname>
          </string-name>
          , “
          <article-title>On the difference between updating a knowledge base and revising it”</article-title>
          ,Cambridge Tracts in Theoretical Computer Science,
          <volume>183</volume>
          -
          <fpage>203</fpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.-J.</given-names>
            <surname>Ch</surname>
          </string-name>
          . Meyer and W. van der Hoek,
          <source>Epistemic Logic for AI and Computer Science</source>
          , Cambridge Tracts in Theoretical Computer Science,
          <volume>41</volume>
          , Cambridge University Press, Cambridge,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>H.</given-names>
            <surname>Rott</surname>
          </string-name>
          , “
          <article-title>Conditionals and theory change: revisions, expansions, and additions</article-title>
          ” in Synthese,
          <volume>81</volume>
          (
          <issue>1</issue>
          ),
          <fpage>91</fpage>
          -
          <lpage>113</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>W.</given-names>
            <surname>Spohn</surname>
          </string-name>
          , “
          <article-title>Ordinal conditional functions: a dynamic theory of epistemic states”</article-title>
          , in W.L. Harper and
          <string-name>
            <given-names>B.</given-names>
            <surname>Skyrms</surname>
          </string-name>
          (eds.),
          <article-title>Causation in Decision, Belief Change</article-title>
          , and Statistics, vol. II,
          <fpage>105</fpage>
          -
          <lpage>134</lpage>
          ,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>R.</given-names>
            <surname>Stalnaker</surname>
          </string-name>
          , “
          <article-title>On Logics of Knowledge and Belief”</article-title>
          ,
          <source>Philosophical Studies</source>
          , vol.
          <volume>128</volume>
          ,
          <fpage>169</fpage>
          -
          <lpage>199</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>R.</given-names>
            <surname>Stalnaker</surname>
          </string-name>
          , “
          <article-title>Knowledge, Belief and Counterfactual Reasoning in Games”</article-title>
          ,
          <source>Economics and Philosophy</source>
          , vol.
          <volume>12</volume>
          ,
          <fpage>133</fpage>
          -
          <lpage>163</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>W. van der Hoek</surname>
          </string-name>
          , “
          <article-title>Systems for knowledge and beliefs”</article-title>
          ,
          <source>Journal of Logic and Computation</source>
          ,
          <volume>3</volume>
          , nr.
          <volume>2</volume>
          ,
          <fpage>173</fpage>
          -
          <lpage>195</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>F.</given-names>
            <surname>Veltman</surname>
          </string-name>
          , “
          <article-title>Defaults in Update Semantics”</article-title>
          ,
          <source>Journal of Philosophical Logic</source>
          ,
          <volume>25</volume>
          ,
          <fpage>221</fpage>
          -
          <lpage>261</lpage>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>F.P.J.M.</given-names>
            <surname>Voorbraak</surname>
          </string-name>
          ,
          <article-title>As Far as I Know</article-title>
          , Utrecht University, Utrecht,
          <string-name>
            <surname>NL</surname>
          </string-name>
          , Questiones Infinitae volume VII,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>T.</given-names>
            <surname>Williamson</surname>
          </string-name>
          , “
          <article-title>Some philosophical aspects of reasoning about knowledge”</article-title>
          ,
          <source>Proceedings of TARK'01, J. van Benthem (ed.)</source>
          ,
          <fpage>97</fpage>
          -
          <lpage>97</lpage>
          , Morgan Kaufmann Publishers, San Francisco,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>