=Paper=
{{Paper
|id=Vol-494/paper-20
|storemode=property
|title=Talking Your Way into Agreement: Belief Merge by Persuasive Communication
|pdfUrl=https://ceur-ws.org/Vol-494/famaspaper2.pdf
|volume=Vol-494
|dblpUrl=https://dblp.org/rec/conf/mallow/BaltagS09
}}
==Talking Your Way into Agreement: Belief Merge by Persuasive Communication==
Alexandru Baltag
Computing Laboratory, Oxford University
Email: Alexandru.Baltag@comlab.ox.ac.uk

Sonja Smets
Dept. of Artificial Intelligence and Dept. of Philosophy, University of Groningen, and IEG, Oxford University
Email: S.J.L.Smets@rug.nl
I. INTRODUCTION

We investigate the issue of reaching doxastic agreement among the agents of a group by "sharing" information via successive acts of sincere, persuasive public communication within the group.

As usually considered in Social Choice theory, the problem of preference aggregation is to find a natural and fair "merge" operation (subject to various naturalness or fairness conditions) for aggregating the agents' preferences into a single group preference. Depending on the stringency of the required fairness conditions, one obtains either an impossibility theorem (e.g. Arrow's theorem [2]) or a classification of the possible types of reasonable merge operations [1].

In this paper we propose a more "dynamic" approach to this issue. Dynamically speaking, "merging" preference relations means finding an action or a sequence of actions (a protocol) that, when applied to any arbitrary multi-agent preference model, produces a new model in which all the agents' preference relations are the same. When the new relations are the result of a specific merge operation, we say that we have "realized" this operation via the given (sequence of) action(s). One would like to know what types of merges are realizable using only specific types of preference-changing actions.

In a doxastic/epistemic setting, the agents' preference relations are interpreted as "doxastic preferences" or "doxastic plausibility" orders. These encode the agents' beliefs, but in fact they capture all their doxastic-epistemic attitudes: their "knowledge" (in the sense of absolutely certain, unrevisable, irrevocable knowledge, i.e. the epistemic concept mostly used in Logic, Computer Science and Economics), their "strong beliefs" and "safe beliefs" (the latter also known as "defeasible knowledge", i.e. the epistemic concept used mostly by philosophers and researchers in Belief Revision theory), as well as their "conditional beliefs" (encoding their "belief-revision strategy", i.e. their contingency plans for belief change). In other words, an agent's doxastic preference structure captures all her "information": both her "hard" (absolutely certain, infallible) information and her "soft" (potentially fallible) information. In this context, a preference merge operation corresponds to a way of combining the agents' information into a single "group information".

Similarly, preference-changing actions can be interpreted in a doxastic setting as acts of communication or persuasion. But not every preference-changing action can be understood in this way: there has to be a specific relation between the speaker's prior preferences before the action and the whole group's posterior preferences. Actions in which this relation holds will be instances of sincere and persuasive public communication.

An announcement of some information P is said to be "public" when it is common knowledge that this particular message P is announced and that all the agents adopt the same attitude towards the (plausibility of the) announcement: they all adopt the same opinion about the reliability of this information.
Depending on the specific common attitude, there are three main possibilities that have been discussed in the literature: (1) the information P is certainly true: it is common knowledge that the message is necessarily truthful; (2) the announcement is strongly believed by all agents to be true: it is common knowledge that everybody strongly believes that the speaker tells the truth; (3) the announcement is (simply) believed: it is common knowledge that everybody believes (in the simple, "weak" sense) that the speaker tells the truth. These three alternatives correspond to three forms of "learning" a public announcement, discussed in [12], [14] in a Dynamic Epistemic Logic context: "update"¹ !P, "radical upgrade" ⇑P and "conservative upgrade" ↑P. Under various names, they have been previously proposed in the literature on Belief Revision, e.g. by Rott [23] and Boutilier [10], and in the literature on dynamic semantics for natural language by Veltman [28]. The first operation (update) models truthful public announcements of "hard" information; the other two model "soft" public announcements.

¹ Note that in Belief Revision, the term "belief update" is used for a totally different operation (the Katsuno–Mendelzon update [21]), while what we call "update" is known as "conditioning". We choose to follow here the terminology used in Dynamic Epistemic Logic, but we want to warn the reader against any possible confusion with the KM update.

"Sincerity" of a communication act can be defined as the sharing of information that was already "accepted" by the speaker (before the act). The meaning of "acceptance" depends on the form of communication: as we'll see, for updates with "hard" information, acceptance means "knowledge" (in the irrevocable sense), while for upgrades with "soft" information, acceptance just means some type of "belief" or "strong belief" (depending on whether the upgrade is "conservative" or "radical"). But, as a general concept, prior acceptance requires that the speaker's own doxastic structure should not be changed by her sincere communication.

"Persuasiveness" requires that the communicated information becomes commonly "accepted" by all the agents (in the same sense of "acceptance" that the speaker has adopted): this means that, after the act, everybody commonly exhibits the same doxastic attitude as the speaker (knowledge, belief or strong belief) towards the communicated information. So, after a persuasive communication, all agents reach a partial agreement, namely with respect to the specific information that has been communicated.

In a cooperative setting, the goal of "sharing" doxastic information is to reach "agreement" with respect to all the (relevant) issues. Indeed, the natural stopping point of iterated sharing is when nothing is left for further sharing or persuading, i.e. complete agreement. Any further sincere persuasive communication is redundant at that stage: it can no longer change any agent's doxastic structure. This happens exactly when all the agents' relevant doxastic attitudes towards all issues are exactly the same. (Which attitude is relevant depends again on the type of communication: "knowledge" for updates, "belief" for conservative upgrades, "strong belief" for radical upgrades.) This means that the agents' (relevant) accessibility relations (i.e., respectively, the knowledge relations, the belief structure or the strong belief structure) become identical: we say that these structures have "merged" into one.

So we arrive in a natural way at the main issue addressed in this paper: the "dynamic merge" of doxastic structures by sincere persuasive public communication. In particular, we investigate the realizability of merge operations via (1) updates, (2) radical upgrades and (3) conservative upgrades. We show that, in the first case, only the epistemic structures (given by the "hard" knowledge relations) can be merged; moreover, the only realizable merge is in this case the so-called "parallel merge" [1], given by the intersection of all preference relations. Epistemically, this corresponds to the familiar concept of "distributed knowledge". The realizability result is constructive: it comes with a specific announcement-based protocol for realizing this merge. This is essentially the algorithm in van Benthem's paper "One is a Lonely Number" [11]: the agents announce "all they know", in no particular order. In the second case (radical upgrade), the "defeasible knowledge" structures are merged, but in fact this implies that all the other doxastic attitudes become the same: the agents' whole "doxastic preference" structures are merged. The natural analogue of the above-mentioned protocol for radical upgrades realizes now a different type of merge ("priority merge", itself a natural epistemic modification of the other basic type of merge considered in [1], the "lexicographic merge").
Finally, in the case of conservative upgrades, only the (simple) belief structures (given by the doxastic "best alternative" relations) can be merged. Moreover, priority merge is realizable via the natural analogue of the same protocol for conservative upgrades.

This surface similarity between the three cases is pleasing, but in fact it hides deeper dissimilarities. As we mentioned, the realizable merge is unique in the first case. This is not true in the other cases: a whole class of merge operations can be realized by radical or conservative upgrades. Moreover, in the first case the order of the announcements is irrelevant, while in the other cases the order matters: if the upgrades are performed in a different order than the one prescribed in the protocol, then different merge operations may be realized! Finally, in the first case, the merge may be realized by allowing only one announcement by each agent (of "all she knows"). But this is not true in the other cases: the agents may have to make many soft announcements, including announcing facts that may already be entailed by their previous announcements!

Some of the questions we address in this paper came to our attention after hearing a presentation by J. van Benthem on "The Social Choice Behind Belief Revision" at the workshop "Dynamic Logic Montreal" in 2007 [13]. Van Benthem's view was that belief dynamics can itself be captured as a form of preference merge (between the prior doxastic preferences and the incoming doxastic preferences about the new information). One can see that our approach here is actually the dual of the perspective adopted in [13]: implementing preference merge dynamically by successive belief revisions. In Section V we present the protocols for the dynamic realization of parallel merge and priority merge, giving counterexamples that point out the differences between them. We end with a short note and an open question in our Conclusions section.

II. PLAUSIBILITY STRUCTURES AND DOXASTIC ATTITUDES

In this section, we review some basic notions and results from [3]. We use finite "plausibility" frames, in the sense of our papers [3], [4], [5], [6], [7], [8]. These kinds of semantic structures are the natural multi-agent generalizations of structures that are standard, in one form or another, in Belief Revision: Halpern's "preferential models" [20], Spohn's ordinal-ranked models [24], Board's "belief-revision structures" [16], Grove's "sphere" models [19]. Unlike the settings in [7], [8], we restrict here to the finite case, for reasons of simplicity.

For a given set A of labels called "agents", a (finite, multi-agent) plausibility frame is just a finite, multi-agent Kripke frame (S, R_a)_{a∈A} in which the accessibility relations R_a ⊆ S × S, usually denoted by ≤_a, are called "plausibility orders" or "doxastic preference" relations, and are assumed to be locally connected preorders. Here, a "locally connected preorder" ≤ ⊆ S × S is a reflexive and transitive relation such that: if s ≤ t and s ≤ w then either t ≤ w or w ≤ t; and if t ≤ s and w ≤ s then either t ≤ w or w ≤ t. See [3] for a justification and motivation of these conditions. We read s ≤_a t as "agent a considers t at least as plausible as s". We use the notation s ∼_a t for the comparability relation with respect to ≤_a (i.e. s ∼_a t iff either s ≤_a t or t ≤_a s), s <_a t for the strict plausibility relation (s ≤_a t but not t ≤_a s), s(a) := {t ∈ S : s ∼_a t} for agent a's information cell at s, and best_a(P) for the most plausible (i.e. ≤_a-maximal) states of a set P ⊆ S. A plausibility model adds a valuation ‖·‖ mapping atomic sentences to sets of states; a pointed model moreover designates an actual state s_0. Writing P_S for the interpretation of a sentence P in a model S, the doxastic-epistemic attitudes are given by: conditional belief, s |= B_a^P Q iff best_a(s(a) ∩ P_S) ⊆ Q_S (with simple belief B_a Q as the special case P = S); irrevocable knowledge, s |= K_a Q iff t |= Q for all t ∈ s(a), or equivalently s |= K_a Q iff s |= B_a^P Q for all P; safe belief ("defeasible knowledge"), s |= □_a Q iff t |= Q for all t with s ≤_a t; and strong belief, s |= Sb_a P iff s(a) ∩ P_S ≠ ∅ and w <_a t for all t ∈ s(a) ∩ P_S and all w ∈ s(a) \ P_S.
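To fix intuitions, here is a minimal Python sketch of these definitions (our illustration, not part of the paper; all names and helper functions are ours). It encodes Albert's plausibility order from Example 1 below, checks the locally-connected-preorder conditions, and evaluates the attitudes just defined at the real world.

```python
# A minimal sketch (ours, not from the paper). Worlds are (Drunk, Genius)
# valuations; an agent's plausibility order is a set of pairs (s, t),
# read "t is at least as plausible as s".

S = {('D', 'G'), ('D', '-G'), ('-D', 'G'), ('-D', '-G')}

def closure(arrows):
    """Reflexive-transitive closure (the drawings skip loops/composed arrows)."""
    leq = set(arrows) | {(s, s) for s in S}
    while True:
        extra = {(s, w) for (s, t) in leq for (u, w) in leq if t == u} - leq
        if not extra:
            return leq
        leq |= extra

def locally_connected_preorder(leq):
    refl = all((s, s) in leq for s in S)
    trans = all((s, w) in leq for (s, t) in leq for (u, w) in leq if t == u)
    loc = all((t, w) in leq or (w, t) in leq
              for s in S for t in S for w in S
              if ((s, t) in leq and (s, w) in leq)
              or ((t, s) in leq and (w, s) in leq))
    return refl and trans and loc

# Albert's order from Example 1 below: D,G -> D,-G -> -D,G, with -D,-G
# sitting in a separate information cell.
leq_a = closure({(('D', 'G'), ('D', '-G')), (('D', '-G'), ('-D', 'G'))})
assert locally_connected_preorder(leq_a)

def cell(s, leq):            # information cell s(a)
    return {t for t in S if (s, t) in leq or (t, s) in leq}

def best(P, leq):            # most plausible states of P
    return {s for s in P if all((t, s) in leq for t in P if (s, t) in leq)}

def cond_bel(P, Q, leq, s):  # s |= B^P Q
    return best(cell(s, leq) & P, leq) <= Q

def know(Q, leq, s):         # s |= K Q
    return cell(s, leq) <= Q

def safe(Q, leq, s):         # s |= []Q (safe belief / defeasible knowledge)
    return {t for t in S if (s, t) in leq} <= Q

def strong(P, leq, s):       # s |= Sb P
    c = cell(s, leq)
    return bool(c & P) and all((w, t) in leq and (t, w) not in leq
                               for t in c & P for w in c - P)

D = {s for s in S if s[0] == 'D'}
G = {s for s in S if s[1] == 'G'}
real = ('D', 'G')
assert know(D | G, leq_a, real)         # K_a(D v G)
assert cond_bel(S, G, leq_a, real)      # B_a G: a true belief...
assert not safe(G, leq_a, real)         # ...that is not safe, since
assert cond_bel(D, S - G, leq_a, real)  # B_a^D(-G) holds at the real world
```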
There are important differences between K and □. Irrevocable knowledge K satisfies the axioms of the modal system S5, so it is fully introspective; in contrast, defeasible knowledge □ is only positively introspective, but not necessarily negatively introspective. (In fact, the complete logic of □ is the modal logic S4.3.) An agent's belief can be safe without him necessarily "knowing" this (in the "strong" sense of the irrevocable knowledge K): "safety" (similarly to "truth") is an external property of the agent's beliefs, one that can be ascertained only by comparing his belief-revision system with reality. Indeed, the only way for an agent to know a belief to be safe is to actually know it to be truthful. This is captured by the valid identity K_a □_a P = K_a P. In other words: knowing that something is safe to believe is the same as knowing it to be true. In fact, all beliefs held by an agent "appear safe" to him: in order to believe them, he has to believe that they are safe. This is expressed by the valid identity B_a □_a P = B_a P, saying that believing that something is safe to believe is the same as just believing it.³ Contrast this with the situation concerning "knowledge": in our logic (as in most standard doxastic-epistemic logics), we have the identity B_a K_a P = K_a P. So believing that something is known is the same as knowing it!

³ The proof is an easy semantic exercise, which can be rendered in English as: saying that "the best worlds have the property that all the worlds at least as good as them are P-worlds" is equivalent to simply saying that "the best worlds are P-worlds".

The difference between K and □, and their different properties as expressed by the above identities, are enough to solve the so-called "Paradox of the Perfect Believer" [18], [29], [27], [22], [30], [17]: when we say that somebody "only believes that she knows something (without really knowing it)", we are using the word "knowledge" in a different sense than the fully introspective K modality. A natural reading is to interpret it as the defeasible knowledge □, in which case "believing that one knows" is consistent, and simply amounts to believing (by the identity B_a □_a P = B_a P).

Unfolding the definition above: P is a strong belief at a state s iff P is epistemically possible and moreover all epistemically possible P-states at s are more plausible than all epistemically possible non-P states. This notion was called "strong belief" by Battigalli and Siniscalchi [9], while Stalnaker [26] calls it "robust belief". It is easy to see that we have the following equivalence: S |= Sb_a P iff S |= B_a P and S |= B_a^Q P for every Q such that S |= ¬K_a(Q → ¬P). In other words: something is a strong belief iff it is believed, and this belief can only be defeated by evidence (truthful or not) that is known to contradict it. An example is the "presumption of innocence" in a trial: requiring the members of the jury to hold the accused as "innocent until proven guilty" means asking them to start the trial with a "strong belief" in innocence.

Example 1: Consider the situation of Professor Albert Winestein. Albert feels that he is a genius. He knows that there are only two possible explanations for this feeling: either he is a genius or he's drunk. He doesn't feel drunk, so he believes that he is a sober genius. However, if he realized that he's drunk, he'd think that his genius feeling was just the effect of the drink; i.e. after learning he is drunk he'd come to believe that he was just a drunk non-genius. In reality though, Albert is both drunk and a genius.

We can represent Albert's information and (conditional) beliefs by the following plausibility relation:

¬D,¬G     D,G →a D,¬G →a ¬D,G

Here, as in all other drawings, we use labeled arrows for plausibility relations ≤_a (not for the "best alternative" relations →_a!), going from less plausible to more plausible worlds, but we skip loops and composed arrows (since ≤_a are reflexive and transitive). The real world is (D,G). Albert considers (D,¬G) as being more plausible than (D,G), and (¬D,G) as more plausible than (D,¬G). Albert can distinguish all these worlds from (¬D,¬G), since (in the real world) he knows ("K_a") that either D or G holds.
Consider another agent, Professor Mary Curry. She is pretty sure that Albert is drunk: she can see this with her very own eyes. But Mary is completely indifferent with respect to Albert's genius: she considers the possibility of genius and the one of non-genius as equally plausible. However, having a philosophical mind, Mary is aware of the possibility that the testimony of her eyes may in principle be wrong: it is in principle possible that Albert is not drunk, despite the presence of the usual symptoms. Mary's beliefs are captured by her plausibility order:

¬D,¬G ↔m ¬D,G →m D,G ↔m D,¬G

We can see from the drawing that Mary strongly believes D, and in fact her belief is safe: so she "knows" that Albert is drunk, in the sense of defeasible knowledge (although she doesn't know it, in the sense of K). But she is completely indifferent with respect to G: hence she considers the possibility of G and the one of ¬G as equally plausible.

To put together the agents' plausibility orders, we need to be told what they know about each other. Suppose all their opinions as described above (i.e. all their conditional beliefs) are common knowledge: essentially, this means their doxastic preferences are common knowledge. We thus obtain the following multi-agent plausibility model:

[Figure: the four worlds carry both relations: for Albert, D,G →a D,¬G →a ¬D,G, with ¬D,¬G in a separate information cell; for Mary, ¬D,¬G ↔m ¬D,G →m D,G ↔m D,¬G.]

At the real world (D,G), one can check that B_a G is true. Further, Albert does not know G, hence (D,G) |= ¬K_a G ∧ ¬□_a G, while (D,G) |= K_a(D ∨ G). Moreover, he doesn't "know" G in the defeasible sense either: his belief in G is not safe, since B_a^D ¬G holds in the real world: so if Albert were to learn (correctly) that he was drunk, he'd lose his (true) belief in being a genius.

Example 2: Let us now relax our assumptions about the agents' mutual knowledge: suppose that only Albert's opinions are common knowledge; in addition, suppose that it is common knowledge that Mary has no opinion on Albert's genius (so she considers genius and non-genius as equi-plausible), but that she has a strong opinion about his drunkenness: she can see him, so judging by his behavior she either strongly believes he's drunk or she strongly believes he's not drunk. However, her actual opinion about this is unknown to Albert, who thus considers both opinions as equally plausible.

The resulting model is:

[Figure: eight worlds, consisting of two copies of the four (D/¬D, G/¬G) worlds: an upper copy in which Mary's plausibility order favors D (as in Example 1) and a lower copy in which it favors ¬D; Albert's relations run as before within each copy, and he considers the two copies equi-plausible.]

The real world is represented by the upper (D,G) state. One can check that, in the real world, Mary still strongly believes that Albert is drunk; but he does not know this: Mary's plausibility relation between D and ¬D is unknown to Albert. However, he knows that either she strongly believes D or she strongly believes ¬D.

We can go on and modify the example further, by allowing that Albert's plausibility order is not commonly known either, etc. But, for simplicity of drawing, we stop here: when less common knowledge is assumed, more worlds are possible, and hence the drawings get more and more complex.

G-Bisimulation: For a group G ⊆ A of agents, we say the pointed models S = (S, ≥_a, ‖·‖, s_0)_{a∈A} and S′ = (S′, ≥′_a, ‖·‖′, s′_0)_{a∈A} are G-bisimilar, and write S ≃_G S′, if the pointed Kripke models (S, ≥_a, ‖·‖, s_0)_{a∈G} and (S′, ≥′_a, ‖·‖′, s′_0)_{a∈G} (having as accessibility relations only the G-labeled relations) are bisimilar in the usual sense from Modal Logic. When G = A, we simply write S ≃ S′, and say S and S′ are bisimilar. Bisimilar models differ only formally: they encode precisely the same doxastic-epistemic information, and they satisfy the same modal sentences.

III. BELIEF DYNAMICS: SINCERE, PERSUASIVE PUBLIC COMMUNICATION

We move on now to belief dynamics: what happens when some proposition P is publicly announced? According to Dynamic Epistemic Logic, this induces not only a revision of beliefs, but a change of model: a "revision" of the whole relational structure, changing the agents' plausibility orders.
However, the specific change depends on the agents' attitudes towards the plausibility of the announcement: how certain is the new information? Three main possibilities have been discussed in the literature: (1) the announcement P is certainly true: it is common knowledge that the speaker tells the truth; (2) the announcement is strongly believed to be true by everybody: it is common knowledge that everybody strongly believes that the speaker tells the truth; (3) the announcement is (simply) believed: it is common knowledge that everybody believes (in the simple, "weak" sense) that the speaker tells the truth. These three alternatives correspond to three forms of "joint learning", discussed in [12], [14] in a Dynamic Epistemic Logic context: "update"⁴ !P, "radical upgrade" ⇑P and "conservative upgrade" ↑P. Under various names, the single-agent versions of these doxastic transformers have been previously proposed by e.g. Rott [23], Boutilier [10] and Veltman [28].

⁴ See footnote 1: what we call "update" should not be confused with the Katsuno–Mendelzon "belief update" [21].

We will use "joint upgrades" as a general term for all three model transformers, and denote them in general by †P, where † ∈ {!, ⇑, ↑}. Formally, each of our joint upgrades is a (possibly partial) function taking as input a pointed model S = (S, ≤_a, ‖·‖, s_0) and returning a new ("upgraded") pointed model †P(S) = (S′, ≤′_a, ‖·‖′, s′_0), with S′ ⊆ S. Since upgrades are purely doxastic, they won't affect the real world or the "ontic facts" of each world: i.e. they all satisfy s′_0 = s_0 and ‖p‖′ = ‖p‖ ∩ S′, for atomic p. So, in order to completely describe a given upgrade, we only have to specify (a) its possible inputs S; (b) the new set of states S′; (c) the new relations ≤′_a.

(1) Learning Certain Information: Joint "Update". The update !P is an operation on pointed models which is executable (on a pointed model S) iff P is true (at S), and which deletes all the non-P worlds from the pointed model, leaving everything else the same. Formally, an update !P is an upgrade such that: (a) it takes as inputs only pointed models S such that S |= P; (b) the new set of states is S′ = {s ∈ S : s |= P}; (c) s ≤′_a t iff s ≤_a t and s, t ∈ S′.

(2) Learning from a Strongly Trusted Source: (Joint) "Radical" Upgrade. The "radical upgrade" (or "lexicographic upgrade") ⇑P, as an operation on pointed plausibility models, can be described as "promoting" all the P-worlds within each information cell so that they become "better" (more plausible) than all ¬P-worlds in the same information cell, while keeping everything else the same: the valuation, the actual world and the relative ordering between worlds within either of the two zones (P and ¬P) stay the same. Formally, a radical upgrade ⇑P is (a) a total upgrade (taking as input any model S), such that (b) S′ = S, and (c) s ≤′_a t iff either s ∉ P_S and t ∈ s(a) ∩ P_S, or s ≤_a t.

(3) "Barely Believing" What You Hear: (Joint) "Conservative" Upgrade. The so-called "conservative upgrade" ↑P (called "minimal conditional revision" by Boutilier [10]) performs in a sense the minimal possible revision of a model that is forced by believing the new information P. As an operation on pointed models, it can be described as "promoting" only the "best" (most plausible) P-worlds, so that they become the most plausible in their information cell, while keeping everything else the same. Formally, ↑P is (a) a total upgrade, such that (b) S′ = S, and (c) s ≤′_a t iff either t ∈ best_a(s(a) ∩ P_S) or s ≤_a t.
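For concreteness, the three transformers can be coded directly from clauses (a)–(c). The following Python sketch is our own illustration (not code from the paper); it reuses the set-of-pairs representation from the Section II sketch, and ignores the valuation and the actual world, which are simply restricted and carried along as stated above.

```python
# A sketch of the three joint upgrades (ours, following clauses (a)-(c)).
# A model is (S, leq) with leq[a] a set of pairs; a proposition P is a
# set of worlds.

def cell(S, leq_a, s):
    return {t for t in S if (s, t) in leq_a or (t, s) in leq_a}

def best(leq_a, P):
    return {s for s in P if all((t, s) in leq_a for t in P if (s, t) in leq_a)}

def update(S, leq, P):        # !P -- executable only if the actual world is in P
    S2 = S & P
    return S2, {a: {(s, t) for (s, t) in leq[a] if s in S2 and t in S2}
                for a in leq}

def radical(S, leq, P):       # ⇑P: s <=' t iff (s not in P and t in s(a)&P) or s <= t
    return S, {a: leq[a] | {(s, t) for s in S - P
                            for t in cell(S, leq[a], s) & P}
               for a in leq}

def conservative(S, leq, P):  # ↑P: s <=' t iff t in best_a(s(a)&P) or s <= t
    return S, {a: leq[a] | {(s, t) for s in S
                            for t in best(leq[a], cell(S, leq[a], s) & P)}
               for a in leq}
```

On Example 1's model, for instance, `radical(S, leq, D)` leaves Mary's order unchanged while promoting the D-worlds inside Albert's big information cell, which matches the upgraded model of Example 4 below.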
Redundancy, Informativity and Sincerity: A joint upgrade †P is redundant on a model S with respect to a group of agents G ⊆ A if the upgraded model is G-bisimilar to the original one: †P(S) ≃_G S. This means that, as far as the group G is concerned, †P doesn't change anything: all the group G's doxastic attitudes stay the same after the upgrade. An upgrade †P is informative (on S) to group G if it is not redundant with respect to G. An upgrade †P is redundant with respect to an agent a if it is redundant with respect to the singleton {a}.

Redundancy is especially important if we want to capture the "sincerity" of an announcement made by a speaker. Intuitively, an announcement is "sincere" when it agrees with the speaker's prior epistemic state: accepting the announcement doesn't change the speaker's own state.
Definition: A (public) announcement †ϕ made by an agent a is said to be sincere if it leaves unchanged agent a's own plausibility structure, i.e. it is non-informative to agent a.

Proposition 1:
1) In a pointed model S, !P is redundant with respect to a group G iff P is common knowledge in S among the group G; i.e. S ≃_G !P(S) iff S |= Ck_G P. Special case: an announcement !P made by an agent a is sincere iff a knows P, i.e. if K_a P holds in the original model (before the announcement).
2) ⇑P is redundant with respect to a group G iff it is common knowledge in the group G that P is strongly believed (by all G-agents); i.e. S ≃_G ⇑P(S) iff S |= Ck_G(ESb_G P). Special case: an announcement ⇑P made by an agent a is sincere iff a strongly believes P (before the announcement).
3) ↑P is redundant with respect to a group G iff it is common knowledge in the group G that P is believed (by all G-agents); i.e. S ≃_G ↑P(S) iff S |= Ck_G(Eb_G P). Special case: an announcement ↑P made by an agent a is sincere iff a believes P (before the announcement).

Invariance under communication: For a given upgrade type † ∈ {!, ⇑, ↑}, we say that a pointed model S is invariant under †-communication within group G iff, for all propositions P, any sincere announcement of the form †P made by any agent in G is redundant in S.

Proposition 2:
1) A pointed model S is invariant under !-communication within G iff all (irrevocable) knowledge is common knowledge within G, i.e. for all propositions P and all agents a, b ∈ G, K_a P holds in S iff K_b P holds in S; equivalently: iff all G-agents' epistemic relations coincide: ∼_a = ∼_b for all a, b ∈ G.
2) A pointed model S is invariant under ⇑-communication within G iff all "defeasible knowledge" is common defeasible knowledge within G, i.e. for all propositions P and all agents a, b ∈ G, □_a P holds in S iff □_b P holds in S; equivalently: iff all strong beliefs (conditional beliefs) are common strong beliefs (common conditional beliefs); equivalently: iff all G-agents' plausibility relations coincide: ≤_a = ≤_b for all a, b ∈ G.
3) A pointed model S is invariant under ↑-communication within G iff all (simple) beliefs are common beliefs within G, i.e. for all propositions P and all agents a, b ∈ G, B_a P holds in S iff B_b P holds in S; equivalently: iff all G-agents' "best alternative" relations coincide: →_a = →_b for all a, b ∈ G.

Example 3: Suppose that, in the situation of Example 1 above, a trusted, infallible source publicly announces that Albert is drunk: this is "hard", incontrovertible information, corresponding to a joint update !D. The updated model is:

D,G →a D,¬G (with Mary indifferent: D,G ↔m D,¬G)

After the update, Albert starts to wrongly believe that ¬G is the case! This is an example of a true but unsafe belief: it can be lost after acquiring (new) true information.

Example 4: Consider again the situation in Example 3, but instead of Albert receiving the information from an infallible source, he receives the information from Mary. Mary announces publicly (to Albert) that D is the case, and we assume that Mary's announcement is both sincere and persuasive: she tells what she thinks and she convinces Albert. Since Mary is a fallible agent (and not an infallible source), this announcement is soft: in principle, she could be wrong, or she could lie, or she could simply guess and be right only by chance. So we cannot interpret Mary's announcement as a "hard" update !D, since such an announcement wouldn't be sincere: the update !D would automatically change Mary's order (making her irrevocably know D, when she didn't know it before!). But we can model it as a "soft" announcement ⇑D; i.e. after hearing it, all agents upgrade with D: they start to prefer any D-world to any ¬D-world. The upgraded model is:

[Figure: for Albert, ¬D,G →a D,G →a D,¬G, with ¬D,¬G still in a separate cell; Mary's order is unchanged from Example 1.]

Note that Mary's order is left unchanged, so the announcement was indeed sincere.

Example 5: What if instead Mary announces that she "knows" that Albert is drunk?
If we take this in the sense of irrevocable knowledge K, then such an announcement would not be sincere: indeed, in the original situation of Example 1, K_m D was false. However, she did "know" it in the sense of defeasible knowledge: □_m D held; she correctly believed D, and this belief was safe. This "knowledge" was fallible, and she was aware of this: she didn't believe that she knows irrevocably (¬B_m K_m D), but she believed that she "knows" defeasibly (B_m □_m D). Hence, she is sincere if she announces that she "knows" in this sense. Assuming that Albert is also aware of the fallibility of her knowledge, but that he still highly trusts her to be right, we can interpret this as a sincere and persuasive announcement of the form ⇑(□_m D). Its effect is the same as in Example 4: the upgraded model is the same.

Counterexample 6: Note that simply announcing that she believes D, or even that she strongly believes D, won't do: this will not be persuasive, since it will not change Albert's beliefs about the facts of the matter (D or ¬D), although it may change his beliefs about her beliefs. Being informed of another's beliefs is not enough to convince you of their truth. Indeed, Mary's beliefs are already common knowledge in the initial model of Example 1: so an upgrade ⇑(B_m D) would be superfluous!

Persuasiveness: So what is needed for persuasive communication is that the speaker (Mary) "converts" the others to her own beliefs. For this, she should not simply announce that she believes them. Instead, she can either announce that something is the case (when in fact she just strongly believes that it is the case), or else announce that she defeasibly "knows" it (when she only believes that she "knows" it; in fact, this implies that she strongly believes that she "knows").

IV. MERGE OPERATIONS

A merge operation, or "aggregation procedure", is an operator ⊗ taking any sequence {R_i}_{1≤i≤n} of preference relations into a "group preference" relation ⊗_i R_i = R_1 ⊗ R_2 ⊗ ··· ⊗ R_n. In [1] the authors give a general classification of types of preference merge, in a very general context, subject to some minimal "fairness" and rationality conditions. They show that all the merge operations satisfying these conditions can be represented as compositions of only two basic merge operators: "parallel merge" and "lexicographic merge".

Parallel Merge: The merge operation we consider first can be thought of as the most "democratic" form of aggregation: everybody has a veto, so that group preferences are unanimous preferences. Following [1], we call it parallel merge. It simply takes the merged relation to be the intersection ⊗_{a∈G} R_a := ∩_{a∈G} R_a of all the preference relations of the agents in a given group G ⊆ A.⁵

⁵ From a purely formal perspective, parallel merge resembles the so-called "non-prioritized belief revision" known from the work of S. O. Hansson, H. Rott, H. van Ditmarsch. But note that "merge" is not "revision"!

Parallel merge is particularly well suited for aggregating the agents' "hard information" (irrevocable knowledge) K, i.e. for merging the epistemic relations {∼_a}_{a∈G}: since we consider absolutely certain and fully introspective knowledge, there is no danger of introducing an inconsistency. The agents can pool their information in a completely symmetric manner, accepting the others' bits without reservations. In fact, parallel merge of the agents' irrevocable knowledge gives us the standard concept of "distributed knowledge" DK:

DK_G P = [∩_{a∈G} R_a] P.

Lexicographic Merge: When the group is hierarchically structured according to some total order (on agents), called a "priority order", the agents with higher priority are thought of as having a higher "epistemic expertise" than the agents with lower priority. For a group G = {a, b} of two agents, in which a has higher priority, we can think of a as the "expert" (or the professor) and of b as the "layman" (or the student). In this context, the natural doxastic merge operation is the so-called lexicographic merge. For two agents a, b, the "lexicographic merge" R_{a/b} gives priority to agent a's strong (i.e. strict) preferences over b's: first, the strict preference order of a is adopted by the group; when a is indifferent between two options, b's preference is adopted; and a-incomparability gives group incomparability. Formally:

R_{a/b} := R_a^< ∪ (R_a^= ∩ R_b) = R_a^< ∪ (R_a ∩ R_b) = R_a ∩ (R_a^< ∪ R_b).
The lexicographic merge is particularly suited for aggregating "soft information" (strong beliefs, safe beliefs, conditional beliefs) in the absence of any hard information: since soft information is not fully reliable (because of the lack of negative introspection for □, and because of the potential falsity of belief, conditional belief and strong belief), some "screening" must be applied to some agents' information (and so some hierarchy must be enforced), in order to ensure the consistency of the merge.

(Relative) Priority Merge: Note that, in the lexicographic merge, the first agent's priority is "absolute". But in the presence of "hard" information, the lexicographic merge of soft information must be modified, by first pooling together all the hard information and then using it to restrict the lexicographic merge of soft information. This leads us to a "more democratic" combination of parallel merge and lexicographic merge, called "(relative) priority merge" R_{a⊗b}:

R_{a⊗b} := (R_a^< ∩ R_b^∼) ∪ (R_a^= ∩ R_b) = R_a ∩ R_b^∼ ∩ (R_a^< ∪ R_b).
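Continuing our illustrative sketch (strict and indiff as before), relative priority merge just adds the comparability veto; as a sanity check, the code below reproduces the Mary-before-Albert merge claimed in Example 7 below. World names and helpers are our own abbreviations.

```python
def strict(R):  return {(s, t) for (s, t) in R if (t, s) not in R}
def indiff(R):  return {(s, t) for (s, t) in R if (t, s) in R}
def comp(R):    return R | {(t, s) for (s, t) in R}     # comparability R^~

def priority(Ra, Rb):  # R_{a(+)b} = (R_a^< & R_b^~) | (R_a^= & R_b)
    return (strict(Ra) & comp(Rb)) | (indiff(Ra) & Rb)

# Check on Examples 1/7 (worlds abbreviated; closure of the drawn arrows):
DG, DnG, nDG, nDnG = 'DG', 'D-G', '-DG', '-D-G'
S = {DG, DnG, nDG, nDnG}

def closure(arrows):
    leq = set(arrows) | {(s, s) for s in S}
    while True:
        extra = {(s, w) for (s, t) in leq for (u, w) in leq if t == u} - leq
        if not extra:
            return leq
        leq |= extra

R_a = closure({(DG, DnG), (DnG, nDG)})                  # Albert
R_m = closure({(nDnG, nDG), (nDG, nDnG), (nDG, DG),     # Mary
               (DG, DnG), (DnG, DG)})
merged = priority(R_m, R_a)                             # priority to Mary
assert strict(merged) == {(nDG, DG), (nDG, DnG), (DG, DnG)}
```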
A (finite) †-upgrade sequence is a finite sequence
In a Relative Priority Merge, both agents have a †P~ = (†P 1 , . . . , †P n ) of upgrades †P i of the
“veto” with respect to group incomparability. Here given type † ∈ {!, ⇑, ↑}. Any †-upgrade sequence
the group can only compare options that both agents induces a (partial) function, mapping every pointed
can compare; and whenever the group can compare model S into a finite sequence †P~ (S) = (Si )i of
two options, everything goes on as in the lexico- pointed models, defined inductively by: S0 := S;
graphic merge. Agent a’s order gets priority, while and Si+1 := †P i (Si ), if this is defined (and un-
b’s order is adopted only when a is indifferent. defined otherwise). A †-upgrade sequence †P~ is a
Since our plausibility structures they encode both †-communication sequence within group G if all its
the “hard” and the “soft” information possessed by upgrades are sincere for at least one G-agent at the
the agent, it seems that Priority Merge is best suited moment of speaking: i.e. for every i ≤ n there exists
for aggregating the agents’ plausibility relations. ai ∈ G such that †P i is sincere for ai on Si .
Example 7: If in Example 1, we give priority to A †-communication sequence †P~ within a group
Mary, the relative priority merge Rm⊗a of Mary’s G is exhaustive on a model S if the last model
and Albert’s original plausibility orders amounts to: S of the induced sequence †P~ (S) is invariant
n
under (sincere) †-communication; equivalently, iff
¬D, ¬G ¬D, G / D, G / D, ¬G it is maximal: it cannot be extended to any longer
†-communication sequence. By Proposition 2, the
If instead we give priority to Albert, we simply last model Sn generated by an exhaustive †-
obtain Albert’s order as our “merge”: communication sequence is one in which all the
Ra⊗m = Ra . G-agents’ “relevant structures” Ran coincide.
An exhaustive †-communication sequence within N
It is important to note that in both cases of Example G realizes a given preference merge operation
7, some of the resulting joint beliefs are wrong: on a given pointed model S if, for any agent
when giving priority to Mary, both agents end up b ∈ G, the relevant structures Rbn in the last
N
A merge operation ⊗ is realizable by †-communication (within a group G) if there exists some exhaustive †-communication sequence (within G) that realizes ⊗. The merge operation is said to be constructively realizable by †-communication if there exists a protocol such that every †-communication sequence that complies with the protocol is exhaustive and realizes ⊗.

For each of the above types of public communication (!, ⇑, ↑), we can ask which merge operations are realizable, or constructively realizable. The answer depends on the constraints (transitivity, connectedness etc.) satisfied by the agents' relevant structures (epistemic, doxastic or plausibility relations).

Proposition 3: Parallel merge is the only merge operation that is realizable by updates (i.e. by !-communication). Moreover, parallel merge is constructively realizable by updates. The protocol is as follows: in no particular order, the agents have to publicly announce "all that they know" (in the sense of irrevocable knowledge K). More precisely, for each set of states P ⊆ S such that P is known to a given agent a, a public announcement !P is made.

This is essentially the protocol in van Benthem's paper "One is a Lonely Number" [11]. Formally, the protocol consists of n steps, each step being a sequence of announcements by the same agent: first, one of the agents, say a, announces all he knows. This is the sequence of announcements:

σ_a := ∏ {!P : P ⊆ S such that S |= K_a P}

(where ∏ is sequential composition of actions). Then, another agent b performs a similar step (announcing all she knows after the first step), etc.

Important Observations: (1) The order in which the agents make the announcements doesn't actually matter. They may even "interrupt" each other: any exhaustive !-communication sequence produces the same result. (2) The protocol can be simplified by restricting it to knowledge announcements only, i.e. announcements of the form !(K_a P) (for each P such that K_a P holds): instead of announcing all they know, the agents announce that they know all that they know. (3) The protocol can also be simplified by allowing each agent to make only one announcement: instead of successively announcing everything he knows, he can just announce the conjunction !(⋀{P : S |= K_a P}) of all the things he knows.
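Under the same toy representation (our illustration, assuming the helpers from the earlier sketches' conventions), the Proposition 3 protocol can be simulated by brute force: every agent updates with every proposition she knows at the actual world. The enumeration is exponential in |S|, which is fine for toy examples. After the run, every agent's information cell at s0 is the intersection of the initial cells, i.e. the parallel-merge ("distributed knowledge") cell.

```python
from itertools import combinations

def subsets(X):
    xs = list(X)
    return (set(c) for r in range(len(xs) + 1) for c in combinations(xs, r))

def epi_cell(S, leq_a, s):
    return {t for t in S if (s, t) in leq_a or (t, s) in leq_a}

def share_all_knowledge(S, leq, s0):
    """Prop. 3 protocol, sketched: in any order, each agent announces every
    P she knows; each announcement is the update !P from Section III."""
    for a in list(leq):
        for P in subsets(S):
            if epi_cell(S, leq[a], s0) <= P:      # a knows P at s0
                S = S & P                         # update !P
                leq = {b: {(s, t) for (s, t) in leq[b]
                           if s in S and t in S} for b in leq}
    return S, leq
```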
Proposition 4: For every given priority order (a_1, ..., a_n) on agents, the corresponding priority merge (of plausibility relations) is constructively realizable by radical upgrades (i.e. by ⇑-communication), but it is not the only such realizable operation. The protocol is a natural modification of the previous one: following the priority order, the agents have to publicly announce "all that they strongly believe". More precisely, for each set P ⊆ S such that P is strongly believed by the given agent a, a joint radical upgrade ⇑P is performed.

Formally, the protocol consists again of n steps, each step being a sequence of announcements by the same agent: first, the first agent according to the priority order, say a, announces all that he strongly believes. This is the sequence of radical upgrades:

ρ_a := ∏ {⇑P : P ⊆ S such that S |= Sb_a P}.

Then, the next agent in the hierarchy, say b, performs a similar step (announcing all she strongly believes after the first step), etc.

Important Observations: (1) Now, the order of the announcements matters: the agents have to respect the priority order. Moreover, no interruptions are allowed: agents with lower priority can speak only after the agents with higher priority have finished announcing all their strong beliefs. Any interruptions may lead to the realization of completely different merge operations (see the Counterexample below)! (2) This protocol can also be simplified by restricting it to "defeasible knowledge" announcements only, i.e. announcements of the form ⇑(□_a P). But recall that, unlike irrevocable knowledge, defeasible knowledge is not negatively introspective: so the agents don't know for sure what things they "know" and what not, and hence the best they can do is to announce all the things they believe they "know". But, since believing to (defeasibly) "know" is the same as believing, they have to announce that they "know" P, for each proposition P which they believe. So the simplified protocol replaces e.g. the first step by the following sequence of radical upgrades:

ρ′_a := ∏ {⇑(□_a P) : P ⊆ S such that S |= B_a P}.
(3) Unlike the case of updates and parallel merge, in general the above protocol actually requires multiple announcements by the same agents, including announcing facts that may already be entailed by their previous announcements! A sequence of radical upgrades is in general not equivalent to a single radical upgrade, so there is no way to compress the sequences ρ_a or ρ′_a into a single upgrade!
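The Proposition 4 protocol admits the same kind of brute-force simulation (again our illustration, under the same set-of-pairs conventions), with the priority order now fixed and interruptions disallowed. By Proposition 4, running this with the priority order (m, a) on Example 1's model should realize R_{m⊗a}; Counterexample 9 below shows how interruptions break this.

```python
from itertools import combinations

def subsets(X):
    xs = list(X)
    return (set(c) for r in range(len(xs) + 1) for c in combinations(xs, r))

def sb(S, leq_a, s0, P):
    """s0 |= Sb_a P in the current model."""
    c = {t for t in S if (s0, t) in leq_a or (t, s0) in leq_a}
    return bool(c & P) and all((w, t) in leq_a and (t, w) not in leq_a
                               for t in c & P for w in c - P)

def share_strong_beliefs(S, leq, s0, priority_order):
    """Prop. 4 protocol, sketched: following the priority order, each agent
    performs the joint radical upgrade for every P she strongly believes
    in the model reached so far (no interruptions allowed)."""
    for a in priority_order:
        for P in subsets(S):
            if sb(S, leq[a], s0, P):
                leq = {b: leq[b] | {(s, t) for s in S - P for t in P
                                    if (s, t) in leq[b] or (t, s) in leq[b]}
                       for b in leq}          # joint radical upgrade with P
    return leq
```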
89:;
?>=< 3 @ABC
GFED 3 89:;
?>=<
Example 8 Recall the initial order of Marry and #
Albert in Example 1. Consider the protocol: s a,b
w a,b
u
The Power of Agendas This order-dependence
⇑ 2m D; ⇑ Ka (D ∨ G); ⇑ 2a ¬G
illustrates a phenomenon well-known in Social
The first is a sincere announcement by Mary, the rest Choice Theory: the important role of the person
are sincere announcements by Albert. The second who “sets the agenda”: the “Judge” who assigns
announcement, though not in “defeasible knowl- priorities to witnesses’ stands; the chairman or
edge” form (as required by the simplified protocol moderator who determines the order of the speakers
in observation 2 above), is equivalent to one in this in a meeting, as well as the the issues to be discussed
form, because of the identity: Ka P = 2a Ka P . This and the relative priority of each issue.
communication sequence yields the model presented Proposition 5 For every given priority order
in Example 7, as the result of the priority merge (a1 , . . . , an ) on agents, the corresponding prior-
Rm⊗a of the two plausibility orders. ity merge of doxastic “best alternatives” relations
Counterexample 9 To show the non-uniqueness of {→a }a is constructively realizable by conservative
priority merge among ⇑-realizable merge operations upgrades (i.e. by ↑-communication). The protocol
and the order-dependency of the above protocol, is the natural modification of the previous one:
note first that the priority merge of the ordering following the priority order, the agents have to
publicly announce “all that they (simply) believe”.
a
More precisely, for each set of states P ⊆ S such
?>=<
89:; 3 ?>=<
89:; 3 @ABC
GFED
$ that P is believed by the given agent a, a joint
s u w
a a conservative upgrade ↑ P is performed.
with the ordering Similar observations as the ones following Propo-
sition 4 apply to the case of doxastic upgrades:
b
priority merge is not the only realizable merge
GFED
@ABC 4 ?>=<
89:; 3 ?>=<
89:;
# operation; the order of announcements does mat-
w b
s b
u ter; in general, the protocol may require multiple
is equal to either of the two orders (depending on announcements by the same agents.
which agent has priority). But consider now the VI. C ONCLUSION
following public dialogue In this paper, we focused on dynamically re-
⇑ 2b u · ⇑ 2a (u ∨ w) alizing two specific merge operations by public
communication. But, as we saw, depending on the
This first is a sincere announcement by b, the second “agenda”, soft announcements can realize a whole
is sincere announcement by a. This is an exhaustive plethora of merge operations. Nevertheless, not
⇑-communication sequence, but note that the strict everything goes: the requirements imposed on the
priority order required by the above protocol is not plausibility relations generally pose restrictions to
respected here: the first speaker b is “interrupted” by which kinds of merge are realizable. This raises an
the second speaker a before she finished announcing important open question: characterize the class of
all his strong beliefs. (Indeed, s ∨ u is also a strong merge operations realizable by radical (or conser-
belief of agent b, though one that is entailed by vative) upgrades.
REFERENCES

[1] H. Andreka, M. Ryan and P.-Y. Schobbens, "Operators and Laws for Combining Preference Relations", Journal of Logic and Computation, 12(1), 13–53, 2002.
[2] K.J. Arrow, "A Difficulty in the Concept of Social Welfare", Journal of Political Economy, 58(4), 328–346, 1950.
[3] A. Baltag and S. Smets, "Conditional doxastic models: a qualitative approach to dynamic belief revision", Electronic Notes in Theoretical Computer Science, 165, 5–21, 2006.
[4] A. Baltag and S. Smets, "The Logic of Conditional Doxastic Actions: A Theory of Dynamic Multi-Agent Belief Revision", in S. Artemov and R. Parikh (eds.), Proceedings of the Workshop on Rationality and Knowledge, 13–30, ESSLLI 2006.
[5] A. Baltag and S. Smets, "Dynamic Belief Revision over Multi-Agent Plausibility Models", in G. Bonanno, W. van der Hoek and M. Wooldridge (eds.), Proceedings of the 7th Conference on Logic and the Foundations of Game and Decision Theory (LOFT 2006), 11–24, University of Liverpool, 2006.
[6] A. Baltag and S. Smets, "Probabilistic Dynamic Belief Revision", in J. van Benthem, S. Ju and F. Veltman (eds.), Proceedings of LORI'07, 21–39, College Publications, London, 2007.
[7] A. Baltag and S. Smets, "A Qualitative Theory of Dynamic Interactive Belief Revision", in G. Bonanno, W. van der Hoek and M. Wooldridge (eds.), Logic and the Foundations of Game and Decision Theory, Texts in Logic and Games, 3, 9–58, Amsterdam University Press, 2008.
[8] A. Baltag and S. Smets, "The Logic of Conditional Doxastic Actions", in R. van Rooij and K. Apt (eds.), New Perspectives on Games and Interaction, Texts in Logic and Games, 4, 9–31, Amsterdam University Press, 2008.
[9] P. Battigalli and M. Siniscalchi, "Strong Belief and Forward Induction Reasoning", Journal of Economic Theory, 105, 356–391, 2002.
[10] C. Boutilier, "Iterated Revision and Minimal Change of Conditional Beliefs", Journal of Philosophical Logic, 25(3), 262–305, 1996.
[11] J.F.A.K. van Benthem, "One is a Lonely Number", in Z. Chatzidakis, P. Koepke and W. Pohlers (eds.), Logic Colloquium 2002, 96–129, ASL and A.K. Peters, Wellesley MA, 2006.
[12] J.F.A.K. van Benthem, "Dynamic logic of belief revision", Journal of Applied Non-Classical Logics, 17(2), 129–155, 2007.
[13] J.F.A.K. van Benthem, "Priority Product Update as Social Choice" (expanded version), unpublished manuscript, November 2007.
[14] J.F.A.K. van Benthem, Logical Dynamics of Information and Interaction, manuscript, to appear, 2009.
[15] J.F.A.K. van Benthem and F. Liu, "Dynamic logic of preference upgrade", Journal of Applied Non-Classical Logics, 17(2), 157–182, 2007.
[16] O. Board, "Dynamic interactive epistemology", Games and Economic Behaviour, 49, 49–80, 2002.
[17] N. Friedman and J.Y. Halpern, "Conditional logics of belief revision", Proceedings of the 12th National Conference on Artificial Intelligence, 915–921, AAAI Press, Menlo Park CA, 1994.
[18] P. Gochet and P. Gribomont, "Epistemic Logic", in D.M. Gabbay and J. Woods (eds.), Handbook of the History of Logic, 7, 99–195, Elsevier, 2006.
[19] A. Grove, "Two modellings for theory change", Journal of Philosophical Logic, 17, 157–170, 1988.
[20] J.Y. Halpern, Reasoning about Uncertainty, MIT Press, Cambridge MA, 2003.
[21] H. Katsuno and A. Mendelzon, "On the difference between updating a knowledge base and revising it", Cambridge Tracts in Theoretical Computer Science, 183–203, 1992.
[22] J.-J.Ch. Meyer and W. van der Hoek, Epistemic Logic for AI and Computer Science, Cambridge Tracts in Theoretical Computer Science, 41, Cambridge University Press, Cambridge, 1995.
[23] H. Rott, "Conditionals and theory change: revisions, expansions, and additions", Synthese, 81(1), 91–113, 1989.
[24] W. Spohn, "Ordinal conditional functions: a dynamic theory of epistemic states", in W.L. Harper and B. Skyrms (eds.), Causation in Decision, Belief Change, and Statistics, vol. II, 105–134, 1988.
[25] R. Stalnaker, "On Logics of Knowledge and Belief", Philosophical Studies, 128, 169–199, 2006.
[26] R. Stalnaker, "Knowledge, Belief and Counterfactual Reasoning in Games", Economics and Philosophy, 12, 133–163, 1996.
[27] W. van der Hoek, "Systems for knowledge and beliefs", Journal of Logic and Computation, 3(2), 173–195, 1993.
[28] F. Veltman, "Defaults in Update Semantics", Journal of Philosophical Logic, 25, 221–261, 1996.
[29] F.P.J.M. Voorbraak, As Far as I Know, Quaestiones Infinitae, vol. VII, Utrecht University, Utrecht, 1993.
[30] T. Williamson, "Some philosophical aspects of reasoning about knowledge", in J. van Benthem (ed.), Proceedings of TARK'01, 97–97, Morgan Kaufmann, San Francisco, 2001.