<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Talking Your Way into Agreement: Belief Merge by Persuasive Communication</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alexandru</forename><surname>Baltag</surname></persName>
							<email>alexandru.baltag@comlab.ox.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Computing Laboratory</orgName>
								<orgName type="institution">Oxford University</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sonja</forename><surname>Smets</surname></persName>
							<email>smets@rug.nl</email>
							<affiliation key="aff1">
								<orgName type="department" key="dep1">Dept. of Artificial Intelligence</orgName>
								<orgName type="department" key="dep2">Dept. of Philosophy University of Groningen &amp; IEG</orgName>
								<orgName type="institution">Oxford University</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Talking Your Way into Agreement: Belief Merge by Persuasive Communication</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D963B6E3F7A4AE88FAE443619585A684</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T05:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>We investigate the issue of reaching doxastic agreement among the agents of a group by "sharing" information via successive acts of sincere, persuasive public communication within the group.</p><p>As usually considered in Social Choice theory, the problem of preference aggregation is to find a natural and fair "merge" operation (subject to various naturalness or fairness conditions), for aggregating the agents' preferences into a single group preference. Depending on the stringency of the required fairness conditions, one can obtain either an Impossibility theorem (e.g. Arrow's theorem <ref type="bibr" target="#b1">[2]</ref>) or a classification of the possible types of reasonable merge operations <ref type="bibr" target="#b0">[1]</ref>.</p><p>In this paper we propose a more "dynamic" approach to this issue. Dynamically speaking, "merging" preference relations means finding an action or a sequence of actions (a protocol) that, when applied to any arbitrary multi-agent preference model, produces a new model in which all the agents' preference relations are the same. When the new relations are the result of a specific merge operation, we say that we have "realized" this operation via the given (sequence of) action(s). One would like to know what types of merges are realizable by using only specific types of preference-changing actions.</p><p>In a doxastic/epistemic setting, the agents' preference relations are interpreted as "doxastic preferences" or "doxastic plausibility" orders. These encode the agents' beliefs, but in fact they capture all their doxastic-epistemic attitudes: their "knowledge" (in the sense of absolutely certain, unrevisable, irrevocable knowledge, i.e. the epistemic concept mostly used in Logic, Computer Science and Economics), their "strong beliefs" and "safe beliefs" (also known as "defeasible knowledge", i.e. 
the epistemic concept used mostly by philosophers and researchers in Belief Revision theory), as well as their "conditional beliefs" (encoding their "belief-revision strategy", i.e. their contingency plans for belief change). In other words, an agent's doxastic preference structure captures all her "information": both her "hard" (absolutely certain, infallible) information and her "soft" (potentially fallible) information. In this context, a preference merge operation corresponds to a way of combining the agents' information into a single "group information".</p><p>Similarly, preference-changing actions can be interpreted in a doxastic setting as acts of communication or persuasion. But not every preference-changing action can be understood in this way: there has to be a specific relation between one agent's (the speaker's) prior preferences before the action and the whole group's posterior preferences. Actions in which this relation holds will be instances of sincere and persuasive public communication.</p><p>An announcement of some information P is said to be "public" when it is common knowledge that this particular message P is announced and that all the agents are adopting the same attitude towards the (plausibility of the) announcement: they all adopt the same opinion about the reliability of this information. Depending on the specific common attitude, there are three main possibilities that have been discussed in the literature: (1) the information P is certainly true: it is common knowledge that the message is necessarily truthful; (2) the announcement is strongly believed by all agents to be true: it is common knowledge that everybody strongly believes that the speaker tells the truth;</p><p>(3) the announcement is (simply) believed: it is common knowledge that everybody believes (in the simple, "weak" sense) that the speaker tells the truth. 
These three alternatives correspond to three forms of "learning" a public announcement, forms discussed in <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr" target="#b13">[14]</ref> in a Dynamic Epistemic Logic context: "update"<ref type="foot" target="#foot_0">1</ref> !P , "radical upgrade" ⇑ P and "conservative upgrade" ↑ P . Under various names, they have been previously proposed in the literature on Belief Revision, e.g. by Rott <ref type="bibr" target="#b22">[23]</ref> and Boutilier <ref type="bibr" target="#b9">[10]</ref> , and in the literature on dynamic semantics for natural language by Veltman <ref type="bibr" target="#b27">[28]</ref>. The first operation (update) models "truthful public announcements" of "hard" information; the other two are models of "soft" public announcements.</p><p>"Sincerity" of a communication act can be defined as sharing of information that was already "accepted" by the speaker (before the act). The meaning of "acceptance" depends on the form of communication: as we'll see, for updates with "hard" information, acceptance means "knowledge" (in the irrevocable sense), while for upgrades with "soft" information, acceptance just means some type of "belief" or "strong belief" (depending on whether the upgrade is "conservative" or "radical"). But, as a general concept, prior acceptance requires that the speaker's own doxastic structure should not be changed by her sincere communication.</p><p>"Persuasiveness" requires that the communicated information becomes commonly "accepted" by all the agents (in the same sense of "acceptance" that the speaker has adopted): this means that, after the act, everybody commonly exhibits the same doxastic attitude as the speaker (knowledge, belief or strong belief) towards the communicated information. 
So, after a persuasive communication, all agents reach a partial agreement, namely with respect to the specific information that has been communicated.</p><p>In a cooperative setting, the goal of "sharing" doxastic information is reaching "agreement" with respect to all the (relevant) issues. Indeed, the natural stopping point of iterated sharing is when nothing is left for further sharing or persuading; i.e. complete agreement. Any further sincere persuasive communication is redundant at that stage: it can no longer change any agent's doxastic structure. This happens exactly when all the agents' relevant doxastic attitudes towards all issues are exactly the same. (Which attitude is relevant depends again on the type of communication: "knowledge" for updates, "belief" for conservative upgrades, "strong belief" for radical upgrades). This means that the agents' (relevant) accessibility relations (i.e. respectively, the knowledge relations, the belief structure or the strong belief structure) become identical: we say that these structures have "merged" into one.</p><p>So we arrive in a natural way at the main issue addressed in this paper: the "dynamic merge" of doxastic structures by sincere persuasive public communication. In particular, we investigate the realizability of merge operations via (1) updates, (2) radical upgrades and (3) conservative upgrades. We show that, in the first case, only the epistemic structures (given by the "hard" knowledge relations) can be merged; and moreover, the only form of realizable merge is in this case the so-called "parallel merge" <ref type="bibr" target="#b0">[1]</ref>, given by the intersection of all preference relations. Epistemically, this corresponds to the familiar concept of "distributed knowledge". The realizability result is constructive: it comes with a specific announcement-based protocol for realizing this merge. 
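The parallel merge can be illustrated with a small executable sketch (the Python encoding, the state names and the function name are ours, not the paper's): each agent's "hard" epistemic relation is represented as a set of ordered pairs of states, and the merge is simply the intersection of all these relations, i.e. exactly the accessibility relation underlying distributed knowledge.

```python
# A sketch (our own encoding, not from the paper): epistemic relations as
# sets of pairs; the "parallel merge" is their intersection, which is the
# relation underlying distributed knowledge.

def parallel_merge(relations):
    """Intersect the agents' epistemic relations."""
    return set.intersection(*relations.values())

# Two agents over states {1, 2, 3}: agent a cannot distinguish 1 from 2,
# agent b cannot distinguish 2 from 3.
refl = {(s, s) for s in (1, 2, 3)}
relations = {
    "a": refl | {(1, 2), (2, 1)},
    "b": refl | {(2, 3), (3, 2)},
}

merged = parallel_merge(relations)
# Only the reflexive pairs survive: pooling what a and b know
# distinguishes all three states.
print(sorted(merged))
```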
This is essentially the algorithm in van Benthem's paper "One is a Lonely Number" <ref type="bibr" target="#b10">[11]</ref>: the agents announce "all they know", in no particular order. In the second case (radical upgrade), the "defeasible knowledge" structures are merged, but in fact this implies that all the other doxastic attitudes become the same: the agents' whole "doxastic preference" structures are merged. The natural analogue of the above-mentioned protocol for radical upgrades realizes now a different type of merge ("priority merge", itself a natural epistemic modification of the other basic type of merge considered in <ref type="bibr" target="#b0">[1]</ref>, the "lexicographic merge"). Finally, in the case of conservative upgrades, only the (simple) belief structures (given by the doxastic relations) can be merged. Moreover, priority merge is realizable via the natural analogue of the same protocol above for conservative upgrades.</p><p>This surface similarity between the three cases is pleasing, but in fact it hides deeper dissimilarities. As we mentioned, the realizable merge is unique in the first case. This is not true in the other cases: a whole class of merge operations can be realized by radical or conservative upgrades. Moreover, in the first case, the order in which the announcements are made is irrelevant, while in the other cases the order matters: if the upgrades are performed in a different order than the one prescribed in the protocol, then different merge operations may be realized! Finally, in the first case, the merge may be realized by allowing only one announcement by each agent (of "all she knows"). But this is not true in the other cases: the agents may have to make many soft announcements, including announcing facts that may already be entailed by their previous announcements! Some of the questions we address in this paper came to our attention after hearing a presentation by J. 
van Benthem on "The Social Choice Behind Belief Revision" at the workshop "Dynamic Logic Montreal" in 2007 <ref type="bibr" target="#b12">[13]</ref>. Van Benthem's view was that belief dynamics in itself can be captured as a form of preference merge (between the prior doxastic preferences and the on-going doxastic preferences about the new information). One can see that our approach here is actually the dual of the perspective adopted in <ref type="bibr" target="#b12">[13]</ref>: implementing preference merge dynamically by successive belief revisions, instead of understanding belief revision in terms of preference merge.</p><p>In the next section we introduce the necessary background on different notions of knowledge, belief and other doxastic attitudes. The main focus will be on the semantics, which is given via preference models. In section III, we introduce the main concepts of belief dynamics, following the work in <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b11">[12]</ref> on joint upgrades, as models for "sincere, persuasive public announcements". In section IV we present three natural merge operations: parallel merge, lexicographic merge and (relative) priority merge. In section V we present the protocols for dynamic realizations of parallel merge and priority merge, giving counterexamples that point out the differences between them. We end with a short note and an open question in our Conclusions section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. PLAUSIBILITY STRUCTURES AND DOXASTIC ATTITUDES</head><p>In this section, we review some basic notions and results from <ref type="bibr" target="#b2">[3]</ref>. We use finite "plausibility" frames, in the sense of our papers <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b7">[8]</ref>. These kinds of semantic structures are the natural multi-agent generalizations of structures that are standard, in one form or another, in Belief Revision: Halpern's "preferential models" <ref type="bibr" target="#b19">[20]</ref>, Spohn's ordinal-ranked models <ref type="bibr" target="#b23">[24]</ref>, Board's "belief-revision structures" <ref type="bibr" target="#b15">[16]</ref>, Grove's "sphere" models <ref type="bibr" target="#b18">[19]</ref>. Unlike the settings in <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b7">[8]</ref>, we restrict ourselves here to the finite case, for reasons of simplicity.</p><p>For a given set A of labels called "agents", a (finite, multi-agent) plausibility frame is just a finite, multi-agent Kripke frame (S, R a ) a∈A in which the accessibility relations R a ⊆ S × S , usually denoted by ≤ a , are called "plausibility orders" or "doxastic preference" relations, and are assumed to be locally connected preorders. Here, a "locally connected preorder" ≤ ⊆ S × S is a reflexive and transitive relation such that: if s ≤ t and s ≤ w then either t ≤ w or w ≤ t; and if t ≤ s and w ≤ s then either t ≤ w or w ≤ t. See <ref type="bibr" target="#b2">[3]</ref> for a justification and motivation for these conditions. <ref type="foot" target="#foot_1">2</ref>We use the notation s ∼ a t for the comparability relation with respect to ≤ a (i.e. 
s ∼ a t iff either s ≤ a t or t ≤ a s), s &lt; a t for the corresponding strict order relation (i.e. s &lt; a t iff s ≤ a t but not t ≤ a s), and s ∼ = a t for the corresponding indifference relation (i.e. s ∼ = a t iff both s ≤ a t and t ≤ a s). When using the R a notation for the preference relations ≤ a , we also use the notations R &lt; a , R ∼ a and R ∼ = a to denote the corresponding strict order, comparability and indifference relations &lt; a , ∼ a and ∼ = a .</p><p>In a plausibility frame, the comparability relations ∼ a are equivalence relations, hence they induce partitions. We denote by s(a) := {t ∈ S : s ∼ a t} the ∼ a -partition cell of s, comprising all a's epistemic alternatives for s. Finally, we use → a to denote the "best alternative" or "most preferred" relation, given by: s → a t iff t ∈ s(a) and t ≥ a t′ for all t′ ∈ s(a). Plausibility Models A (finite, multi-agent, pointed) plausibility model is a structure S = (S, ≤ a , • , s 0 ) a∈A , consisting of a plausibility frame (S, ≤ a ) a∈A together with a valuation map • : Φ → P(S), mapping every element p of some given set Φ of "atomic sentences" into a set of states p ⊆ S, and together with a designated state s 0 ∈ S, called the "actual state". (Common) Knowledge and (Conditional) Belief Given a plausibility model S, sets P, Q ⊆ S of states, an agent a ∈ A and some group G ⊆ A, we define: best a P = M ax ≤a P := {s ∈ P : t ≤ a s for all t ∈ P }, K a P := {s ∈ S : s(a) ⊆ P },</p><formula xml:id="formula_0">B a P := {s ∈ S : best a s(a) ⊆ P }, B Q a P := {s ∈ S : best a ( s(a) ∩ Q ) ⊆ P }, Ek G P := a∈G K a P , Eb G P := a∈G B a P , Ck G P := n∈N Ek n G P (where Ek 0 G P := P and Ek n+1 G P := Ek G (Ek n G P ) ), EbP := Eb A P , and CkP := Ck A P .</formula><p>Interpretation. The elements of S represent the "possible worlds", or possible states of a system: possible descriptions of the real world. The correct description of the real world is given by the "actual state" s 0 . 
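The set-operator definitions above translate directly into executable form. The following Python sketch (the encoding and state names are ours, purely for illustration) implements s(a), best, knowledge, belief and conditional belief for a single agent, with a preorder represented as a set of pairs:

```python
# Semantic clauses for one agent (a sketch; the encoding is ours).
# A preorder is a set of pairs (s, t) meaning "t is at least as plausible as s".

def cell(leq, s, states):
    # s(a): the information cell of s, i.e. all states comparable to s
    return {t for t in states if (s, t) in leq or (t, s) in leq}

def best(leq, P):
    # best_a P: states of P that every state of P finds at least as plausible
    return {s for s in P if all((t, s) in leq for t in P)}

def K(leq, P, states):
    # knowledge: P holds throughout the information cell
    return {s for s in states if cell(leq, s, states) <= P}

def B(leq, P, states):
    # belief: P holds in the best states of the cell
    return {s for s in states if best(leq, cell(leq, s, states)) <= P}

def Bc(leq, Q, P, states):
    # conditional belief B^Q_a P: P holds in the best Q-states of the cell
    return {s for s in states if best(leq, cell(leq, s, states) & Q) <= P}

# One cell {w1, w2} with w2 the more plausible state, plus an isolated cell {w3}:
S = {"w1", "w2", "w3"}
leq = {("w1", "w1"), ("w2", "w2"), ("w3", "w3"), ("w1", "w2")}
P = {"w2", "w3"}

print(K(leq, P, S))  # P is known only at the isolated state w3
print(B(leq, P, S))  # but P is believed at every state
```

Note that belief in P can be true everywhere even where knowledge of P fails, which is the pattern the examples below exploit.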
The atomic sentences p ∈ Φ represent "ontic" (non-doxastic) facts, that might hold or not in a given state. The valuation tells us which facts hold at which worlds. For each agent a, the equivalence relation ∼ a represents the agent a's epistemic indistinguishability relation, inducing a's information partition; s(a) is the state s's information cell with respect to a's partition: if s were the real state, then agent a would consider all the states t ∈ s(a) as "epistemically possible". K a P is the proposition "agent a knows P ": observe that this is indeed the same as Aumann's partition-based definition of knowledge. The plausibility relation ≤ a is agent a's "doxastic preference" relation: her plausibility order between her "epistemically possible" states. So we read s ≤ a t as "agent a considers t at least as plausible as s (though the two are epistemically indistinguishable)". This is meant to capture the agent's (conditional) beliefs about the state of the system. Note that s ≤ a t implies s ∼ a t, so that the agent only compares the plausibility of states that are epistemically indistinguishable: so we are not concerned here with counterfactual beliefs (going against the agent's knowledge), but only with conditional beliefs (if given new evidence that must be compatible with prior knowledge). So B Q a P is read "agent a believes P conditional on Q " and means that, if a were to receive some further (certain) information Q (to be added to what she already knows), then she would believe that P was the case. So conditional beliefs B Q a give descriptions of the agent's plan (or commitments) about what she would believe (about the current state) if she were to learn some new information Q. To quote J. van Benthem in <ref type="bibr" target="#b11">[12]</ref>, conditional beliefs are "static preencodings" of the agent's potential belief changes in the face of new information. The above definition says that B Q a P holds iff P holds in all the "best" (i.e. 
the most plausible) Q-states (that are consistent with a's knowledge). In particular, a simple (non-conditional) belief B a P holds iff P holds in all the best states that are epistemically possible for a. Kripke Modalities For any binary accessibility relation R ⊆ S × S and set P ⊆ S, the corresponding Kripke modality is given by:</p><p>[R]P := {s ∈ S : ∀t (sRt ⇒ t ∈ P )}</p><p>We think of sets P ⊆ S as propositions and write s |= P instead of s ∈ P .</p><p>It is easy to see that belief is the Kripke modality B a = [→ a ] for the "best alternative" relation → a defined above. Similarly, knowledge is the Kripke modality for the epistemic relation</p><formula xml:id="formula_1">K a = [∼ a ].</formula><p>Safe belief as "defeasible knowledge" The Kripke modality for the plausibility relation 2 a := [≤ a ] was called "safe belief " in <ref type="bibr" target="#b2">[3]</ref>, and "the preference modality" in <ref type="bibr" target="#b14">[15]</ref>. It was also considered by Stalnaker in <ref type="bibr" target="#b24">[25]</ref>, as a formalization of Lehrer's notion of "defeasible knowledge". According to this so-called defeasibility theory of knowledge, a belief counts as "knowledge" if it is stable under belief revision with any true information. Indeed, the safe belief modality has the property that it is conditionally believed under any true condition:</p><formula xml:id="formula_2">s |= 2 a Q iff: s |= B P a Q for all P such that s |= P.</formula><p>
For this reason, we'll refer to 2 using either of the terms "safe belief" and "defeasible knowledge".</p><p>In contrast, the knowledge concept captured by the K modality can be called "irrevocable knowledge", since it is a belief that is stable under revision with any information (including false information):</p><formula xml:id="formula_3">s |= K a Q iff: s |= B P a Q for all P.</formula><p>There are other differences: irrevocable knowledge K satisfies the axioms of the modal system S5, so it is fully introspective; in contrast, defeasible knowledge 2 is positively introspective, but not necessarily negatively introspective. (In fact, the complete logic of 2 is the modal logic S4.3.) An agent's belief can be safe without him necessarily "knowing" this (in the "strong" sense of the irrevocable knowledge K): "safety" (similarly to "truth") is an external property of the agent's beliefs, that can be ascertained only by comparing his belief-revision system with reality. Indeed, the only way for an agent to know a belief to be safe is to actually know it to be true. This is captured by the valid identity: K a 2 a P = K a P . In other words: knowing that something is safe to believe is the same as just knowing it to be true. In fact, all beliefs held by an agent "appear safe" to him: in order to believe them, he has to believe that they are safe. This is expressed by the valid identity: B a 2 a P = B a P , saying that: believing that something is safe to believe is the same as just believing it <ref type="foot" target="#foot_2">3</ref> . Contrast this with the situation concerning "knowledge": in our logic (as in most standard doxastic-epistemic logics), we have the identity: B a K a P = K a P . So believing that something is known is the same as knowing it! 
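Both identities can be verified mechanically on a small model. The sketch below (our own set-based encoding, not the paper's) brute-forces all eight propositions P over a three-state locally connected preorder and checks K a 2 a P = K a P and B a 2 a P = B a P:

```python
from itertools import combinations

# Brute-force check of the identities K_a [<=_a] P = K_a P and
# B_a [<=_a] P = B_a P on a small model (a sketch; encoding ours).

def cell(leq, s, states):
    return {t for t in states if (s, t) in leq or (t, s) in leq}

def best(leq, P):
    return {s for s in P if all((t, s) in leq for t in P)}

def K(leq, P, states):
    return {s for s in states if cell(leq, s, states) <= P}

def B(leq, P, states):
    return {s for s in states if best(leq, cell(leq, s, states)) <= P}

def box(leq, P, states):
    # safe belief: the Kripke modality for <=_a itself
    return {s for s in states if all(t in P for t in states if (s, t) in leq)}

S = {"w1", "w2", "w3"}
leq = {("w1", "w1"), ("w2", "w2"), ("w3", "w3"), ("w1", "w2")}

for n in range(len(S) + 1):
    for combo in combinations(sorted(S), n):
        P = set(combo)
        assert K(leq, box(leq, P, S), S) == K(leq, P, S)  # K_a 2_a P = K_a P
        assert B(leq, box(leq, P, S), S) == B(leq, P, S)  # B_a 2_a P = B_a P
print("both identities hold for all 8 propositions")
```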
The difference between K and 2 and their different properties, expressed by the above identities, are enough to solve the so-called "Paradox of the Perfect Believer" in <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b28">[29]</ref>, <ref type="bibr" target="#b26">[27]</ref>, <ref type="bibr" target="#b21">[22]</ref>, <ref type="bibr" target="#b29">[30]</ref>, <ref type="bibr" target="#b16">[17]</ref>: when we say that somebody "only believes that she knows something (without really knowing it)", we're using the word "knowledge" in a different sense than the fully introspective K modality. A natural reading is to interpret it as the defeasible knowledge 2, in which case "believing that you know" is the same as "believing", by the identity B a 2 a P = B a P . "Strong Belief" Another important doxastic attitude, called strong belief, is given by: Sb a P = {s ∈ S : s(a) ∩ P ≠ ∅ and t &gt; a w for all t ∈ s(a) ∩ P and all w ∈ s(a) \ P }.</p><p>So P is strongly believed at a state s iff P is epistemically possible and moreover all epistemically possible P -states at s are more plausible than all epistemically possible non-P states. This notion was called "strong belief" by Battigalli and Siniscalchi <ref type="bibr" target="#b8">[9]</ref>, while Stalnaker <ref type="bibr" target="#b25">[26]</ref> calls it "robust belief". It is easy to see that we have the following equivalence:</p><formula xml:id="formula_4">S |= Sb a P iff S |= B a P and S |= B Q a P for every Q such that S |= ¬K a (Q → ¬P ).</formula><p>In other words: something is a strong belief iff it is believed and this belief can only be defeated by evidence (truthful or not) that is known to contradict it. 
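The defining condition of strong belief is easy to test directly. A sketch (our own encoding, continuing the set-of-pairs representation of a plausibility order; state names are illustrative):

```python
# Strong belief Sb_a P at a state s (a sketch; encoding ours): P must be
# epistemically possible, and every possible P-state must be strictly more
# plausible than every possible non-P-state.

def strong_belief(leq, P, states, s):
    c = {t for t in states if (s, t) in leq or (t, s) in leq}  # s(a)
    p_part, rest = c & P, c - P
    return bool(p_part) and all(
        (w, t) in leq and (t, w) not in leq  # t strictly above w
        for t in p_part for w in rest
    )

# A single cell w1 < w2 < w3 (w3 most plausible):
S = {"w1", "w2", "w3"}
leq = {("w1", "w1"), ("w2", "w2"), ("w3", "w3"),
       ("w1", "w2"), ("w2", "w3"), ("w1", "w3")}

# {w3} is strongly believed; {w1, w3} is believed (its best state is w3)
# but not strongly believed, since its member w1 sits below the non-member w2.
print(strong_belief(leq, {"w3"}, S, "w1"))        # True
print(strong_belief(leq, {"w1", "w3"}, S, "w1"))  # False
```

This separates strong belief from simple belief: every strong belief is believed, but not conversely.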
An example is the "presumption of innocence" in a trial: requiring the members of the jury to hold the accused as "innocent until proven guilty" means asking them to start the trial with a "strong belief" in innocence.</p><p>Example 1: Consider the situation of Professor Albert Winestein. Albert feels that he is a genius. He knows that there are only two possible explanations for this feeling: either he is a genius or he's drunk. He doesn't feel drunk, so he believes that he is a sober genius. However, if he realized that he's drunk, he'd think that his genius feeling was just the effect of the drink; i.e. after learning he is drunk he'd come to believe that he was just a drunk non-genius. In reality though, Albert is both drunk and a genius.</p><p>We can represent Albert's information and (conditional) beliefs by the following plausibility relation:</p><formula xml:id="formula_5">¬D, ¬G D, G a / / D, ¬G a / / ¬D, G</formula><p>Here, as in all other drawings, we use labeled arrows for plausibility relations ≤ a (not for the "best alternative" relations → a !), going from less plausible to more plausible worlds, but we skip loops and composed arrows (since ≤ a are reflexive and transitive). The real world is (D, G). Albert considers (D, ¬G) as being more plausible than (D, G), and (¬D, G) as more plausible than (D, ¬G). Albert can distinguish all these worlds from (¬D, ¬G), since (in the real world) he knows ("K a ") that either D or G holds.</p><p>Consider another agent, Professor Mary Curry. She is pretty sure that Albert is drunk: she can see this with her very own eyes. But Mary is completely indifferent with respect to Albert's genius: so she considers the possibility of genius and the one of non-genius as equally plausible. However, having a philosophical mind, Mary is aware of the possibility that the testimony of her eyes may in principle be wrong: it is in principle possible that Albert is not drunk, despite the presence of the usual symptoms. 
Mary's beliefs are captured by her plausibility order:</p><formula xml:id="formula_6">¬D, ¬G o o m / / ¬D, G m / / D, G o o m / / D, ¬G</formula><p>We can see from the drawing that Mary strongly believes D, and in fact her belief is safe: so she "knows" that Albert is drunk, in the sense of defeasible knowledge (although she doesn't know it, in the sense of K). But she is completely indifferent with respect to G: hence she considers the possibility of G and ¬G as equally plausible.</p><p>To put together the agents' plausibility orders, we need to be told what they know about each other. Suppose all their opinions as described above (i.e. all their conditional beliefs) are common knowledge: essentially, this means their doxastic preferences are common knowledge. We thus obtain the following multi-agent plausibility model:</p><formula xml:id="formula_7">¬D, ¬G m / / ¬D, G o o m 1 1 D, ¬G a q q 1 1 D, G a q q m m m</formula><p>At the real world (D, G), one can check that B a G is true. Further, Albert does not know G, hence (D, G) |= ¬K a G ∧ ¬2 a G while (D, G) |= K a (D ∨ G). Moreover, he doesn't "know" G in the defeasible sense either: his belief in G is not safe, since B D a ¬G holds in the real world: so if Albert were to learn (correctly) that he was drunk, he'd lose his (true) belief in being a genius. Example 2 Let us now relax our assumptions about the agents' mutual knowledge: suppose that only Albert's opinions are common knowledge; in addition, suppose that it is common knowledge that Mary has no opinion on Albert's genius (so she considers genius and non-genius as equi-plausible), but that she has a strong opinion about his drunkenness: she can see him, so judging by his behavior she either strongly believes he's drunk or she strongly believes he's not drunk. 
However, her actual opinion about this is unknown to Albert, who thus considers both opinions as equally plausible.</p><p>The resulting model is:</p><formula xml:id="formula_8">¬D, ¬G m / / a ¬D, G o o m 1 1 a D, ¬G a q q 1 1 a D, G a q q m m m a ¬D, ¬G m / / O O ¬D, G o o O O D, ¬G a q q 1 1 O O m m m D, G a q q m m m O O</formula><p>The real world is represented by the upper (D, G) state. One can check that, in the real world, Mary still strongly believes that Albert is drunk; but Albert does not know this: Mary's plausibility relation between D and ¬D is unknown to Albert. However, he knows that either she strongly believes D or she strongly believes ¬D.</p><p>We can go on and modify the example further, by allowing that Albert's plausibility order is not commonly known either, etc. But, for simplicity of drawing, we stop here: when less common knowledge is assumed, more worlds are possible, and hence the drawings get more and more complex. G-Bisimulation For a group G ⊆ A of agents, we say the pointed models S = (S, ≥ a , • , s 0 ) a∈A and S′ = (S′, ≥′ a , •′ , s′ 0 ) a∈A are G-bisimilar, and write S G S′ , if the pointed Kripke models (S, ≥ a , • , s 0 ) a∈G and (S′, ≥′ a , •′ , s′ 0 ) a∈G (having as accessibility relations only the G-labeled relations) are bisimilar in the usual sense from Modal Logic. When G = A, we simply write S S′ , and say S and S′ are bisimilar. Bisimilar models differ only formally: they encode precisely the same doxastic-epistemic information, and they satisfy the same modal sentences.</p></div>
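Bisimilarity of finite pointed models can be computed by the standard greatest-fixpoint refinement (this is textbook modal logic, not an algorithm from the paper; the encoding is ours). Start from all pairs of states that agree on the atomic facts, then repeatedly remove pairs that violate the back-and-forth conditions:

```python
# Naive bisimilarity check for finite pointed Kripke models (a sketch;
# encoding ours). A model is (states, relations, valuation, point), with
# relations: agent -> set of pairs, and valuation: state -> frozenset of atoms.

def bisimilar(m1, m2, agents):
    (S1, R1, V1, p1), (S2, R2, V2, p2) = m1, m2
    # start from all pairs agreeing on atomic facts, then refine
    Z = {(s, t) for s in S1 for t in S2 if V1[s] == V2[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(Z):
            for a in agents:
                succ1 = {y for (x, y) in R1[a] if x == s}
                succ2 = {y for (x, y) in R2[a] if x == t}
                forth = all(any((y, z) in Z for z in succ2) for y in succ1)
                back = all(any((y, z) in Z for y in succ1) for z in succ2)
                if not (forth and back):
                    Z.discard((s, t))
                    changed = True
                    break
    return (p1, p2) in Z

# A one-state model and a two-state "duplicate" model are bisimilar:
m1 = ({"x"}, {"a": {("x", "x")}}, {"x": frozenset({"p"})}, "x")
m2 = ({"y1", "y2"},
      {"a": {("y1", "y1"), ("y1", "y2"), ("y2", "y1"), ("y2", "y2")}},
      {"y1": frozenset({"p"}), "y2": frozenset({"p"})},
      "y1")
print(bisimilar(m1, m2, ["a"]))  # True
```

This is the sense in which bisimilar models "differ only formally": duplicating a world changes nothing any modal sentence can detect.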
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. BELIEF DYNAMICS: SINCERE, PERSUASIVE PUBLIC COMMUNICATION</head><p>We move on now to belief dynamics: what happens when some proposition P is publicly announced? According to Dynamic Epistemic Logic, this induces, not only a revision of beliefs, but a change of model: a "revision" of the whole relational structure, changing the agents' plausibility orders. However, the specific change depends on the agents' attitudes to the plausibility of the announcement: how certain is the new information? Three main possibilities have been discussed in the literature: (1) the announcement P is certainly true: it is common knowledge that the speaker tells the truth; (2) the announcement is strongly believed to be true by everybody: it is common knowledge that everybody strongly believes that the speaker tells the truth; (3) the announcement is (simply) believed: it is common knowledge that everybody believes (in the simple, "weak" sense) that the speaker tells the truth. These three alternatives correspond to three forms of "joint learning", forms discussed in <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr" target="#b13">[14]</ref> in a Dynamic Epistemic Logic context: "update"<ref type="foot" target="#foot_3">4</ref> !P , "radical upgrade" ⇑ P and "conservative upgrade" ↑ P . Under various names, the single-agent versions of these doxastic transformers have been previously proposed by e.g. Rott <ref type="bibr" target="#b22">[23]</ref>, Boutilier <ref type="bibr" target="#b9">[10]</ref> and Veltman <ref type="bibr" target="#b27">[28]</ref>.</p><p>We will use "joint upgrades" as a general term for all three of these model transformers, and denote them in general by †P , where † ∈ {!, ⇑, ↑}. Formally, each of our joint upgrades is a (possibly partial) function taking as inputs pointed models S = (S, ≤ a , • , s 0 ) and returning new ("upgraded") pointed models †P (S) = (S′ , ≤′ a , •′ , s′ 0 ), with S′ ⊆ S. 
Since upgrades are purely doxastic, they won't affect the real world or the "ontic facts" of each world: i.e. they all satisfy s′ 0 = s 0 and p ′ = p ∩ S′ , for atomic p. So, in order to completely describe a given upgrade, we only have to specify (a) its possible inputs S, (b) the new set of states S′ ; (c) the new relations ≤′ a .</p><p>(1) Learning Certain information: Joint "Update". The update !P is an operation on pointed models which is executable (on a pointed model S) iff P is true (at S) and which deletes all the non-P -worlds from the pointed model, leaving everything else the same. Formally, an update !P is an upgrade such that: (a) it takes as inputs only pointed models S, such that S |= P ; (b) the new set of states is S′ = {s ∈ S : s |= P }; (c) s ≤′ a t iff s ≤ a t and s, t ∈ S′ .</p><p>(2) Learning from a Strongly Trusted Source: (Joint) "Radical" Upgrade. The "radical upgrade" (or "lexicographic upgrade") ⇑ P , as an operation on pointed plausibility models, can be described as "promoting" all the P -worlds within each information cell so that they become "better" (more plausible) than all ¬P -worlds in the same information cell, while keeping everything else the same: the valuation, the actual world and the relative ordering between worlds within either of the two zones (P and ¬P ) stay the same. Formally, a radical upgrade ⇑ P is (a) a total upgrade (taking as input any model S), such that (b) S′ = S, and (c): s ≤′ a t iff either s ∈ (¬P ) S and t ∈ s(a) ∩ P S , or else s ≤ a t with s, t both in P S or both in (¬P ) S .</p><p>(3) "Barely believing" what you hear: (Joint) "Conservative" Upgrade. The so-called "conservative upgrade" ↑ P (called "minimal conditional revision" by Boutilier <ref type="bibr" target="#b9">[10]</ref>) performs in a sense the minimal possible revision of a model that is forced by believing the new information P . 
As an operation on pointed models, it can be described as "promoting" only the "best" (most plausible) P -worlds, so that they become the most plausible in their information cell, while keeping everything else the same. Formally, ↑ P is (a) a total upgrade, such that (b) S ′ = S, and (c): s ≤ ′ a t iff either t ∈ best a (s(a) ∩ P S ) or s ≤ a t.</p><p>Redundancy, Informativity and Sincerity A joint upgrade †P is redundant on a model S with respect to a group of agents G ⊆ A if the upgraded model is G-bisimilar to the original one: †P (S) ≅ G S. This means that, as far as the group G is concerned, †P doesn't change anything: all of group G's doxastic attitudes stay the same after the upgrade. An upgrade †P is informative (on S) to group G if it is not redundant with respect to G. An upgrade †P is redundant with respect to an agent a if it is redundant with respect to the singleton {a}.</p><p>Redundancy is especially important if we want to capture the "sincerity" of an announcement made by a speaker. Intuitively, an announcement is "sincere" when it agrees with the speaker's prior epistemic state: accepting the announcement doesn't change the speaker's own state.</p><p>Definition </p><formula xml:id="formula_9">D, ¬G 1 1 D, G a q q m m m</formula><p>After the update, Albert starts to wrongly believe that ¬G is the case! This is an example of true but unsafe belief: it can be lost after acquiring (new) true information.</p><p>Example 4 Consider again the situation in Example 3, but instead of Albert receiving the information from an infallible source, he receives the information from Mary. Mary announces publicly (to Albert) that D is the case, and we assume that Mary's announcement is both sincere and persuasive: she tells what she thinks, and she convinces Albert. Since Mary is a fallible agent (and not an infallible source), this announcement is soft: in principle, she could be wrong, or she could lie, or she could simply guess and be right only by chance. 
So we cannot interpret Mary's announcement as a "hard" update !D, since such an announcement wouldn't be sincere: the update !D would automatically change Mary's order (making her irrevocably know D, when she didn't know it before!). But we can model it as a "soft" announcement ⇑ D; i.e. after hearing it, all agents upgrade with D: they start to prefer any D-world to any ¬D-world. The upgraded model is</p><formula xml:id="formula_10">¬D, ¬G m / / ¬D, G o o m 1 1 a -- D, G 1 1 a -- D, ¬G m m m</formula><p>Note that Mary's order is left unchanged, so the announcement was indeed sincere. Example 5 What if instead Mary announces that she "knows" that Albert is drunk? If we take this in the sense of irrevocable knowledge K, then such an announcement would not be sincere: indeed, in the original situation of Example 1, K m D was false. However, she did "know" it in the sense of defeasible knowledge 2 m D: she correctly believed D, and this belief was safe. This "knowledge" was fallible, and she was aware of this: she didn't believe that she knows irrevocably (¬B m K m D), but she believed that she "knows" defeasibly (B m 2 m D). Hence, she is sincere if she announces that she "knows" in this sense. Assuming that Albert is also aware of the fallibility of her knowledge, but that he still highly trusts her to be right, we can interpret this as a sincere and persuasive announcement of the form ⇑ (2D). Its effect is the same as in Example 4: the upgraded model is the same. Counterexample 6 Note that simply announcing that she believes D, or even that she strongly believes D, won't do: this will not be persuasive, since it will not change Albert's beliefs about the facts of the matter (D or ¬D), although it may change his beliefs about her beliefs. Being informed of another's beliefs is not enough to convince you of their truth. Indeed, Mary's beliefs are already common knowledge in the initial model of Example 1: so an upgrade ⇑ (B m D) would be superfluous! 
Persuasiveness So what is needed for persuasive communication is that the speaker (Mary) "converts" the others to her own beliefs. For this, she should not simply announce that she believes them. Instead, she can either announce that something is the case (when in fact she just strongly believes that it is the case), or else announce that she defeasibly "knows" it (when she only believes that she "knows" it; in fact, this implies that she strongly believes that she "knows").</p></div>
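To make the three joint upgrades concrete, here is a minimal executable sketch. It is our own toy encoding, not the paper's formal apparatus: a single agent with a single information cell, whose plausibility order is represented by a rank function (lower rank = more plausible); the world names and the initial ranking are hypothetical.

```python
# Toy encoding of the three joint upgrades of Section III.
# A "model" is a dict mapping world names to plausibility ranks
# (lower rank = more plausible); a proposition P is a set of worlds.
# Single agent, single information cell -- a deliberate simplification.

def update(ranks, P):
    """!P: hard information; delete all non-P-worlds."""
    return {w: r for w, r in ranks.items() if w in P}

def radical_upgrade(ranks, P):
    """Radical upgrade ⇑P: make every P-world more plausible than every
    non-P-world, keeping the relative order inside each of the two zones."""
    offset = max(ranks.values()) + 1
    return {w: (r if w in P else r + offset) for w, r in ranks.items()}

def conservative_upgrade(ranks, P):
    """Conservative upgrade ↑P: promote only the best (minimal-rank)
    P-worlds to the top, keeping the relative order of all other worlds."""
    best = min(r for w, r in ranks.items() if w in P)
    best_P = {w for w, r in ranks.items() if w in P and r == best}
    return {w: (0 if w in best_P else r + 1) for w, r in ranks.items()}

# Hypothetical worlds: "DG" = drunk and guilty, "Dg" = drunk but not
# guilty, etc.; the initial ranking below is also just an illustration.
ranks = {"DG": 0, "Dg": 1, "dG": 2, "dg": 3}
after = radical_upgrade(ranks, {"DG", "dG"})   # soft announcement of G
print(sorted(after, key=after.get))            # ['DG', 'dG', 'Dg', 'dg']
```

Note how ⇑ promotes the whole G-zone, while ↑ would promote only the single best G-world: the difference between strongly trusting the source and "barely believing" it.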
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. MERGE OPERATIONS</head><p>A merge operation, or "aggregation procedure", is an operator taking any sequence {R i } 1≤i≤n of preference relations into a "group preference" relation</p><formula xml:id="formula_11">⊕ i R i = R 1 ⊕ R 2 ⊕ • • • ⊕ R n .</formula><p>In <ref type="bibr" target="#b0">[1]</ref> the authors give a general classification of types of preference merge, in a very general context, subject to some minimal "fairness" and rationality conditions. They show that all the merge operations satisfying these conditions can be represented as compositions of only two basic merge operators: "parallel merge" and "lexicographic merge". Parallel Merge The merge operation we consider first can be thought of as the most "democratic" form of aggregation: everybody has a veto, so that group preferences are unanimous preferences. Following <ref type="bibr" target="#b0">[1]</ref>, we call it parallel merge. It simply takes the merged relation to be the intersection ⋂ a∈G R a of all the preference relations of the agents in a given group G ⊆ A. <ref type="foot" target="#foot_4">5</ref>Parallel merge is particularly well suited for aggregating the agents' "hard information" (irrevocable knowledge) K, i.e. for merging the epistemic relations {∼ a } a∈G . Since we are dealing with absolutely certain and fully introspective knowledge, there is no danger of introducing an inconsistency. The agents can pool their information in a completely symmetric manner, accepting the others' information without reservations. 
In fact, parallel merge of the agents' irrevocable knowledge gives us the standard concept of "distributed knowledge" DK:</p><formula xml:id="formula_12">DK G P = [ ⋂ a∈G R a ]P.</formula><p>Lexicographic Merge When the group is hierarchically structured according to some total order (on agents), called a "priority order", then the agents with higher priority are thought of as having a higher "epistemic expertise" than the agents with lower priority. For a group G = {a, b} of two agents, in which a has higher priority, we can think of a as the "expert" (or the professor) and of b as the "layman" (or the student). In this context, the natural doxastic merge operation is the so-called lexicographic merge. For two agents a, b, the "lexicographic merge" R a/b gives priority to agent a's strong (i.e. strict) preferences over b's: first, the strict preference order of a is adopted by the group; when a is indifferent between two options, then b's preference is adopted; finally, a-incomparability gives group incomparability. Formally:</p><formula xml:id="formula_13">R a/b := R &lt; a ∪ (R ≅ a ∩ R b ) = R &lt; a ∪ (R a ∩ R b ) = R a ∩ (R &lt; a ∪ R b ).</formula><p>The lexicographic merge is particularly suited for aggregating "soft information" (strong beliefs, safe beliefs, conditional beliefs) in the absence of any hard information: since soft information is not fully reliable (because of the lack of negative introspection for 2, and because of the potential falsity of belief, conditional belief and strong belief), some "screening" must be applied to some agents' information (and so some hierarchy must be enforced), in order to ensure consistency of the merge. (Relative) Priority Merge Note that, in lexicographic merge, the first agent's priority is "absolute". 
But in the presence of "hard" information, the lexicographic merge of soft information must be modified, by first pooling together all the hard information and then using it to restrict the lexicographic merge of soft information. This leads us to a "more democratic" combination of Parallel Merge and Lexicographic Merge, called "(relative) priority merge" R a⊗b :</p><formula xml:id="formula_14">R a⊗b := (R &lt; a ∩ R ∼ b ) ∪ (R ≅ a ∩ R b ) = R a ∩ R ∼ b ∩ (R &lt; a ∪ R b ).</formula><p>In a Relative Priority Merge, both agents have a "veto" with respect to group incomparability. Here the group can only compare options that both agents can compare; and whenever the group can compare two options, everything goes on as in the lexicographic merge. Agent a's order gets priority, while b's order is adopted only when a is indifferent.</p><p>Since our plausibility structures encode both the "hard" and the "soft" information possessed by the agents, it seems that Priority Merge is best suited for aggregating the agents' plausibility relations. If we instead give priority to Albert (rather than to Mary, as in Example 7), we simply obtain Albert's order as our "merge":</p><formula xml:id="formula_15">R a⊗m = R a .</formula><p>It is important to note that in both cases of Example 7, some of the resulting joint beliefs are wrong: when giving priority to Mary, both agents end up believing ¬G; while if we give priority to Albert, they both end up believing ¬D. In fact, no type of hierarchic belief merge is a guarantee of veracity.</p><p>V. "REALIZING" MERGE DYNAMICALLY Intuitively, the purpose of sharing hard knowledge, defeasible knowledge or beliefs is to achieve a state in which there is nothing else to share, i.e. one in which any further sharing is redundant: all hard knowledge, or defeasible knowledge, or beliefs, are already shared in common. 
For sharing via a specific type of public communication † ∈ {!, ⇑, ↑}, this happens precisely when the model-changing process induced by †-type sharing reaches a fixed point of †-communication: a model that is invariant under that particular type of announcements.</p><p>For every specific type of public communication † ∈ {!, ⇑, ↑}, agent a's "relevant structure" in a model S is given by: a's epistemic relation ∼ a ⊆ S × S in the case of updates !; a's plausibility relation ≤ a in the case of radical upgrades ⇑; and a's doxastic "best alternative" relation → a in the case of conservative upgrades.</p><p>A (finite) †-upgrade sequence is a finite sequence † P = ( †P 1 , . . . , †P n ) of upgrades †P i of the given type † ∈ {!, ⇑, ↑}. Any †-upgrade sequence induces a (partial) function, mapping every pointed model S into a finite sequence † P (S) = (S i ) i of pointed models, defined inductively by: S 0 := S; and S i := †P i (S i−1 ), if this is defined (and undefined otherwise). A †-upgrade sequence † P is a †-communication sequence within group G if all its upgrades are sincere for at least one G-agent at the moment of speaking: i.e. for every i ≤ n there exists a i ∈ G such that †P i is sincere for a i on S i−1 .</p><p>A †-communication sequence † P within a group G is exhaustive on a model S if the last model S n of the induced sequence † P (S) is invariant under (sincere) †-communication; equivalently, iff it is maximal: it cannot be extended to any longer †-communication sequence. By Proposition 2, the last model S n generated by an exhaustive †-communication sequence is one in which all the G-agents' "relevant structures" R n a coincide. An exhaustive †-communication sequence within G realizes a given preference merge operation ⊕ on a given pointed model S if, for any agent b ∈ G, the relevant structure R n b in the last generated model is the ⊕-merge of the initial relevant structures {R 0 a } a∈G : i.e. R n b = ⊕ a∈G R 0 a , for all b ∈ G. 
A merge operation ⊕ is realizable by †-communication (within a group G) if there exists some exhaustive †-communication sequence (within G) that realizes ⊕. The merge operation is said to be constructively realizable by †-communication if there exists a protocol such that every †-communication sequence that complies with the protocol is exhaustive and realizes ⊕.</p><p>For each of the above types of public communication (!, ⇑, ↑), we can ask which merge operations are realizable, or constructively realizable. The answer depends on the constraints (transitivity, connectedness etc.) satisfied by the agents' relevant structures (epistemic, doxastic or plausibility relations). Proposition 3 Parallel merge is the only merge operation that is realizable by updates (i.e. by !-communication). Moreover, parallel merge is constructively realizable by updates. The protocol is as follows: in no particular order, the agents have to publicly announce "all that they know" (in the sense of irrevocable knowledge K). More precisely, for each set of states P ⊆ S such that P is known to a given agent a, a public announcement !P is made.</p><p>This is essentially the protocol in van Benthem's paper "One is a Lonely Number" <ref type="bibr" target="#b10">[11]</ref>. Formally, the protocol consists of n steps, each step being a sequence of announcements by the same agent: first, one of the agents, say a, announces all he knows. This is the sequence of announcements:</p><formula xml:id="formula_16">σ a := ; {!P : P ⊆ S such that S |= K a P }</formula><p>(where ; denotes sequential composition of actions). Then, another agent b performs a similar step (announcing all she knows after the first step), etc. Important Observations: (1) The order in which the agents make the announcements doesn't actually matter. They may even "interrupt" each other: any exhaustive !-communication sequence produces the same result. (2) The protocol can be simplified by restricting it only to knowledge announcements, i.e. 
of the form !(K a P ) (for each P such that K a P holds): instead of announcing all they know, the agents announce that they know all that they know. (3) The protocol can be simplified by allowing each agent to make only one announcement: instead of successively announcing everything he knows, he can just announce the conjunction !( ⋀ {P : S |= K a P }) of all the things he knows. Proposition 4 For every given priority order (a 1 , . . . , a n ) on agents, the corresponding priority merge (of plausibility relations) is constructively realizable by radical upgrades (i.e. by ⇑-communication), but is not the only such realizable operation. The protocol is a natural modification of the previous one: following the priority order, the agents have to publicly announce "all that they strongly believe". More precisely, for each set P ⊆ S such that P is strongly believed by the given agent a, a joint radical upgrade ⇑ P is performed.</p><p>Formally, the protocol consists again of n steps, each step being a sequence of announcements by the same agent: first, the first agent according to the priority order, say a, announces all that he strongly believes. This is the sequence of radical upgrades:</p><formula xml:id="formula_17">ρ a := {⇑ P : P ⊆ S such that S |= Sb a P }.</formula><p>Then, the next agent in the hierarchy, say b, performs a similar step (announcing all she strongly believes after the first step), etc. Important Observations: (1) Now, the order of the announcements matters: the agents have to respect the priority order. Moreover, no interruptions are allowed: agents with lower priority can speak only after the agents with higher priority have finished announcing all their strong beliefs. Any interruptions may lead to the realization of completely different merge operations (see the Counterexample below)!</p><p>(2) This protocol can also be simplified by restricting it only to "defeasible knowledge" announcements, i.e. announcements of the form ⇑ (2 a P ). 
But recall that, unlike irrevocable knowledge, defeasible knowledge is not negatively introspective: the agents don't know for sure which things they "know" and which they don't, and hence the best they can do is to announce all the things they believe they "know". But, since believing that one (defeasibly) "knows" is the same as believing, they have to announce that they "know" P , for each proposition P which they believe. So the simplified protocol replaces e.g. the first step by the following sequence of radical upgrades ρ ′ a := {⇑ (2 a P ) : P ⊆ S such that S |= B a P }.</p><p>(3) Unlike the case of updates and parallel merge, in general the above protocol actually requires multiple announcements by the same agents, including announcing facts that may already be entailed by their previous announcements! A sequence of radical upgrades is in general not equivalent to a single radical upgrade, so there is no way to compress the sequences ρ a or ρ ′ a into a single upgrade! Example 8 Recall the initial order of Mary and Albert in Example 1. Consider the protocol:</p><formula xml:id="formula_18">⇑ 2 m D; ⇑ K a (D ∨ G); ⇑ 2 a ¬G</formula><p>The first is a sincere announcement by Mary, the rest are sincere announcements by Albert. The second announcement, though not in "defeasible knowledge" form (as required by the simplified protocol in observation 2 above), is equivalent to one in this form, because of the identity: K a P = 2 a K a P . This communication sequence yields the model presented in Example 7, as the result of the priority merge R m⊗a of the two plausibility orders. Counterexample 9 To show the non-uniqueness of priority merge among ⇑-realizable merge operations and the order-dependency of the above protocol, note first that the priority merge of the ordering The first is a sincere announcement by b, the second a sincere announcement by a. 
This is an exhaustive ⇑-communication sequence, but note that the strict priority order required by the above protocol is not respected here: the first speaker b is "interrupted" by the second speaker a before she finished announcing all her strong beliefs. (Indeed, s ∨ u is also a strong belief of agent b, though one that is entailed by the first announcement; nevertheless, b should have first announced this second strong belief before a was allowed to speak!) And, indeed, the resulting model, though a fixed point of ⇑-communication (since all the plausibility relations come to coincide), realizes a different merge operation than either of the two priority merges. The Power of Agendas This order-dependence illustrates a phenomenon well-known in Social Choice Theory: the important role of the person who "sets the agenda": the "Judge" who assigns priorities to the witnesses' testimonies; the chairman or moderator who determines the order of the speakers in a meeting, as well as the issues to be discussed and the relative priority of each issue. Proposition 5 For every given priority order (a 1 , . . . , a n ) on agents, the corresponding priority merge of doxastic "best alternative" relations {→ a } a is constructively realizable by conservative upgrades (i.e. by ↑-communication). The protocol is the natural modification of the previous one: following the priority order, the agents have to publicly announce "all that they (simply) believe". More precisely, for each set of states P ⊆ S such that P is believed by the given agent a, a joint conservative upgrade ↑ P is performed.</p><p>Observations similar to the ones following Proposition 4 apply to the case of doxastic upgrades: priority merge is not the only realizable merge operation; the order of announcements does matter; and in general, the protocol may require multiple announcements by the same agents.</p></div>
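The three merge operations of Section IV can also be sketched directly as operations on binary relations. A minimal sketch, under our own encoding: a relation is a set of pairs (s, t), read "s is at least as preferred as t"; the helper names (strict, indiff, comparable) are ours.

```python
# Merge operations of Section IV on relations given as sets of pairs.

def strict(R):
    """Strict part R^< of a relation R."""
    return {(s, t) for (s, t) in R if (t, s) not in R}

def indiff(R):
    """Indifference part: pairs related in both directions."""
    return {(s, t) for (s, t) in R if (t, s) in R}

def comparable(R):
    """Comparability relation R^~: s, t related in some direction."""
    return R | {(t, s) for (s, t) in R}

def parallel(Ra, Rb):
    """Parallel merge: unanimous (intersected) preferences."""
    return Ra & Rb

def lexicographic(Ra, Rb):
    """Lexicographic merge R_{a/b}: a's strict preferences win;
    b's preferences are used only where a is indifferent."""
    return strict(Ra) | (indiff(Ra) & Rb)

def priority(Ra, Rb):
    """(Relative) priority merge R_{a(x)b}: as lexicographic,
    but b keeps a veto on comparability."""
    return (strict(Ra) & comparable(Rb)) | (indiff(Ra) & Rb)

# Hypothetical two-world example: a strictly prefers x to y,
# while b cannot compare x and y at all.
Ra = {("x", "x"), ("y", "y"), ("x", "y")}
Rb = {("x", "x"), ("y", "y")}
print(("x", "y") in lexicographic(Ra, Rb))  # True: a's preference is imposed
print(("x", "y") in priority(Ra, Rb))       # False: b vetoes the comparison
```

The last two lines illustrate exactly the difference stated in the text: in the lexicographic merge the expert's priority is absolute, while in the priority merge both agents have a veto on group comparability.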
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. CONCLUSION</head><p>In this paper, we focused on dynamically realizing two specific merge operations by public communication. But, as we saw, depending on the "agenda", soft announcements can realize a whole plethora of merge operations. Nevertheless, not everything goes: the requirements imposed on the plausibility relations generally restrict which kinds of merge are realizable. This raises an important open question: characterize the class of merge operations realizable by radical (or conservative) upgrades.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Example 7 :</head><label>7</label><figDesc>If in Example 1, we give priority to Mary, the relative priority merge R m⊗a of Mary's and Albert's original plausibility orders amounts to:</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>3 ?</head><label>3</label><figDesc>&gt;=&lt; 89:; u is equal to either of the two orders (depending on which agent has priority). But consider now the following public dialogue⇑ 2 b u • ⇑ 2 a (u ∨ w)</figDesc></figure>
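The order-dependence behind Counterexample 9 can be checked mechanically. A toy sketch in the same spirit (our own rank encoding, as before; the worlds w, u, s and the two announced propositions are simplified stand-ins, not the exact content of the figure):

```python
# Order-dependence of radical upgrades: applying the same two upgrades
# in a different order can end in different plausibility orders.

def radical_upgrade(ranks, P):
    """All P-worlds become more plausible than all non-P-worlds,
    keeping the relative order inside each zone (lower rank = better)."""
    offset = max(ranks.values()) + 1
    return {w: (r if w in P else r + offset) for w, r in ranks.items()}

def order(ranks):
    """Worlds listed from most to least plausible."""
    return sorted(ranks, key=ranks.get)

ranks = {"w": 0, "u": 1, "s": 2}   # hypothetical initial plausibility
P, Q = {"u", "s"}, {"u", "w"}      # two hypothetical announcements

pq = radical_upgrade(radical_upgrade(ranks, P), Q)
qp = radical_upgrade(radical_upgrade(ranks, Q), P)
print(order(pq))  # ['u', 'w', 's']
print(order(qp))  # ['u', 's', 'w'] -- a different merge is realized
```

Whoever sets the agenda, i.e. fixes the order in which P and Q are announced, thereby determines which merge operation the dialogue realizes.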
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>: A (public) announcement †ϕ made by an agent a is said to be sincere if it leaves unchanged agent a's own plausibility structure; i.e. it's non-informative to agent a. Proposition 1 1) In a pointed model S, !P is redundant with respect to a group G iff P is common knowledge in S among the group G; i.e.: S G !P (S) iff S |= Ck G P . Special case: an announcement !P made by an agent a is sincere iff a knows P , i.e. if K a P holds in the original model (before the announcement). 2) ⇑ P is redundant with respect to a group G iff it is common knowledge in the group G that P is strongly believed (by all G-agents); i.e. S G ⇑ P (S) iff S |= Ck G (ESb Invariance under communication: For a given upgrade type † ∈ {!, ⇑, ↑}, we say that a pointed model S is invariant under †-communication within group G iff, for all propositions P , any sincere announcement of the form †P made by any agent in G is redundant in S. ≤ a =≤ b for all a, b ∈ G. 3) A pointed model S is invariant under ↑communication within G iff all (simple) beliefs are common beliefs within G, i.e. for all propositions P and all agents a, b ∈ A, B a P holds in S iff B b P holds in S; equivalently: iff all G-agents' "best alternative" relations coincide: → a =→ b for all a, b ∈ G.</figDesc><table><row><cell>Proposition 2 1) A pointed model S is invariant under !-communication within G iff all (irrevocable) knowledge is common knowledge within G, i.e. for all propositions P and all agents a, b ∈ A, K 2) A pointed model S is invariant under ⇑-communication within G iff all "defeasible knowledge" is common defeasible knowledge within G, i.e. for all propositions P and all agents a, b ∈ A, 2 Example 3 Suppose that in the situation in Ex-ample 1 above, a trusted, infallible source publicly announces that Albert is drunk: this is "hard", in-controvertible information, corresponding to a joint update !D. 
The updated model is</cell></row></table><note>G ). Special case: an announcement ⇑ P made by an agent a is sincere iff a strongly believes P (before the announcement). 3) ↑ P is redundant with respect to a group G iff it is common knowledge in the group G that P is believed (by all G-agents); i.e. S G ↑ P (S) iff S |= Ck G (Eb G P ). Special case: an announcement ↑ P made by an agent s is sincere iff a believes P (before the announcement). a P holds in S iff K b P holds in S; equivalently: iff all G-agents' epistemic relations coincide: ∼ a =∼ b for all a, b ∈ G. a P holds in S iff 2 b P holds in S; equivalently: iff all strong beliefs (conditional beliefs) are common strong beliefs (common conditional beliefs); equiva-lently: iff all G-agents' plausibility relations coincide:</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Note that in Belief Revision, the term "belief update" is used for a totally different operation (the Katzuno-Mendelzon update<ref type="bibr" target="#b20">[21]</ref>), while what we call "update" is known as "conditioning". We choose to follow here the terminology used in Dynamic Epistemic Logic, but we want to warn the reader against any possible confusions with the KM update.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">In the infinite case, one has to add a well-foundedness condition, obtaining "locally well-preordered" relations.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">The proof is an easy semantic exercise, which can be rendered in English as: saying that "the best worlds have the property that all the worlds at least as good as them are P -worlds" is equivalent to simply saying that "the best worlds are P -worlds".</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">Note that in Belief Revision, the term "belief update" is used for a totally different operation (the Katzuno-Mendelzon update<ref type="bibr" target="#b20">[21]</ref>), while what we call "update" is known as "conditioning". We choose to follow here the terminology used in Dynamic Epistemic Logic, but we want to warn the reader against any possible confusions with the KM update.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">From a purely formal perspective, parallel merge resembles the so-called "non-prioritized belief revision" known from the work of S. H. Hansson, H. Rott, H. van Ditmarsch. But note that "merge" is not "revision"!</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Operators and Laws for Combining Preference Relations</title>
		<author>
			<persName><forename type="first">H</forename><surname>Andreka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ryan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P-Y</forename><surname>Schobbens</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="13" to="53" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A Difficulty in the Concept of Social Welfare</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Arrow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Political Economy</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="328" to="346" />
			<date type="published" when="1950">1950</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Conditional doxastic models: a qualitative approach to dynamic belief revision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baltag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronic Notes in Theoretical Computer Science</title>
		<imprint>
			<biblScope unit="volume">165</biblScope>
			<biblScope unit="page" from="5" to="21" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The Logic of Conditional Doxastic Actions: A Theory of dynamic multi-agent belief revision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baltag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshop on Rationality and Knowledge</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Artemov</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Parikh</surname></persName>
		</editor>
		<meeting>the Workshop on Rationality and Knowledge<address><addrLine>ESSLLI</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="13" to="30" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Dynamic Belief Revision over Multi-Agent Plausibility Models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baltag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th Conference on Logic and the Foundations of Game and Decision (LOFT 2006)</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Bonanno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Van Der Hoek</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Woolridge</surname></persName>
		</editor>
		<meeting>the 7th Conference on Logic and the Foundations of Game and Decision (LOFT 2006)</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="11" to="24" />
		</imprint>
		<respStmt>
			<orgName>University of Liverpool</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Probabilistic Dynamic Belief Revision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baltag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of LORI&apos;07</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Van Benthem</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Ju</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Veltman</surname></persName>
		</editor>
		<meeting>LORI&apos;07<address><addrLine>London</addrLine></address></meeting>
		<imprint>
			<publisher>College Publications</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="21" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Qualitative Theory of Dynamic Interactive Belief Revision</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baltag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logic and the Foundations of Game and Decision Theory, Texts in Logic and Games</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Bonanno</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Van Der Hoek</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</editor>
		<imprint>
			<publisher>Amsterdam University Press</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="9" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The Logic of Conditional Doxastic Actions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Baltag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">New Perspectives on Games and Interaction, Texts in Logic and Games</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Van Rooij</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Apt</surname></persName>
		</editor>
		<imprint>
			<publisher>Amsterdam University Press</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="9" to="31" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Strong Belief and Forward Induction Reasoning</title>
		<author>
			<persName><forename type="first">P</forename><surname>Battigalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Siniscalchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Economic Theory</title>
		<imprint>
			<biblScope unit="volume">105</biblScope>
			<biblScope unit="page" from="356" to="391" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Iterated Revision and Minimal Change of Conditional Beliefs</title>
		<author>
			<persName><forename type="first">C</forename><surname>Boutilier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Philosophical Logic</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="262" to="305" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">One is a lonely number</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F A K</forename><surname>Van Benthem</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logic Colloquium 2002</title>
		<meeting><address><addrLine>Wellesley MA</addrLine></address></meeting>
		<imprint>
			<publisher>ASL &amp; A. K. Peters</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="96" to="129" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Dynamic logic of belief revision</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F A K</forename><surname>Van Benthem</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Non-Classical Logics</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="129" to="155" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Priority Product Update as Social Choice</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F A K</forename><surname>Van Benthem</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007-11">November 2007</date>
		</imprint>
	</monogr>
	<note>Expanded version. Unpublished Manuscript</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Logical Dynamics of Information and Interaction</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F A K</forename><surname>Van Benthem</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
	<note>Manuscript. To appear</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Dynamic logic of preference upgrade</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F A K</forename><surname>Van Benthem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Non-Classical Logics</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="157" to="182" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Dynamic interactive epistemology</title>
		<author>
			<persName><forename type="first">O</forename><surname>Board</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Games and Economic Behavior</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="49" to="80" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Conditional logics of belief revision</title>
		<author>
			<persName><forename type="first">N</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Halpern</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 12th National Conference on Artificial Intelligence</title>
				<meeting>the 12th National Conference on Artificial Intelligence<address><addrLine>Menlo Park, CA</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="915" to="921" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Epistemic Logic</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gochet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gribomont</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of the History of Logic</title>
				<editor>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Gabbay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Woods</surname></persName>
		</editor>
		<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="99" to="195" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Two modellings for theory change</title>
		<author>
			<persName><forename type="first">A</forename><surname>Grove</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Philosophical Logic</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="157" to="170" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Reasoning about Uncertainty</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Halpern</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge MA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">On the difference between updating a knowledge base and revising it</title>
		<author>
			<persName><forename type="first">H</forename><surname>Katsuno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mendelzon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Cambridge Tracts in Theoretical Computer Science</title>
		<imprint>
			<biblScope unit="page" from="183" to="203" />
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Epistemic Logic for AI and Computer Science</title>
		<author>
			<persName><forename type="first">J.-J</forename><forename type="middle">Ch</forename><surname>Meyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Van Der Hoek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Cambridge Tracts in Theoretical Computer Science</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<date type="published" when="1995">1995</date>
			<publisher>Cambridge University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Conditionals and theory change: revisions, expansions, and additions</title>
		<author>
			<persName><forename type="first">H</forename><surname>Rott</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Synthese</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="91" to="113" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Ordinal conditional functions: a dynamic theory of epistemic states</title>
		<author>
			<persName><forename type="first">W</forename><surname>Spohn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Causation in Decision, Belief Change, and Statistics</title>
				<editor>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Harper</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Skyrms</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="1988">1988</date>
			<biblScope unit="volume">II</biblScope>
			<biblScope unit="page" from="105" to="134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">On Logics of Knowledge and Belief</title>
		<author>
			<persName><forename type="first">R</forename><surname>Stalnaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Studies</title>
		<imprint>
			<biblScope unit="volume">128</biblScope>
			<biblScope unit="page" from="169" to="199" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Knowledge, Belief and Counterfactual Reasoning in Games</title>
		<author>
			<persName><forename type="first">R</forename><surname>Stalnaker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Economics and Philosophy</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="133" to="163" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Systems for knowledge and beliefs</title>
		<author>
			<persName><forename type="first">W</forename><surname>Van Der Hoek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="173" to="195" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Defaults in Update Semantics</title>
		<author>
			<persName><forename type="first">F</forename><surname>Veltman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Philosophical Logic</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="221" to="261" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">As Far as I Know</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">P J M</forename><surname>Voorbraak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Quaestiones Infinitae</title>
		<imprint>
			<biblScope unit="volume">VII</biblScope>
			<date type="published" when="1993">1993</date>
		</imprint>
		<respStmt>
			<orgName>Utrecht University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Some philosophical aspects of reasoning about knowledge</title>
		<author>
			<persName><forename type="first">T</forename><surname>Williamson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of TARK&apos;01</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Van Benthem</surname></persName>
		</editor>
		<meeting>TARK&apos;01<address><addrLine>San Francisco</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann Publishers</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="97" to="97" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
