=Paper=
{{Paper
|id=Vol-1626/DARe-16_3
|storemode=property
|title=An Approach to Qualitative Belief Change Modulo Ontic Strength
|pdfUrl=https://ceur-ws.org/Vol-1626/DARe-16_3.pdf
|volume=Vol-1626
|authors=Gavin Rens,Gabriele Kern-Isberner
|dblpUrl=https://dblp.org/rec/conf/ecai/RensK16
}}
==An Approach to Qualitative Belief Change Modulo Ontic Strength==
An Approach to Qualitative Belief Change Modulo Ontic Strength Gavin Rens1 and Gabriele Kern-Isberner2 Abstract. Sometimes, strictly choosing between belief revision and • We may gain insights about the relationship between belief re- belief update is inadequate in a dynamical, uncertain environment. vision and belief update when analysed in the qualitative belief Boutilier combined the two notions to allow updates in response to change setting. external changes to inform an agent about its prior beliefs. His ap- proach is based on ranking functions. Rens proposed a new method Let L be the classical propositional language, and W the (finite) to trade off probabilistic revision and update, in proportion to the set of possible worlds (valuations) induced from a finite set of propo- agent’s confidence for whether to revise or update. In this paper, sitional variables. We denote the models of a sentence α ∈ L by we translate Rens’s approach from a probabilistic setting to a setting JαK and the fact that w satisfied α by w α. For a set of sentences with ranking functions. Given the translation, we are able to compare K ⊆ L, JKK := {w ∈ W | ∀β ∈ K, w β}. We refer to a Boutilier’s and Rens’s approaches. We found that Rens’s approach is probability function or a ranking function as an epistemic state. In an extension of Boutilier’s. this paper, we denote the result of a belief change operation as ◦ α, where is an epistemic state and ◦ is the operator. If we need to refer to the value of a particular world w in the changed epistemic state, 1 Introduction we write ◦α (w). Next, we review the essentials of Rens’s HSBC construction. Traditionally, belief revision is regarded as change of beliefs about In Section 3, we provide the qualitative, rank-based translation of the objective static state of the world. And belief update is regarded HSBC (i.e., HQBC). We analyse our HQBC with respect to two fun- as change of beliefs due to recording the change which occurred in damental classical rationality postulates in Section 4. In Section 5, the underlying state of the world. We shall use the generic term belief we compare our hybrid qualitative belief change construction to change to including belief update and belief revision. An agent may Boutilier’s generalized update construction. Two examples are pre- not always be certain whether an observation is a side-effect of an ac- sented in Section 6, and we end the paper with a summary of what tion/event (requiring update), or whether the observation did not have has been achieved here, and a discussion about related and future a physical cause and is thus pure information (requiring revision). work. Boutilier [3] proposed a generalized qualitative update procedure, which combines both belief update and revision. He used ranking functions as advocated by Spohn [13, 14] to capture notions of pref- 2 Hybrid Belief Change via Probability Theory erence. Rens [11] proposed a quantitative approach to mix proba- bilistic belief update and revision, where the trade-off is controlled Rens [11] proposed the hybrid stochastic belief change (HSBC) op- by the so-called ontic strength of the observation received. To our eration to combine notions of probabilistic belief revision and prob- knowledge, his “mixture” method is novel. In this paper, we pro- abilistic belief update. HSBC may be employed in agents who deal pose a translation of Rens’s method back to a qualitative setting us- with uncertainty by maintaining a probability distribution b over pos- ing Spohn-rankings. The difference in the present approach to that of Boutilier is that ours trades belief update and revision off in pro- P P is, b : W → [0, 1], such that sible worlds w they could be in. That w∈W b(w) = 1, and b(α) := w∈W,w α b(w) for all α ∈ L. b portion to the agent’s judgement of the ontic strength of the received will often be represented as a set of pairs {(w, p) | w ∈ W, p ∈ evidence. [0, 1]}. We refer to b as an epistemic state in the context of this There are several reason why we would like a qualitative version work. In the HSBC framework, an agent maintains an epistemic state, of Rens’s hybrid stochastic belief change (HSBC). which changes as new information is received or observed. Rens [11] proposes the tuple hW, Evt, T, E, O, osi to formalize • Ordering preferred worlds by ranking them instead of providing the HSBC framework, where exact probabilities may be more intuitive for agent designers. • Some domains may not require the agent to work with precise val- • W is a set of possible worlds; ues like probabilities, and computations over ranked preferences • Evt is a set of atomic events; are then cheaper, because finding the minimum of a set is gener- • T : W × Evt × W → [0, 1] is P a transition function such that ally cheaper than finding its sum (the distinction between mini- for every e ∈ Evt and w ∈ W , w0 ∈W T (w, e, w0 ) = 1, and mization and summation will become clear later). T (w, e, w0 ) models the probability of a transition to world w0 , 1 Centre for AI Research, and University of KwaZulu-Natal, School of Math- given the occurrence of event e in world w; ematics, Statistics and Computer Science, and CSIR Meraka, South Africa • E is the event function such that E(e, w) = P (e | w), the proba- 2 Dortmund University of Technology, Dortmund, Germany bility of the occurrence of event e in w; • O : L × W →P[0, 1] is an observation function such that for evidence does not contradict the agent’s current beliefs, and to revise every world w, α∈Ω O(α, w) = 1, and O(α, w) models the by imaging otherwise: probability of observing α in w, where Ω ⊂ L is the set of pos- b BC α if b(α) > 0 sible observations, up to equivalence, and where if α ≡ β, then b BCI α := b OGI α if b(α) = 0 O(α, w) = O(β, w), for all worlds w;3 • os : Ω × W → [0, 1] such that os(α, w) is the agent’s ontic Finally, Rens [11] proposes a way of trading off the probabilistic strength for α perceived in w. update and probabilistic revision, using the notion of ontic strength. He argues that an agent could reason with a range of degrees for In HSBC, the epistemic state updated with α (denoted b α) is information being ontic (the effect of a physical action or occurrence) defined as or epistemic (purely informative). It is assumed that the higher the b α := (w0 , p0 ) | w0 ∈ W, p0 = information’s degree of being ontic, the lower the epistemic status of that information. “An agent has a certain sense of the degree to which 1 X X O(α, w0 ) T (w, e, w0 )E(e, w)b(w) , a piece of received information is due to a physical action or event in γ w∈W e∈Evt the world. This sense may come about due to a combination of sensor readings and reasoning. If the agent performs an action and a change where γ is a normalizing factor. in the local environment matches the expected effect of the action, it It is mostly agreed upon that Bayesian conditioning corresponds can be quite certain that the effect is ontic information,” [11, p. 129]. to classical belief expansion. This is evidenced by Bayesian condi- os(α, w) is defined to equal 1 when α is certainly ontic in w, and tioning (BC) being defined only when b(α) 6= 0 (i.e., when α does 0 when α is certainly epistemic (the epistemic strength of α in w is not contradict the agent’s current beliefs). In other words, one could es(α, w) := 1 − os(α, w)). define revision to be The hybrid stochastic change of epistemic state b due to new in- b BC α := {(w, p) | w ∈ W, p = P (w | α)}, formation α with ontic strength (denoted b α) is defined as n as long as P (α) 6= 0, where b α := (w, p) | w ∈ W, p = 1 o O(α, w)b(w) es(α, w)bBCI α (w) + os(α, w)bα (w) , P (w | α) := P 0 0 .4 (1) γ 0 w0 ∈W O(α, w )b(w ) where γ 0 is a normalizing factor so that w∈W b P α (w) = 1. To accommodate cases where b(α) = 0, that is, where α contra- dicts the agent’s current beliefs and its beliefs need to be revised in the stronger sense, we shall make use of imaging. Imaging was in- 3 Hybrid Belief Change via Ranking Theory troduced by Lewis [9] as a means of revising a probability function. Let κ be a ranking on worlds in W , representing the agent’s cur- Informally, Lewis’s original solution for accommodating contradict- rent epistemic state, as first proposed by Spohn [13]. That is, κ : ing evidence α is to move the probability of each world to its closest, W → N ∪ {∞}, where N = {0, 1, 2, . . .}, such that there exists α-world. Lewis made the strong assumption that every world has a w ∈ W for which κ(w) = 0 and κ(wi ) ≤ κ(wj ) is interpreted a unique closest α-world. More general versions of imaging allow as world wi being at least as plausible or preferred as world wj . worlds to have several, equally proximate closest worlds. κ(w× ) = ∞ is meant to indicate that w× is impossible, implausible, In two papers, Rens [11] and colleagues [12] propose generalized least preferred. Worlds w0 for which κ(w0 ) = 0 are considered most imaging: Let Min(α, w, d) be the set of α-worlds closest to w mea- plausible, most preferred, or believed. In fact, ranking functions are sured with d, some acceptable measure of distance between worlds rankings of implausibility. The degree of plausibility of proposition (e.g., Hamming or Dalal distance). Formally, α is κ(α) := min {κ(w)}. (2) Min(α, w, d) := {w0 ∈ JαK | ∀w00 ∈ JαK, d(w, w0 ) ≤ d(w, w00 )}. w∈W,w α We shall denote an agent’s belief set, given epistemic state κ, as Then generalized imaging (denoted GI) is defined as Bel (κ) := {β ∈ L | κ−1 (0) ⊆ JβK}. b GI α := (w, p) |w ∈ W, p = 0 if w 6∈ JαK, X Since Spohn’s ranking functions can be considered as the loga- else p = b(w0 )/|Min(α, w0 , d)| . rithm of probabilities [7], when translating from probability theory w0 ∈W to Spohn ranking theory, multiplicationPbecomes addition, division w∈Min(α,w0 ,d) becomes subtraction, and summation ( w∈W ) becomes minimiza- Rens [11] argues that if observation likelihoods are known, they tion (minw∈W ). should be used to weight the probabilities computed by the GI oper- Conditional plausibility is defined as [13] ation; a new imaging operation is thus defined as κ(β | α) := κ(α ∧ β) − κ(α). n O(α, w)bGI α (w) o Let φw be a complete theory for w. It will be useful to know that b OGI α := (w, p) | w ∈ W, p = P 0 GI 0 , w0 ∈W O(α, w )bα (w ) κ(α ∧ β) = κ(α | β) + κ(β). And consequently, that κ(α) = minw∈W {κ(α | φw ) + κ(w)}. One can also define κ(w | α) in where the denominator is a normalizing factor. At last, with respect terms of κ(α | w) as follows. to revision, Rens [11] defines BCI to revise by conditioning when the 3 ≡ denotes logical equivalence. κ(w | α) = κ(φw ∧ α) − κ(α) 4 Note that b(α) is equivalent to P (α). = κ(α | φw ) + κ(w) − κ(α). A direct translation of the Bayes Rule would suggest that the above Example 1. Let the vocabulary be {q, r, s} and the current result is analogous to that rule. epistemic state κ1 = {(q̄r̄s̄, 0), (q̄rs̄, 1), (qr̄s̄, 2), (qrs̄, 3), (qrs, ∞), (qr̄s, ∞), (q̄rs, ∞), (q̄r̄s, ∞)}. Let d be defined as Ham- Definition 1. The tuple hW, Evt, TQ , EQ , OQ , osi is a hybrid qual- ming distance. Suppose the observation α received is (q ∧ r) ∨ (q ∧ itative belief change (HQBC) model, where ¬r ∧ s). Then • W is a set of possible worlds; Min(α, qrs, d) = {qrs} Min(α, qrs̄, d) = {qrs̄} • Evt is a set of atomic events; Min(α, qr̄s, d) = {qr̄s} Min(α, qr̄s̄, d) = {qrs̄, qr̄s} • TQ : W × Evt × W → N ∪ {∞} is a transition ranking such that Min(α, q̄rs, d) = {qrs} Min(α, q̄rs̄, d) = {qrs̄} for every w ∈ W and e ∈ Evt, minw0 ∈W {TQ (w, e, w0 )} = 0 Min(α, q̄r̄s, d) = {qr̄s} Min(α, q̄r̄s̄, d) = {qrs̄, qr̄s} and TQ (w, e, w0 ) models the plausibility of a transition to world w0 , given the occurrence of event e in world w; and • EQ : Evt × W → N ∪ {∞} is the event ranking such that for every w ∈ W , mine∈Evt {EQ (e, w)} = 0 and EQ (e, w) models • κGI 1α (qrs) = min{κ(qrs), κ(q̄rs)} = min{∞, ∞} = ∞, the plausibility of the occurrence of event e in w; • OQ : L × W → N ∪ {∞} is an observation ranking such that for • κGI 1α (qrs̄) = min{κ(qrs̄), κ(qr̄s̄), κ(q̄rs̄), κ(q̄r̄s̄)} every world w, minα∈L {O(α, w)} = 0 and OQ (α, w) models = min{3, 2, 1, 0} = 0, the plausibility of observing α in w and where if α ≡ β, then • κGI 1α (qr̄s) = min{κ(qr̄s), κ(qr̄s̄), κ(q̄r̄s), κ(q̄r̄s̄)} OQ (α, w) = OQ (β, w), for all worlds w5 ;6 = min{∞, 2, ∞, 0} = 0, • os : L × W → N, where os(α, w) is the agent’s ontic strength • κGI GI GI GI GI 1α (qr̄s̄) = κ1α (q̄rs) = κ1α (q̄rs̄) = κ1α (q̄r̄s) = κ1α (q̄r̄s̄) for α perceived in w (note that os is not a κ-function). = ∞. In the qualitative version of the hybrid belief change framework, epistemic strength is defined as the complement of ontic strength. Notice that (q ∧ r) ∨ (q ∧ ¬r ∧ s) does not contradict κ1 (i.e., Unfortunately, the notion of complement is not strictly defined for κ1 ((q ∧ r) ∨ (q ∧ ¬r ∧ s)) 6= ∞). To show that qualitative imaging ranking theory. We thus define epistemic strength as the complement can deal with observations contradicting the agent’s epistemic state, of ontic strength with respect to a ‘top’ value. consider the following example. Definition 2. Let τ be an even number in N, but do not let τ = ∞. Example 2. We consider the same setting as in Example 1. Suppose Epistemic strength is defined as the τ -complement of os. the observation β received is q ∧ s. Note that κ1 (β) = ∞, that is, q ∧ s is deemed impossible in κ1 . Then es(α, w) := τ − os(α, w). • κGI 1β (qrs) = min{κ(qrs), κ(qrs̄), κ(q̄rs), κ(q̄rs̄)} = for all possible observations α and for all worlds w ∈ W . min{∞, 3, ∞, 1} = 1, To specify that the agent has no preference for an observation • κGI 1β (qr̄s) = min{κ(qr̄s), κ(qr̄s̄), κ(q̄r̄s), κ(q̄r̄s̄)} being ontic or epistemic, choose os(α, w) = τ /2 for all w. Then = min{∞, 2, ∞, 0} = 0, es(α, w) = τ /2 for all w.7 • κGI GI GI GI GI 1β (qrs̄) = κ1β (qr̄s̄) = κ1β (q̄rs) = κ1β (q̄rs̄) = κ1β (q̄r̄s) = Let κ be regarded as an agent’s epistemic state and α a new piece GI κ1β (q̄r̄s̄) = ∞. of information to be accommodated. We can define the operation which revises an epistemic state using conditional plausibility: Now, qualitative generalized imaging can be weighted/modulated by the plausibility of the evidence in a particular world: κ CP α =: {(w, n) | w ∈ W, n = Q(w | α)}, κ OGI α := {(w, n) | w ∈ W, n = κGI α (w) + OQ (α, w) − δ}, as long as κ(α) 6= ∞, where where δ is a normalization factor defined as Q(w | α) := OQ (α, w) + κ(w) − min 0 {OQ (α, w0 ) + κ(w0 )} w ∈W δ := min {κGI α (w) + OQ (α, w)}. is justified by the translation of P (w | α) (Eq. 1) from probability w∈W theory to ranking theory. The definition of Q(w | α) can also be Example 3. Continuing with the previous examples, suppose derived from first principles, which we leave out here. OQ (q ∧ s, q̄r̄s̄) = 1 and for all w 6= q̄r̄s̄, OQ (q ∧ s, w) = 0. As with probabilistic conditionalization, plausibilistic condition- Then alization is undefined when the evidence/observation is inconsistent with the agent’s current epistemic state. A plausibilistic version of • κOGI 1β (qrs) = min{κ(qrs) + 0, κ(qrs̄) + 0, κ(q̄rs) + 0, κ(q̄rs̄) + imaging can deal with this problem in the qualitative setting: 0} − δ = min{∞ + 0, 3 + 0, ∞ + 0, 1 + 0} − δ = 1 − 1 = 0, Translate b GI α to n • κOGI 1β (qr̄s) = min{κ(qr̄s) + 0, κ(qr̄s̄) + 0, κ(q̄r̄s) + 0, κ(q̄r̄s̄) + κ GI α := (w, n) | w ∈ W, n = ∞ if w 6∈ JαK, 1} − δ = min{∞ + 0, 2 + 0, ∞ + 0, 0 + 1} − δ = 1 − 1 = 0, • κOGI OGI OGI OGI 1β (qrs̄) = κ1β (qr̄s̄) = κ1β (q̄rs) = κ1β (q̄rs̄) = o 0 else n = min {κ(w )} . OGI OGI 0 w ∈W κ1β (q̄r̄s) = κ1β (q̄r̄s̄) = ∞. w∈Min(α,w0 ,d) 5 ≡ denotes logical equivalence. In Example 2, qr̄s is most plausible in κGI 1β , but in Example 3, due 6 ≡ denotes logical equivalence. to q ∧ s being slightly less plausibly perceived in q̄r̄s̄ than in any 7 Due to τ being even, τ /2 is guaranteed to be a whole number. other world, q̄r̄s̄ becomes slightly less plausible in κOGI 1β . Thus, in Example 3, qrs and qr̄s share the status of being most plausible in Definition 3. We say the agent’s revised epistemic state. Finally, a qualitative version of BCI can be defined, which revises • event e is possible in κ iff there exists a world w ∈ W such that by conditional plausibility when the evidence does not contradict the κ(w) 6= ∞ and EQ (e, w) 6= ∞; agent’s current beliefs, and revises by qualitative imaging otherwise: • event e is event-rational when for all w ∈ W : there exists a w0 such that TQ (w, e, w0 ) 6= ∞ iff EQ (e, w) 6= ∞; κ CP α if κ(α) 6= ∞ • evidence α is an e-signal when for all w0 ∈ W : there exists a w κ CPI α := κ OGI α if κ(α) = ∞ such that TQ (w, e, w0 ) 6= ∞ iff OQ (α, w0 ) 6= ∞; Turning now to belief update, b α is translated to • evidence α is trustworthy iff for all w ∈ W , if w 6 α, then n OQ (α, w) = ∞; κ α := (w0 , n) | w0 ∈ W, n = (3) • evidence α is clear iff for all w ∈ W , if w α, then OQ (α, w) = o 0; OQ (α, w0 ) + min TQ (w, e, w0 ) + EQ (e, w) + κ(w) − δ 0 , w∈W • evidence α is weakly observable iff there exists a w ∈ W such e∈Evt that w α and O(α, w) 6= ∞; where δ 0 is a normalizing factor defined as • evidence α is strongly observable iff for all w ∈ W for which n o w α, O(α, w) 6= ∞. δ 0 := min 0 0 0 OQ (α, w )+ min TQ (w, e, w )+EQ (e, w)+κ(w) . w ∈W w∈W e∈Evt Except for possibility, the definitions in the list above are adapted from Rens [11]. Example 4. We continue, using the vocabulary and epistemic state of the previous examples. For illustrative purposes, we keep the ob- servation, transition and event models very simple, with an arbitrary Postulate (CM) If κ is a ranking function, then so is κ α. specification: For all w ∈ W , let OQ (β, w) = 0 if w β, else, Lemma 1. If α is strongly observable, then κ CPI α is a ranking OQ (β, w) = 1. Let there be two events: Evt = {e1 , e2 }. Let W = function. {w1 , w2 , . . . , wn }. EQ (ek , wi ) = i×k, except for EQ (e1 , w1 ) = 0. TQ (wi , ek , wj ) = i × j × k, except for TQ (w1 , e1 , w1 ) = 0. Then Proof. Omitted to save space; available on request. the ranks of the first two worlds are Lemma 2. Let the HQBC model be specified such that there exists • κβ (qrs) = OQ (β, qrs) + min w∈W TQ (w, e, qrs) + an event-rational event e ∈ Evt possible in κ, and α is an e-signal. e∈Evt EQ (e, w)+κ(w) −δ 0 = 0+min{0+0+∞, 2+2+3, . . . , 16+ Then κ α is a ranking function. 8 + 0} − δ 0 = 7 − δ 0 and • κβ (qrs̄) = OQ (β, qrs̄) + min w∈W TQ (w, e, qrs̄) + Proof. Note that the normalizing factor δ will ensure that κ α is 0 e∈Evt a ranking function as long as there exists a world w ∈ W for which EQ (e, w) + κ(w) − δ = 1 + min{2 + 1 + ∞, 4 + 2 + 3, 6 + κα (w) 6= ∞. It must thus be shown that if there exists an event 3 + ∞, . . . , 32 + 16 + ∞} − δ 0 = 10 − δ 0 . er ∈ Evt which is event-rational and α is an er -signal, then there We do not work out the ranks of the other worlds. must exist a world w ∈ W for which κα (w) 6= ∞. κ is assumed to be a ranking function. Let w− be a world for Given an epistemic state κ and a new observation α, we propose which κ(w− ) 6= ∞. By definition of the transition function, there the following HQBC operation. must exist a world w+ for which T (w− , e, w+ ) 6= ∞, for all e ∈ κ α := (w, n) | w ∈ W, n = Evt. Choose the er which is event-rational. Then E(er , w− ) 6= ∞. (4) Furthermore, because w− exists such that T (w− , er , w+ ) 6= ∞ and min{κCPI (w) + es(α, w), κα (w) + os(α, w)} − δ 00 , α we know that α is an er -signal, OQ (α, w+ ) 6= ∞. where δ 00 is a normalizing factor defined as By definition of operation (3), κα (w+ ) = OQ (α, w+ ) + min w∈W TQ (w, e, w+ )+EQ (e, w)+κ(w) −δ = OQ (α, w+ )+ δ 00 := min min{κCPI (w) + es(α, w), κα (w) + os(α, w)} . α e∈Evt min . . . , TQ (w− , er , w+ ) + EQ (er , w− ) + κ(w− ), . . . − δ 6= w∈W κ α (w) can be read as ‘The rank of w after revision if revision is ∞. more plausible given the epistemic strength of α at w, else, the rank of w after update (given the ontic strength).’ Proposition 1. If the HQBC model is specified such that α is strongly observable, there exists an event-rational event e ∈ Evt possible in κ, and α is an e-signal, then (CM) holds. 4 Analysis of HQBC w.r.t. Rationality Postulates Proof. κ α := {(w, n) | w ∈ W, n = min{κCPI α (w) + In this section we shall assess two fundamental postulates generally es(α, w), κα (w) + os(α, w)} − δ 00 }. Recall that neither es(α, w) agreed upon as necessary (but not sufficient) for belief change to nor os(α, w) can have a value of ∞. And given the antecedents be rational [6, e.g.]. The categorical matching postulate (CM) states of the proposition, by Lemmata 1 and 2, there must be a w0 for that the representation of an agent’s state of knowledge/belief should which κCPI α (w0 ) = 0 or κα (w0 ) = 0. Hence, either κCPI α (w0 ) + have the same formal structure before and after the application of the 0 0 0 0 es(α, w ) 6= ∞ or κα (w ) + os(α, w ) 6= ∞. Thus κα (w ) 6= ∞ belief change operation under consideration. The success postulate and due to the normalizing factor δ 00 , there exists a w s.t. κ α (w) = (S) states that the observation/evidence with which an agent’s state 0. is to be changed should be believed (with certainty) after the belief change operation.8 In the rest of this section, we assume that α ∈ L is any logically satisfiable piece of information. Postulate (S) If κ is a ranking function, then κ α (α) = 0. 8 Here it is assumed that the incoming information is certainly correct. Lemma 3. If α is strongly observable, then κCPI α (α) = 0. Proof. Omitted to save space; available on request. Lemma 5. TQ corresponds to E, and EQ corresponds to µ. The correspondence is in the sense that the values of functions are equal Lemma 4. Let the HQBC model be specified such that there exists for the same arguments, respectively, parameters. an event-rational event e ∈ Evt possible in κ, and α is a trustworthy e-signal. Then κα (α) = 0. Proof. Omitted to save space; available on request. Proof. Recall that κα (α) = minw∈W,w α κα (w). By Lemma 2, κα In the rest of the paper, due to Lemma 5, we shall assume that is a ranking function and thus there exists a w for which κα (w) = 0. TQ (w, e, w0 ) and κw,e (w0 ) are interchangeable, and that EQ (e, w) Hence, for κα (α) not to equal 0, there must exist a w0 ∈ W s.t. and κw (e) are interchangeable. w0 6 α and κα (w0 ) 6= ∞. But then OQ (α, w0 ) 6= ∞. Therefore, Boutilier calls the evolution of w into w0 , under event e, a transi- e for (S) not to hold, an agent needs to believe that OQ (α, w0 ) 6= ∞ tion, which he writes w → w0 . He defines (rhs in our notation) for some world w0 where w0 6 α. But then α cannot be trustworthy. e κ(w → w0 ) := TQ (w, e, w0 ) + EQ (e, w) + κ(w). Arguing by contradiction, (S) must hold. And he defines the set of possible α-transitions: Note that trustworthiness is required for Lemma 4, in addition to e Tr (α, κ) := {w → w0 |w, w0 ∈ W, e ∈ Evt, the antecedents required for Lemma 2. e w0 α, κ(w → w0 ) 6= ∞}. Proposition 2. If the HQBC model is specified such that α is strongly Tr (α, κ) is the set of transitions from one world to the next via an observable, there exists an event-rational event e ∈ Evt possible in event, such that the transition (TQ ) is possible, the event in the depar- κ, and α is a trustworthy e-signal, then (S) holds. ture world (EQ ) is possible, and the departure world (κ) is possible, Proof. Note that neither es(·) nor os(·) can have a value of ∞. and such that the arrival world is an α-world. Moreover, because α is trustworthy, by the definitions of CPI and Then Boutilier defines , κCPI α (w) = κα (w) = ∞ whenever w 6 α. Together with Lem- e result GU (α, κ) := {w | w0 → w ∈ min Tr (α, κ)} mata 3 and 4, one can thus infer that and defines generalized update as δ 00 = min {min{κCPI α (w) + es(α, w), κα (w) + os(α, w)}} w∈W Bel GU α (κ) := {β ∈ L | result GU (α, κ) ⊆ JβK}. (5) = min {min{κCPI α (w) + es(α, w), κα (w) + os(α, w)}} Proposition 3. A generalized update model can be realized via an w∈W w α HQBC model. Then Proof. Let G = hW, κ, E, µi be a generalized update model, where κ is a (current) epistemic state. Choose an HQBC model H = κ α (α) = min {κα (w)} w∈W hW, Evt, TQ , EQ , OQ , osi with implicit epistemic state κ and such w α that, for all w, w0 ∈ W and e ∈ Evt, TQ (w, e, w0 ) = κw,e (w0 ) and = min {min{κCPI α (w) + es(α, w), κα (w) + os(α, w)} − δ 00 } EQ (e, w) = κw (e). Then TQ corresponds to E and EQ corresponds w∈W w α to µ. G is thus realized via H. = min {min{κCPI α (w) + es(α, w), κα (w) + os(α, w)}} − δ 00 w∈W w α Lemma 6. Let H be the class of HQBC models specified such that there exists an event-rational event e ∈ Evt possible in κ and = 0. such that evidence α is an e-signal, trustworthy and clear. For ev- ery generalized update model realizable via an HQBC model in H, Bel GU α (κ) = Bel (κα ). 5 Comparison of HQBC with Generalized Update Proof. Let H ∈ H s.t. H = hW, Evt, TQ , EQ , OQ , osi with im- plicit epistemic state κ. Let G = hW, κ, E, µi be a generalized Boutilier [3] adopts an event-based approach where a set of events e update model realized via H. Note that κ(w → w0 ) 6= ∞ iff E is assumed. These events are allowed to be nondeterministic, and 0 TQ (w, e, w ) 6= ∞, EQ (e, w) 6= ∞ and κ(w) 6= ∞. each possible outcome of an event is ranked according to its plausi- Bel GU α (κ) = Bel (κα ) iff {β ∈ L | result GU (α, κ) ⊆ JβK} = bility via a ranking function. “As in the original event-based seman- −1 {β ∈ L | (κα ) (0) ⊆ JβK} (by their definitions: (5) and (2)) iff tics, we will assume each world has an event ordering associated with result GU (α, κ) = (κα )−1 (0) iff result GU (α, κ) = JBel (κα )K. it that describes the plausibility of various event occurrences at that And result GU (α, κ) = world,” [3, p. 14]. e A generalized update model is then defined as hW, κ, E, µi, where {w | w0 → w ∈ min Tr (α, κ)} (by definition of result GU ) e e ={w | w0 → w ∈ min{w0 → w | w0 , w ∈ W, • W is a set of possible worlds; e • κ is a ranking over W (the agent’s epistemic state); e ∈ Evt, w α, κ(w0 → w) 6= ∞}} (by definition of Tr ) • E is a mapping from w ∈ W and e ∈ Evt to rankings κw,e = arg min {TQ (w0 , e, w) + EQ (e, w0 ) + κ(w0 )} over W , where κw,e (w0 ) describes the plausibility that world w0 w,w0 ∈W, e∈Evt, w α w: TQ (w0 ,e,w)+EQ (e,w0 ) results when event e occurs at world w; +κ(w0 )6=∞ • µ is a mapping from w ∈ W to rankings κw over Evt, where e κw (e) captures the plausibility of the occurrence of event e at (by definition of κ(w0 → w)) world w. = arg min OQ (α, w) w∈W In this model, the set of events Evt is implicit and the (initial) epis- + min {TQ (w0 , e, w) + EQ (e, w0 ) + κ(w0 )} temic state κ explicit. w0 ∈W,e∈Evt (by definition of class H, w α and TQ (w0 , e, w) + EQ (e, w0 ) + κ(w0 ) 6= ∞) n o = arg min{κα (w)} = (κα (w))−1 (0) = JBel (κα )K. w∈W Proposition 4. Let H be the class of HQBC models specified such that there exists an event-rational event e ∈ Evt possible in κ and such that evidence α is an e-signal, trustworthy and clear. For ev- ery generalized update model realizable via an HQBC model in H, Bel GU α (κ) = Bel (κα ). Proof. Let os(α, w) = 0 for all possible α and for all w ∈ W . Let τ > κα (w) − κOGI α (w) for all w ∈ W for which κα (w) 6= ∞ OGI and κα (w) 6= ∞. Recall that τ may not equal ∞ and for all w ∈ Figure 1: Scenario with multiple events (with deterministic out- W, os(α, w) 6= ∞. comes), including event plausibility information. Then, for all w ∈ W , κ OGI α (w) = min{κα (w) + es(α, w), κα (w) + os(α, w)} − δ 00 Suppose the agent observes that the grass is wet (¬Dry(G)). This = min{κOGI α (w) + τ − os(α, w), κα (w) + os(α, w)} − δ 00 contradicts what the agent presently believes, so, with respect to re- (by Def. 2) vision, κ CPI ¬Dry(G) is interpreted as κ OGI ¬Dry(G). We deter- mine that = min{κOGI α (w) + τ, κα (w) + 2os(α, w)} − δ 00 • κOGI ¬Dry(G) (Patio(B) ∧ Dry(B) ∧ ¬Dry(G)) = 0 = min{κOGI α (w) + τ, κα (w)} − δ 00 • κOGI ¬Dry(G) (¬Patio(B) ∧ Dry(B) ∧ ¬Dry(G)) = 1 (by the above definition of os(α, w)) • κOGI ¬Dry(G) (w) = ∞ for all w ∈ W s.t. w 6 Patio(B)∧Dry(B)∧ =κα (w) − δ 00 (by the above definition of τ ) ¬Dry(G) and w 6 ¬Patio(B) ∧ Dry(B) ∧ ¬Dry(G) =κα (w) (by definition, min 0 κα (w0 ) = 0). w ∈W With respect to update, we determine that Then Bel (κ α) = Bel (κα ) and by Lemma 6, Bel (κα ) = • κ¬Dry(G) (Patio(B) ∧ Dry(B) ∧ ¬Dry(G)) = 1 Bel GU α (κ). • κ¬Dry(G) (Patio(B) ∧ ¬Dry(B) ∧ ¬Dry(G)) = 0 • κ¬Dry(G) (w) = ∞ for all w ∈ W s.t. w 6 Patio(B)∧Dry(B)∧ ¬Dry(G) and w 6 Patio(B) ∧ ¬Dry(B) ∧ ¬Dry(G) 6 Examples We use Boutilier’s two examples [3, § 3.3]. One can then compare Then combining these results gives his generalized update (GU) with our HQBC. • κ ¬Dry(G) (Patio(B) ∧ Dry(B) ∧ ¬Dry(G)) = 0 The first example involves a book (B) which might be inside the • κ ¬Dry(G) (Patio(B) ∧ ¬Dry(B) ∧ ¬Dry(G)) = 0 house or on the patio. There are three events: it rains, in which case the grass (G) and the patio get wet, the sprinkler comes on, in which • κ ¬Dry(G) (¬Patio(B) ∧ Dry(B) ∧ ¬Dry(G)) = 1 case only the grass gets wet, or nothing happens. In this example, and the other worlds are deemed impossible. If the agent were to events are deterministic. If the book is on the patio, it will get wet reflect on its new beliefs, it might reason as follows. when it rains, else not. If the book is inside and the book is dry, it will never get wet. Figure 1 illustrates the prior epistemic state of an I believe Patio(B) ∧ Dry(B) ∧ ¬Dry(G) because it is the agent who believes its book is on the patio and that both the grass ¬Dry(G)-world closest to my prior beliefs (and at least plau- and the book are dry (κ(Patio(B) ∧ Dry(B) ∧ Dry(G)) = 0), but sible, because it is plausibly explained by the sprinkler coming if the book is not on the patio, the agent believes it has left it inside on). I believe Patio(B) ∧ ¬Dry(B) ∧ ¬Dry(G) because it is (κ(Inside(B) ∧ Dry(B) ∧ Dry(G)) = 1). The other less plausible a ¬Dry(G)-world best explained by rain (in my prior beliefs). worlds are omitted. Event plausibility is ranked as EQ (null , w) = 0, I don’t fully believe ¬Patio(B) ∧ Dry(B) ∧ ¬Dry(G), but it EQ (rain, w) = 1, EQ (sprinkler , w) = 2, for all w (a ‘global’ or- is plausible because it is the ¬Dry(G)-world second closest to dering suitable for all worlds is assumed). This is the only informa- my prior beliefs (although ¬Patio(B) was previously not fully tion required for GU. believed, it was deemed plausible.). For HQBC, the observation function (OQ ), ontic strength (os) and its top value (τ ), and distance measure (d) are required, in ad- Now suppose the ontic strength of ¬Dry(G) is defined as dition. We let all observations be trustworthy and clear. For now, os(¬Dry(G), w) = 0, for all w ∈ W , and τ = 2. That is, per- let the agent have no opinion as to whether observations are on- ceiving wet grass is always deemed slightly more ontic than epis- tic or epistemic, that is, for all possible α and for all w ∈ W , temic. Then the resulting epistemic state is determined as in Ta- os(α, w) = es(α, w) = 1.9 d will be defined in accordance with ble 1. In the table, worlds are identified by three letters, such that, Hamming distance, as before. for instance, pdd |= Patio(B) ∧ Dry(B) ∧ Dry(G) and iww |= ¬Patio(B) ∧ ¬Dry(B) ∧ ¬Dry(G); OGI abbreviates κOGI α (w), 9 This implies that τ = 2. abbreviates κα (w), es abbreviates es(α, w) and os abbreviated os(α, w), where α is the incumbent observation and w is the world of the row. The “min” column indicates the minimum value between the two columns to its left, and is actually the rank assigned to the world w of the incumbent row (κ α (w)). Table 1: Agent prefers an ontic interpretation. World OGI + es + os min pdd ∞+2 ∞+0 ∞ pdw 0+2 1+0 1 pwd ∞+2 ∞+0 ∞ pww ∞+2 0+0 0 idd ∞+2 ∞+0 ∞ idw 1+2 ∞+0 3 iwd ∞+2 ∞+0 ∞ iww ∞+2 ∞+0 ∞ Figure 2: Scenario with single event (with non-deterministic out- comes), including event plausibility information. Finally, suppose ontic strength of ¬Dry(G) is defined as os(¬Dry(G), w) = 2 for all w ∈ W , with τ = 2 (which implies that es(¬Dry(G), w) = 0, for all w ∈ W ). That is, perceiving wet update permits observations to rule out possible transitions, or grass is always deemed more epistemic than ontic. Then the resulting previously epistemically possible worlds. As such, it is an ap- epistemic state is determined as in Table 2. propriate model for revision and expansion of beliefs due to information-gathering actions. An observed outcome of green Table 2: Agent prefers an epistemic interpretation. presents two competing explanations: either the test failed (the substance is an acid or a base, and we still dont know which) or World OGI + es + os min the beaker contains kryptonite. The most plausible explanation pdd ∞+0 ∞+2 ∞ pdw 0+0 1+2 0 and the updated epistemic state depend on the relative magni- pwd ∞+0 ∞+2 ∞ tudes of g and r. The figure suggests that g < r, so the a test pww ∞+0 0+2 2 idd ∞+0 ∞+2 ∞ failure is most plausible and the belief acid ∨ base is retained. idw 1+0 ∞+2 1 iwd ∞+0 ∞+2 ∞ If test failures are more rare (r < g), then this outcome would iww ∞+0 ∞+2 ∞ cause the agent to believe the beaker held kryptonite. [3, p. 18] We now analyze the results of the two tables/cases a little. Now we investigate how HQBC deals with this scenario for two We see that when the agent prefers to interpret or explain observations. We let all observations be trustworthy and clear, and ¬Dry(G) as an ontic observation, the agent considers world pww Hamming distance is used to define d. The three possible observa- as most plausible, that is, it fully believes that the book is on the pa- tions are red , blue and green. tio, the book is wet and the grass is wet. A reason could be that, given Tables 3 and 4 show the agent’s new epistemic state after perceiv- the agent’s most plausible prior belief that the book is on the patio ing the litmus paper turning red, respectively, green. In both tables, and dry and the grass is dry, it rained. This is the same result pro- the three right-most columns report the new state (κ α ) when the duced by generalized update [3, p. 17]; this correspondence makes agent (from left to right) (i) is indifferent about whether the obser- sense, given that our update () is ‘aligned’ with GU (Bel GU α (κ) vation is ontic or epistemic, (ii) prefers an ontic interpretation, (iii) = Bel (κα ) under reasonable conditions; Lem. 6). Notice that the prefers an epistemic interpretation. “os = x” (“es = x”) in a col- plausibility of pww due to revision is not in contention, because umn heading means that os(α, w) = x (resp., es(α, w) = x) for all κOGI ¬Dry(G) (pww) = ∞. w ∈ W . In the tables, worlds are identified by two letters, such that, We see that when the agent prefers to interpret or explain for instance, ar |= acid ∧red , ab |= acid ∧blue, bg |= base ∧green, ¬Dry(G) as an epistemic observation, the agent considers world ky |= krypt ∧ yellow . To save space, rows containing ∞ in every pdw as most plausible, that is, it fully believes that the book is on row are omitted. Of course, perceiving red, blue or green is inconsis- the patio, the book is dry and the grass is wet. It can be seen from tent with the current belief that the litmus paper is yellow; revision Table 2 that it is revision which causes the agent to believe pdw. No- operator CPI is thus interpreted as OGI. tice that pdw is the Hamming-closest ¬Dry(G)-world to the most plausible prior belief (pdd). Table 3: Agent perceives the litmus paper turning red. The second example is shown in Figure 2. Here only one possible event is assumed, the action of dipping litmus paper in a beaker. es = 1 es = 2 es = 0 World OGI The beaker is believed to contain either an acid or a base os = 1 os = 0 os = 2 (κ = 0); little plausibility (κ = r) is accorded the possibil- ar 0 0 0 0 0 br 0 ∞ 0 2 0 ity that it contains some other substance (say, kryptonite). The kr r ∞ r r+2 r expected outcome of the test is a color change of the litmus pa- per: it changes from yellow to red if the substance is an acid, We see that when the agent prefers to revise its beliefs (es = 0, to blue if it is a base, and to green if it is kryptonite. However, os = 2), and when it is indifferent about whether to revise or update the litmus test can fail some small percentage of the time, in (es = 1, os = 1), then its resulting beliefs seem unintuitive to us which case the paper also turns green. This outcome is also ac- humans—the agent believes as equally plausible that the substance corded little plausibility (κ = g). If the paper is dipped, and is acid and that it is base. However, when the agent prefers to update red is observed, the agent will adopt the new belief acid . Un- its beliefs (es = 2, os = 0), then it reasonably believes (only) that like KM update [of Katsuno and Mendelzon [8]], generalized the substance is acid. A reasonable agent should prefers to update its beliefs because it should consider all its observations in this scenario also benefit our future work. In their concluding section, they hint to be ontic, due to the ontic nature of dipping litmus paper. that their framework could ‘mix’ revision and update: “In this frame- work, belief change operations can be determined by choosing a Table 4: Agent perceives the litmus paper turning green. plausibility measure that captures the agent’s preferences among se- quences of worlds.... [T]here are prior plausibilities that, when con- ditioned on a surprising observation, allow the agent to revise some es = 1 es = 2 es = 0 World OGI os = 1 os = 0 os = 2 earlier beliefs and to assume that some change has occurred”, [5]. To ag 0 g 0 0 0 our knowledge, they never did investigate the ‘mixture’ approach. bg 0 g 0 0 0 Relationships to the change operations defined by Beierle and kg r r+r r r r Kern-Isberner [1, 2], which make use of knowledge bases, also need to be investigated. Note that for the case when es = 2 and os = 0, the values had to Although Lang’s work [?] is not directly applicable to ours in be normalized (δ 00 = 2). It is interesting to see that no matter what terms of ‘mixing’ revision and update, it does unpack and high- stance the agent takes on ontic/epistemic strength, when it perceives light several important characteristics of update. Lang also discusses green, it believes with equal plausibility that the substance is acid the relationship between update to revision. His insights might well and base. Assuming r > 0, the substance is less plausibly kryptonite, guide our future efforts in this area. He writes but not impossible. In the cases when the agent has an event-based attitude (i.e., it In complex environments, especially planning under in- prefers to interpret observations ontically), all results when HQBC is complete knowledge, actions are complex and have both on- applied align with the results when GU is applied (w.r.t. the examples tic and epistemic effects; the belief change process then is very in this paper). much like the feedback loop in partially observable planning and control theory: perform an action, project its effects on 7 Concluding Remarks, Related and Future Work the current epistemic state, then get the feedback, and revise the projected epistemic state by the feedback. Clearly, update A hybrid qualitative belief change (HQBC) construction was allows for projection only. Or, equivalently, if one chooses to presented—based on ranking theory and which trades revision off separate the ontic and the epistemic effects of actions, by hav- with update, according to the agent’s confidence for whether the re- ing two disjoint sets of actions (ontic and epistemic), then ontic ceived observation/evidence is ontic (due to a physical event) or epis- actions lead to projection only, while epistemic actions lead to temic (due to an announcement). We proved that HQBC is, in a par- revision only. Therefore, if one wants to extend belief update so ticular sense, an extension of Boutilier’s generalized update (GU). In as to handle feedback, there is no choice but integrating some other words, the HQBC model in a class of ‘reasonable’ models can kind of revision process, as in several recent works [. . . ] [?] be specified to perform exactly the same belief change as GU would. Moreover, HQBC allows for more sophisticated belief change than This act-update-perceive-revise “feedback loop” is the default ap- GU, in particular, with respect to rankic belief revision (based on proach when complex actions/events are considered; it is fundamen- conditional plausibility and generalized imaging, for instance) and tally different to the simultaneous, hybrid belief change approach. with respect to employing a notion of ontic strength. The examples Yet, we have not come across a convincing argument against the hy- in this paper support our propositions concerning the relationship be- brid approach. It seems that the traditional “feedback loop” approach tween HQBC and GU. assumes that there is always certainty about the ontic/epistemic sta- tus of every piece of information received. A major question for fu- Determining os(α, w) for every foreseen α in every possi- ture research is, Is there a theory or framework to synthesize the two ble world w will be challenging for a designer. Some deep ques- approaches? tions are: Should the designer/agent provide the strengths (via Nayak [10] proves that, given an appropriate function for measur- stored values or programmed reasoning), or do these strengths ing distance between worlds, classical revision (∗) can be reduced to come to the agent attached to the new information? What is the classical update (). Formally, he proves that (x ∗ k) x = k ∗ x, reasoning process we go through to determine whether infor- where k, x ∈ L, k is an agent’s knowledge and x is the (new) evi- mation is epistemic or ontic, if at all? In general, how does an dence. Nayak points out that the “nice storyline that cleanly demar- agent know when information is epistemic (requiring revision) cates revision from update appears not to be such a good story after or ontic (requiring update)? [11] all,” [10]. Nayak’s surprising result is just one more reason to inves- tigate the hybrid belief change approaches. One direction to investigate as a possible answer to the questions “We can regard imaging as a probabilistic version of update, and above is to condition ontic/epistemic strength on particular propo- conditionalization as a probabilistic version of revision,” [8]. And ac- sitions. For instance, the more plausible the proposition, the more cording to Nayak, KM-update “is known to be the non-probabilistic likely that the received information is ontic. For such an approach counterpart of the account of [probabilistic] imaging propounded by to work, the framework would presumably have to accommodate the David Lewis in order to develop a theory of conditionals [. . . ]” [10]. specification of condition propositions for every observation of in- Dubois and Prade [4] give a version of imaging for belief update in terest. Revision and update would then be traded off depending on the possibilistic framework. Rens [11] uses imaging (GI) on the re- the plausibility/probability of the condition of the observation under vision side; his justification is because imaging can deal with contra- consideration. dictory evidence, whereas conditioning cannot. We are not convinced Friedman and Halpern [5] investigate belief revision and update that imaging is strictly an update process. Where exactly imaging lies employing a framework based on time-stamps and runs of possible on the revision-update spectrum is, to our minds, another deep ques- evolutions of a system. They provide some interesting insights re- tion still to be answered. garding the relationship between revision and update, which may REFERENCES [1] C. Beierle and G. Kern-Isberner, ‘On the modelling of an agent’s epis- temic state and its dynamic changes’, Electronic Communications of the European Association of Software Science and Technology, 12, (2008). [2] C. Beierle and G. Kern-Isberner, ‘Towards an agent model for belief management’, in Advances in Multiagent Systems, Robotics and Cyber- netics: Theory and Practice. (Volume III), eds., G. Lasker and J. Pfalz- graf, IIAS, Tecumseh, Canada, (2009). [3] C. Boutilier, ‘A unified model of qualitative belief change: A dynamical systems perspective’, Artif. Intell., 98(1–2), 281–316, (1998). [4] D. Dubois and H. Prade, ‘Belief revision and updates in numerical formalisms: An overview, with new results for the possibilistic frame- work’, in Proceedings of the Thirteenth Intl. Joint Conf. on Artif. Intell. (IJCAI-93), pp. 620–625, San Francisco, CA, USA, (1993). Morgan Kaufmann Publishers Inc. [5] N. Friedman and J. Halpern, ‘Modeling belief in dynamic systems. Part II: Revision and update’, Journal of Artif. Intell. Research (JAIR), 10, 117–167, (1999). [6] P. Gärdenfors, Knowledge in Flux: Modeling the Dynamics of Epis- temic States, MIT Press, Massachusetts/England, 1988. [7] M. Goldszmidt and J. Pearl, ‘Qualitative probabilities for default rea- soning, belief revision, and causal modeling’, Artificial Intelligence, 84, 57–112, (1996). [8] H. Katsuno and A. O. Mendelzon, ‘On the difference between updating a knowledge base and revising it’, in Belief Revision, ed., P. Gärdenfors, 183–203, Cambridge University Press, (1992). [9] D. Lewis, ‘Probabilities of conditionals and conditional probabilities’, Philosophical Review, 85(3), 297–315, (1976). [10] A. C. Nayak, ‘Is revision a special kind of update?’, in AI 2011: Ad- vances in Artif. Intell.: Proceedings of the Twenty-fourth Australasian Joint Conf., LNAI, pp. 432–441, Berlin/Heidelberg, (2011). Springer- Verlag. [11] G. Rens, ‘On stochastic belief revision and update and their combina- tion’, in Proceedings of the Sixteenth Intl. Workshop on Non-Monotonic Reasoning (NMR), eds., G. Kern-Isberner and R. Wassermann, pp. 123– 132. Technical University of Dortmund, (2016). [12] G. Rens, T. Meyer, and G. Casini, ‘Revising incompletely specified convex probabilistic belief bases’, in Proceedings of the Sixteenth Intl. Workshop on Non-Monotonic Reasoning (NMR), eds., G. Kern-Isberner and R. Wassermann, pp. 133–142. Technical University of Dortmund, (2016). [13] W. Spohn, ‘Ordinal conditional functions: A dynamic theory of epis- temic states’, in Causation in Decision, Belief Change, and Statistics, eds., W. Harper and B. Skyrms, volume 42 of The University of West- ern Ontario Series in Philosophy of Science, 105–134, Springer Nether- lands, (1988). [14] W. Spohn, ‘A survey of ranking theory’, in Degrees of Belief, eds., F. Huber and C. Schmidt-Petri, 185–228, Springer Netherlands, Dor- drecht, (2009).