    Nonmonotonic desires - A possibility theory viewpoint

                    Didier Dubois, Emiliano Lorini, and Henri Prade

                  IRIT, CNRS & Université Paul Sabatier, France
       dubois@irit.fr, emiliano.lorini@irit.fr, prade@irit.fr




       Abstract. If an agent desires that ϕ and desires that ψ, this agent often also de-
       sires that ϕ and ψ hold at the same time (ϕ ∧ ψ). However, there are cases where
       ϕ ∧ ψ may be found less satisfactory for the agent than each of ϕ or ψ alone.
       This paper is a first attempt at modeling such nonmonotonic desires. The ap-
       proach is developed in the setting of possibility theory, since it has been recently
       pointed out that guaranteed (or strong) possibility measures are a good candidate
       for modeling graded desires. Although nonmonotonic reasoning has been studied
       extensively for knowledge, and preferential nonmonotonic consequence relations
       can be faithfully represented in the possibilistic setting, nonmonotonic
       desires appear to require a different approach.



1    Introduction

In a recent work [11], we have advocated the idea that if an agent is satisfied with ϕ∨ψ,
it should also be the case that the agent is satisfied with ϕ and that the agent is satisfied
with ψ. This claim is justified if by “the agent is satisfied with ϕ” we mean: “the agent is
satisfied in all situations where ϕ holds”. This has led us to propose the modeling of desires by means
of guaranteed possibility measures ∆ that obey axiom ∆(ϕ ∨ ψ) = min(∆(ϕ), ∆(ψ)).
     This approach leads us to consider that if ϕ is desirable for the agent, it remains so
in any context χ, i.e., saying that having ϕ true is satisfactory forces us to consider
that having χ ∧ ϕ true is satisfactory as well, whatever χ. Even if this is generally
the case, it may happen that when an agent states its desire to have ϕ
true, it does not specify that in the abnormal / rare case where χ is true, it rather desires
having ¬ϕ true. Another option worth considering is when, in the context where χ is
true, the agent becomes indifferent as to which of ϕ or ¬ϕ is true. This means that although the agent
generally desires to have ϕ true, there may exist some particular circumstances where
this desire no longer exists, or even where an opposite desire takes its place.
     Such a concern sounds like a nonmonotonic reasoning issue [7], where a set of
explicit desires may express some apparent logical contradiction (“I definitely desire
ϕ, but not necessarily in any circumstances”, in particular “I definitely desire ¬ϕ if χ
occurs”). It is not clear that nonmonotonic reasoning approaches, which have been de-
veloped for handling default knowledge in the presence of incomplete information, still
apply to computing a preference rank-ordering between situations. The paper intends to
discuss this situation, and to provide an approach suitable for handling nonmonotonic
desires, a problem that has been mentioned by philosophers [21], but apparently not
considered for its specific features in artificial intelligence until now.
     The paper is organized as follows. We first provide a short background on the possi-
bility theory framework and its suitability for representing desires. We then recall how
reasoning with default knowledge in the presence of incomplete information can be
properly handled in the possibilistic reasoning setting. Then we examine the problems
arising in the treatment of nonmonotonic desires, and propose and discuss an approach.

2     Background on possibility theory and the modeling of desires
Let π be a mapping from a set of worlds W to [0, 1] that rank-orders them. Note that this
encompasses the particular case where π reduces to the characteristic function of a sub-
set E ⊆ W . The possibility distribution π may represent a plausibility ordering (E the
set of situations considered not impossible) when modeling epistemic uncertainty, or a
preference ordering (E is then the subset of satisfactory worlds) when modeling pref-
erences. Let us recall the complete system of the 4 set functions underlying possibility
theory [13] and their characteristic properties:
  i) The (weak) possibility measure (or potential possibility) Π(A) = max_{w∈A} π(w)
     evaluates to what extent there is a world in A that is possible. When π reduces to E,
     Π(A) = 1 if A ∩ E ≠ ∅, which expresses the consistency of the event A with E,
     and Π(A) = 0 otherwise. Possibility measures are characterized by the following
     decomposability property: Π(A ∪ B) = max(Π(A), Π(B)).
 ii) The dual (strong or actual) necessity measure N (A) = min_{w∉A} (1 − π(w)) = 1 −
     Π(Ā), where Ā denotes the complement of A, evaluates to what extent it is certain
     (necessarily true) that all possible worlds are in A. When π reduces to E, N (A) = 1
     if E ⊆ A, which expresses that E entails event A (when E represents evidence), and
     N (A) = 0 otherwise. The duality of N w. r. t. Π expresses that A is all the more
     certain as the opposite event Ā is impossible. Necessity measures are characterized
     by the following decomposability property: N (A ∩ B) = min(N (A), N (B)).
iii) The strong (or actual, or “guaranteed”) possibility measure ∆(A) = min_{w∈A} π(w)
     evaluates to what extent all situations in A are possible. When π reduces to E,
     ∆(A) = 1 if A ⊆ E, and ∆(A) = 0 otherwise. Strong possibility measures are
     characterized by the following property: ∆(A ∪ B) = min(∆(A), ∆(B)).
iv) The dual (weak or potential) necessity measure ∇(A) = max_{w∉A} (1 − π(w)) =
    1 − ∆(Ā) evaluates to what extent there is a situation outside A that is impossible.
    When π reduces to E, ∇(A) = 1 if A ∪ E ≠ W , and ∇(A) = 0 otherwise. Weak ne-
    cessity measures are characterized by the property: ∇(A ∩ B) = max(∇(A), ∇(B)).
∆, ∇ are decreasing set functions, while the (weak) possibility and (strong) necessity
measures are increasing. A modal logic counterpart of these 4 modalities has been pro-
posed in the binary-valued case (things are possible or impossible) [9]. There is a close
link between Spohn functions and (weak) possibility / (strong) necessity measures [12].
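For concreteness, the following Python sketch (purely illustrative: the four worlds and the
distribution π are assumed values) computes the four set functions over a finite set of worlds
and checks their characteristic decomposability properties.

# Illustrative sketch: the four possibility-theoretic set functions over a finite
# set of worlds, for an assumed possibility distribution pi.
W = {"w1", "w2", "w3", "w4"}
pi = {"w1": 1.0, "w2": 0.7, "w3": 0.3, "w4": 0.0}

def Pi(A):
    """Weak (potential) possibility: max of pi over A."""
    return max((pi[w] for w in A), default=0.0)

def N(A):
    """Necessity: 1 - Pi(complement of A)."""
    return 1.0 - Pi(W - A)

def Delta(A):
    """Guaranteed (strong) possibility: min of pi over A (1 if A is empty)."""
    return min((pi[w] for w in A), default=1.0)

def Nabla(A):
    """Weak (potential) necessity: 1 - Delta(complement of A)."""
    return 1.0 - Delta(W - A)

A, B = {"w1", "w2"}, {"w2", "w3"}
assert Pi(A | B) == max(Pi(A), Pi(B))           # Pi is max-decomposable over union
assert N(A & B) == min(N(A), N(B))              # N is min-decomposable over intersection
assert Delta(A | B) == min(Delta(A), Delta(B))  # Delta is min-decomposable over union
assert Nabla(A & B) == max(Nabla(A), Nabla(B))  # Nabla is max-decomposable over intersection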

2.1   Possibility theory as basis for a logical theory of desires
The possibility and necessity operators Π and N have a clear epistemic meaning both in
the frameworks of possibility theory, and of Spohn’s uncertainty theory [22] (also refer-
red to as ‘κ calculus’, or as ‘rank-based system’ and ‘qualitative probabilities’). Differ-
ently from the operators Π and N , the operators ∆ and ∇ are less employed to model
epistemic attitudes. In fact while the epistemic use of N models the idea of “knowing at
least”, the one of ∆ accounts for the idea of “knowing at most”. Combining both modal-
ities provides a representation of the idea of “only knowing” [1, 15]. In epistemic logic,
the notion of strong belief put forward by Battigalli and Siniscalchi [2] corresponds to
the inequality ∆(A) > Π(Ā) (which provably implies N (A) > N (Ā) = 0).
     Here we advocate the idea that ∆ and ∇ can be viewed as operators modeling
motivational mental attitudes such as goals or desires.1 In particular, we claim that
∆ can be used to model the notion of desire, whereas ∇ can be used to model the
notion of potential desire. 2 According to the philosophical theory of motivation based
on Hume [18], a desire can be conceived as an agent’s motivational attitude which
consists in an anticipatory mental representation of a pleasant (or desirable) state of
affairs (representational dimension of desires) that motivates the agent to achieve it
(motivational dimension of desires). In this perspective, the motivational dimension of
an agent’s desire is realized through its representational dimension. For example when
an agent desires to be at the Japanese restaurant eating sushi, he imagines himself eating
sushi at the Japanese restaurant and this representation gives him pleasure. This pleasant
representation motivates him to go to the Japanese restaurant in order to eat sushi.
     Intuitively speaking, with the term potential desire, we refer to a weaker form of
motivational attitude. We assume that an agent considers a given property ϕ potentially
desirable if ϕ does not conflict with the agent’s current desires. In this sense, ϕ is poten-
tially desirable if it is not incompatible with the agent’s current desires. Following ideas
presented in [11], let us explain why the operator ∆ is a good candidate for modeling
the concept of desire and why ∇ is a good candidate for modeling the idea of potential
desire.


Mental states We define an agent’s mental state as a tuple M = (E, D) where:
 – E ⊆ W is a non-empty subset of the set of all worlds, and
 – D ⊂ W is a proper subset of the set of all worlds.

The set E defines the set of worlds not ruled out by the agent (i.e., the maximal set of
worlds that the agent considers possible), whereas D is the set of desirable worlds for
the agent. Let M denote the set of all mental states. We here assume that for every mental
state M there exists a world with a minimal degree of desirability 0, corresponding to
indifference (this is why D ≠ W ). This type of normality constraint for guaranteed pos-
sibility distributions is usually assumed in possibility theory. More generally, a graded
mental state is a pair M̃ = (π, δ) where:
  – π : W → L is a normal possibility distribution over the set of all worlds, and
  – δ : W → L is a function mapping every world w to its desirability (or pleasantness)
     degree in L, with δ(w) = 0 for some w ∈ W .
 1
   We use the term ‘motivational’ mental attitude (e.g., a desire, a goal or an intention) in order
   to distinguish it from an ‘epistemic’ mental attitude such as knowledge or belief.
 2
   Here, the word potential does not refer to the idea that ϕ would be desired by the agent as
   a consequence of his mental state but that the agent lacks the deductive power to become
   aware of it. It is rather the idea that the agent has no reason not to desire ϕ. Other possible
   terms are desire admissibility or desire compatibility.
  – L is a bounded chain acting as a qualitative scale for both possibility and desirability,
    which makes these notions commensurate.
Let us stress the point that while δ(w) = 1 expresses complete desirability, δ(w) = 0
expresses indifference, rather than repulsion. The condition δ(w) = 0 for some w ∈ W
accounts for the claim that desiring something presupposes that not everything is desired.

Modeling desire using ∆ function We here assume that in order to determine how
much a proposition ϕ is desirable an agent takes into consideration the worst situation in
which ϕ is true. In other words, ∆(||ϕ||) = α means that all situations where ϕ holds are
desirable for the agent at least to level α. Thus, denoting by ||ϕ|| the set of situations where ϕ is
true, for all graded mental states M̃ = (π, δ) and for all propositions ϕ, we can interpret
∆(||ϕ||) = min_{u∈||ϕ||} δ(u) as the extent to which the agent desires ϕ to be true. Let us
justify the following two properties for desires:

                            ∆(||ϕ ∨ ψ||) = min(∆(||ϕ||), ∆(||ψ||))

and
                           ∆(||ϕ ∧ ψ||) ≥ max(∆(||ϕ||), ∆(||ψ||)).
According to the first property, an agent desires ϕ to be true with a given strength α
and desires ψ to be true with a given strength β if and only if the agent desires ϕ ∨ ψ
to be true with strength equal to min(α, β). Note that the copula “and” in “desiring ϕ
and desiring ψ” corresponds to performing the disjunction of the propositions, as we
must perform the set-union of all situations where ϕ is true and all those where ψ is
true (these propositions are not viewed as events referring to a single real world, but as
collections of situations). Clearly, in the case of representing uncertainty, this property
would not make any sense because the plausibility of ϕ ∨ ψ should be clearly at least
equal to the maximum of the plausibilities of ϕ and ψ. For the notion of desires, it
seems intuitively satisfactory to have the opposite, namely the level of desire of ϕ ∨ ψ
should be at most equal to the minimum of the desire levels of ϕ and ψ. Indeed, we
only deal here with “positive”3 desires (i.e., desires to reach something with a
given strength).
    Under this proviso, the level of desire of ϕ ∧ ψ cannot be less than the maximum of
the levels of desire of ϕ and ψ. Here ϕ ∧ ψ does not refer to the disjunction “desiring
ϕ or desiring ψ”, but to the idea of desiring ϕ and ψ simultaneously. According to the
second property, the joint occurrence of two desired events ϕ and ψ is more desirable
than the single occurrence of one of the two events. This is the reason why on the right-hand
side of the inequality we have the max. The latter property does not make any sense in
the case of epistemic attitudes like beliefs, as the joint occurrence of two events ϕ and
ψ is epistemically less plausible than the occurrence of a single event. On the contrary
the opposite inequality makes perfect sense for motivational attitudes like desires.
    By way of example, suppose Peter wishes to go to the cinema in the evening
with strength α (i.e., ∆(||goToCinema||) = α) and, at the same time, he wishes to
 3
     The distinction between positive and negative desires is a classical one in psychology. Negative
      desires correspond to states of affairs the agent wants to avoid with a given strength, and then
     desires the opposite to be true. However, we do not develop this bipolar view here.
spend the evening with his friend with strength β (i.e., ∆(||stayWithFriend ||) = β).
Then, according to the preceding property, Peter wishes to go to the cinema with his
friend with strength at least max{α, β} (i.e., ∆(||goToCinema ∧stayWithFriend ||) ≥
max{α, β}). This is a reasonable conclusion because the situation in which Peter achieves
his two desires is (for Peter) at least as pleasant as the situation in which he achieves
only one desire. A similar intuition can be found in [8] about the min-decomposability
of disjunctive desires, where however it is emphasized that it corresponds to a pes-
simistic view.
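For illustration, the cinema example can be replayed on a small graded mental state; the
desirability degrees below are assumed values, and the assertions check the two properties
of ∆ discussed above.

# Assumed desirability degrees over the four combinations of goToCinema (c) and
# stayWithFriend (s); delta(w) = 0 expresses indifference.
worlds = [(c, s) for c in (0, 1) for s in (0, 1)]
delta = {(0, 0): 0.0, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.8}

def Delta(prop):
    """Guaranteed possibility of a proposition, given as a predicate over worlds."""
    sat = [delta[w] for w in worlds if prop(w)]
    return min(sat) if sat else 1.0

cinema = lambda w: w[0] == 1
friend = lambda w: w[1] == 1

alpha, beta = Delta(cinema), Delta(friend)          # 0.3 and 0.4
either = Delta(lambda w: cinema(w) or friend(w))    # 0.3
both = Delta(lambda w: cinema(w) and friend(w))     # 0.8

assert either == min(alpha, beta)   # Delta(phi or psi) = min(Delta(phi), Delta(psi))
assert both >= max(alpha, beta)     # Delta(phi and psi) >= max(Delta(phi), Delta(psi))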
     From the normality constraint of δ, we can deduce the following inference rule:
Proposition 1. For every M ∈ M, if ∆(||ϕ||) > 0 then ∆(||¬ϕ||) = 0.
This means that if an agent desires ϕ to be true — i.e., with some strength α > 0 — then
he does not desire ϕ to be false. In other words, an agent’s desires must be consistent.
     The operator ∆ satisfies the following additional property:
Proposition 2. For every M ∈ M, if ||ϕ|| = ∅ then ∆(||ϕ||) = 1.
i.e. in the absence of actual situations where ϕ is true, the property ϕ is desirable by
default.

Modeling potential desire using ∇ As pointed out above, we claim that the operator
∇ allows us to capture a concept of potential desire (or non-incompatibility with desire):
∇(||ϕ||) represents the extent to which an agent considers ϕ a potentially desirable
property or, alternatively, the extent to which the property ϕ is not incompatible with the
agent’s desires. An interesting situation is when the property ϕ is maximally potentially
desirable for the agent (i.e., ∇(||ϕ||) = 1). This is the same thing as saying that the
agent does not desire ϕ to be false (i.e., ∆(||¬ϕ||) = 0). Intuitively, this means that ϕ is
totally potentially desirable in as much as the level of desire for ¬ϕ is 0. In particular,
given a graded mental state M̃ = (π, δ), let D = {w ∈ W : δ(w) > 0} be the set of
somewhat satisfactory or desirable worlds in M̃ . Then, we have ∇(||ϕ||) = 1 if and
only if D̄ ∩ ||¬ϕ|| ≠ ∅, i.e., ¬ϕ is consistent with what is not desirable, represented by
the complement D̄ of D.
     Another interesting situation is when the property ϕ is maximally desirable for the
agent (i.e., ∆(||ϕ||) = 1). This is the same thing as saying that ¬ϕ is not at all poten-
tially desirable for the agent (i.e., ∇(||¬ϕ||) = 0). It is worth noting that if an agent
desires ϕ to be true, then ϕ should be maximally potentially desirable. This property is
expressed by the following valid inference rule which follows straightforwardly from
the previous one and from the definition of ∇(||ϕ||) as 1 − ∆(||¬ϕ||):
Proposition 3. For every M̃ , if ∆(||ϕ||) > 0 then ∇(||ϕ||) = 1.
     Let us now consider the case in which the agent does not desire ϕ (i.e., ∆(||ϕ||) =
0). In this case two different situations are possible: either ∆(||¬ϕ||) = 0 and ϕ is fully
compatible with the agent’s desires (i.e., ∇(||ϕ||) = 1), or ∆(||¬ϕ||) > 0 and then ϕ is
not fully compatible with the agent’s desires (i.e., ∇(||ϕ||) < 1).

Some valid inference rules for desires The following is a valid inference rule for
∆-based logic, see [9] for the proof:
Proposition 4. For every M ∈ M, if ∆(||ϕ ∧ ψ||) ≥ α and ∆(||¬ϕ ∧ χ||) ≥ β then
∆(||ψ ∧ χ||) ≥ min(α, β).
    Therefore, if we interpret ∆ as a desire operator, we have that if an agent desires
ϕ∧ψ with strength at least α and desires ¬ϕ∧χ with strength at least β, then he desires
ψ ∧ χ with strength at least min(α, β). This seems a reasonable property of desires. By
way of example, suppose Peter is planning what to do on the weekend. He has two con-
comitant desires. On the one hand, Peter desires to go to the contemporary art museum
on Saturday afternoon and to have dinner at a Japanese restaurant on Saturday evening
with strength at least α. On the other hand, Peter desires not to go to the contemporary art
museum on Saturday afternoon but to go to the sea on Sunday morning with strength at
least β. Then, it is reasonable to conclude that Peter desires to have dinner at a Japanese
restaurant on Saturday evening and to go to the sea on Sunday morning with strength at
least min(α, β).
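As a sanity check (an illustration, not a proof), the following sketch tests Proposition 4 on
randomly drawn desirability functions over the eight truth assignments to ϕ, ψ, χ.

# Sanity-check sketch for Proposition 4 on randomly drawn desirability functions:
# if Delta(phi & psi) >= a and Delta(~phi & chi) >= b, then Delta(psi & chi) >= min(a, b).
import itertools, random

W = list(itertools.product((0, 1), repeat=3))   # truth assignments to (phi, psi, chi)

def Delta(delta, A):
    return min((delta[w] for w in A), default=1.0)

random.seed(0)
for _ in range(1000):
    delta = {w: random.random() for w in W}
    delta[random.choice(W)] = 0.0               # normality: some world is indifferent
    a = Delta(delta, [w for w in W if w[0] and w[1]])          # Delta(phi & psi)
    b = Delta(delta, [w for w in W if not w[0] and w[2]])      # Delta(~phi & chi)
    c = Delta(delta, [w for w in W if w[1] and w[2]])          # Delta(psi & chi)
    assert c >= min(a, b)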


3   Nonmonotonic reasoning with incomplete knowledge

Nonmonotonic reasoning has been extensively studied in AI in relation with the prob-
lem of reasoning with rules having exceptions under incomplete information [17], or
for dealing with the frame problem in dynamic worlds [7]. In the following, we recall
the possibilistic approach [6], which has been proved [5] to provide a faithful represen-
tation of the postulate-based approach proposed by Kraus, Lehmann and Magidor [19],
and completed in [20].
    A default rule “if ϕ then ψ, generally”, denoted by ϕ ⇝ ψ, is then understood
formally as the constraint
                                Π(ϕ ∧ ψ) > Π(ϕ ∧ ¬ψ)                                  (1)
on a possibility measure Π describing the semantics of the available knowledge. It
expresses that in the context where ϕ is true, there exist situations where ψ is true that
are strictly more plausible than any situation where ψ is false in the same context.
     This constraint can be shown to be equivalent to N (ψ | ϕ) > 0, when Π(ψ | ϕ) is
defined as the greatest solution of the min-based equation

                           Π(ϕ ∧ ψ) = min(Π(ψ | ϕ), Π(ϕ))                                  (2)

and the duality N (ψ | ϕ) = 1 − Π(¬ψ | ϕ) holds. There also exists a product-based
definition instead of (2). But only the min-based conditioning is used in the following.
    Let us consider the following classical example with default rules d1: “birds fly”,
d2: “penguins do not fly”, d3: “penguins are birds”, symbolically written

                          d1 : b ⇝ f ;   d2 : p ⇝ ¬f ;   d3 : p ⇝ b.

    The set of three defaults is thus represented by the following set C of constraints:

                 b ∧ f >Π b ∧ ¬f ; p ∧ ¬f >Π p ∧ f ; p ∧ b >Π p ∧ ¬b.

    Let Ω be the finite set of interpretations of the considered propositional language,
generated by b, f, p in the example. If this language is built from the propositional variables
p1 , . . . , pn , these interpretations correspond to the possible worlds (i.e., the completely
described situations) where the conjunctions ∗p1 ∧ . . . ∧ ∗pn are true, where ∗ stands for the
presence of the negation sign ¬ or its absence. In our example, Ω = {ω0 : ¬b∧¬f ∧¬p,
ω1 : ¬b ∧ ¬f ∧ p, ω2 : ¬b ∧ f ∧ ¬p, ω3 : ¬b ∧ f ∧ p, ω4 : b ∧ ¬f ∧ ¬p, ω5 : b ∧ ¬f ∧ p,
ω6 : b ∧ f ∧ ¬p, ω7 : b ∧ f ∧ p}. Any interpretation ω thus corresponds to a particular
proposition. A possibility distribution π is a mapping from Ω to [0, 1]. It induces a
ranking of Ω according to the level of normality of each situation. Let >Π denote a
ranking of Ω, such that ω >Π ω ′ iff π(ω) > π(ω ′ ) on Ω. π is indeed the restriction
to propositions describing complete situations of a possibility measure Π defined by
Π(ϕ) = max_{ω|=ϕ} π(ω), where ω |= ϕ means ω is an interpretation which makes ϕ
true. For instance Π(b ∧ f ) = max(π(ω6 ), π(ω7 )).
    Then the set of constraints C on interpretations is:

   C1 : max(π(ω6 ), π(ω7 )) > max(π(ω4 ), π(ω5 )),

   C2 : max(π(ω5 ), π(ω1 )) > max(π(ω3 ), π(ω7 )),

   C3 : max(π(ω5 ), π(ω7 )) > max(π(ω1 ), π(ω3 )).

    Any finite consistent set of constraints of the form ϕi ∧ ψi >Π ϕi ∧ ¬ψi , representing
a set of defaults ϕi ⇝ ψi , induces a partially defined ranking >Π on Ω, that can
be completed according to the principle of minimal specificity, e.g. [5]. This principle
assigns to each world ω the highest possibility level (in forming a well-ordered partition
of Ω) without violating the constraints. This defines a unique >Π .
    The well-ordered partition of Ω obtained in the example is

   {ω0 , ω2 , ω6 } >Π {ω4 , ω5 } >Π {ω1 , ω3 , ω7 }.

    Let E1 , . . . , Em be the obtained partition in the general case. A numerical coun-
terpart to >Π can be defined by π(ω) = (m + 1 − i)/m if ω ∈ Ei , i = 1, . . . , m. In our
example we have m = 3 and π(ω0 ) = π(ω2 ) = π(ω6 ) = 1; π(ω4 ) = π(ω5 ) = 2/3;
π(ω1 ) = π(ω3 ) = π(ω7 ) = 1/3. Note that it is purely a matter of convenience to
use a numerical scale, and any other numerical counterpart such that π(ω) > π(ω ′ ) iff
ω >Π ω ′ will work as well. Namely, the range of π is used as an ordinal scale.
    From this possibility distribution π, we can compute for any proposition ϕ its ne-
cessity degree N (ϕ). For instance,

                  N (¬p ∨ ¬f ) = min{1 − π(ω) | ω |= p ∧ f }
                                = min(1 − π(ω3 ), 1 − π(ω7 )) = 2/3,

    while N (¬b ∨ f ) = min{1 − π(ω) | ω |= b ∧ ¬f } = min(1 − π(ω4 ), 1 − π(ω5 )) = 1/3,
    and N (¬p ∨ b) = min(1 − π(ω1 ), 1 − π(ω3 )) = 2/3.
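The computation just described can be sketched in a few lines of Python (our own
reconstruction, not code from the paper); worlds are referred to by the indices ω0 , . . . , ω7
used above, and the sketch assumes the set of constraints is consistent.

from fractions import Fraction

# Worlds omega_0 .. omega_7 are indexed as in the example: index = 4*b + 2*f + p.
# Each default phi ~> psi yields the constraint max pi over L > max pi over R,
# with L the models of phi & psi and R the models of phi & ~psi.
C = [({6, 7}, {4, 5}),    # b ~> f        (C1)
     ({5, 1}, {3, 7}),    # p ~> ~f       (C2)
     ({5, 7}, {1, 3})]    # p ~> b        (C3)

# Minimal specificity: repeatedly put at the current (highest) level every world
# not occurring on the right-hand side of a still-unsatisfied constraint.
levels, remaining, unassigned = [], list(C), set(range(8))
while unassigned:
    blocked = set().union(*[R for _, R in remaining])
    layer = unassigned - blocked
    levels.append(layer)
    unassigned -= layer
    remaining = [(L, R) for (L, R) in remaining if not (L & layer)]

m = len(levels)
pi = {w: Fraction(m - i, m) for i, layer in enumerate(levels) for w in layer}
print(levels)          # [{0, 2, 6}, {4, 5}, {1, 3, 7}]

def N(violating_worlds):
    """Necessity of a formula, given the worlds that violate it."""
    return min((1 - pi[w] for w in violating_worlds), default=Fraction(1))

print(N({3, 7}))       # N(~p | ~f), violated by the p & f worlds  -> 2/3
print(N({4, 5}))       # N(~b | f),  violated by the b & ~f worlds -> 1/3
print(N({1, 3}))       # N(~p | b),  violated by the p & ~b worlds -> 2/3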
    The default rule-base can then be encoded in possibilistic logic [10]. The method
consists in turning each default ϕi ⇝ ψi into a possibilistic clause (¬ϕi ∨ ψi , N (¬ϕi ∨
ψi )), where N is computed from the greatest possibility distribution π induced by the
set of constraints corresponding to the default knowledge base, as already explained.
Then we apply the possibilistic inference machinery for reasoning with the defaults
together with the available factual knowledge. In our example, we obtain the possibilis-
tic logic base K = {(¬p ∨ ¬f, 2/3), (¬p ∨ b, 2/3), (¬b ∨ f, 1/3)}. This encodes the
generic knowledge embedded in the default rules. Suppose that all we know about the
factual situation under consideration is that “Tweety” is a bird, which is encoded by
(b, 1). Then we apply the possibilistic logic resolution rule [10]
                        (¬a ∨ b, α), (a ∨ c, β) ⊢ (b ∨ c, min(α, β)).
    Then, we can check that K ∪ {(b, 1)} ⊢ (f, 1/3), i.e., we conclude that if all we know
about “Tweety” is that it is a bird, then it flies. If we are told that “Tweety” is in fact a
penguin, encoded by (p, 1), then K ∪ {(b, 1)} ∪ {(p, 1)} ⊢ (⊥, 1/3), which means that
K augmented with the available factual information is now inconsistent (at level 1/3).
    However, the conclusions which can be obtained with a certainty level strictly greater
than the level of inconsistency are safe (the level of inconsistency of a possibilistic logic
base is the greatest weight with which ⊥ can be derived from the base, applying the
resolution rule repeatedly). Namely, here, we have K ∪ {(b, 1)} ∪ {(p, 1)} ⊢ (¬f, 2/3).
Thus, knowing that “Tweety” is a penguin, we now conclude that it does not fly (since
2/3 > level of inconsistency(K ∪ {(b, 1)} ∪ {(p, 1)}) = 1/3). Roughly speaking, the
most specific rules w.r.t. a given context remain above the level of inconsistency.
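The following sketch reproduces these derivations with a naive saturation of the possibilistic
resolution rule; clauses are represented as frozensets of string literals with rational weights,
which is an illustration rather than an efficient prover.

from fractions import Fraction

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def saturate(clauses):
    """Apply the possibilistic resolution rule until no new (clause, weight) appears."""
    derived = dict(clauses)                 # clause -> best (highest) derived weight
    changed = True
    while changed:
        changed = False
        items = list(derived.items())
        for (c1, w1) in items:
            for (c2, w2) in items:
                for lit in c1:
                    if negate(lit) in c2:
                        res = (c1 - {lit}) | (c2 - {negate(lit)})
                        w = min(w1, w2)
                        if w > derived.get(res, Fraction(0)):
                            derived[res] = w
                            changed = True
    return derived

K = {frozenset({"~p", "~f"}): Fraction(2, 3),
     frozenset({"~p", "b"}):  Fraction(2, 3),
     frozenset({"~b", "f"}):  Fraction(1, 3)}

# Tweety is a bird: (f, 1/3) is derived.
d1 = saturate({**K, frozenset({"b"}): Fraction(1)})
print(d1[frozenset({"f"})])                       # 1/3

# Tweety is also a penguin: the base becomes inconsistent at level 1/3 ...
d2 = saturate({**K, frozenset({"b"}): Fraction(1), frozenset({"p"}): Fraction(1)})
print(d2[frozenset()])                            # inconsistency level 1/3
# ... but (~f, 2/3) is derived above that level, so "Tweety does not fly" is safe.
print(d2[frozenset({"~f"})])                      # 2/3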


4     Nonmonotonic desires
As recalled in Section 2, expressing that ϕ is desired to be true can arguably be represented
by ∆(ϕ) > 0. Still, ∆(ϕ) > 0 means that any situation where ϕ is true is somewhat
desirable, which may appear quite strong, since even if ϕ is indeed desired to be true in
general, there may exist quite particular situations where this is no longer the case.
Thus, one may need to make the modeling nonmonotonic.
    This raises the question of defining conditioning for the ∆ function, an issue that
has been only briefly discussed in [3].

Conditional desires Conditioning can also be defined for guaranteed possibility mea-
sures. Since they are decreasing, it should work in a reversed way w.r.t. Π. Namely,
conditioning obeys the following equation
                           ∆(ϕ ∧ ψ) = max(∆(ψ|ϕ), ∆(ϕ)).
     Since ∆(ϕ) = min(∆(ϕ ∧ ψ), ∆(ϕ ∧ ¬ψ)), it expresses that
    – either the minimal desirability level over the models of ϕ is reached on ϕ ∧ ψ, so,
      ¬ψ is preferred in the context ϕ and ∆(ψ|ϕ) should be small enough (e.g. 0, for
      normalisation purposes),
    – or ∆(ϕ) = ∆(ϕ ∧ ¬ψ) < ∆(ϕ ∧ ψ), and then ∆(ψ|ϕ) should be equal to
      ∆(ϕ ∧ ψ).
Thus, we have

              ∆(ψ|ϕ) = ∆(ϕ ∧ ψ)    if ∆(ϕ) < ∆(ϕ ∧ ψ),
              ∆(ψ|ϕ) = 0           if ∆(ϕ ∧ ¬ψ) ≥ ∆(ϕ ∧ ψ) = ∆(ϕ).
The first case means that ϕ and ψ are simultaneously desired, because ϕ is desired and,
in context ϕ, ψ is desired as well.
     It can be checked that if ∆(ψ|ϕ) = 0 then ∆(ϕ ∧ ψ) = max(∆(ϕ), ∆(ψ)), i.e., it
is the case when the agent desires ϕ and ψ simultaneously exactly as much as the most
desired of ϕ and ψ separately.
     Moreover, as recalled in the previous section, we have N (ψ|ϕ) > 0 iff Π(ϕ ∧ ψ) >
Π(ϕ ∧ ¬ψ) iff N (ϕ → ψ) > N (ϕ → ¬ψ),
where → denotes material implication. It expresses that ψ is somewhat certain in con-
text ϕ iff ϕ ∧ ψ is strictly more plausible than ϕ ∧ ¬ψ. An analogous relation holds for
guaranteed possibility:

      ∆(ψ|ϕ) > 0 iff ∆(ϕ ∧ ψ) > ∆(ϕ ∧ ¬ψ) iff ∇(ϕ → ψ) > ∇(ϕ → ¬ψ).

    This points out the fact that the definition of conditioning enforces the following
normalization condition: min(∆(ψ|ϕ), ∆(¬ψ|ϕ)) = 0. It means that ψ and ¬ψ cannot
be simultaneously desired in a given context, and that the fact that ψ is desired
in context ϕ means that it is more desired than ¬ψ in this context.
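A small sketch of this min-based conditioning, with assumed worlds and desirability degrees,
is the following; it also lets one check the normalization condition and the defining equation
on a concrete case.

# Sketch of min-based conditioning for the guaranteed possibility Delta, following
# the case-based definition above; worlds and desirability values are illustrative.
def Delta(delta, A):
    """Guaranteed possibility of a set of worlds A (1 by convention if A is empty)."""
    return min((delta[w] for w in A), default=1.0)

def Delta_cond(delta, psi, phi):
    """Delta(psi | phi): Delta(phi & psi) if Delta(phi) < Delta(phi & psi), else 0."""
    if Delta(delta, phi) < Delta(delta, phi & psi):
        return Delta(delta, phi & psi)
    return 0.0

# Worlds are (phi, psi) truth-value pairs with assumed desirability degrees.
delta = {(1, 1): 0.2, (1, 0): 0.6, (0, 1): 0.9, (0, 0): 0.0}
phi = {w for w in delta if w[0] == 1}
psi = {w for w in delta if w[1] == 1}
not_psi = set(delta) - psi

print(Delta_cond(delta, psi, phi))      # 0.0 : psi is not desired in context phi
print(Delta_cond(delta, not_psi, phi))  # 0.6 : ~psi is desired in context phi
# Normalization: min(Delta(psi|phi), Delta(~psi|phi)) = 0 holds here, and the defining
# equation Delta(phi & psi) = max(Delta(psi|phi), Delta(phi)) can be checked on both cases.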

Nonmonotonic constraints: an example We are now in a position to investigate non-
monotonic constraints expressed by means of ∆ functions. Consider the example in the
abstract, where ϕ and ψ are supposedly logically independent. Assume we have several
apparently conflicting pieces of desire:
    - ϕ and ψ are separately desired (rather than ¬ϕ, ¬ψ), which translates into ∆(ϕ) >
∆(¬ϕ) and ∆(ψ) > ∆(¬ψ);
    - in context ψ, ¬ϕ is desired rather than ϕ, and likewise for context ϕ, which trans-
lates into
                  ∆(¬ϕ ∧ ψ) > ∆(ϕ ∧ ψ); ∆(ϕ ∧ ¬ψ) > ∆(ϕ ∧ ψ).
Letting ∆(ϕ ∧ ψ) = x, ∆(ϕ ∧ ¬ψ) = y, ∆(¬ϕ ∧ ψ) = z, ∆(¬ϕ ∧ ¬ψ) = t, and
since ∆(ϕ) = min(∆(ϕ ∧ ψ), ∆(ϕ ∧ ¬ψ)), we get the system of constraints

                                min(x, y) > min(z, t);
                                min(x, z) > min(y, t);
                                         z > x;
                                         y > x.

    This is equivalent to z > x > t and y > x > t. If we reason assuming that a
situation is never desired beyond what is explicitly claimed (this is also called the
maximal specificity principle), we obtain desirability values z = y > x > t = 0, i.e.,

            ∆(¬ϕ ∧ ψ) = ∆(ϕ ∧ ¬ψ) > ∆(ϕ ∧ ψ) > ∆(¬ϕ ∧ ¬ψ) = 0.

    As can be seen, we solve the apparent conflict resulting from desiring ϕ and ψ
separately but not at the same time, by giving the highest desirability to ¬ϕ ∧ ψ and
ϕ ∧ ¬ψ, while preserving the ones of ϕ and ψ; interestingly, it implies that ¬ϕ ∧ ¬ψ is
not desired at all, while ϕ ∧ ψ remains slightly desirable (otherwise, it would contradict
the claims that ϕ is desirable on the one hand and ψ too on the other hand).
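With illustrative numeric levels 0 < 1 < 2 for t < x < y = z, the announced solution can be
checked directly against the constraints:

# Quick check, with illustrative levels, that the maximally specific solution
# z = y > x > t = 0 satisfies the four constraints of the example.
x, y, z, t = 1, 2, 2, 0   # Delta(phi&psi), Delta(phi&~psi), Delta(~phi&psi), Delta(~phi&~psi)
assert min(x, y) > min(z, t)    # Delta(phi) > Delta(~phi)
assert min(x, z) > min(y, t)    # Delta(psi) > Delta(~psi)
assert z > x and y > x          # ~phi preferred in context psi, ~psi preferred in context phi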
Inferring what is desired Given a set of default desires modelled by D = {∆d (ϕi ∧
ψi ) > ∆d (ϕi ∧ ¬ψi ) | i = 1, . . . , m} and a formula χ describing a context of interest, the
problem is to infer what is desirable in this context.
    Note that the input information is here of two distinct kinds:

 i) conditional desires encoded in terms of a guaranteed possibility measure ∆d , and
ii) a given context described by the set of situations where a certain formula χ is true.

    The inference procedure is then the following:

     Step 1. Compute the maximally specific possibility distribution δd , solution of the
set of constraints D by introducing the minimal number of distinct levels necessary for
satisfying the set of constraints. If the set of constraints is consistent, this qualitative
distribution δd exists and is unique. Compute the corresponding ∆d (ϕi ∧ ψi ) associated
with δd , and encode them as a set T of possibilistic ∆-formulas [ϕi ∧ ψi , ∆d (ϕi ∧ ψi )]
[4].
     Step 2. Move the variables pertaining to the description of the context of each rule to
the weight part of the possibilistic ∆-formulas, thus defining T ∗ . Namely, [ϕi ∧ ψi , αi ]
semantically entails [ψi , min(v(ϕi ), αi )], where v(ϕi ) = 1 if ϕi is true in the context
considered, and v(ϕi ) = 0 otherwise [4].
     Step 3. Project T ∗ on the context χ of interest. Namely, if χ ⊨ ϕi , then v(ϕi ) is set
to 1, otherwise it is set to 0. We thus obtain Tχ∗ .
     Step 4. Compute the level of ∆-inconsistency inc(Tχ∗ ) of Tχ∗ , where inc(Tχ∗ ) =
max{α | Tχ∗ ⊢ [⊤, α]}. For doing it, use the cut rule associated with ∆-formulas [14],
namely
                       [ϕ ∧ ψ, α], [¬ϕ ∧ φ, β] ⊢ [ψ ∧ φ, min(α, β)].
As a consequence, [ϕ, α], [¬ϕ, β] ⊢ [⊤, min(α, β)].
    Step 5. Infer the desires [ψi , αi ] such that αi > inc(Tχ∗ ).

    Example:
    D = {∆d (ϕ) > ∆d (¬ϕ), ∆d (ψ ∧ ¬ϕ) > ∆d (ψ ∧ ϕ)}
    We get T = {[ϕ, β], [ψ ∧ ¬ϕ, α]}, with α > β.
    Then T ∗ = {[ϕ, β], [¬ϕ, min(v(ψ), α)]}
    Assume χ ≡ ⊤; one can only infer [ϕ, β], since T⊤∗ = {[ϕ, β]} (indeed [¬ϕ, 0],
equivalent to ∆d (¬ϕ) ≥ 0, is a trivial formula which can be deleted, since v(ψ) is set to 0).
    Now suppose χ ≡ ψ; then v(ψ) = 1 and Tψ∗ = {[ϕ, β], [¬ϕ, α]}. Since Tψ∗ ⊢
[⊤, min(α, β)] = [⊤, β], only [¬ϕ, α] is safe from inconsistency, as α > inc(Tψ∗ ) = β.
    We thus get the expected nonmonotonic behavior.
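The five steps can be sketched on this example as follows; the representation of ∆-formulas
as (desire literals, context literals, weight) triples, the literal names, and the two symbolic
levels α > β are assumptions of this illustration, not a specification of the procedure.

# Illustrative sketch of the five-step procedure on the example above.
ALPHA, BETA = 2, 1                     # symbolic levels alpha > beta, as produced by Step 1

# Step 1 output T, with context variable "psi" and desire variable "phi":
T = [(frozenset({"phi"}),  frozenset(),        BETA),    # [phi, beta]
     (frozenset({"~phi"}), frozenset({"psi"}), ALPHA)]   # [psi & ~phi, alpha]

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def project(T, context):
    """Steps 2-3: move context literals into the weight and evaluate them in chi."""
    out = []
    for desire, ctx, w in T:
        w = w if ctx <= context else 0         # v(context part) is 1 or 0
        if w > 0:
            out.append((desire, w))
    return out

def inconsistency(formulas):
    """Step 4: best weight of the tautology derivable with the Delta cut rule."""
    derived = {d: w for d, w in formulas}
    changed = True
    while changed:
        changed = False
        for (c1, w1) in list(derived.items()):
            for (c2, w2) in list(derived.items()):
                for lit in c1:
                    if negate(lit) in c2:
                        res = (c1 - {lit}) | (c2 - {negate(lit)})
                        if any(negate(x) in res for x in res):
                            continue           # empty set of worlds, trivially valid
                        w = min(w1, w2)
                        if w > derived.get(res, 0):
                            derived[res] = w
                            changed = True
    return derived.get(frozenset(), 0)

def safe_desires(T, context):
    """Step 5: desires whose weight exceeds the Delta-inconsistency level."""
    Tc = project(T, context)
    inc = inconsistency(Tc)
    return [(d, w) for d, w in Tc if w > inc]

print(safe_desires(T, frozenset()))            # context is the tautology: [phi, beta] is kept
print(safe_desires(T, frozenset({"psi"})))     # context psi: only [~phi, alpha] survives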


5   Concluding remarks

In this note we have outlined an approach for handling the nonmonotonic behavior of
desires, a problem which has apparently not been much considered in AI. Starting from
the fact that desires can be appropriately described by means of guaranteed possibil-
ity measures in the sense of possibility theory, and keeping the lesson of the encoding
of knowledge-oriented nonmonotonic reasoning in the setting of possibility theory, we
have proposed a possibilistic approach to the nonmonotonic handling of desires. This
approach, based on the notion of conditional guaranteed possibility measures, some-
what parallels the nonmonotonic handling of default knowledge under incomplete in-
formation, but with some noticeable differences due to the decreasingness of guaranteed
possibility measures w.r.t. entailment, and the need to distinguish between desires and the
contexts of interest in which we consider them.
    Although the outlined approach is promising, it remains preliminary in many re-
spects. We still need a formal proof that the deduction method in possibilistic ∆-based
logic outlined above does compute the inference of ∆(ϕ|χ) > 0 from the conditional
desire base. A postulate-based approach to reasoning about conditional desires, in a way
that would parallel Kraus, Lehmann and Magidor's postulates for reasoning with default
knowledge, is a natural objective as well. The nonmonotonic handling of graded desires
might also parallel the approach used in [16] for dealing with more or less certain pieces
of default knowledge.


References
 1. Banerjee, M., Dubois, D.: A simple logic for reasoning about incomplete knowledge. Inter-
    national Journal of Approximate Reasoning 55, 639–653 (2014)
 2. Battigalli, P., Siniscalchi, M.: Strong belief and forward induction reasoning. J. of Economic
    Theory 106(2), 356–391 (2002)
 3. Benferhat, S., Dubois, D., Kaci, S., Prade, H.: Bipolar possibilistic representations. In: Dar-
    wiche, A., Friedman, N. (eds.) Proc. 18th Conf. in Uncertainty in Artificial Intelligence (UAI
    ’02), Edmonton, Alberta, Aug. 1-4. pp. 45–52. Morgan Kaufmann (2002)
 4. Benferhat, S., Dubois, D., Kaci, S., Prade, H.: Modeling positive and negative information
    in possibility theory. Int. J. Intell. Syst. 23(10), 1094–1118 (2008)
 5. Benferhat, S., Dubois, D., Prade, H.: Nonmonotonic reasoning, conditional objects and pos-
    sibility theory. Artif. Intell. 92(1-2), 259–276 (1997)
 6. Benferhat, S., Dubois, D., Prade, H.: Practical handling of exception-tainted rules and inde-
    pendence information in possibilistic logic. Applied Intelligence 9, 101–127 (1998)
 7. Brewka, G., Marek, V., Truszczynski, M. (eds.): Nonmonotonic Reasoning. Essays Celebrat-
    ing its 30th Anniversary. Studies in Logic, vol. 31. College Publications (2011)
 8. Casali, A., Godo, L., Sierra, C.: A graded BDI agent model to represent and reason about
    preferences. Artificial Intelligence 175, 1468–1478 (2011)
 9. Dubois, D., Hajek, P., Prade, H.: Knowledge-driven versus data-driven logics. Journal of
    Logic, Language, and Information 9, 65–89 (2000)
10. Dubois, D., Lang, J., Prade, H.: Possibilistic logic. In: Handbook of Logic in Artificial Intel-
    ligence and Logic Programming, Vol. 3, pp. 439–513. Oxford Univ. Press (1994)
11. Dubois, D., Lorini, E., Prade, H.: Bipolar possibility theory as a basis for a logic of desires
    and beliefs. In: Liu, W., Subrahmanian, V.S., Wijsen, J. (eds.) Proc. 7th Int. Conf. on Scalable
    Uncertainty Management (SUM’13), Washington, DC, Sept. 16-18. LNCS, vol. 8078, pp.
    204–218. Springer (2013)
12. Dubois, D., Prade, H.: Epistemic entrenchment and possibilistic logic. Artificial Intelligence
    50, 223–239 (1991)
13. Dubois, D., Prade, H.: Possibility theory: qualitative and quantitative aspects. In: Handbook
    of Defeasible Reasoning and Uncertainty Management Systems, vol. 1, pp. 169–226. Kluwer
    (1998)
14. Dubois, D., Prade, H.: Possibilistic logic: a retrospective and prospective view. Fuzzy Sets
    and Systems 144, 3–23 (2004)
15. Dubois, D., Prade, H., Schockaert, S.: Reasoning about uncertainty and explicit ignorance
    in generalized possibilistic logic. In: Proceedings of the European Conference on Artificial
    Intelligence (2014)
16. Dupin de Saint-Cyr, F., Prade, H.: Handling uncertainty and defeasibility in a possibilistic
    logic setting. Int. J. Approx. Reasoning 49(1), 67–82 (2008)
17. Group Léa Sombé: Besnard, P., Cordier, M.O., Dubois, D., Fariñas del Cerro, L., Froidevaux,
    C., Moinard, Y., Prade, H., Schwind, C., Siegel, P.: Reasoning under Incomplete Informa-
    tion in Artificial Intelligence: A Comparison of Formalisms Using a Single Example. Wiley
    (1990)
18. Hume, D.: A Treatise of Human Nature. Clarendon Press, Oxford (1978)
19. Kraus, S., Lehmann, D., Magidor, M.: Nonmonotonic reasoning, preferential models and
    cumulative logics. Artificial Intelligence 44, 167–207 (1990)
20. Lehmann, D., Magidor, M.: What does a conditional knowledge base entail? Artificial Intel-
    ligence 55, 1–60 (1992)
21. McDaniel, K., Bradley, B.: Desires. Mind 117(466), 267–302 (2008)
22. Spohn, W.: Ordinal conditional functions: a dynamic theory of epistemic states. In: Causation
    in Decision, Belief Change and Statistics, vol. 1, pp. 105–134. Kluwer (1988)