    Towards a Unified Algebraic Framework for
               Non-Monotonicity

                   Nourhan Ehab1 and Haythem O. Ismail2,1
                        1
                        German University in Cairo, Egypt
                 Department of Computer Science and Engineering
                           2
                              Cairo University, Egypt
                     Department of Engineering Mathematics
                  {nourhan.ehab,haythem.ismail}@guc.edu.eg


      Abstract. Tremendous research effort has been dedicated over the years
      to thoroughly investigating non-monotonic reasoning. With the abundance
      of non-monotonic logical formalisms, a unified theory that enables
      comparing the different approaches is much called for. In this paper, we
      present an algebraic graded logic we refer to as LogA G capable of
      encompassing various non-monotonic logics. One of the most widely studied
      logics of uncertainty is possibilistic logic, tailored for non-monotonic
      reasoning under incomplete and partially inconsistent knowledge. We show
      how to encode possibilistic theories as LogA G theories, and prove that
      the LogA G logical consequence relation subsumes its possibilistic
      counterpart. Since possibilistic logic subsumes any non-monotonic inference
      relation satisfying Makinson’s rationality postulates, our results prove
      that LogA G subsumes such inference relations as well while remaining
      immune to the drowning problem.


1   Introduction
Non-monotonic logics are attempts to model commonsense defeasible reason-
ing that allows making plausible, albeit possibly fallible, assumptions about the
world in the absence of complete knowledge. The term “non-monotonic” refers
to the fact that new evidence can retract previous contradicting assumptions.
This contrasts with classical logics where new evidence never invalidates previous
assumptions about the world. Modelling non-monotonicity has been the focus
of extensive studies in the knowledge representation and reasoning community
for many years giving rise to a vast family of non-monotonic formalisms. The
currently existing approaches to representing non-monotonicity can be classified
into two orthogonal families: fixed point logics and model preference logics [4].
Fixed point logics define a fixed point operator by which possibly multiple sets
of consistent beliefs can be constructed. Typical non-monotonic logics taking the
fixed point approach are Reiter’s default logic [25] and Moore’s autoepistemic
logic [22, 19]. Model preference logics, on the other hand, define non-monotonic
logical inference relations with respect to selected preferred models of the world.
Typical model preference logics are probabilistic logic [1, 24], McCarthy’s cir-
cumscription [20], system P proposed by Kraus, Lehmann and Magidor [15],

and Pearl’s system Z [23]. The wide diversity of all of these logics, in addition
to their non-standard semantics, has rendered the task of gaining a good un-
derstanding of them quite hard. For this reason, a unified theory that enables
comparing the different approaches is much called for. The purpose of this paper
is to present an algebraic graded logic we refer to as LogA G [14, 10, 11] capable
of encompassing the previously-mentioned non-monotonic logics, providing a
general framework for non-monotonicity.
    Another widely studied approach to non-monotonicity, developed indepen-
dently from the mainstream of non-monotonic formalisms research, is possi-
bilistic logic [7]. In possibilistic logic, propositions are associated with a weight
representing the degree to which these propositions are believed to be true.
The possibilistic logical consequence relation is non-monotonic and can be ex-
pressed by Shoham’s preferential entailment [27, 3]. In fact, in [3] it is proved
that any non-monotonic inference relation satisfying some widely accepted ra-
tionality postulates according to Makinson [18] can be encoded in possibilistic
logic. It was also shown how to encode default rules in possibilistic logic to yield
the same logical consequences as the rational closure of system P, probabilistic
logic, and system Z, proving that possibilistic logic offers a unified understanding
of non-monotonic logical consequence relations.
    In this paper, we will show how to encode possibilistic theories as LogA G
theories, and prove that the LogA G logical consequence relation subsumes its
possibilistic counterpart. In doing so, we prove that LogA G captures any non-
monotonic inference relation satisfying Makinson’s rationality postulates as well,
while staying immune to the drowning problem which plagues all logics based
on such relations [2, 3]. This acts as the first step towards proving that LogA G
can be regarded as a unified framework for non-monotonic formalisms. We leave
relating LogA G to default logic, circumscription, and autoepistemic logic to an
extended version of this paper.
    In Section 2, we review LogA G, describing its syntax and semantics. Section 3
briefly reviews possibilistic logic. In Section 4, the main results of this paper,
proving that LogA G subsumes possibilistic logic, are presented. Finally, some
concluding remarks are outlined in Section 5.


2   LogA G

LogA G is a graded logic for reasoning with uncertain beliefs. “Log” stands for
logic, “A” for algebraic, and “G” for grades. In LogA G, a classical logical for-
mula could be associated with a grade representing a measure of its uncertainty.
Non-graded formulas are taken to be certain. In this way, LogA G is a logic for
reasoning about graded propositions. LogA G is algebraic in that it is a language
of only terms, some of which denote propositions. Both propositions and their
grades are taken as individuals in the LogA G ontology. While some multimodal
logics such as [6, 21] may be used to express grading propositions too, unlike in
LogA G, the grades themselves are embedded in the modal operators and
are not amenable to reasoning and quantification. This makes LogA G a quite

expressive language that is still intuitive and very similar in syntax to first-order
logic. LogA G is demonstrably useful in commonsense reasoning including default
reasoning, reasoning with information provided by a chain of sources of varying
degrees of trust, and reasoning with paradoxical sentences as discussed in [10,
14].
    While most of the graded logics we are aware of employ non-classical modal
logic semantics by assigning grades to possible worlds [8], LogA G is a non-modal
logic with classical notions of worlds and truth values. This is not to say that
LogA G is a classical logic, but it is closer in spirit to classical non-monotonic
logics such as default logic and circumscription. Following these formalisms,
LogA G assumes a classical notion of logical consequence on top of which a more
restrictive, non-classical relation is defined selecting only a subset of the classical
models. In defining this relation we take the algebraic, rather than the modal,
route. The remainder of this section is dedicated to reviewing the syntax and
semantics of LogA G. A sound and complete proof theory for LogA G is presented
in [14, 10]. In [14], it was proven that LogA G is a stable and well-behaved logic
observing Makinson’s properties of reflexivity, cut, and cautious monotony.

2.1   LogA G Syntax
LogA G consists of algebraically constructed terms from function symbols. There
are no sentences in LogA G; instead, we use terms of a distinguished syntactic
type to denote propositions. Propositions are included as first-class individu-
als in the LogA G ontology and are structured in a Boolean algebra giving us
all standard truth conditions and classical notions of consequence and validity.
Additionally, grades are introduced as first-class individuals in the ontology. As
a result, propositions about graded propositions can be constructed, which are
themselves recursively gradable.
     A LogA G language is a many-sorted language composed of a set of terms
partitioned into three base sorts: σP is a set of terms denoting propositions, σD
is a set of terms denoting grades, and σI is a set of terms denoting anything
else. A LogA G alphabet includes a non-empty, countable set of constant and
function symbols, each having a syntactic sort from the set σ = {σP , σD , σI } ∪
{τ1 → τ2 | τ1 ∈ {σP , σD , σI } and τ2 ∈ σ} of syntactic sorts. Intuitively,
τ1 → τ2 is the syntactic sort of function symbols that take a single argument
of sort σP , σD , or σI and produce a functional term of sort τ2 3 . In addition, an
alphabet includes a countably infinite set of variables of the three base sorts; a
set of syncategorematic symbols including the comma, various matching pairs of
brackets and parentheses, and the symbol ∀; and a set of logical symbols defined
as the union of the following sets: {¬} ⊆ σP → σP , {∧, ∨} ⊆ σP → σP →
σP , {≺, ≐} ⊆ σD → σD → σP , and {G} ⊆ σP → σD → σP . Terms
involving ⇒ 4 , ⇔, and ∃ can always be expressed in terms of the above logical
operators and ∀.
3
  Given the restriction of the first argument of function symbols to base sorts, LogA G
  is, in a sense, a first-order language.
4
  Throughout this paper we will use ⇒ to denote material implication.


2.2    From Syntax to Semantics

A key element in the semantics of LogA G is the notion of a LogA G structure.

Definition 1. A LogA G structure is a quintuple
S = ⟨D, A, g, <, e⟩, where

 – D, the domain of discourse, is a set with two disjoint, non-empty, countable
   subsets: a set of propositions P, and a set of grades G.
 – A = ⟨P, +, ·, −, ⊥, ⊤⟩ is a complete, non-degenerate Boolean algebra.
 – g : P × G → P is a grading function.
 – < : G × G → P is an ordering function.
 – e : G × G → {⊥, ⊤} is an equality function, where for every g1 , g2 ∈ G:
   e(g1 , g2 ) = ⊤ if g1 = g2 , and e(g1 , g2 ) = ⊥ otherwise.

    A valuation V of a LogA G language is a triple ⟨S, Vf , Vx ⟩, where S is a
LogA G structure, Vf is a function that assigns to each function symbol an
appropriate function on D, and Vx is a function mapping each variable to a
corresponding element of the appropriate block of D. A valuation V = ⟨S, Vf , Vx ⟩
is called a natural valuation if P is made up of three disjoint sets: the set of
base propositions PB forming a subalgebra that does not contain any grading
proposition, the set of grading propositions PG = Range(g), and the set of all
other propositions P \ (PB ∪ PG ); and Vf maps all propositional terms that do
not contain G to PB , all grading propositional terms to propositions in PG , and
all other propositional terms to P \ (PB ∪ PG ). An interpretation of LogA G terms
is given by a function [[·]]V .
Definition 2. Let L be a LogA G language and let V be a valuation of L. An
interpretation of the terms of L is given by a function [[·]]V :

 – [[x]]V = Vx (x), for a variable x
 – [[c]]V = Vf (c), for a constant c
 – [[f (t1 , . . . , tn )]]V = Vf (f )([[t1 ]]V , . . . , [[tn ]]V ), for an n-adic (n ≥ 1) function
   symbol f
 – [[(t1 ∧ t2 )]]V = [[t1 ]]V · [[t2 ]]V
 – [[(t1 ∨ t2 )]]V = [[t1 ]]V + [[t2 ]]V
 – [[¬t]]V = −[[t]]V
 – [[∀x(t)]]V = ∏a∈D [[t]]V[a/x]
 – [[G(t1 , t2 )]]V = g([[t1 ]]V , [[t2 ]]V )
 – [[t1 ≺ t2 ]]V = [[t1 ]]V < [[t2 ]]V
 – [[t1 ≐ t2 ]]V = e([[t1 ]]V , [[t2 ]]V )


2.3    Beyond Classical Logical Consequence

We define logical consequence using the familiar notion of filters from Boolean
algebra [26].

Definition 3. A filter of a Boolean algebra A = ⟨P, +, ·, −, ⊥, ⊤⟩ is a subset F
of P such that: (1) ⊤ ∈ F ; (2) if a, b ∈ F , then a · b ∈ F ; and (3) if a ∈ F and
a ≤ b, then b ∈ F .
   A propositional term φ is a logical consequence of a set of propositional terms
Γ if it is a member of the filter of the interpretation of Γ, denoted F ([[Γ]]V ).

Definition 4. Let L be a LogA G language. For every φ ∈ σP and Γ ⊆ σP ,
φ is a logical consequence of Γ, denoted Γ |= φ, if, for every L-valuation V,
[[φ]]V ∈ F ([[Γ]]V ), where [[Γ]]V = {[[γ]]V | γ ∈ Γ}.

    Unfortunately, the definition of logical consequence presented in the previous
section cannot address uncertain reasoning with graded propositions. To see that,
consider the following situation. You see a bird from far away that looks a lot
like a penguin. You know that penguins do not fly. To make sure that what
you see is indeed a penguin, you ask your brother who tells you that this bird
must not be a penguin since your sister told him that she saw the same bird
flying. This situation can be represented in LogA G by a set of propositions Q as
shown in Figure 1, where p denotes that the bird is a penguin, and f denotes that
the bird flies. For ease of readability of Figure 1, we write ¬f instead of −f
and p ⇒ ¬f instead of −p + −f . Since you are uncertain about whether the bird
you see is a penguin, this is represented as a graded proposition g(p, d1) where
d1 is your uncertainty degree in what you see. What your brother tells you is
represented by the grading chain g(g(f, d2), d3) where d2 represents how much
you trust your brother, and d3 represents how much you trust your sister. Now,
consider an agent reasoning with the set Q. Initially, it would make sense for the
agent to be able to conclude p even if p is uncertain (and, hence, graded) since
it has no reason to believe ¬p. The filter F (Q), however, contains the classical
logical consequences of Q, but will never contain the graded proposition p. For
this reason, we extend our classical notion of filters into a more liberal notion of
graded filters to enable the agent to conclude, in addition to the classical logical
consequences of Q, propositions that are graded in Q (like p) or follow from
graded propositions in Q (like ¬f ). This should be done without introducing
inconsistencies. Due to nested grading, graded filters come in degrees depending
on the depth of nesting of the admitted graded propositions. In Figure 1, F 1 (Q)
is the graded filter of degree 1. F 1 (Q) contains everything in F (Q) in addition to
the nested graded propositions at depth 1, p and g(f, d2). ¬f is also admitted
to F 1 (Q) since it follows classically from {p, p ⇒ ¬f }. Hence, at degree 1, we
end up believing that the bird is a penguin that does not fly. To compute the
graded filter of degree 2, F 2 (Q), we take everything in F 1 (Q) and try to add
the graded proposition f at depth 2. The problem is, once we do that, we have
a contradiction with ¬f (we now believe that the bird flies and does not fly at
the same time). To resolve the contradiction, we admit to F 2 (Q) either p (and
consequently ¬f ) or f . In deciding which of p or f to give up, we appeal to
their grades. The grade of p is d1, and f is graded in a grading chain containing
d2 and d3. To get a fused grade for f , we will combine both d2 and d3 using an

appropriate fusion operator. If d1 is less than the fused grade of f , p will not
be admitted to the graded filter, together with its consequence ¬f . Otherwise, f
will not be admitted, and p and ¬f will remain. If we try to compute F 3 (Q),
we get everything in F 2 (Q) reaching a fixed point.




                               Fig. 1. Graded Filters

    In general, the elements of F i (Q) will be referred to as the graded conse-
quences at depth i. The rest of this section is dedicated to formally defining
graded filters together with our graded consequence relation based on graded fil-
ters. In the sequel, for every p ∈ P and g ∈ G, g(p, g) will be taken to represent a
grading proposition that grades p. Moreover, if g(p, g) ∈ Q ⊆ P, then p is graded
in Q. The set of p graders in Q is defined to be the set Graders(p, Q) = {q | q ∈ Q
and q grades p}. Throughout, a LogA G structure S = ⟨D, A, g, <, e⟩ is assumed.
    As a building step towards formalizing the notion of a graded filter, the
structure of graded propositions should be carefully specified. For this reason,
the following notion of an embedded proposition is defined.
Definition 5. Let Q ⊆ P. A proposition p ∈ P is embedded in Q if (i) p ∈ Q,
or (ii) if, for some g ∈ G, g(p, g) is embedded in Q. Henceforth, let E(Q) = {p | p
is embedded in Q}.
   Since a graded proposition p might be embedded at any depth n ∈ N, the
degree of embedding of a graded proposition p is defined as follows.

Definition 6. For Q ⊆ P, let the degree of embedding of p in Q be a function
δQ : E(Q) → N, where

1. if p ∈ Q, then δQ (p) = 0; and
2. if p ∉ Q, then δQ (p) = e + 1, where e = minq∈Graders(p,E(Q)) {δQ (q)}.

For notational convenience, we let the set of embedded propositions at depth n
be E n (Q) = {p ∈ E(Q) | δQ (p) ≤ n}, for every n ∈ N.
Example 1. Consider Q = {g(g(g(p, 2), 3), 4), g(g(g(r, 2), 3), 5),
g(g(g(p, 2), 3), 5), g(g(p, 2), 6), q}.

 – E 0 (Q) = Q.
 – E 1 (Q) = E 0 (Q) ∪ {g(g(p, 2), 3), g(g(r, 2), 3), g(p, 2)}.
 – E 2 (Q) = E 1 (Q) ∪ {g(p, 2), g(r, 2), p}.
 – E 3 (Q) = E 2 (Q) ∪ {r}.
 – E(Q) = E 3 (Q).
 – The degree of embedding δQ (q) = 0.
 – The degree of embedding δQ (p) = 2 (since the length of the minimum chain
   grading p is 2).
 – The degree of embedding δQ (r) = 3.
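The computation in Example 1 can be replayed mechanically. In the sketch below (our own encoding, not the authors'), a grading proposition g(p, g) is represented as a tuple ('G', p, g) and ungraded atoms as strings; degree implements Definition 6 by recursing on the minimal chain grading a proposition:

```python
# A sketch of Definitions 5 and 6 on the set Q of Example 1.
def is_grading(p):
    """A grading proposition is encoded as the tuple ('G', graded, grade)."""
    return isinstance(p, tuple) and p[0] == 'G'

def embedded(Q):
    """E(Q): everything in Q plus whatever its grading chains embed."""
    E = set(Q)
    frontier = list(Q)
    while frontier:
        p = frontier.pop()
        if is_grading(p) and p[1] not in E:
            E.add(p[1]); frontier.append(p[1])
    return E

def degree(p, Q):
    """delta_Q(p): 0 if p is in Q, else 1 + the least degree of a grader of p."""
    if p in Q:
        return 0
    graders = [q for q in embedded(Q) if is_grading(q) and q[1] == p]
    return 1 + min(degree(q, Q) for q in graders)

Q = {('G', ('G', ('G', 'p', 2), 3), 4), ('G', ('G', ('G', 'r', 2), 3), 5),
     ('G', ('G', ('G', 'p', 2), 3), 5), ('G', ('G', 'p', 2), 6), 'q'}
print(degree('q', Q), degree('p', Q), degree('r', Q))  # → 0 2 3
```

The output matches the degrees listed in the example: p has degree 2 because g(g(p, 2), 6) gives it a shorter grading chain than the triply-nested one.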
    The key to defining graded filters is the intuition that the set of consequences
of a proposition set Q may be further enriched by telescoping Q and accepting
some of the propositions graded therein. For this, we need to define (i) the
process of telescoping, which is a step-wise process that considers propositions at
increasing grading depths, and (ii) a criterion for accepting graded propositions
which, as mentioned before, depends on the grades of said propositions. Since
the nesting of grading chains is permissible in LogA G, it is necessary to compute
the fused grade of a graded proposition p in a chain C to decide whether it will
be accepted or not. The fusion of grades in a chain is done according to an
operator ⊗. Further, since a graded proposition p might be graded by more than
one grading chain, we define the notion of the fused grade of p across all the
chains grading it by an operator ⊕.
Definition 7. Let S be a LogA G structure with a depth- and fan-out-bounded
P 5 . A telescoping structure for S is a quadruple T = ⟨T , O, ⊗, ⊕⟩, where

 – T ⊆ P, referred to as the top theory;
 – O is an ultrafilter of the subalgebra induced by Range(<) (an ultrafilter is a
   maximal filter with respect to ⊆ not including ⊥) [26];
 – ⊗ : ⋃i≥1 G i → G; and ⊕ : ⋃i≥1 G i → G.
    Recasting the familiar notion of a kernel of a belief base [13] into the context
of LogA G structures, we say that a ⊥-kernel of Q ⊆ P is a subset-minimal in-
consistent set X ⊆ Q such that F (E(F (X ))) is improper (= P), where E(F (X ))
is the set of embedded graded propositions in the filter of X . Let Q ⊥⊥ be the
set of Q kernels that entail ⊥. A proposition p ∈ X survives X in T if p is not
the weakest proposition (with the least grade) in X . In what follows, the fused
grade of a proposition p in Q ⊆ P according to a telescoping structure T will be
referred to as fT (p, Q).
Definition 8. For a telescoping structure T = ⟨T , O, ⊗, ⊕⟩ and a fan-in-bounded 6
Q ⊆ P, if X ⊆ Q, then p ∈ X survives X given T if

1. p is ungraded in Q; or
2. there is some ungraded q ∈ X such that q ∉ F (T ); or
3. there is some graded q ∈ X such that q ∉ F (T ) and
   (fT (q, Q) < fT (p, Q)) ∈ O.

5
  P is depth-bounded if every grading chain has at most d distinct grading propositions
  and is fan-out-bounded if every grading proposition grades at most fout propositions,
  where d, fout ∈ N [10].
6
  Q is fan-in-bounded if every graded proposition is graded by at most fin grading
  propositions, where fin ∈ N [10].

The set of kernel survivors of Q given T is the set

       κ(Q, T) = {p ∈ Q | if p ∈ X ∈ Q ⊥⊥ then p survives X given T}.

   The notion of a proposition p being supported in Q is defined as follows.

Definition 9. Let Q, T ⊆ P. We say that p is supported in Q given T if

1. p ∈ F (T ); or
2. there is a grading chain ⟨q0 , q1 , . . . , qn ⟩ of p in Q with q0 ∈ F (R), where every
   member of R is supported in Q.

The set of propositions supported in Q given T is denoted by &(Q, T ).

    The T-induced telescoping of Q is defined as the set of propositions supported
given T in the set of kernel survivors of E 1 (F (Q)).

Definition 10. Let T be a telescoping structure for S. If Q ⊂ P such that
E 1 (F (Q)) is fan-in-bounded, then the T-induced telescoping of Q is given by

                          τT (Q) = &(κ(E 1 (F (Q)), T), T ).

Observation 1 If Q is consistent (F (Q) ≠ P), then τT (Q) = E 1 (F (Q)).

    If F (Q) has finitely-many grading propositions, then τT (Q) is defined for
every telescoping structure T. Hence, provided that the right-hand side is defined,
let
                        τTn (Q) = Q                  if n = 0
                        τTn (Q) = τT (τTn−1 (Q))     otherwise

Definition 11. A graded filter of degree n of a top theory T , denoted F n (T), is
defined as the filter of the T-induced telescoping of T of degree n: F n (T) = F (τTn (T )).

    In the following example, we go back to the situation introduced at the
beginning of this section in Figure 1, and show how the formal construction
of the graded filters matches the intuitions pointed out earlier.

Example 2. Consider Q = {−p + −f, g(p, 2), g(g(f, 2), 3)} and T = ⟨Q, O, ⊗, ⊕⟩
where ⊕ = max and ⊗ = mean. In what follows, let τTn be an abbreviation for
τTn (Q).

 – τT0 = Q
 – τT1 = τT (τT0 ) = &(κ(E 1 (F (Q)), T), T )
   F (Q) = Q ∪ {(−p + −f ) · g(p, 2), g(g(f, 2), 3) + g(p, 2), . . .}
   E 1 (F (Q)) = F (Q) ∪ {p, g(f, 2)}
   κ(E 1 (F (Q)), T) = E 1 (F (Q))
   τT1 = &(κ(E 1 (F (Q)), T), T ) = E 1 (F (Q))
   F 1 (T) = F (τT1 )
   Upon telescoping to degree 1, there are no contradictions in E 1 (F (Q)) (no
   ⊥-kernel X ⊆ E 1 (F (Q))). Hence, everything in E 1 (F (Q)) survives telescop-
   ing and is supported. At level 1, we believe that the bird we saw is indeed a
   penguin.
 – τT2 = τT (τT1 )
   E 1 (F (τT1 )) = F (τT1 ) ∪ {f }
   κ(E 1 (F (τT1 )), T) = E 1 (F (τT1 )) − {p}
   τT2 = &(κ(E 1 (F (τT1 )), T), T ) = κ(E 1 (F (τT1 )), T) − {¬f }
   F 2 (T) = F (τT2 )
   Upon telescoping to degree 2, there are two ⊥-kernels {f, ¬f } and {p, p ⇒
   ¬f, f }. ¬f survives the first kernel as it is not graded in Q. f survives the
   first kernel as well, as it is the only graded proposition in the kernel with
   another member ¬f ∉ F (Q). p does not survive the second kernel as the
   kernel contains another graded proposition f and the grade of p (2) is less
   than the fused grade of f (mean(2, 3) = 2.5). Accordingly, ¬f loses its
   support and is not supported in the set of kernel survivors. The graded filter
   of degree 2 does not contain p or ¬f . At level 2, we start taking into account
   the information our brother told us. Since our combined trust in our brother
   and sister is higher than our trust in what we saw, we end up not believing
   that the bird we saw is a penguin since we believe that it flies.
 – τT3 = τT (τT2 )
   τT3 = &(κ(E 1 (F (τT2 )), T), T ) = κ(E 1 (F (τT2 )), T)
   F 3 (T) = F (τT3 ) = F 2 (T), reaching a fixed point.
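The decisive comparison in this example is purely numeric. Assuming, as the example does, that ⊗ = mean fuses grades along a single chain and ⊕ = max fuses across chains, the fused grade of f and its comparison with the grade of p can be sketched as:

```python
from statistics import mean

# Grades along the single chain grading f in Example 2: g(g(f, 2), 3).
chains_grading_f = [[2, 3]]
fused_f = max(mean(c) for c in chains_grading_f)  # across-chain fusion (max)
grade_p = 2                                       # the grade of p in g(p, 2)
# p fails to survive the kernel exactly when its grade is the least: 2 < 2.5.
print(fused_f, grade_p < fused_f)                 # → 2.5 True
```

With a different canon, say ⊗ = min, the fused grade of f would be 2 and p would no longer be strictly weaker, illustrating how the choice of fusion operators steers which beliefs survive.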

    We use graded filters to define graded consequence as follows. Given a LogA G
theory T ⊆ σP and a valuation V = ⟨S, Vf , Vx ⟩, let V(T) = {[[p]]V | p ∈ T}.
Further, for a LogA G structure S, an S grading canon is a triple C = ⟨⊗, ⊕, n⟩
where n ∈ N and ⊗ and ⊕ are as indicated in Definition 7.

Definition 12. Let T be a LogA G theory. For every p ∈ σP , valuation V =
⟨S, Vf , Vx ⟩ where S has a set P which is depth- and fan-out-bounded, and S
grading canon C = ⟨⊗, ⊕, n⟩, p is a graded consequence of T with respect to C,
denoted T |≈C p, if F n (T) is defined and [[p]]V ∈ F n (T) for every telescoping
structure T = ⟨V(T), O, ⊗, ⊕⟩ for S where O extends F (V(T) ∩ Range(<)) 7 .

    It is worth noting that |≈C reduces to |= if n = 0 or if F (E(V(T))) does not
contain any grading propositions. However, unlike |=, |≈C is non-monotonic in
general.
7
  An ultrafilter U extends a filter F if F ⊆ U .


3   Possibilistic Logic
Possibilistic logic is a weighted logic that handles uncertainty in a qualitative
way by associating certainty levels to classical logic formulas [7]. At the seman-
tic level of possibilistic logic, an epistemic state is represented by a possibility
distribution π assigning to each world ω in the set of possible worlds Ω a pos-
sibility degree in the interval [0, 1]. π(ω) = 0 means that the interpretation ω is
impossible, and π(ω) = 1 means that nothing prevents the interpretation ω
from being possible. The less π(ω), the less possible ω is.
    Given the possibility distribution π, the following two measures can be de-
fined on the formulas in the language.
 – The possibility degree Ππ (φ) = max{π(ω) | ω ∈ [φ]} (where [φ] is the set of
   models of φ) evaluates the extent to which φ is consistent with the available
   information expressed by the possibility distribution π.
 – The necessity degree Nπ (φ) = 1 − Ππ (¬φ) evaluates the extent to which φ
   is entailed by the available information.
   The semantic determination of a belief set is defined as the set of formulas
whose possibility measures are greater than the possibility of their negations.
                          BS(π) = {φ | Ππ (φ) > Ππ (¬φ)}
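As a concrete illustration (the possibility distribution below is a toy example of our own, not from the paper), both measures and the semantic belief-set test can be computed directly over an explicit set of worlds:

```python
# Worlds are truth assignments to the atoms p and f; pi maps each world
# to its possibility degree in [0, 1] (normalized: some world gets 1.0).
worlds = [(p, f) for p in (False, True) for f in (False, True)]
pi = {worlds[0]: 0.2, worlds[1]: 1.0, worlds[2]: 0.7, worlds[3]: 0.4}

def possibility(phi):
    """Pi_pi(phi): the maximal possibility over the models of phi."""
    return max((pi[w] for w in worlds if phi(w)), default=0.0)

def necessity(phi):
    """N_pi(phi) = 1 - Pi_pi(not phi)."""
    return 1 - possibility(lambda w: not phi(w))

is_p = lambda w: w[0]                      # the formula "p"
print(possibility(is_p), necessity(is_p))  # → 0.7 0.0
# Semantic belief-set membership: is Pi(p) > Pi(not p)?
print(possibility(is_p) > possibility(lambda w: not is_p(w)))  # → False
```

Here p has possibility 0.7 but necessity 0, so p is not in BS(π): the most possible world is one where p is false.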
    An epistemic state can also be represented syntactically in possibilistic logic.
On the syntactic level, a possibilistic knowledge base is defined as a finite set
of weighted formulas Σ = {(φi , ai )} where ai is a lower bound on the necessity
measure of φi . A possibilistic knowledge base is said to be consistent if the
classical knowledge base, obtained by omitting the weights, is consistent. Each
possibilistic knowledge base is associated with an inconsistency degree which is
defined as follows.
           Inc(Σ) = 0                                    if Σ is consistent
           Inc(Σ) = max{a | Σa is inconsistent}          otherwise
where Σa is the set of formulas in Σ with weights greater than or equal to a.
    The syntactic computation of a belief set induced by Σ is defined as the set
of logical consequences of the formulas in Σ with weights strictly higher than
Inc(Σ).
                          BS(Σ) = {φ | Σ>Inc(Σ) ⊨ φ}
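The inconsistency degree and the syntactic belief set can be computed by brute force over truth assignments. The sketch below (our own encoding, not from the paper: a formula is a Python function over an assignment dictionary) runs a small penguin base and also exhibits how the low-weight formula b ⇒ w falls below the inconsistency degree and is lost, the drowning effect discussed in Section 4:

```python
from itertools import product

# Assignments are dicts over the atoms; a formula is a function
# from an assignment to a bool (this encoding is the sketch's own).
ATOMS = ('p', 'b', 'f', 'w')

def models(formulas):
    """All assignments satisfying every formula in the list."""
    return [dict(zip(ATOMS, v))
            for v in product((False, True), repeat=len(ATOMS))
            if all(f(dict(zip(ATOMS, v))) for f in formulas)]

def inc(sigma):
    """Inc: the largest weight a whose upper slice is inconsistent, else 0."""
    for a in sorted({w for _, w in sigma}, reverse=True):
        if not models([f for f, w in sigma if w >= a]):
            return a
    return 0

def believes(sigma, phi):
    """phi is in BS(sigma) iff the slice strictly above Inc entails phi."""
    return all(phi(m) for m in models([f for f, w in sigma if w > inc(sigma)]))

sigma = [(lambda m: m['p'], 1.0),                       # p
         (lambda m: not m['p'] or m['b'], 1.0),         # p => b
         (lambda m: not m['p'] or not m['f'], 0.6),     # p => ~f
         (lambda m: not m['b'] or m['f'], 0.3),         # b => f
         (lambda m: not m['b'] or m['w'], 0.1)]         # b => w
print(inc(sigma))                                       # → 0.3
print(believes(sigma, lambda m: not m['f']))            # → True
print(believes(sigma, lambda m: not m['b'] or m['w']))  # → False (drowned)
```

Everything at or below weight 0.3 is discarded, so b ⇒ w is not believed even though it plays no part in the inconsistency.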

4   Representing Possibilistic Logic in LogA G
In this section, we show how to encode possibilistic theories as LogA G theories,
and prove that LogA G subsumes possibilistic logic. Moreover, we discuss the
drowning problem that plagues possibilistic logic and show that LogA G does not
suffer from the same problem.
    In what follows, for any possibilistic knowledge base ΣP L , let ν(a, ΣP L ) =
|A|, where A = {ai | (φi , ai ) ∈ ΣP L and ai > a}. Intuitively, ν(a, ΣP L ) denotes
the number of distinct weights appearing in a possibilistic knowledge base ΣP L
that are greater than a.

Definition 13. Let ΣP L be a possibilistic knowledge base. The corresponding
LogA G theory is defined as ΣLogA G = {chain(φi , d) | (φi , ai ) ∈ ΣP L and d =
ν(ai , ΣP L )}, where chain is a function mapping a possibilistic formula φ (without
its weight) and d to a LogA G term denoting a grading proposition as follows:

                  chain(φ, d) = G(φ, 1)                     if d = 0
                  chain(φ, d) = G(chain(φ, d − 1), 1)       otherwise

Example 3. Consider ΣP L = {(p, 1), (p ⇒ b, 1), (p ⇒ ¬f, 0.6), (b ⇒ f, 0.3), (b ⇒
w, 0.1)} where b denotes “bird”, p denotes “penguin”, f denotes “flies”, and w
denotes “has wings”. The following table shows the corresponding LogA G theory
terms constructed according to Definition 13.

                ( φi , ai )   ν(ai , ΣP L )      ΣLogA G term
                 (p, 1)            0                G(p, 1)
              (p ⇒ b, 1)           0              G(p ⇒ b, 1)
            (p ⇒ ¬f, 0.6)          1           G(G(p ⇒ ¬f, 1), 1)
             (b ⇒ f, 0.3)          2         G(G(G(b ⇒ f, 1), 1), 1)
             (b ⇒ w, 0.1)          3       G(G(G(G(b ⇒ w, 1), 1), 1), 1)
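Definition 13 and the table above can be reproduced in a few lines. In this sketch the function names nu and chain, and the string encoding of formulas, are our own; a grading proposition G(φ, 1) is represented as the tuple ('G', phi, 1):

```python
def nu(a, sigma):
    """Number of distinct weights in sigma strictly greater than a."""
    return len({ai for _, ai in sigma if ai > a})

def chain(phi, d):
    """Definition 13: wrap phi in d + 1 nested grading propositions, all graded 1."""
    return ('G', phi, 1) if d == 0 else ('G', chain(phi, d - 1), 1)

sigma = [('p', 1), ('p=>b', 1), ('p=>~f', 0.6), ('b=>f', 0.3), ('b=>w', 0.1)]
for phi, a in sigma:
    print(phi, nu(a, sigma), chain(phi, nu(a, sigma)))
```

Running the loop prints exactly the depths 0, 0, 1, 2, 3 of the table: the lower the possibilistic weight, the deeper the formula is buried in its grading chain.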

Observation 2 Let ΣP L be a possibilistic knowledge base with a corresponding
LogA G theory ΣLogA G , and Q = {[[φ]]V | φ ∈ ΣLogA G } where V is a natural
valuation. For any weighted formula (φi , ai ) ∈ ΣP L , the degree of embedding
of the interpretation of φi in Q, δQ ([[φi ]]V ), is ν(ai , ΣP L ) + 1.

    It is worth noting that the corresponding ΣLogA G theory will always be con-
sistent using this construction. The propositions in the corresponding LogA G
theory can be associated with any grade, as we rely only on the degree of em-
bedding to reflect the weight of the proposition in the original possibilistic
knowledge base. For simplicity, we just associate the grade 1 with all graded
propositions in ΣLogA G . It follows directly from Observation 2 that the interpre-
tations of all formulas with the same weight in ΣP L have the same embedding
degree in Q. The lower the weight ai of a formula φi in ΣP L , the higher the
embedding degree of [[φi ]]V in Q.

Observation 3 Let ΣP L be a possibilistic knowledge base with a corresponding
LogA G theory ΣLogA G , and Q = {[[φ]]V | φ ∈ ΣLogA G } where V is a natural
valuation. The graded filter F n (T) of degree n = ν(Inc(ΣP L ), ΣP L ), for any
telescoping structure T of Q, is identical to F (E n (Q)).

Proof. We prove this by induction on n.
Base case (n = 0): By Definition 11, F 0 (T) = F (τT0 (Q)) = F (Q) = F (E 0 (Q))
since E 0 (Q) = Q.
Induction Hypothesis: For n ≥ 0, suppose that F n (T) = F (E n (Q)).
Induction Step: F n+1 (T) = F (τTn+1 (Q)) = F (τT (τTn (Q)))
= F (&(κ(E 1 (F (τTn (Q))), T), T )) = F (&(κ(E 1 (F n (T)), T), T )). By the induction
hypothesis, F n+1 (T) = F (&(κ(E 1 (F (E n (Q))), T), T )). Since Q is consistent,
given it is the set of interpretations of ΣLogA G which is consistent according to
how it was constructed, F n (T) must be consistent according to Definition 11.
Consequently, F (E n (Q)) must be consistent too. Since E n (Q) contains only the
propositions in Q (which are only grading propositions according to how ΣLogA G
was constructed) in addition to the propositions with embedding degrees less
than or equal to n, F (E n (Q)) contains nothing other than the propositions
in E n (Q) in addition to their trivial consequences. Hence, E 1 (F (E n (Q))) =
F (E 1 (E n (Q))) = F (E n+1 (Q)), and F n+1 (T) = F (&(κ(F (E n+1 (Q)), T), T )).
Let (φ, a) ∈ ΣP L where Inc(ΣP L ) = a. According to Observation 2, δQ ([[φ]]V ) =
ν(Inc(ΣP L ), ΣP L ) + 1 > n + 1 (since n + 1 ≤ ν(Inc(ΣP L ), ΣP L )). It fol-
lows that E n+1 (Q) is consistent. By Observation 1, &(κ(F (E n+1 (Q)), T), T ) =
F (E n+1 (Q)). Therefore, F n+1 (T) = F (F (E n+1 (Q))) = F (E n+1 (Q)).

Theorem 1. Let Σ_PL be a possibilistic knowledge base and Σ_LogAG be its corresponding LogA G theory. For every non-grading proposition φ and grading canon C = ⟨⊗, ⊕, n⟩ with n = ρ(Inc(Σ_PL), Σ_PL), φ ∈ BS(Σ_PL) if and only if Σ_LogAG |≈^C φ.

Proof. In what follows, let Q = {[[φ]]^V | φ ∈ Σ_LogAG} be the valuation of Σ_LogAG, where V is a natural valuation.
    Suppose that φ ∈ BS(Σ_PL). Then, by the definition of BS(Σ_PL), φ is logically implied by the set S = {ψ | (ψ, a) ∈ Σ_PL and a > Inc(Σ_PL)}. By the construction of Σ_LogAG and by Observation 2, for every formula ψ ∈ S, the embedding degree δ([[ψ]]^V, Q) = ρ(a, Σ_PL) + 1, where a is the weight of ψ in Σ_PL. Since a > Inc(Σ_PL), it follows that δ([[ψ]]^V, Q) ≤ n and [[ψ]]^V ∈ E^n(Q). Hence, by Observation 3, [[φ]]^V ∈ F^n(T) for all telescoping structures T of Q. By Definition 12, it must be that Σ_LogAG |≈^C φ.
    Now suppose that Σ_LogAG |≈^C φ. Then, according to Definition 12, it must be that [[φ]]^V ∈ F^n(T) for all telescoping structures T of Q. By Observation 3, [[φ]]^V ∈ F(E^n(Q)). Let S = {p | p ∈ E^n(Q) ∩ P_B}. It follows that φ is logically implied by a set of propositional terms whose interpretations are in S, and each p ∈ S has an embedding degree m less than or equal to n. Let ψ be the propositional term denoting p. It must be that (ψ, a) ∈ Σ_PL, since the only embedded non-grading propositions in Σ_LogAG are the formulas appearing in Σ_PL, according to Definition 13. Further, by Observation 2, m = ρ(a, Σ_PL) + 1. Therefore, m ≤ n and a > Inc(Σ_PL). By the definition of BS(Σ_PL), it must be that φ ∈ BS(Σ_PL). □

Example 4. Consider the possibilistic knowledge base Σ_PL and its corresponding LogA G theory in Example 3. The inconsistency degree is Inc(Σ_PL) = 0.3. Therefore, BS(Σ_PL) = {φ | {p, p ⇒ b, p ⇒ ¬f} ⊨ φ}, since p, p ⇒ b, and p ⇒ ¬f are the formulas in Σ_PL with weights higher than the inconsistency degree. The graded filter of degree n = ρ(Inc(Σ_PL), Σ_PL) = 2 is F^2(T) = F(E^2(Q)), where T is a telescoping structure of Q denoting the valuation of Σ_LogAG. The only non-grading propositions in E^2(Q) are p, p ⇒ b, and p ⇒ ¬f. Hence, the set of propositional terms whose interpretations are in F(E^2(Q)) is equal to BS(Σ_PL). In both BS(Σ_PL) and F^2(T), we end up believing that a penguin is a bird that does not fly. □
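The numbers in this example can be replayed with a brute-force satisfiability check over the four atoms. The snippet below is a hedged sketch of ours, not the paper's machinery: the tuple encoding of formulas is our own, and the weights chosen for p, p ⇒ b, and p ⇒ ¬f are illustrative assumptions calibrated so that Inc(Σ_PL) = 0.3, as stated in the example:

```python
from itertools import product

ATOMS = ["p", "b", "f", "w"]

def holds(formula, model):
    # A formula is either a fact ("p",) or a rule ("=>", premise, conclusion);
    # a literal prefixed with "~" is negated.
    def lit(x):
        return not model[x[1:]] if x.startswith("~") else model[x]
    if formula[0] == "=>":
        return (not lit(formula[1])) or lit(formula[2])
    return lit(formula[0])

def consistent(formulas):
    # Brute force: some truth assignment satisfies every formula.
    return any(all(holds(f, dict(zip(ATOMS, vs))) for f in formulas)
               for vs in product([True, False], repeat=len(ATOMS)))

def inc(kb):
    # Inconsistency degree: the largest weight a whose a-cut
    # {formulas with weight >= a} is inconsistent; 0 if the KB is consistent.
    return max((a for a in {w for (_, w) in kb}
                if not consistent([f for (f, w) in kb if w >= a])), default=0)

kb = [(("p",), 1.0), (("=>", "p", "b"), 1.0), (("=>", "p", "~f"), 0.5),
      (("=>", "b", "f"), 0.3), (("=>", "b", "w"), 0.1)]
a = inc(kb)                                  # 0.3 for this KB
generators = [f for (f, w) in kb if w > a]   # p, p => b, p => ~f
```

Only the formulas strictly above the 0.3 cut generate the belief set; b ⇒ f and b ⇒ w both fall at or below it.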
    It should be obvious that the formulas in a possibilistic knowledge base Σ_PL with weights less than Inc(Σ_PL) are blocked even if they do not contribute to the inconsistency. This limitation of possibilistic logic is referred to in the literature as the drowning problem [2]. A special effect of the drowning problem is the property inheritance blocking problem [23, 12] arising in Example 4. The rule b ⇒ w is blocked (drowned) in both BS(Σ_PL) and F^2(T), even though it has nothing to do with the inconsistency caused by p, p ⇒ b, p ⇒ ¬f, and b ⇒ f, because its weight (0.1) is less than Inc(Σ_PL) (0.3). Accordingly, the inference that a penguin has wings is blocked as well: the exceptional penguin subclass (exceptional with respect to the flying property) fails to inherit the property of having wings from its bird superclass, even though having wings contributes nothing to the inconsistency. The drowning problem persists in all logical consequence relations observing rational monotony [3]. It turns out that LogA G does not suffer from the drowning problem (and does not observe rational monotony). To see this, consider the relevant graded consequences, shown in Figure 2, of the LogA G theory Σ_LogAG in Example 3.
In what follows, let Σ^n_LogAG = {φ | Σ_LogAG |≈^C φ} with the grading canon C = ⟨1/sum, ⊕, n⟩, where ⊕ is any operator.

    n = 0   0.1. Σ_LogAG
    n = 1   1.1. Σ^0_LogAG
            1.2. p
            1.3. p ⇒ b
            1.4. G(p ⇒ ¬f, 1)
            1.5. G(G(b ⇒ f, 1), 1)
            1.6. G(G(G(b ⇒ w, 1), 1), 1)
            1.7. b
    n = 2   2.1. Σ^1_LogAG
            2.2. p ⇒ ¬f
            2.3. G(b ⇒ f, 1)
            2.4. G(G(b ⇒ w, 1), 1)
            2.5. ¬f
    n = 3   3.1. Σ^2_LogAG
            3.2. G(b ⇒ w, 1)
    n = 4   4.1. Σ^3_LogAG
            4.2. b ⇒ w
            4.3. w

    Fig. 2. Graded consequences of the LogA G theory Σ_LogAG from Example 3
    Upon telescoping to degree 3, we believe that a penguin is a bird that does not fly. The rule b ⇒ f does not survive telescoping, as it has a lower grade (1/3) than p (1), p ⇒ b (1), and p ⇒ ¬f (1/2). Upon telescoping to degree 4, we believe that a penguin has wings, since the rule b ⇒ w survives telescoping and we still believe that penguins are birds. According to the semantics of LogA G and the definition of graded filters, no proposition is ever discarded unless it directly contributes to a contradiction.
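This survival behaviour can be simulated with a toy telescoping loop. The sketch below is our simplification of the actual graded-filter semantics, not the paper's definition: formulas surface level by level (using the depth assignments of Fig. 2, which are our reading of the encoding), and an incoming formula is discarded only when it is jointly inconsistent with the beliefs accumulated so far:

```python
from itertools import product

ATOMS = ["p", "b", "f", "w"]

def holds(formula, model):
    # ("p",) is a fact; ("=>", x, y) is a rule; "~x" is a negated literal.
    def lit(x):
        return not model[x[1:]] if x.startswith("~") else model[x]
    if formula[0] == "=>":
        return (not lit(formula[1])) or lit(formula[2])
    return lit(formula[0])

def consistent(formulas):
    # Brute force over all truth assignments of the four atoms.
    return any(all(holds(f, dict(zip(ATOMS, vs))) for f in formulas)
               for vs in product([True, False], repeat=len(ATOMS)))

# Formulas by embedding depth: the telescoping step at which each surfaces.
levels = {1: [("p",), ("=>", "p", "b")],
          2: [("=>", "p", "~f")],
          3: [("=>", "b", "f")],
          4: [("=>", "b", "w")]}

beliefs = []
for n in sorted(levels):
    for f in levels[n]:
        # Discard f only if it directly contradicts what is already believed.
        if consistent(beliefs + [f]):
            beliefs.append(f)
```

At step 3, b ⇒ f is the only casualty (it clashes with p, p ⇒ b, and p ⇒ ¬f); at step 4, b ⇒ w surfaces intact, so w becomes derivable rather than being drowned.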

    Note that the LogA G logical consequence relation is equivalent to its possibilistic counterpart only for a grading canon C = ⟨⊗, ⊕, n⟩ with n = ρ(Inc(Σ_PL), Σ_PL), as illustrated in Example 4. This is by no means the recommended use of LogA G. If we continue telescoping beyond n = ρ(Inc(Σ_PL), Σ_PL), as illustrated in Figure 2 (which is what naturally happens in LogA G), the drowning problem is avoided.

5    Conclusion

We presented an algebraic graded non-monotonic logic we refer to as LogA G. It is our conviction that LogA G provides an interesting alternative to existing approaches for handling non-monotonicity. LogA G can be regarded as a unified framework for non-monotonicity, as it can capture various non-monotonic logics. In this paper, we showed how possibilistic theories can be encoded in LogA G, and we proved that the LogA G logical consequence relation captures possibilistic inference. Since possibilistic logic is proven to capture any non-monotonic inference relation satisfying Makinson's rationality postulates, our results prove that LogA G captures such non-monotonic inference relations as well while staying immune to the drowning problem. We are currently working on relating LogA G to circumscription, default logic, and autoepistemic logic. In [17], Levesque presented the logic of "only knowing" and proved that it captures autoepistemic logic. Later, it was proved that only knowing can capture a class of default logics [16] and circumscription [5]. Our current approach is to relate LogA G to Levesque's logic of only knowing. In doing so, we can prove that LogA G subsumes default logic, circumscription, and autoepistemic logic. Another direction is to relate LogA G to generalized possibilistic logic [9], which is capable of encompassing KLM non-monotonic logics and a fragment of Lifschitz's logic of minimal belief and negation as failure.

References
 1. Ernest Adams. The logic of conditionals. Inquiry, 8(1-4):166–197, 1965.
 2. Salem Benferhat, Claudette Cayrol, Didier Dubois, Jerome Lang, and Henri Prade.
    Inconsistency management and prioritized syntax-based entailment. In IJCAI,
    volume 93, pages 640–645, 1993.
 3. Salem Benferhat, Didier Dubois, and Henri Prade. Nonmonotonic reasoning, condi-
    tional objects and possibility theory. Artificial Intelligence, 92(1-2):259–276, 1997.
 4. Gerhard Brewka, Ilkka Niemelä, and Miroslaw Truszczyński. Handbook of Knowledge Representation, chapter 6, pages 239–284. Elsevier, 2008.
 5. Jianhua Chen. Minimal knowledge + negation as failure = only knowing (sometimes). In Proceedings of the 2nd International Workshop on Logic Programming and Non-Monotonic Reasoning, pages 132–150, 1993.
 6. Robert Demolombe and Churn-Jung Liau. A logic of graded trust and belief fusion. In Proceedings of the 4th Workshop on Deception, Fraud and Trust in Agent Societies, pages 13–25, 2001.
 7. D. Dubois, J. Lang, and H. Prade. Possibilistic logic. In D. Gabbay, C.J. Hogger, and J.A. Robinson, editors, Nonmonotonic Reasoning and Uncertain Reasoning, Handbook of Logic in Artificial Intelligence and Logic Programming, volume 3, pages 439–513. Oxford University Press, 1994.
 8. Didier Dubois, Lluis Godo, and Henri Prade. Weighted logics for artificial
    intelligence–an introductory discussion. International Journal of Approximate Rea-
    soning, 55(9):1819–1829, 2014.
 9. Didier Dubois, Henri Prade, and Steven Schockaert. Generalized possibilistic logic:
    foundations and applications to qualitative reasoning about uncertainty. Artificial
    Intelligence, 252:139–174, 2017.
10. Nourhan Ehab. On the use of graded propositions in uncertain non-monotonic
    reasoning: With an application to plant disease forecast. Master’s thesis, German
    University in Cairo, Egypt, 2016.
11. Nourhan Ehab and Haythem O. Ismail. LogAG: An algebraic non-monotonic logic for reasoning with uncertainty. In Proceedings of the 13th International Symposium on Commonsense Reasoning, 2017.
12. Hector Geffner and Judea Pearl. Conditional entailment: Bridging two approaches to default reasoning. Artificial Intelligence, 53(2-3):209–244, 1992.
13. Sven Ove Hansson. Kernel contraction. The Journal of Symbolic Logic, 59(03):845–
    859, 1994.
14. Haythem O. Ismail and Nourhan Ehab. Algebraic semantics for graded proposi-
    tions. Proceedings of the KI 2015 Workshop on Formal and Cognitive Reasoning,
    pages 29–42, 2015.
15. Sarit Kraus, Daniel Lehmann, and Menachem Magidor. Nonmonotonic reasoning,
    preferential models and cumulative logics. Artificial intelligence, 44(1-2):167–207,
    1990.
16. Gerhard Lakemeyer and Hector J Levesque. Only-knowing: Taking it beyond au-
    toepistemic reasoning. In AAAI, volume 5, pages 633–638, 2005.
17. Hector J. Levesque. All I know: A study in autoepistemic logic. Artificial Intelligence, 42(2-3):263–309, 1990.
18. David Makinson. General patterns in nonmonotonic reasoning, volume III, pages
    35–110. Oxford University Press, 1994.
19. Wiktor Marek and Miroslaw Truszczyński. Autoepistemic logic. Journal of the
    ACM (JACM), 38(3):587–618, 1991.
20. John McCarthy. Circumscription–a form of nonmonotonic reasoning. Artificial
    Intelligence, 13:27–39, 1980.
21. Miloš Milošević and Zoran Ognjanović. A first-order conditional probability logic.
    Logic Journal of IGPL, 20(1):235–253, 2012.
22. Robert C Moore. Possible-world semantics for autoepistemic logic. Technical
    report, SRI International Menlo Park CA Artificial Intelligence Center, 1984.
23. Judea Pearl. System Z: A natural ordering of defaults with tractable applications to nonmonotonic reasoning. In Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge, pages 121–135. Morgan Kaufmann Publishers Inc., 1990.
24. Judea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible
    inference. Morgan Kaufmann, 2014.
25. Raymond Reiter. A logic for default reasoning. Artificial intelligence, 13(1):81–132,
    1980.
26. Stanley Burris and H.P. Sankappanavar. A Course in Universal Algebra. Graduate Texts in Mathematics, 78. Springer, 1981.
27. Yoav Shoham. Nonmonotonic logics: Meaning and utility. In IJCAI, volume 10,
    pages 388–393. Citeseer, 1987.


