<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>DL</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Defeasible Reasoning with Prototype Descriptions: A New Preference Order</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriele Sacco</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loris Bozzato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oliver Kutz</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DiSTA - Università dell'Insubria</institution>
          ,
          <addr-line>Via O. Rossi 9, 21100 Varese</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Fondazione Bruno Kessler</institution>
          ,
          <addr-line>Via Sommarive 18, 38123 Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>Piazza Domenicani 3, 39100, Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>37</volume>
      <fpage>18</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The representation of defeasible information in Description Logics is a well-known issue and many formal approaches have been proposed. However, in these proposals, little attention has been devoted to studying their capabilities in capturing the interpretation of typicality and exceptions from an ontological and cognitive point of view. In this regard, we are developing a model of defeasible knowledge for description logics based on combining ideas from prototype theory, weighted description logic (aka 'tooth logic'), and earlier work on justifiable exceptions. This machinery is then used to determine exceptions in case of conflicting axioms. In this paper, we analyse this formalisation with respect to some interesting cases where the defeasible properties to which we may have exceptions are also present as features in prototype descriptions. The analysis will suggest that a new preference order, which considers what happens inside the models, may be best suited and we outline how this new preference order can be defined.</p>
      </abstract>
      <kwd-group>
<kwd>Non-monotonic logic</kwd>
        <kwd>Typicality</kwd>
        <kwd>Description Logics</kwd>
        <kwd>Exceptions</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
The study of defeasible reasoning and its modelling through non-monotonic logics has a long history
in Artificial Intelligence [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. More recently, approaches for representing non-monotonicity have
also been proposed in the context of Description Logics (DLs) [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Following this direction, in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] we
started developing a formal approach for defeasible reasoning in DLs which satisfies some desiderata
extracted from the studies on generics [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and which moreover takes into account some insights we
could identify within the philosophical and cognitive research on phenomena related to exceptions and
defeasibility.
      </p>
      <p>
        In the present paper, we build on that formalism by proposing a more elaborate preference relation
on models which is able to address some shortcomings of the previous formulation. Therefore, we first
describe the approach of [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] in section 2, also proposing new terminology for some key definitions
in order to simplify the presentation. Then, section 3 treats the new preference by first describing a
problematic case for the previous one, then giving the formal definitions, and finally showing that
through those new definitions we are able to reach the desired conclusion in the outlined problematic
case.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. DLs with Weighted Prototypes</title>
      <p>In our approach, we distinguish two parts in the knowledge bases: the actual DL knowledge base, which
represents the knowledge of interest and can contain defeasible axioms and information about features
of individuals, and a separate set containing prototype definitions.</p>
      <p>In this section we outline the syntax and semantics of such enriched KBs.</p>
      <sec id="sec-2-1">
        <title>2.1. Syntax: Features, Prototype Definitions, Prototype Knowledge Bases</title>
        <p>The following definitions are independent of the DL language used for representing the main
knowledge base: we consider a fixed concept language ℒΣ based on a DL signature Σ with disjoint
and non-empty sets NC of concept names, NR of role names, and NI of individual names. We identify
a subset of the concept names as denoting prototype names by assuming a subset NP ⊆ NC. For
simplicity, we call general concepts the concepts composed only of concepts in NC ∖ NP.</p>
        <p>The features associated with prototypes together with the degree of their importance are given in
prototype definitions.</p>
        <p>Definition 1 (Positive prototype definition). Let P ∈ NP be a prototype name, let C1, . . . , Cn be general
concepts of ℒΣ and let w = (w1, . . . , wn) ∈ Qⁿ be a weight vector of rational numbers, where for every
i ∈ {1, . . . , n} we have wi &gt; 0. Then, the expression</p>
        <p>P(C1 : w1, . . . , Cn : wn)
is called a (positive) prototype definition for P.</p>
        <p>
          Note that this definition of prototypes is similar to the definition of concepts by the tooth operator
defined in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Intuitively, the weights associated with the features can then be combined to compute a
score denoting the degree of typicality of an instance w.r.t. the prototype: for the current definition,
weights are assumed to be positive and features are independent.
        </p>
        <p>
          In the present work we are agnostic with respect to where the weights come from. However, in the future
we plan to study possible sources for these weights, such as learning models.
Moreover, we use here rational weights, which is sufficient for practical purposes. Real numbers could
be allowed as well, but this would not substantially change the formal setup; this is also the case for the
related perceptron logic [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>Another important remark is that, since some features could be mutually exclusive (e.g. the color of an
apple can be red or green, but not both), prototype definitions should not be seen as denoting a “perfect
individual”.</p>
        <p>
          A final point on prototype definitions is that, to allow for a direct comparison across scores of different
prototypes, these need to be normalised to a common value interval, possibly with a scoring function
that does not depend on the number of features defining different prototypes. In an initial proposal, we
simply constrained the weights of features to be in the [0, 1] interval and prescribed further that they
would add up to 1, i.e. prototypes were simply assumed to be given as positive and normalised [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]: in
the following sections, we provide instead a more general proposal for normalising prototype scores.
With respect to [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], in the current definition of prototype definition we allow any general concept to
characterise the weighted features.
        </p>
        <p>In the knowledge part of the KB, we can use prototype names in DL axioms to describe properties of
the members of such classes. Here we consider the case in which prototype names are only used as
primitive concepts on the left-hand side of concept inclusions.</p>
        <p>Definition 2 (Prototype axiom). A concept inclusion of the form P ⊑ C is a prototype axiom of ℒΣ if
P ∈ NP and C is a general concept of ℒΣ.</p>
        <p>
          Intuitively, these axioms are not absolute and can be “overridden” by prototype instances (cf. defeasible
axioms in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]), also depending on the “degree of membership” of the individual to the given prototype
(i.e., the satisfaction of its features).
        </p>
        <p>As noted above, we consider knowledge bases which can contain prototype axioms and which are
enriched with an accessory KB, the PBox 𝒫, providing prototype definitions. Formally:
Definition 3 (Prototyped Knowledge Base, PKB). A prototyped knowledge base, PKB for short, in
language ℒΣ is a triple K = ⟨𝒯, 𝒜, 𝒫⟩ where:
– 𝒯 = 𝒯P ⊎ 𝒯G is a DL TBox consisting of concept inclusion axioms of the form C ⊑ D, partitioned
into the disjoint sets 𝒯P of prototype axioms and 𝒯G of general concept inclusions based on general
concepts;
– 𝒜 = 𝒜P ⊎ 𝒜G is a set of ABox assertions partitioned into the disjoint sets 𝒜P of prototype assertions
(of the form P(a) with P ∈ NP and a ∈ NI) and 𝒜G of assertions for general concepts and roles;
– 𝒫 is a set of prototype definitions, exactly one for each prototype name P ∈ NP appearing in the
prototype TBox 𝒯P.</p>
        <p>Remark. Note that a PKB ⟨𝒯, 𝒜, ∅⟩ can be seen as a standard DL knowledge base.
Example 1. Consider the following prototyped knowledge base K = ⟨𝒯, 𝒜, 𝒫⟩:
𝒯 = { Dog ⊑ Trusted, Wolf ⊑ ¬Trusted, Dog ⊑ hasLegs, Wolf ⊑ hasLegs },
𝒜 = { Dog(balto), Wolf(balto), Dog(pluto), Wolf(alberto), Dog(cerberus),
livesInWoods(balto), hasLegs(balto), isTamed(balto),
hasCollar(pluto), hasLegs(pluto), isTamed(pluto),
hasLegs(alberto), Hunts(alberto),
¬Trusted(cerberus) },
𝒫 = { Wolf(livesInWoods : 10, hasLegs : 4, livesInPack : 8, Hunts : 11),
Dog(hasCollar : 33, livesInHouse : 22, hasLegs : 11, isTamed : 44) }</p>
        <p>Below we give a semantics for this kind of PKB which will entail and justify the conclusion that balto is
a trusted dog which is a wolf, without being inconsistent, and that cerberus is an exceptional dog with
respect to the property of dogs of being trusted. Note that in the case of the instances pluto and alberto
no contradiction arises, thus we want the axioms in 𝒯P to apply to them normally. Observe that the
conflict regarding balto has the same structure as the so-called Nixon diamond. ◇</p>
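For concreteness, the PKB of Example 1 can be sketched as plain Python data. This is a hypothetical encoding for illustration only: the dictionary layout and the `NotTrusted` marker standing for ¬Trusted are our assumptions, not a format defined by the approach.

```python
# Hypothetical encoding of the PKB of Example 1 as plain Python data.
# "NotTrusted" is our flat marker for the complement concept ¬Trusted.

prototype_axioms = [  # T_P: defeasible prototype axioms (P, C) for P ⊑ C
    ("Dog", "Trusted"), ("Wolf", "NotTrusted"),
    ("Dog", "hasLegs"), ("Wolf", "hasLegs"),
]

abox = {  # ABox assertions grouped per individual
    "balto":    {"Dog", "Wolf", "livesInWoods", "hasLegs", "isTamed"},
    "pluto":    {"Dog", "hasCollar", "hasLegs", "isTamed"},
    "alberto":  {"Wolf", "hasLegs", "Hunts"},
    "cerberus": {"Dog", "NotTrusted"},
}

pbox = {  # PBox: prototype definitions, feature -> weight
    "Wolf": {"livesInWoods": 10, "hasLegs": 4, "livesInPack": 8, "Hunts": 11},
    "Dog":  {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44},
}
```

Note how balto is asserted to be both a Dog and a Wolf, which is exactly what triggers the Nixon-diamond-style conflict between the first two prototype axioms.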
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Semantics: Justified Models</title>
        <p>The semantics of PKBs is based on standard interpretations for the underlying DL ℒΣ. In fact,
interpretations of a PKB are DL interpretations for its knowledge base part.</p>
        <p>
          Definition 4 (Interpretation). An interpretation ℐ is a pair ⟨∆ℐ, ·ℐ⟩ for signature Σ with a non-empty
domain ∆ℐ, aℐ ∈ ∆ℐ for every a ∈ NI, Aℐ ⊆ ∆ℐ for every A ∈ NC, Rℐ ⊆ ∆ℐ × ∆ℐ for every R ∈ NR,
and where the extension of complex concepts is defined recursively as usual for language ℒΣ.
We do not give a DL interpretation to the prototype definition expressions in 𝒫; however, we need to
introduce additional semantic structure to manage exceptions to prototype axioms in 𝒯P, exploiting the
prototype definition expressions in 𝒫. We consider the notion of axiom instantiation as defined in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]:
intuitively, for an axiom α ∈ ℒΣ the instantiation of α with e ∈ NI, written α(e), is the specialization
of α to e.1 In other words, we “apply” the axiom to the individual e, e.g. if the axiom means that dogs
are trusted and we instantiate it to Balto, we are saying that if Balto is a dog, then it is trusted.
Definition 5 (Exception assumptions and clashing sets). An exception assumption is a pair ⟨α, e⟩ where
α ∈ 𝒯P is a prototype axiom and e ∈ NI is an individual name such that α(e) is an axiom instantiation
of α.
        </p>
        <p>
          A clashing set for ⟨α, e⟩ is a satisfiable set S of ABox assertions s.t. S ∪ {α(e)} is unsatisfiable.
1As in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], α(e) can be formally specified via the FO-translation of α.
        </p>
        <p>Intuitively, an exception assumption ⟨P ⊑ C, e⟩ states that we assume that e is an exception to the
prototype axiom P ⊑ C in a given interpretation. Then, the fact that a clashing set S for
⟨P ⊑ C, e⟩ is derived by such an interpretation gives a “justification” of the validity of the assumption
of overriding in terms of ABox assertions. This intuition is reflected in the definition of models: we first
extend interpretations with a set of exception assumptions.</p>
        <p>Definition 6 (χ-interpretation). A χ-interpretation is a structure ℐχ = ⟨ℐ, χ⟩ where ℐ is an
interpretation and χ is a set of exception assumptions.</p>
        <p>Then, χ-models for a PKB K are those χ-interpretations that verify the “strict” axioms in 𝒯G and defeasibly
apply prototype axioms in 𝒯P (excluding the exceptional instances in χ).</p>
        <p>Definition 7 (χ-model). Given a PKB K, a χ-interpretation ℐχ = ⟨ℐ, χ⟩ is a χ-model for K (denoted
ℐχ |= K), if the following holds:
(i) for every α ∈ 𝒯G ∪ 𝒜 of ℒΣ, ℐ |= α;
(ii) for every α = P ⊑ C ∈ 𝒯P and every e ∈ NI, if ⟨α, e⟩ ∉ χ, then ℐ |= α(e).</p>
        <p>Two DL interpretations ℐ1 and ℐ2 are NI-congruent if aℐ1 = aℐ2 holds for every a ∈ NI. This extends
to χ-interpretations ℐχ = ⟨ℐ, χ⟩ by considering the interpretations ℐ.</p>
        <p>Intuitively, we say that a χ-interpretation is justified if all of its exception assumptions have a clashing
set that is verified by the interpretation.</p>
        <p>Definition 8 (Justifications). We say that ⟨α, e⟩ ∈ χ is justified for a χ-model ℐχ if some clashing set
S exists such that, for every χ-model ℐ′χ = ⟨ℐ′, χ⟩ of K that is NI-congruent with ℐχ, it holds that ℐ′ |= S.</p>
        <p>A χ-model ℐχ of a PKB K is justified if every ⟨α, e⟩ ∈ χ is justified in K.</p>
        <p>We define the consequence from justified χ-models: K |= α if ℐχ |= α for every justified χ-model ℐχ
of K.</p>
        <p>Remark. Note that there can be more than one justified model, in particular for different valid
combinations of exception assumptions and justifications. As will be shown in the examples, this allows
reasoning by cases: scores defined over prototype definitions’ values allow us to define a preference over
such cases.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Semantics: Prototype Score and Preference</title>
        <p>
          The main intuition of prototype definitions is that each individual which is an instance of a prototype
is associated with a score which denotes the “degree of typicality” of the individual with respect to the
concept described by the prototype. As in [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], such a degree is computed from the prototype features
that are satisfied by the instances and their score. Ideally, the prototype score of an individual allows us
to determine preferences over models: for an individual, axioms on prototypes with a higher score are
preferred to those on lower-scoring prototypes; thus the measure needs to be comparable across
different prototypes.
        </p>
        <p>Given the set of prototype names NP, a family of prototype score functions {scoreP}P∈NP is composed of
functions scoreP : NI → R for each prototype name P ∈ NP such that every function of the family has
range in a fixed interval [l, u] ⊆ R.</p>
        <p>Ideally, these families of functions can then be used to define preferences over models: different
preference criteria can be defined, in particular, by using the results of score functions on the exceptional
individuals in the exception assumption sets χ of χ-interpretations.</p>
        <p>In general, we define a preference on exception assumption sets as a partial order χ1 &gt; χ2 on the sets
χ for K. Given two χ-interpretations ℐ1 = ⟨ℐ1, χ1⟩ and ℐ2 = ⟨ℐ2, χ2⟩, we say that ℐ1 is preferred to
ℐ2 (denoted ℐ1 &gt; ℐ2) if χ1 &gt; χ2.</p>
        <p>Finally, we define the notion of PKB model as a minimal justified model for the PKB.
Definition 9 (PKB model). An interpretation ℐ is a PKB model of K (denoted ℐ |= K) if:
– K has some justified χ-model ℐχ = ⟨ℐ, χ⟩;
– there exists no justified ℐ′χ = ⟨ℐ′, χ′⟩ that is preferred to ℐχ.</p>
        <p>The consequence from PKB models of K (denoted K |= α) characterizes the "preferred" consequences
of the PKB, on the basis of the degree of typicality of instances.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Semantics: Model Independent Prototype Score</title>
        <p>A simple score function can be defined by considering the features that are inferable from the KB (in all
justified models):
Definition 10 ((Model independent) Prototype score). Given a prototype definition P(C1 : w1, . . . , Cn : wn),
we define the (model independent) score function scoreP : NI → R for prototype P as:</p>
        <p>scoreP(a) = ∑ { wi | K |= Ci(a) }</p>
        <p>This measure, however, depends on the value interval over which the prototype weights have been
defined: in order to compare the score of an individual with scores relative to other prototypes, this
value needs to be normalized. We do so by computing the maximum score smaxP and minimum score
sminP for each prototype.</p>
        <p>The maximum score smaxP denotes the maximum value of scoreP obtainable from the
weights of a consistent subset of features of P. Formally, given a prototype definition P(C1 : w1, . . . , Cn : wn),
let CS be the set of sets S ⊆ {C1, . . . , Cn} s.t. S ∪ K is consistent. Then:</p>
        <p>smaxP = max( ∑ { wi | Ci ∈ S } | S ∈ CS )</p>
        <p>The minimal score sminP denotes the sum of the weights of the “unavoidable” features, namely those that
are strictly implied by membership in the prototype concept. Formally:</p>
        <p>sminP = ∑ { wi | K |= P ⊑ Ci }</p>
        <p>A normalized score function normP can be derived from scoreP as:</p>
        <p>normP(a) = (scoreP(a) − sminP) / (smaxP − sminP)</p>
        <p>
          With such a normalisation we obtain a family of prototype score functions with range in
the same interval [0, 1], allowing for the comparison of prototype scores on the same individual.
        </p>
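The score and normalisation steps above can be sketched in a few lines of Python. This is an illustrative simplification, not the authors' implementation: the features entailed for an individual and the features strictly implied by the prototype are passed in as plain sets (computing them would require a DL reasoner), and smax is taken as the sum of all weights, i.e. we assume all features are jointly consistent.

```python
# Sketch of Definition 10 and the normalisation step (simplifying
# assumptions: entailed/implied features are given as sets; smax assumes
# all features of the prototype are jointly consistent).

def score(weights, entailed):
    """Model-independent score: sum the weights of the entailed features."""
    return sum(w for feature, w in weights.items() if feature in entailed)

def smin(weights, implied):
    """Sum of the weights of the 'unavoidable' features (K |= P ⊑ Ci)."""
    return sum(w for feature, w in weights.items() if feature in implied)

def norm(weights, entailed, implied):
    """Normalised score in [0, 1]: (score - smin) / (smax - smin)."""
    smax = sum(weights.values())  # simplifying assumption, see above
    lo = smin(weights, implied)
    return (score(weights, entailed) - lo) / (smax - lo)

# Example 1 data: Dog(hasCollar:33, livesInHouse:22, hasLegs:11, isTamed:44);
# for balto, K entails hasLegs and isTamed; Dog ⊑ hasLegs makes hasLegs implied.
dog = {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}
print(round(norm(dog, {"hasLegs", "isTamed"}, {"hasLegs"}), 2))  # prints 0.44
```

The printed value (55 − 11)/(110 − 11) ≈ 0.44 is the normalised Dog-typicality of balto used in Example 2 below (rounded to 0.4 in the text).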
        <p>We can then define a simple preference using the model independent score defined above: we prefer
justified χ-models where the exceptions appear on elements of the lower-scoring prototypes.
Definition 11 (Preference SP). χ1 &gt; χ2 if, for every ⟨P ⊑ C, e⟩ ∈ χ1 ∖ χ2 such that there exists a
⟨Q ⊑ D, e⟩ ∈ χ2 ∖ χ1 with K ∪ {C(e), D(e)} unsatisfiable, it holds that normP(e) &lt; normQ(e).
The intuition behind the condition K ∪ {C(e), D(e)} unsatisfiable is that we want to make the
comparison between the exception assumptions that are directly in conflict.</p>
        <p>Example 2. Considering the PKB reported in the example above, assume we have two PKB interpretations
ℐ1 and ℐ2 associated respectively with the following two sets of exception assumptions:
χ1 = {⟨Wolf ⊑ ¬Trusted, balto⟩, ⟨Dog ⊑ Trusted, cerberus⟩} and
χ2 = {⟨Dog ⊑ Trusted, balto⟩, ⟨Dog ⊑ Trusted, cerberus⟩}.</p>
        <p>We now have two χ-interpretations corresponding to ⟨ℐ1, χ1⟩ and ⟨ℐ2, χ2⟩.
Assuming that they are also χ-models, we can check whether the two are also
justified. Since the exception assumptions have the following clashing sets, respectively
{Wolf(balto), Trusted(balto), Dog(cerberus), ¬Trusted(cerberus)} for the exception
assumptions in χ1 and {Dog(balto), ¬Trusted(balto), Dog(cerberus), ¬Trusted(cerberus)} for
those in χ2, they are both justified.</p>
        <p>In order to decide which model is preferred, we need to compute the prototype scores for balto and for
cerberus: we have scoreWolf(balto) = 14, scoreDog(balto) = 55, scoreDog(cerberus) = 11. Then we
need to normalise them:</p>
        <p>normDog(balto) = (scoreDog(balto) − sminDog) / (smaxDog − sminDog) = (55 − 11)/(110 − 11) ≈ 0.4
normWolf(balto) = (scoreWolf(balto) − sminWolf) / (smaxWolf − sminWolf) = (14 − 4)/(33 − 4) ≈ 0.3
normDog(cerberus) = (scoreDog(cerberus) − sminDog) / (smaxDog − sminDog) = (11 − 11)/(110 − 11) = 0</p>
        <p>Consequently normWolf(balto) &lt; normDog(balto) and, since ⟨Dog ⊑ Trusted, cerberus⟩ is present in
both χ1 and χ2 and so does not influence the preference order, we can conclude that χ1 &gt; χ2. This means
that the preferred model, i.e. the PKB model, is ℐ1, where balto is an exception to Wolf ⊑ ¬Trusted and
cerberus is an exception to Dog ⊑ Trusted. Consequently, it holds that K |= Trusted(balto) and
K |= ¬Trusted(cerberus).</p>
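The comparison of Definition 11 on the data of Example 2 can be sketched as follows. This is a hedged illustration: exception assumptions are encoded as plain strings, and the direct-conflict test (K ∪ {C(e), D(e)} unsatisfiable) is hard-coded for the Trusted/¬Trusted pair instead of being checked by a reasoner.

```python
# Sketch of Preference SP (Definition 11) on the data of Example 2.
# Axioms are strings "P ⊑ C"; the conflict test is hard-coded (no reasoner).

norm_scores = {  # normalised scores from Example 2
    ("Wolf", "balto"): 0.3, ("Dog", "balto"): 0.4, ("Dog", "cerberus"): 0.0,
}

chi1 = {("Wolf ⊑ ¬Trusted", "balto"), ("Dog ⊑ Trusted", "cerberus")}
chi2 = {("Dog ⊑ Trusted", "balto"), ("Dog ⊑ Trusted", "cerberus")}

def prototype_of(axiom):  # e.g. "Wolf ⊑ ¬Trusted" -> "Wolf"
    return axiom.split(" ⊑ ")[0]

def preferred(c1, c2, conflicting):
    """c1 > c2: every assumption in c1 \\ c2 that directly conflicts with one
    in c2 \\ c1 must sit on the lower-scoring prototype."""
    for ax1, e1 in c1 - c2:
        for ax2, e2 in c2 - c1:
            if e1 == e2 and conflicting(ax1, ax2):
                if not norm_scores[(prototype_of(ax1), e1)] < norm_scores[(prototype_of(ax2), e2)]:
                    return False
    return True

def trusted_clash(a, b):  # hard-coded conflict on right-hand sides
    return {a.split(" ⊑ ")[1], b.split(" ⊑ ")[1]} == {"Trusted", "¬Trusted"}

print(preferred(chi1, chi2, trusted_clash))  # prints True: chi1 > chi2
```

Since the shared cerberus assumption cancels out, the only compared pair is the balto conflict, and 0.3 &lt; 0.4 yields χ1 &gt; χ2, matching the example.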
        <p>Moreover, we can note that for pluto and alberto we can standardly infer Trusted(pluto) and
¬Trusted(alberto). The reason is that exception assumptions refer to specific individuals, and
since there are no contradicting assertions for pluto and alberto, there are no clashing sets that justify
assuming them as exceptions. Therefore, the axioms in 𝒯P apply to them standardly. ◇
Remark. For simplicity, we are assuming independence of scores across the features: for example, the
weight of a feature hasWhiteTail is not dependent on the weight of a more general hasTail. Dependence
across features and its impact on the evaluation of weights is indeed an interesting extension to our
work and we plan to provide a characterization in future work.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. A New Preference Order</title>
      <p>The preference relation defined above may be too coarse-grained with respect to some cases. For
instance, consider the following example, which is a modified version of the example considered above:
Example 3. For the concepts Dog and Wolf we use here the letters D and W respectively, and for Trusted
we use T. Moreover, balto is here simplified to b.</p>
      <p>Now, we can imagine having two new prototype axioms talking of house animals (HA) and wild animals
(WA), where the first are considered docile (Do), while the second are not. So, we now have the four
prototype axioms</p>
      <p>D ⊑ T, W ⊑ ¬T, HA ⊑ Do and WA ⊑ ¬Do;
the individual b causing the conflict because it is an instance of D, W, HA and WA:</p>
      <p>D(b), W(b), HA(b) and WA(b);
and the four prototype descriptions, with the third one including T among its features:
W(A : 2, B : 8), HA(C : 4, E : 6), D(T : 3, F : 7) and WA(G : 8, H : 2),
where the features A, B, C, E, F, G and H do not have a specific meaning. Moreover, we know that B(b),
C(b), F(b) and G(b).</p>
      <p>From this KB we can see that we have four sets of justified exception assumptions, that is the sets of
exception assumptions which have an associated clashing set:
χ1 = {⟨D ⊑ T, b⟩, ⟨HA ⊑ Do, b⟩}
χ2 = {⟨D ⊑ T, b⟩, ⟨WA ⊑ ¬Do, b⟩}
χ3 = {⟨W ⊑ ¬T, b⟩, ⟨HA ⊑ Do, b⟩}
χ4 = {⟨W ⊑ ¬T, b⟩, ⟨WA ⊑ ¬Do, b⟩}
We can now compute the normalised typicality scores, which are respectively:
normW(b) = 0.8; normHA(b) = 0.4; normD(b) = 0.7; normWA(b) = 0.8.</p>
      <p>Consequently, an order is induced on the sets of exception assumptions, from which ℐ3 results to be
the preferred model.</p>
      <p>We have two key observations regarding this example: first, T(b) is not considered in the computation of
the scores because K ⊭ T(b). In fact, in ℐ1 and ℐ2 we have ¬T(b).</p>
      <p>Second, the fact that the preferred model is one where T(b) and ¬Do(b) hold is counter-intuitive. The
reason is that if we can conclude T(b) in a model, it should mean that we can add the weight associated
with that feature in the prototype description of D. Consequently, b would result as a more typical D
than W, and so we would like to conclude T(b). Consequently, the desired interpretation would be ℐ4.
◇
The problem derives from the use of consequence to define the score of individuals: while this assures a
uniform score across the models, this score does not consider the satisfaction of features in the single
interpretations. Thus, to deal with such cases, we can define a new preference order which considers
what holds inside the models and consequently can be called model-dependent.</p>
      <p>The first step is to change the definition of prototype score in order to have a different score for each
model:
Definition 12 ((Model dependent) Prototype score). Given a prototype definition P(C1 : w1, . . . , Cn : wn),
we define the score function scoreℐP : NI → R for prototype P and a justified χ-model ℐχ as:</p>
      <p>scoreℐP(a) = ∑ { wi | ℐ |= Ci(a) }</p>
      <p>We leave the other steps of the computation of the typicality score as they are, so that we will have
a family of normalised score functions which are now relative to the χ-models they are computed in, making the
score an intra-interpretation score. The idea is to measure the typicality of the individual according to
the hypothetical situation we are considering, that is, according to the hypotheses regarding what is
exceptional and, especially, to what it is exceptional.</p>
      <p>In fact, remember that a χ-model is a DL interpretation with an associated set of hypothetical exceptions.
This would precisely address the problem arising in the case above, since if we are supposing that b is
exceptional with respect to W ⊑ ¬T, we should assume T(b). However, the problem is now how to
compare the scores, which depend on the interpretations, in a consistent and meaningful way
with respect to the role that the comparison has in our system. This role is that of ordering the
interpretations with the goal of finding out which hypothetical exceptions are reasonably actual exceptions.
In the first formalisation, we relied on comparing typicality scores that were independent of the
different interpretations and hypotheses and so were, so to say, the outcome of the strict knowledge
that is certain in all the interpretations. Therefore, the typicality scores could be compared consistently
since they depended on the same knowledge.</p>
      <p>Endorsing this intuition, also in the new preference order we can compare the scores that do not
change across the interpretations. The reason is that, since they do not change, they are
independent of the particular interpretation and so we are in the same position as in the
model-independent preference. From a formal perspective, this means that the computation of the typicality
scores does not depend on other prototype axioms. Then, we can order the interpretations using the
existing preference function, slightly adjusted to consider only the scores that do not change across the
models, individuating in this way the set of locally preferred models, that is, those that are preferred
with respect to the strict knowledge only. At this point, we can apply the previous step recursively,
that is, comparing the scores that do not change across these locally preferred models and ordering them
according to the preference function we have, individuating the set of the new locally preferred models.
We continue with this method iteratively until we remain with the set of globally preferred models.</p>
      <p>Now we can give a more precise definition of this new preference order. Firstly, we need a definition
of the scores we consider:
Definition 13 (Stable score). Given a set M = {ℐχ | ℐχ is a justified χ-model}, scoreP(e) is a stable
score if, for all ℐχ, ℐ′χ ∈ M, scoreℐP(e) = scoreℐ′P(e).</p>
      <p>Now, we can define the new preference mechanism by modifying the previous one with the addition
of the constraint that we compare only the scores that are stable across all the justified χ-models.
Definition 14 (Local Preference). χ1 &gt; χ2 if, for every ⟨P ⊑ C, e⟩ ∈ χ1 ∖ χ2 such that there exists a
⟨Q ⊑ D, e⟩ ∈ χ2 ∖ χ1 with K ∪ {C(e), D(e)} unsatisfiable and such that scoreP(e) and scoreQ(e) are
stable scores, it holds that normP(e) &lt; normQ(e).</p>
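The stability check of Definition 13 amounts to an equality test across models. A minimal sketch, with models abstracted to plain score tables mapping (prototype, individual) pairs to model-dependent normalised scores (the concrete values below are hypothetical, not taken from the examples):

```python
# Sketch of Definition 13: a score is stable over a set M of justified
# chi-models when it takes the same value in every model of M.
# Each "model" here is just a dict (prototype, individual) -> score.

def stable_scores(models):
    """Return the (prototype, individual) keys whose score is identical
    across all models in `models`."""
    first, *rest = models
    return {k for k, v in first.items() if all(m[k] == v for m in rest)}

m1 = {("W", "b"): 0.8, ("D", "b"): 0.7}  # hypothetical score tables
m2 = {("W", "b"): 0.8, ("D", "b"): 1.0}
print(stable_scores([m1, m2]))  # prints {('W', 'b')}
```

Only the stable keys returned here would be eligible for the comparison in Definition 14; unstable scores (like D's above) are ignored until the set of models shrinks.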
      <p>As before, a preferred justified χ-model is a justified χ-model such that no justified χ-model is
preferred to it: formally, a justified χ-model ℐχ = ⟨ℐ, χ⟩ is a locally preferred justified χ-model of K if
there exists no justified ℐ′χ = ⟨ℐ′, χ′⟩ that is preferred to ℐχ. Therefore, we can define the set of these
preferred models:
Definition 15 (Set of locally preferred justified χ-models). We denote by LP(M) the set of locally preferred
justified χ-models of a set of justified χ-models M of K.</p>
      <p>Note that LP(M) ⊆ M and so LP(M) ∈ 𝒫(M), where 𝒫(M) stands for the power-set of M.</p>
      <p>Now we are ready to define the global preference between the models, thanks to an iterative
application of the local preference:
Definition 16 (Global Preference). Given the set ℳ of all the justified χ-models of K, consider the
sequence of sets of models M0, . . . , Mn, where Mi ⊆ ℳ, such that (i) M0 = ℳ; (ii) Mi+1 = LP(Mi); (iii)
Mn is the fixed point such that LP(Mn) = Mn.</p>
      <p>A justified χ-model ℐχ is a globally preferred model of K if ℐχ ∈ Mn.
Proposition 1. The global preference construction above has a fixed point Mn.</p>
      <p>Proof. Assume that there is no fixed point. This can happen in two ways: either (i) there are infinitely
many Mi, or (ii) there is a loop such that Mi+k = Mi with k &gt; 1.</p>
      <p>Consider situation (i): we can notice that M0 ⊇ Mi ⊇ Mi+1 and so on ad infinitum, since we never
produce new justified χ-models, but we select among the elements of the i-th set those that are preferred
and we use them to build the (i+1)-th set. However, this selection depends only on the χs of the
justified χ-models, which we recall are sets of exception assumptions. Since the latter are defined on
axioms and individual names in K, the exception assumptions are finite and consequently so are the χs.
Therefore, there cannot be infinitely many justified χ-models.</p>
      <p>Now, consider situation (ii). A loop would have a form like this: LP(Mi) = Mi+1, LP(Mi+1) = Mi+2,
. . . , LP(Mi+k−1) = Mi.
Since Mj ⊇ Mj+1 for every j, we have Mi ⊇ Mi+1 ⊇ . . . ⊇ Mi+k−1 ⊇ Mi. But this means that
Mi = Mi+1, and this is inconsistent with the assumption.</p>
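The fixed-point construction of Definition 16 and the termination argument of Proposition 1 can be sketched generically. The local preference `lp` is abstract here: the toy rank-based selection below is our own placeholder, not the paper's preference; any selection with LP(M) ⊆ M makes the loop terminate on finite inputs.

```python
# Sketch of Definition 16: iterate a local-preference selection LP over the
# current set of models until a fixed point LP(M) = M is reached.

def globally_preferred(models, lp):
    current = set(models)
    while True:
        nxt = lp(current)
        assert nxt <= current  # LP only selects, never adds models
        if nxt == current:     # fixed point reached (Proposition 1)
            return current
        current = nxt

# Toy LP (hypothetical): keep the models maximal under some rank function.
rank = {"I1": 0, "I2": 0, "I3": 1, "I4": 2}

def lp(ms):
    best = max(rank[m] for m in ms)
    return {m for m in ms if rank[m] == best}

print(globally_preferred({"I1", "I2", "I3", "I4"}, lp))  # prints {'I4'}
```

Because each iteration either strictly shrinks the set or stops, the loop mirrors the descending chain M0 ⊇ M1 ⊇ . . . used in the proof above.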
      <p>By considering the globally preferred models as those preferred tout court in Definition 9 of the
PKB models, we now have a new preference order which allows us to reach the desired conclusion in
cases like those in Example 3 above. To illustrate that this is the case and how the new method works,
we can apply the new preference to the example above:
Example 4. Consider the knowledge base presented in Example 3; we have the same four exception
assumption sets χ1, χ2, χ3 and χ4. So, now we can start applying the new preference order: we have the
elements of our set M0 = ℳ = {ℐ1, ℐ2, ℐ3, ℐ4}.</p>
      <p>Then, we can compute the normalised typicality scores, but now each justified χ-model will have its own set of
typicality scores thanks to Definition 12. Respectively, for normW(b), normHA(b), normD(b) and normWA(b):
ℐ1: 0.8, 0.4, 0.7, 0.8;
ℐ2: 0.8, 0.4, 0.7, 0.8;
ℐ3: 0.8, 0.4, 1, 0.8;
ℐ4: 0.8, 0.4, 1, 0.8.</p>
      <p>Now we can apply the new definition of preference, which will compare only the stable scores. In this case
the stable scores are normW(b) and normWA(b). Note that χ3 and χ4 assume that Balto is exceptional
with respect to wolves being not trusted and the stable score with respect to the prototype W is smaller
than that of the prototype D. Therefore, the locally preferred models are ℐ3 and ℐ4, or, in other words,
LP(M0) = M1 = {ℐ3, ℐ4}.</p>
      <p>In the next step, we have to select the locally preferred justified χ-models, but in the new set M1. So,
now, we compare also normD(b) and normHA(b), which are stable normalised scores in M1, and we
have χ4 &gt; χ3. Therefore, LP(M1) = M2 = {ℐ4}.</p>
      <p>Again, we search for the preferred models in S2. In this case, the preferred model is the only one
in the set, since it is trivially true that no other model in the set is preferred to it. So
the selection yields S3 = {ℐ4} = S2, which means that S2 is our fixed point.</p>
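The whole walk-through above can be sketched as a toy program. All concrete names and numbers below (the model labels, the Dog/Wolf prototypes, the scores, and the stub deciding which scores are stable at each stage) are illustrative assumptions, not the paper's Definition 12; the point is only the shape of the iteration: select the locally preferred models, then repeat inside the smaller set until nothing changes.

```python
from itertools import count

# Hypothetical data: four justified models, each with its own normalised
# typicality score per prototype (all names and values are invented).
SCORES = {
    "I1": {"Dog": 0.4, "Wolf": 0.8},
    "I2": {"Dog": 0.4, "Wolf": 0.8},
    "I3": {"Dog": 0.8, "Wolf": 0.4},
    "I4": {"Dog": 0.8, "Wolf": 0.5},
}

def stable_prototypes(stage):
    """Stub standing in for Definition 12: which prototype scores are
    stable, and hence comparable, at each stage (assumed, not derived)."""
    return ["Dog"] if stage == 0 else ["Dog", "Wolf"]

def locally_preferred(models, stage):
    """Keep the models whose total over the stable scores is maximal."""
    total = lambda m: sum(SCORES[m][p] for p in stable_prototypes(stage))
    best = max(total(m) for m in models)
    return {m for m in models if total(m) == best}

# Iterate the selection until it reaches a fixed point, as in Example 4:
# {I1, I2, I3, I4} -> {I3, I4} -> {I4} -> {I4} (fixed point).
S = set(SCORES)
for stage in count():
    nxt = locally_preferred(S, stage)
    if nxt == S:
        break
    S = nxt
```

With these invented scores the single survivor is `"I4"`, mirroring the role ℐ4 plays in the example.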
      <p>Thus, we can conclude that ℐ4 is the globally preferred justified model and therefore the PKB model
of K, as expected.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and Future Work</title>
      <p>
        In this paper, we built on the work in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] by refining the terminology, simplifying the formal setting,
and by proposing a new preference mechanism that allows us to overcome certain shortcomings of the
previous one.
      </p>
      <p>
        Regarding future work, we would like to further develop the formalism through a relaxation of some
of the requirements found in the current version, such as the restriction that the individuals occurring
in the exception assumptions must be named entities in the knowledge base. Moreover, we will
study the formal properties enjoyed by the resulting logic in order to compare it with other approaches
which use some notion of typicality in DLs for defeasible reasoning such as [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. A logical analysis,
comprising a consistency proof and a discussion of the coherence between the knowledge base and
the weights of the features, is also needed, along with a study of the computational costs. Finally, we will
propose an implementation of our approach in the framework of Answer Set Programming.
Acknowledgements. This work was partially supported by the GULP - Gruppo Ricercatori e Utenti
Logic Programming. The support is gratefully acknowledged.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Strasser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Antonelli</surname>
          </string-name>
          ,
          <article-title>Non-monotonic Logic</article-title>
          , in: E. N.
          <string-name>
            <surname>Zalta</surname>
          </string-name>
          (Ed.),
          <source>The Stanford Encyclopedia of Philosophy</source>
          , Summer 2019 ed., Metaphysics Research Lab, Stanford University,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <source>Some Philosophical Problems from the Standpoint of Artificial Intelligence</source>
          , Morgan Kaufmann Publishers Inc., San Francisco, CA, USA,
          <year>1987</year>
          , p.
          <fpage>26</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , G. Pozzato,
          <article-title>Semantic characterization of rational closure: From propositional logic to description logics</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>226</volume>
          (
          <year>2015</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0004370215000673. doi:10.1016/j.artint.2015.05.001.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          , G. Casini, T. Meyer, K. Moodley,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sattler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Varzinczak</surname>
          </string-name>
          ,
          <article-title>Principles of KLM-style defeasible description logics</article-title>
          ,
          <source>ACM Trans. Comput. Logic</source>
          <volume>22</volume>
          (
          <year>2020</year>
          ). URL: https://doi.org/10.1145/3420258. doi:10.1145/3420258.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Sacco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bozzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Defeasible reasoning with prototype descriptions: First steps</article-title>
          ,
          <source>in: Proceedings of the 36th International Workshop on Description Logics (DL</source>
          <year>2023</year>
          ), volume
          <volume>3515</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Sacco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bozzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Generics in defeasible reasoning: exceptionality, gradability, and content sensitivity</article-title>
          ,
          <year>2023</year>
          . Accepted at 7th CAOS Workshop 'Cognition and Ontologies',
          <source>9th Joint Ontology Workshops (JOWO</source>
          <year>2023</year>
          ), co-located
          <source>with FOIS</source>
          <year>2023</year>
          ,
          <volume>19</volume>
          -
          <issue>20</issue>
          <year>July</year>
          ,
          <year>2023</year>
          , Sherbrooke, Québec, Canada.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Righetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <article-title>Perceptron connectives in knowledge representation</article-title>
          , in:
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Keet</surname>
          </string-name>
          , M. Dumontier (Eds.),
          <source>Knowledge Engineering and Knowledge Management</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          , pp.
          <fpage>183</fpage>
          -
          <lpage>193</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          , G. Righetti,
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Masolo</surname>
          </string-name>
          ,
          <article-title>A toothful of concepts: Towards a theory of weighted concept combination</article-title>
          ,
          <source>in: Description Logics</source>
          , volume
          <volume>2373</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Sacco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bozzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Introducing weighted prototypes in description logics for defeasible reasoning</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Dovier</surname>
          </string-name>
          (Ed.),
          <source>Proceedings of the 38th Italian Conference on Computational Logic, CEUR Workshop Proceedings</source>
          , Udine, Italy,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>L.</given-names>
            <surname>Bozzato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          , L. Serafini,
          <article-title>Enhancing context knowledge repositories with justifiable exceptions</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>257</volume>
          (
          <year>2018</year>
          )
          <fpage>72</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Theseider Dupré</surname>
          </string-name>
          ,
          <article-title>Weighted defeasible knowledge bases and a multipreference semantics for a deep neural network model</article-title>
          ,
          <source>in: Logics in Artificial Intelligence: 17th European Conference, JELIA</source>
          <year>2021</year>
          , Virtual Event, May 17-20,
          <year>2021</year>
          , Proceedings 17, Springer,
          <year>2021</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>242</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Britz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Heidema</surname>
          </string-name>
          , T. Meyer,
          <article-title>Modelling object typicality in description logics</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Nicholson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          (Eds.),
          <source>AI 2009: Advances in Artificial Intelligence</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2009</year>
          , pp.
          <fpage>506</fpage>
          -
          <lpage>516</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>