<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Journal of Exper</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1017/S1471068422000163</article-id>
      <title-group>
        <article-title>Introducing Weighted Prototypes in Description Logics for Defeasible Reasoning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriele Sacco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loris Bozzato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oliver Kutz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Fondazione Bruno Kessler</institution>
          ,
          <addr-line>Via Sommarive 18, 38123 Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>Piazza Domenicani 3, 39100, Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>2776</volume>
      <fpage>21</fpage>
      <lpage>23</lpage>
      <abstract>
<p>The representation of defeasible information in Description Logics is a well-known issue and many formal approaches have been proposed, mostly emerging from existing formalisms in non-monotonic logic. However, in these proposals little attention has been devoted to studying their capability to capture the interpretation of typicality and exceptions from an ontological and cognitive point of view. In this regard, we are currently studying defeasible reasoning as discussed in the linguistic and cognitive literature in order to understand the important desiderata of defeasibility in commonsense reasoning. In this paper, we provide an initial formalisation of a defeasible semantics for description logics which aims at fulfilling such desiderata. The solution is based on the idea of weighted prototypes, a new form of perceptron operator which is used to represent a notion of graded typicality of concept instances.</p>
      </abstract>
      <kwd-group>
<kwd>Description Logics</kwd>
        <kwd>Weighted Logics</kwd>
        <kwd>Perceptron Operators</kwd>
        <kwd>Defeasible Reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Considering logic-based ontology representation languages, many proposals for defining defeasibility and typicality have been formalised in Description Logics (DLs): as a matter of fact, most of them emerge from existing approaches in non-monotonic logics, as in [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. On the other hand, little attention has been devoted to studying the capability of these approaches to capture the interpretation of typicality and exceptions from the point of view of formal ontology and cognitive aspects. Consequently, the philosophical and cognitive assumptions behind this kind of reasoning are often overlooked and need a committed discussion in order to understand the capabilities of the current approaches.
      </p>
      <p>
        Considering this, we recently initiated this discussion with an analysis of generics [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
sentences reporting a regularity regarding particular facts that can be generalised but tolerate exceptions. Our analysis (presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) highlighted three desiderata for non-monotonic reasoning:
D1. Exceptionality: generics and non-monotonic reasoning both admit exceptions, and much of the effort in the research has been dedicated to explaining and modelling how exceptions can be tolerated. We think that another important aspect that should be considered is why something is an exception, i.e. how to include in the formal representation also the justification or explanation of why an instance is considered exceptional or not.
D2. Gradability: normality is a graded notion in the case of typicality; for example, instead of typical individuals and atypical ones with respect to some concept, we have more or less typical individuals. For instance, it would not be possible to divide wolves into typical wolves and atypical ones in absolute terms; rather, there would be wolves that are more or less typical according to the specific features of each individual.
      </p>
      <p>D3. Content sensitivity: non-monotonic reasoning cannot be modelled by using only an extensional approach. This means that we cannot rely on purely extensional semantics, i.e. seeing the relation among concepts only in terms of relationships between sets. We need to take into account the semantics of the concepts involved in a broader sense, for example by relying on notions like typicality and saliency. The intuition here is that, to explain why an individual is exceptional, one would need some insight into the meaning (or, the content) of the statements of which the individual is an exception.</p>
      <p>According to these desiderata, in this paper we sketch a new formal account for non-monotonic reasoning in DLs based on a graded reading of typicality. Intuitively, in the case of a conflict between two facts about an individual, we can decide which one should be accepted according to how typical the individual in question is w.r.t. such facts. For example: we know that dogs are trusted, whereas wolves are not; we also know that Balto is a wolfdog hybrid; we can now ask, should we infer that Balto is trusted or not? In our approach, we want to use the additional information we have about Balto being a dog and Balto being a wolf to see whether he is a more typical instance of a dog or of a wolf and, according to this, infer whether he is trusted or not.</p>
      <p>
        More specifically, our approach is based on two main elements: prototype definitions and a typicality score. Prototype definitions are inspired by the prototype theory of concepts [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and its representation based on the tooth operator as introduced, for example, in [6]. According to the endorsers of the prototype theory of concepts, being a member of a concept does not mean satisfying a precise definition, but rather satisfying enough features or constituents of that concept [7]. The second key element is the typicality score for individuals: this is calculated by inspecting to what extent the individual satisfies the features of the prototype. The aim of the score is to measure how typical the individual is with respect to the prototype considered: in case of a conflict on prototype-related properties, the score provides a preference determining which conclusion should prevail for that specific individual.
      </p>
      <p>We remark that the current presentation of the formalisation is still an initial proposal and
includes some constraints to simplify its exposition: some of the possible refinements and
extensions are briefly discussed in the conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. DLs with Weighted Prototypes</title>
      <p>On the basis of the idea above, we distinguish two parts in our knowledge bases: the actual DL knowledge base, which represents the knowledge of interest and can contain defeasible axioms and information about features of individuals, and a separate set containing prototype definitions. In the following we sketch a proposal for a syntax and semantics of such enriched KBs.</p>
      <sec id="sec-2-1">
        <title>2.1. Syntax</title>
        <p>The following definitions are independent from the DL language used for representing the main knowledge base: we consider a fixed concept language ℒΣ (such as, for example, 𝒜ℒ𝒞) based on a DL signature Σ with disjoint and non-empty sets NC of concept names, NR of role names, and NI of individual names. Furthermore, we identify a subset of the concept names as denoting prototype names by assuming a subset NP ⊆ NC, and a set of feature names NF ⊆ NC with NP ∩ NF = ∅.</p>
        <p>Definition 1 (Features). A basic feature is a concept name F ∈ NF. A general feature is a complex concept in language ℒΣ using only basic features as concept names.</p>
        <p>For simplicity, we call general concepts the concepts composed only of concept names in NC ∖ (NP ∪ NF).</p>
        <p>The features associated with prototypes, together with the degree of their importance, are given in prototype definitions.¹ In particular, to allow for a direct comparison across prototype scores, we here constrain the weights of features to be in the [0, 1] interval and to add up to 1, i.e. prototypes are positive and normalised.</p>
        <p>Definition 2 (Positive normalised prototype definition). Let P ∈ NP be a prototype name, let F1, . . . , Fn be general features of ℒΣ and let w = (w1, . . . , wn) ∈ Rⁿ be a weight vector, where for every i ∈ {1, . . . , n} we have wi ∈ (0, 1] and ∑ i∈{1,...,n} wi = 1. Then, the expression
P(F1 : w1, . . . , Fn : wn)
is called a prototype definition for P.</p>
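        <p>As an illustration, the constraints of Definition 2 can be checked mechanically. The following Python sketch is our own illustration (the dict-based encoding of a prototype definition as feature-to-weight pairs is an assumption, not part of the paper); it verifies that every weight lies in (0, 1] and that the weights sum to 1:</p>
        <preformat>
```python
import math

def is_positive_normalised(prototype_def):
    """Check the constraints of Definition 2 on a prototype definition,
    represented here (our assumption) as a dict {feature_name: weight}."""
    weights = list(prototype_def.values())
    if not weights:
        return False
    # every weight must lie in the half-open interval (0, 1]
    if any(not (0 < w <= 1) for w in weights):
        return False
    # the weights must add up to 1 (up to floating-point tolerance)
    return math.isclose(sum(weights), 1.0)

# The Wolf prototype used later in Example 1:
wolf = {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4}
print(is_positive_normalised(wolf))        # True
print(is_positive_normalised({"f": 0.5}))  # False: weights sum to 0.5
```
        </preformat>
        <p>The tolerance of math.isclose absorbs floating-point rounding in the sum of weights.</p>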
        <p>In the knowledge part of the KB, we can use prototype names in DL axioms to describe properties of the members of such classes. Here we consider the case in which prototype names are only used as primitive concepts on the left-hand side of concept inclusions.</p>
        <p>In particular, we call a concept inclusion of the type P ⊑ C a prototype axiom if P ∈ NP and C is a (possibly general) concept of ℒΣ. Intuitively, these axioms are not absolute and can be “overridden” by prototype instances (cf. defeasible axioms in [8]), also depending on the “degree of membership” of the individual in the given prototype (i.e., the satisfaction of its features). Prototype axioms can be seen as corresponding to generic sentences, since they express generalisations that admit exceptions. Such exceptions can thus override the truth of a prototype axiom for that specific individual.</p>
        <p>As noted above, we consider knowledge bases which can contain prototype axioms and which are enriched with an accessory KB, the PBox 𝒫, providing prototype definitions.</p>
        <sec id="sec-2-1-1">
          <title>Definition 3 (Prototyped Knowledge Base, PKB).</title>
          <p>A prototyped knowledge base, PKB for short, in language ℒΣ is a triple K = ⟨𝒯, 𝒜, 𝒫⟩ where:
– 𝒯 = 𝒯P ⊎ 𝒯G is a DL TBox consisting of concept inclusion axioms of the form C ⊑ D and partitioned into the disjoint sets 𝒯P of prototype axioms and 𝒯G of general concept inclusions based on arbitrary concepts;
– 𝒜 = 𝒜P ⊎ 𝒜F ⊎ 𝒜G is a set of ABox assertions of the form C(a), where a ∈ NI is an individual name, and partitioned into the disjoint sets 𝒜P of prototype assertions (where C ∈ NP), 𝒜F of basic feature assertions (where C ∈ NF) and 𝒜G of general assertions (where C is a general concept);
– 𝒫 is a set of prototype definitions, exactly one for each prototype name P ∈ NP appearing in the prototype TBox 𝒯P.
¹Note that this definition of prototypes is similar to the definition of concepts by the tooth operator defined in [6].</p>
          <p>Note that a PKB ⟨ , , ∅⟩ can be seen as a standard DL knowledge base.</p>
          <p>Example 1. We can now represent the example described in the introduction as a prototyped knowledge base K = ⟨𝒯, 𝒜, 𝒫⟩ as follows:</p>
          <p>𝒯 = 𝒯P = { Dog ⊑ Trusted, Wolf ⊑ ¬Trusted },
𝒜 = { Dog(balto), Wolf(balto), Dog(pluto), Wolf(alberto),
livesInWoods(balto), hasLegs(balto), isTamed(balto),
hasCollar(pluto), hasLegs(pluto), isTamed(pluto),
hasLegs(alberto), Hunts(alberto) },
𝒫 = { Wolf(livesInWoods : 0.3, hasLegs : 0.1, livesInPack : 0.2, Hunts : 0.4),
Dog(hasCollar : 0.3, livesInHouse : 0.2, hasLegs : 0.1, isTamed : 0.4) }</p>
          <p>Below we will construct a semantics for this kind of PKB which entails and justifies the conclusion that balto is a trusted dog which is also a wolf, without being inconsistent. Note that in the case of the instances pluto and alberto no contradiction arises, thus we want the axioms in 𝒯 to be applied to them normally. ◇</p>
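          <p>To make the shape of a PKB concrete, the triple K = ⟨𝒯, 𝒜, 𝒫⟩ of Example 1 can be written down as plain data. The encoding below is our own illustration; the paper prescribes no concrete syntax, and the ("not", C) convention for negated concepts is an assumption:</p>
          <preformat>
```python
# A PKB K = <T, A, P> for Example 1, encoded as plain Python data.
# Axioms "P ⊑ C" are pairs; "¬C" is written as ("not", C) (our convention).
T_P = [("Dog", "Trusted"), ("Wolf", ("not", "Trusted"))]  # prototype axioms
T_G = []                                                  # general inclusions

A = {  # ABox: individual -> set of asserted concept/feature names
    "balto":   {"Dog", "Wolf", "livesInWoods", "hasLegs", "isTamed"},
    "pluto":   {"Dog", "hasCollar", "hasLegs", "isTamed"},
    "alberto": {"Wolf", "hasLegs", "Hunts"},
}

P = {  # PBox: exactly one prototype definition per prototype name
    "Wolf": {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4},
    "Dog":  {"hasCollar": 0.3, "livesInHouse": 0.2, "hasLegs": 0.1, "isTamed": 0.4},
}

# Sanity check from Definition 3: one definition per prototype name in T_P
prototype_names = {p for (p, _) in T_P}
assert prototype_names == set(P)
```
          </preformat>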
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Semantics</title>
        <p>The semantics of PKBs is based on standard interpretations for the underlying DL ℒΣ. However, we need to introduce additional semantic structure to manage exceptions to prototype axioms, exploiting the prototype definition expressions in 𝒫.</p>
        <p>Definition 4 (PKB interpretations). A PKB interpretation is a description logic interpretation ℐ = ⟨Δℐ, ·ℐ⟩ for signature Σ with a non-empty domain Δℐ, where aℐ ∈ Δℐ for every a ∈ NI, Aℐ ⊆ Δℐ for every A ∈ NC, rℐ ⊆ Δℐ × Δℐ for every r ∈ NR, and where the extension of complex concepts is defined recursively as usual for language ℒΣ.</p>
        <p>Note that we are not giving a DL interpretation to the prototype definition expressions in 𝒫.</p>
        <p>We consider the notion of axiom instantiation and clashing assumptions as defined in [8]. Given an axiom α ∈ ℒΣ with FO-translation ∀x.φ(x), the instantiation of α with a tuple e of individuals in NI, written α(e), is the specialisation of α to e, i.e., φ(e), depending on the type of α.</p>
        <p>Definition 5 (Clashing assumptions and clashing sets). A clashing assumption is a pair ⟨α, e⟩ such that α(e) is an axiom instantiation of α, and α ∈ 𝒯P is a prototype axiom.</p>
        <p>A clashing set for ⟨α, e⟩ is a satisfiable set S of ABox assertions s.t. S ∪ {α(e)} is unsatisfiable.
Intuitively, a clashing assumption ⟨P ⊑ C, a⟩ states that we assume that a is an exception to the prototype axiom P ⊑ C in a given PKB interpretation. Then, the fact that a clashing set S for ⟨P ⊑ C, a⟩ is verified by such an interpretation gives a “justification” of the validity of the assumption of overriding. This intuition is reflected in the definition of models: we first extend PKB interpretations with a set of clashing assumptions.</p>
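        <p>For prototype axioms of the simple form P ⊑ C, a clashing set can be recognised syntactically: it must assert the body of the axiom and the complement of its head for the individual. The following sketch is our own approximation, treating concepts as atomic literals rather than full DL concepts:</p>
        <preformat>
```python
def complement(concept):
    # literal negation on atomic concepts (our simplification)
    return concept[4:] if concept.startswith("not ") else "not " + concept

def is_clashing_set(assertions, axiom, individual):
    """Check that a set of ABox assertions {(C, a), ...} is unsatisfiable
    together with the instantiation of the axiom P ⊑ C at the individual:
    it must assert P(a) and the complement of C(a)."""
    body, head = axiom
    return ((body, individual) in assertions
            and (complement(head), individual) in assertions)

# The clashing set for <Wolf ⊑ ¬Trusted, balto> used later in Example 2:
S = {("Wolf", "balto"), ("Trusted", "balto")}
print(is_clashing_set(S, ("Wolf", "not Trusted"), "balto"))  # True
```
        </preformat>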
        <p>Definition 6 (CAS-interpretation). A CAS-interpretation is a structure ℐCAS = ⟨ℐ, χ⟩ where ℐ is a PKB interpretation and χ is a set of clashing assumptions.</p>
        <p>Then, CAS-models for a PKB K are CAS-interpretations that verify “strict” axioms in 𝒯G and defeasibly apply prototype axioms in 𝒯P (excluding the exceptional instances in χ).</p>
        <p>Definition 7 (CAS-model). Given a PKB K, a CAS-interpretation ℐCAS = ⟨ℐ, χ⟩ is a CAS-model for K (denoted ℐCAS |= K), if the following holds:
(i) for every α ∈ 𝒯G ∪ 𝒜 of ℒΣ, ℐ |= α;
(ii) for every α = P ⊑ C ∈ 𝒯P and a ∈ NI, if ⟨α, a⟩ ∉ χ, then ℐ |= α(a).</p>
        <p>Two DL interpretations ℐ1 and ℐ2 are NI-congruent if aℐ1 = aℐ2 holds for every a ∈ NI. This extends to CAS-interpretations ℐCAS = ⟨ℐ, χ⟩ by considering the PKB interpretations ℐ. Intuitively, we say that a CAS-interpretation is justified if all of its clashing assumptions admit a clashing set that is verified by the interpretation.</p>
        <p>Definition 8 (Justifications). We say that ⟨α, e⟩ ∈ χ is justified for a CAS-model ℐCAS, if some clashing set S⟨α,e⟩ exists such that, for every CAS-model ℐ′CAS = ⟨ℐ′, χ⟩ of K that is NI-congruent with ℐCAS, it holds that ℐ′ |= S⟨α,e⟩. A CAS-model ℐCAS of a PKB K is justified, if every ⟨α, e⟩ ∈ χ is justified in K.</p>
        <p>We define the consequence from justified CAS-models: K |= α if ℐCAS |= α for every justified CAS-model ℐCAS of K.</p>
        <p>The main intuition of prototype definitions is that each member of a prototype is associated with a score which denotes the “degree of typicality” of the instance with respect to the concept described by the prototype. As in [6], such a degree is computed from the prototype features that are satisfied by the instances and their score. Ideally, the prototype score of an individual allows us to determine a preference over models: axioms on prototypes with higher score are preferred to the ones on lower-scoring prototypes. Formally, a simple score function can be defined as follows:
Definition 9 (Prototype score). Given a prototype definition P(F1 : w1, . . . , Fn : wn), we define the score function scoreP : NI → [0, 1] for prototype P as:
scoreP(a) = ∑ { wi : i ∈ {1, . . . , n} and K |= Fi(a) }</p>
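        <p>Under the simplifying assumption that the entailment K |= Fi(a) is approximated by looking up feature assertions directly in the ABox (a deliberate simplification of DL reasoning), the score function of Definition 9 can be sketched as follows:</p>
        <preformat>
```python
def score(prototype_def, individual_assertions):
    """Prototype score of Definition 9: sum the weights of the features
    that the individual satisfies. Entailment K |= F_i(a) is approximated
    here (our simplification) by membership in the individual's assertions."""
    return sum(w for feature, w in prototype_def.items()
               if feature in individual_assertions)

# Example 1 data: balto's feature assertions and the two prototype definitions
balto = {"livesInWoods", "hasLegs", "isTamed"}
wolf = {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4}
dog  = {"hasCollar": 0.3, "livesInHouse": 0.2, "hasLegs": 0.1, "isTamed": 0.4}

print(round(score(wolf, balto), 2))  # 0.4  (livesInWoods + hasLegs)
print(round(score(dog, balto), 2))   # 0.5  (hasLegs + isTamed)
```
        </preformat>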
        <p>The scoring function can then be used to define preferences over models: in particular, we want to prefer justified CAS-models where the exceptions appear on elements of the lower-scoring prototypes. This can be encoded as follows:
Definition 10 (Preference SP). χ1 &gt; χ2 if, for every ⟨P ⊑ C, a⟩ ∈ χ1 ∖ χ2 such that there exists a ⟨Q ⊑ D, a⟩ ∈ χ2 ∖ χ1, it holds that scoreP(a) &lt; scoreQ(a).</p>
        <p>Given two CAS-interpretations ℐ¹CAS = ⟨ℐ1, χ1⟩ and ℐ²CAS = ⟨ℐ2, χ2⟩, we say that ℐ¹CAS is preferred to ℐ²CAS (denoted ℐ¹CAS &gt; ℐ²CAS) if χ1 &gt; χ2.</p>
        <p>Finally, we define the notion of PKB model as a minimal justified model for the PKB.</p>
        <sec id="sec-2-2-1">
          <title>Definition 11 (PKB model).</title>
          <p>An interpretation ℐ is a PKB model of K (denoted ℐ |= K) if:
– K has some justified CAS-model ℐCAS = ⟨ℐ, χ⟩;
– there exists no justified ℐ′CAS = ⟨ℐ′, χ′⟩ that is preferred to ℐCAS.</p>
          <p>The consequence from PKB models of K (denoted K |= α) allows us to use the degree of typicality of instances to verify which of the conflicting prototype axioms should apply.</p>
          <p>Example 2. Considering the PKB reported in the example above, assume we have two PKB interpretations ℐ1 and ℐ2 associated respectively with the following two sets of clashing assumptions χ1 = {⟨Wolf ⊑ ¬Trusted, balto⟩} and χ2 = {⟨Dog ⊑ Trusted, balto⟩}.
We have now two CAS-interpretations corresponding to ⟨ℐ1, χ1⟩ and ⟨ℐ2, χ2⟩. Assuming that they are also CAS-models, we can check whether the two are also justified. Since the clashing assumptions have the following clashing sets, respectively {Wolf(balto), Trusted(balto)} for the clashing assumption in χ1 and {Dog(balto), ¬Trusted(balto)} for that in χ2, they are both justified.
In order to decide which model is preferred, we need to compute the prototype scores for balto: we have scoreWolf(balto) = 0.4 and scoreDog(balto) = 0.5, consequently scoreWolf(balto) &lt; scoreDog(balto) and χ1 &gt; χ2. This means that the preferred model, i.e. the only PKB model, is ℐ1, where balto is an exception to Wolf ⊑ ¬Trusted. Consequently, it holds that K |= Trusted(balto).</p>
          <p>Moreover, we can note that for pluto and alberto we can standardly infer Trusted(pluto) and ¬Trusted(alberto). The reason is that clashing assumptions refer to specific individuals, and since there are no contradicting assertions for pluto and alberto, there are no clashing sets that would justify assuming them as exceptions. Therefore, the axioms in 𝒯 apply to them in the standard way. ◇</p>
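          <p>The resolution carried out in Example 2 can be replayed computationally. The sketch below is our own illustration (again approximating entailment by ABox lookup): between two conflicting prototype axioms it keeps the one whose prototype scores higher for the individual, so that the exception falls on the lower-scoring prototype, as in Definition 10:</p>
          <preformat>
```python
def score(prototype_def, assertions):
    # Definition 9, with K |= F_i(a) approximated by ABox membership
    return sum(w for f, w in prototype_def.items() if f in assertions)

def resolve_conflict(axioms, assertions, pbox):
    """Among conflicting prototype axioms (prototype, conclusion) about an
    individual, keep the axiom of the highest-scoring prototype: the
    exception then falls on the lower-scoring one (Definition 10)."""
    return max(axioms, key=lambda ax: score(pbox[ax[0]], assertions))

pbox = {
    "Wolf": {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4},
    "Dog":  {"hasCollar": 0.3, "livesInHouse": 0.2, "hasLegs": 0.1, "isTamed": 0.4},
}
balto = {"livesInWoods", "hasLegs", "isTamed"}  # balto's feature assertions

winner = resolve_conflict([("Dog", "Trusted"), ("Wolf", "not Trusted")],
                          balto, pbox)
print(winner)  # ('Dog', 'Trusted'): balto is the more typical dog
```
          </preformat>
          <p>Since scoreDog(balto) = 0.5 exceeds scoreWolf(balto) = 0.4, the axiom Dog ⊑ Trusted is retained and balto is treated as an exception to Wolf ⊑ ¬Trusted, matching the conclusion K |= Trusted(balto) of Example 2.</p>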
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Discussion and Conclusions</title>
      <p>We presented an initial formalisation for a non-monotonic extension of DLs with the aim of satisfying three desiderata extracted from a critical discussion on generics and the prototype theory of concepts. We note that our formalism meets the desiderata: (D1) the formalisation is based on the idea that we need to justify an exception to an axiom by looking at how typical the individual is: in other words, we use typicality to decide with respect to which of the conflicting axioms (which correspond to generics) the individual is an exception; (D2) we are using a graded notion of typicality: we do not simply have typical and atypical individuals, but we compute a score which is comparable across prototypes; (D3) the notion of typicality that we introduce is not extensional: by using the scores to represent it, we rely on a characteristic which goes beyond an extensional set-theoretic treatment.</p>
      <p>In future work, we want to extend the cognitive and ontological study of exceptions, also by comparing it with other accounts of typicality and defeasibility in DLs. Regarding our formalisation, we need to explore and refine the formal consequences of our approach in greater detail. In particular, we need to discuss the best options for computing the scores in order to have a balanced score for every prototype, and how to extend this computation to roles, possibly following some of the ideas outlined in [9, 10], where novel tooth-operators for role-successor counting are studied. The preference relation can also be refined: for example, comparisons on clashing assumptions can be restricted to the axioms that are actually incompatible. We also need to understand better how to allow for more interaction between concepts used for prototypes and features, for example by allowing nested definitions of prototypes, using prototype concepts as features, and computing scores with defeasible features.</p>
      <p>Finally, we need an extensive comparison with related works. On the one hand, we will compare our approach with existing formalisms for defeasible reasoning in DLs like [11, 12]. Of particular interest for this purpose are formalisms using weights and having a multi-preferential relation over the individuals with respect to the concepts they are instances of, as, for instance, [13, 14]. On the other hand, we will also analyse works that share our approach of giving a central role to results from cognitive science and philosophy in developing formal systems in the field of knowledge representation, in particular using the language of DLs. Examples of such works, with a particular interest in the notion of typicality, are [15, 16].</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] L. Giordano, V. Gliozzi, A. Lieto, N. Olivetti, G. L. Pozzato, Reasoning about typicality and probabilities in preferential description logics, 2020. URL: https://arxiv.org/abs/2004.09507. doi:10.48550/ARXIV.2004.09507.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] K. Britz, J. Heidema, T. Meyer, Modelling object typicality in description logics, in: A. Nicholson, X. Li (Eds.), AI 2009: Advances in Artificial Intelligence, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009, pp. 506-516.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] S.-J. Leslie, Generics: Cognition and acquisition, Philosophical Review 117 (2008) 1-47.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] G. Sacco, L. Bozzato, O. Kutz, Generics in defeasible reasoning: Exceptionality, gradability, and content sensitivity, 2023. 7th CAOS Workshop 'Cognition and Ontologies', 9th Joint Ontology Workshops (JOWO 2023), co-located with FOIS 2023, 19-20 July 2023, Sherbrooke, Québec, Canada.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] J. A. Hampton, Concepts as prototypes, volume 46 of Psychology of Learning and Motivation, Academic Press, 2006, pp. 79-113.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>