<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.29007/vdlq</article-id>
      <title-group>
        <article-title>Defeasible Reasoning with Prototype Descriptions: First Steps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriele Sacco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loris Bozzato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oliver Kutz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Fondazione Bruno Kessler</institution>
          ,
          <addr-line>Via Sommarive 18, 38123 Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>Piazza Domenicani 3, 39100, Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>46</volume>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
<p>The representation of defeasible information in Description Logics is a well-known issue, and many formal approaches have been proposed, mostly emerging from existing formalisms in non-monotonic logic. However, in these proposals, little attention has been devoted to studying their capability of capturing the interpretation of typicality and exceptions from an ontological and cognitive point of view. In this regard, we are currently studying defeasible reasoning as discussed in the linguistic and cognitive literature in order to understand the important desiderata of defeasibility in commonsense reasoning. In this paper, we provide an initial formalisation of a defeasible semantics for description logics which aims at fulfilling such desiderata. The proposal is based on combining ideas from prototype theory, weighted description logic (aka 'tooth logic'), and earlier work on justifiable exceptions. The introduced weighted prototypes are normalised with respect to a given knowledge base, and the resulting normalised weights are used to compute a typicality score for an individual. This machinery is then used to determine exceptions in the case of conflicting axioms.</p>
      </abstract>
      <kwd-group>
<kwd>Description Logics</kwd>
        <kwd>Weighted Logics</kwd>
        <kwd>Perceptron Operators</kwd>
        <kwd>Defeasible Reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Among logic-based ontology representation languages, Description Logics (DLs) have seen many
proposals for defining defeasibility and typicality: as a matter of fact, most
of them emerge from existing approaches in non-monotonic logics, as in [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. On the other
hand, little attention has been devoted to studying the capability of these approaches to capture
the interpretation of typicality and exceptions from the point of view of formal ontology and
cognitive science. Consequently, the philosophical and cognitive assumptions behind this kind of
reasoning are often overlooked and require a dedicated discussion in order to understand the
capabilities of the existing approaches.
      </p>
      <p>
        With this in mind, we recently initiated this discussion with an analysis of generics [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
i.e. sentences reporting a regularity regarding particular facts that can be generalised but tolerate
exceptions. Our analysis (presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) highlighted three desiderata for non-monotonic
reasoning:
D1. Exceptionality: generics and non-monotonic reasoning both admit exceptions, and much
of the research effort has been dedicated to explaining and modelling how exceptions
can be tolerated. We think that another important aspect to consider is
why something is an exception, i.e. how to also include in the formal representation the
justification or explanation of why an instance is considered exceptional or not.
D2. Gradability: normality is a graded notion in the case of typicality. For example, instead of
typical individuals and atypical ones with respect to some concept, we have more or less
typical individuals. For instance, it would not be possible to divide wolves into typical
wolves and atypical ones in absolute terms; rather, there would be wolves that are more or
less typical according to the specific features of each individual.
      </p>
      <p>
        D3. Content sensitivity: non-monotonic reasoning cannot be modelled by a purely
extensional approach. This means that we cannot rely on purely extensional semantics, i.e.
seeing the relation among concepts only in the light of relationships between sets. We
need to take into account the semantics of the concepts involved in a broader sense, for
example by relying on notions like typicality and saliency. The intuition here is that, to
explain why an individual is exceptional, one would need some insight into
the meaning (or content) of the statements of which the individual is an exception.
According to these desiderata, in this paper we sketch a new formal account of non-monotonic
reasoning in DLs based on a graded reading of typicality, extending the work recently begun in
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Intuitively, in the case of a conflict between two facts about an individual, we can decide
which one should be accepted according to how typical the individual in question is w.r.t.
such facts. For example: we know that dogs are trusted, whereas wolves are not; we also know
that Balto is a wolfdog hybrid; should we now infer that Balto is trusted
or not? In our approach, we want to use the additional information we have about Balto being a
dog and Balto being a wolf to see whether he is a more typical instance of a dog or of a wolf and,
accordingly, infer whether he is trusted or not.
      </p>
      <p>More specifically, our approach is based on two main elements: prototype definitions and a
typicality score. Prototype definitions are inspired by the prototype theory of concepts [6] and
its representation based on the tooth operator as introduced, for example, in [7]. According to
the endorsers of the prototype theory of concepts, being a member of a concept does not
mean satisfying a precise definition, but rather satisfying enough features or constituents of that
concept [8]. The second key element is the typicality score for individuals: this is calculated
by inspecting to what extent the individual satisfies the features of the prototype. The aim
of the score is to measure exactly how typical the individual is with respect to the prototype
considered: in case of a conflict on prototype-related properties, the score provides a preference
determining which conclusion should prevail for that specific individual.</p>
      <p>From a technical point of view, our work also aims at investigating the use of DL
weighted/tooth operators in the context of defeasible reasoning, as hinted at in [9]. We remark that the
current presentation of the formalisation is still an initial proposal and includes some constraints
to simplify its exposition: some of the possible refinements and extensions are briefly discussed
in the conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. DLs with Weighted Prototypes</title>
      <p>On the basis of the ideas outlined above, we distinguish two parts in our knowledge bases:
the actual DL knowledge base, which represents the knowledge of interest and can contain
defeasible axioms and information about features of individuals, and a separate set containing
prototype definitions.</p>
      <p>In the following, we outline the syntax and semantics of such enriched KBs.
2.1. Syntax: Features, Prototype Definitions, Prototype Knowledge Bases
The following definitions are independent of the DL language used for representing the main
knowledge base: we consider a fixed concept language ℒΣ (such as, for example, 𝒜ℒ𝒞) based on
a DL signature Σ with disjoint and non-empty sets NC of concept names, NR of role names,
and NI of individual names. Furthermore, we identify a subset of the concept names as denoting
prototype names by assuming a subset NP ⊆ NC and a set of feature names NF ⊆ NC with
NP ∩ NF = ∅.</p>
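      <p>As a toy illustration of this setup (all concrete names below are our own, hypothetical choices, not fixed by the paper), the disjointness requirements on the signature can be sketched in Python:</p>

```python
# Hypothetical DL signature: prototype names NP and feature names NF are
# disjoint subsets of the concept names NC (NP ∩ NF = ∅).
NC = {"Dog", "Wolf", "Trusted", "hasCollar", "hasLegs", "isTamed", "livesInWoods"}
NR = {"owns"}                          # role names (illustrative only)
NI = {"balto", "pluto", "cerberus"}    # individual names

NP = {"Dog", "Wolf"}                                       # NP ⊆ NC
NF = {"hasCollar", "hasLegs", "isTamed", "livesInWoods"}   # NF ⊆ NC

def well_formed_signature(nc, np_, nf):
    # The definition requires NP ⊆ NC, NF ⊆ NC and NP ∩ NF = ∅.
    return np_.issubset(nc) and nf.issubset(nc) and np_.isdisjoint(nf)
```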
      <p>Definition 1 (Features). A basic feature is a concept name F ∈ NF. A general feature is a
complex concept of the language ℒΣ using only basic features as concept names.</p>
      <p>For simplicity, we call general concepts the concepts composed only of concept names in NC ∖ (NP ∪ NF).</p>
      <p>The features associated with prototypes, together with the degree of their importance, are
given in prototype definitions.1
Definition 2 (Positive prototype definition). Let P ∈ NP be a prototype name, let F1, . . . , Fn
be general features of ℒΣ and let w = (w1, . . . , wn) ∈ Qⁿ be a weight vector of rational numbers,
where for every i ∈ {1, . . . , n} we have wi &gt; 0. Then, the expression</p>
      <p>P(F1 : w1, . . . , Fn : wn)
is called a (positive) prototype definition for P.</p>
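      <p>As a minimal sketch (the encoding of definitions as Python values is our own assumption, not part of the formalism), a positive prototype definition P(F1 : w1, . . . , Fn : wn) can be represented as a name together with a map from features to strictly positive rational weights:</p>

```python
from fractions import Fraction

def prototype_definition(name, features):
    """Represent a positive prototype definition P(F1 : w1, ..., Fn : wn).

    `features` maps (general) feature names to weights; Definition 2
    requires every weight to be a strictly positive rational."""
    weights = dict((f, Fraction(w)) for f, w in features.items())
    if not all(w > 0 for w in weights.values()):
        raise ValueError("prototype weights must be strictly positive")
    return (name, weights)

# The Wolf prototype of the paper's running example:
wolf = prototype_definition(
    "Wolf", {"livesInWoods": 10, "hasLegs": 4, "livesInPack": 8, "Hunts": 11})
```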
      <p>Intuitively, the weights associated with the features can then be combined to compute a score
denoting the degree of typicality of an instance w.r.t. the prototype: for the current definition,
weights are assumed to be positive and features to be independent. We use rational weights here,
which is sufficient for practical purposes. Real numbers could be allowed as well, but this would
not substantially change the formal setup; this is also the case for the related perceptron logic
[10].</p>
      <p>Note that, since some features could be mutually exclusive (e.g. the color of an apple can
be red or green, but not both), prototype definitions should not be seen as denoting a “perfect
individual”. To allow for a direct comparison across scores of different prototypes, these need
to be normalised to a common value interval, possibly with a scoring function that does not
depend on the number of features defining different prototypes.</p>
      <p>
        In an initial proposal, we simply constrained the weights of features to lie in the [0, 1] interval
and further prescribed that they add up to 1, i.e. prototypes were simply assumed to
be given as positive and normalised [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]: in the following sections, we provide instead a more
general proposal for normalising prototype scores.
1Note that this definition of prototypes is similar to the definition of concepts by the tooth operator defined in [
      </p>
      <p>In the knowledge part of the KB, we can use prototype names in DL axioms to describe
properties of the members of such classes. Here we consider the case in which prototype names
are only used as primitive concepts on the left-hand side of concept inclusions.</p>
      <p>In particular, we call a concept inclusion of the form P ⊑ C a prototype axiom if P ∈ NP
and C is a general concept of ℒΣ. Intuitively, these axioms are not absolute and can be
“overridden” by prototype instances (cf. defeasible axioms in [11]), also depending on the
“degree of membership” of the individual in the given prototype (i.e., the satisfaction of its
features). Prototype axioms can be seen as corresponding to generic sentences, since they
express generalisations that admit exceptions. Such exceptions can thus override the truth of a
prototype axiom for that specific individual.</p>
      <p>As noted above, we consider knowledge bases which can contain prototype axioms and
which are enriched with an accessory KB, the PBox P, providing prototype definitions.
Definition 3 (Prototyped Knowledge Base, PKB). A prototyped knowledge base, PKB for short,
in language ℒΣ is a triple K = ⟨T, A, P⟩ where:
– T = TP ⊎ TG ⊎ TF is a DL TBox consisting of concept inclusion axioms of the form C ⊑ D
and partitioned into the disjoint sets TP of prototype axioms, TG of general concept inclusions
based on arbitrary concepts, and TF of feature axioms, i.e. strict subsumptions regarding features;
– A = AP ⊎ AG ⊎ AF is a set of ABox assertions of the form C(a), where a ∈ NI is an individual
name, partitioned into the disjoint sets AP of prototype assertions (where C ∈ NP), AG
of general assertions (where C is a general concept) and AF of basic feature assertions (where
C ∈ NF);
– P is a set of prototype definitions, exactly one for each prototype name P ∈ NP appearing in
the prototype TBox TP.</p>
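      <p>The PBox condition of Definition 3 (exactly one prototype definition per prototype name occurring in the prototype TBox) can be sketched as a simple check, under an assumed flat encoding of inclusions as pairs and of definitions as dictionaries:</p>

```python
def pkb_pbox_ok(T_P, P_defs):
    """Check the PBox condition of Definition 3: exactly one prototype
    definition for each prototype name appearing in the prototype TBox T_P.

    T_P is a set of (lhs, rhs) concept-inclusion pairs with lhs in NP;
    P_defs maps prototype names to their weighted-feature dictionaries."""
    names_in_tbox = set(lhs for (lhs, _rhs) in T_P)
    return names_in_tbox == set(P_defs)

# The prototype TBox and PBox of the paper's running example:
T_P = {("Dog", "Trusted"), ("Wolf", "notTrusted"),
       ("Dog", "hasLegs"), ("Wolf", "hasLegs")}
P_defs = {"Wolf": {"livesInWoods": 10, "hasLegs": 4, "livesInPack": 8, "Hunts": 11},
          "Dog": {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}}
```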
      <p>Note that a PKB of the form ⟨T, A, ∅⟩ can be seen as a standard DL knowledge base.</p>
      <p>Example 1. We can now represent the example described in the introduction as a prototyped
knowledge base K = ⟨T, A, P⟩ as follows:
T = { Dog ⊑ Trusted, Wolf ⊑ ¬Trusted, Dog ⊑ hasLegs, Wolf ⊑ hasLegs },
A = { Dog(balto), Wolf(balto), Dog(pluto), Wolf(alberto), Dog(cerberus),
livesInWoods(balto), hasLegs(balto), isTamed(balto),
hasCollar(pluto), hasLegs(pluto), isTamed(pluto),
hasLegs(alberto), Hunts(alberto),
¬Trusted(cerberus) },
P = { Wolf(livesInWoods : 10, hasLegs : 4, livesInPack : 8, Hunts : 11),
Dog(hasCollar : 33, livesInHouse : 22, hasLegs : 11, isTamed : 44) }
Below we construct a semantics for this kind of PKB which entails and justifies the conclusion
that balto is a trusted dog which is also a wolf, without the KB being inconsistent, and that cerberus is
an exceptional dog with respect to the property of dogs of being trusted. Note that in the case of
the instances pluto and alberto no contradiction arises, thus we want the axioms in T to
apply to them normally. ◇</p>
      <p>2.2. Semantics for Prototype Knowledge Bases
The semantics of PKBs is based on standard interpretations for the underlying DL ℒΣ. However,
we need to introduce additional semantic structure to manage exceptions to prototype axioms,
exploiting the prototype definition expressions in P.</p>
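      <p>As a sketch under an assumed flat encoding (pairs of concept and individual names), the ABox of Example 1 already reveals which individuals fall under conflicting prototype axioms:</p>

```python
# Flat encoding of the ABox of Example 1 as (concept, individual) pairs.
A = {("Dog", "balto"), ("Wolf", "balto"), ("Dog", "pluto"), ("Wolf", "alberto"),
     ("Dog", "cerberus"),
     ("livesInWoods", "balto"), ("hasLegs", "balto"), ("isTamed", "balto"),
     ("hasCollar", "pluto"), ("hasLegs", "pluto"), ("isTamed", "pluto"),
     ("hasLegs", "alberto"), ("Hunts", "alberto"),
     ("notTrusted", "cerberus")}

dogs = set(a for (c, a) in A if c == "Dog")
wolves = set(a for (c, a) in A if c == "Wolf")
# Individuals asserted to be both Dog and Wolf: the axioms Dog ⊑ Trusted and
# Wolf ⊑ ¬Trusted clash on exactly these instances.
conflicted = dogs.intersection(wolves)
```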
      <p>Definition 4 (PKB interpretations). A PKB interpretation is a description logic interpretation
ℐ = ⟨∆ℐ, ·ℐ⟩ for the signature Σ with a non-empty domain ∆ℐ, where aℐ ∈ ∆ℐ for every a ∈ NI, Aℐ ⊆ ∆ℐ
for every A ∈ NC, rℐ ⊆ ∆ℐ × ∆ℐ for every r ∈ NR, and where the extension of complex
concepts is defined recursively as usual for the language ℒΣ.</p>
      <p>Note that we are not giving a DL interpretation to the prototype definition expressions in P.</p>
      <p>We consider the notions of axiom instantiation and clashing assumption as defined in [11]:
intuitively, for an axiom α of ℒΣ, the instantiation of α with e ∈ NI, written α(e), is the
specialisation of α to e.</p>
      <p>Definition 5 (Clashing assumptions and clashing sets). A clashing assumption is a pair ⟨α, e⟩
such that α(e) is an axiom instantiation of α, and α ∈ TP is a prototype axiom.</p>
      <p>A clashing set for ⟨α, e⟩ is a satisfiable set S of ABox assertions s.t. S ∪ {α(e)} is unsatisfiable.
Intuitively, a clashing assumption ⟨C ⊑ D, e⟩ states that we assume e to be an exception to
the prototype axiom C ⊑ D in a given PKB interpretation. Then, the fact that a clashing set S
for ⟨C ⊑ D, e⟩ is verified by such an interpretation gives a “justification” for the validity of the
overriding assumption. This intuition is reflected in the definition of models: we first extend
PKB interpretations with a set of clashing assumptions.</p>
      <p>Definition 6 (CAS-interpretation). A CAS-interpretation is a structure ℐCAS = ⟨ℐ, χ⟩ where ℐ
is a PKB interpretation and χ is a set of clashing assumptions.</p>
      <p>Then, CAS-models for a PKB K are CAS-interpretations that verify the “strict” axioms in TG,
TF and A and defeasibly apply the prototype axioms in TP (excluding the exceptional instances in χ).
Definition 7 (CAS-model). Given a PKB K, a CAS-interpretation ℐCAS = ⟨ℐ, χ⟩ is a CAS-model
for K (denoted ℐCAS |= K), if the following holds:
(i) for every α ∈ TG ∪ TF ∪ A, ℐ |= α;
(ii) for every α = C ⊑ D ∈ TP and every e ∈ NI, if ⟨α, e⟩ ∉ χ, then ℐ |= α(e).</p>
      <p>Two DL interpretations ℐ1 and ℐ2 are NI-congruent if aℐ1 = aℐ2 holds for every a ∈ NI. This
extends to CAS-interpretations ℐCAS = ⟨ℐ, χ⟩ by considering their PKB interpretations ℐ. Intuitively,
we say that a CAS-interpretation is justified if all of its clashing assumptions admit a clashing
set that is verified by the interpretation.</p>
      <p>Definition 8 (Justifications). We say that ⟨α, e⟩ ∈ χ is justified for a CAS-model ℐCAS if
some clashing set S⟨α,e⟩ exists such that, for every CAS-model ℐ′CAS = ⟨ℐ′, χ⟩ of K that is NI-congruent with
ℐCAS, it holds that ℐ′ |= S⟨α,e⟩. A CAS-model ℐCAS of a PKB K is justified if every ⟨α, e⟩ ∈ χ
is justified in K.</p>
      <p>We define consequence from justified CAS-models: K |= α if ℐCAS |= α for every
justified CAS-model ℐCAS of K.</p>
      <p>The main intuition of prototype definitions is that each instance of a prototype is associated
with a score which denotes the “degree of typicality” of the individual with respect to the
concept described by the prototype. As in [7], such a degree is computed from the prototype
features that are satisfied by the instance and from their weights. Ideally, the prototype score of an
individual allows us to determine a preference over models: axioms on prototypes with a higher
score are preferred to those on lower-scoring prototypes; thus the measure needs to be
independent of single models and comparable across different prototypes.</p>
      <p>Formally, a simple score function can be defined as follows:
Definition 9 (Prototype score). Given a prototype definition P(F1 : w1, . . . , Fn : wn), we define
the score function scoreP : NI → R for the prototype P as:</p>
      <p>scoreP(c) = ∑ { wi | 1 ≤ i ≤ n and K |= Fi(c) }</p>
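      <p>A minimal sketch of this score function, where the entailment check K |= Fi(c) is abstracted into a set of features assumed to hold for the individual (a real implementation would delegate this check to a DL reasoner):</p>

```python
def score(proto_weights, satisfied):
    """score_P(c): sum the weights w_i of exactly those features F_i of the
    prototype definition for which K |= F_i(c) holds; `satisfied` stands in
    for that entailment check."""
    return sum(w for f, w in proto_weights.items() if f in satisfied)

dog = {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}
wolf = {"livesInWoods": 10, "hasLegs": 4, "livesInPack": 8, "Hunts": 11}
balto = {"livesInWoods", "hasLegs", "isTamed"}  # features entailed for balto

score(dog, balto)    # 11 + 44 = 55, as in Example 2 below
score(wolf, balto)   # 10 + 4 = 14
```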
      <p>This measure, however, depends on the value interval over which the prototype weights
have been defined: in order to compare the score of an individual with scores relative to other
prototypes, this value needs to be normalised. We do so by computing the maximum score
smaxP and the minimum score sminP for all prototypes P. The idea for this scoring function is derived
from the so-called tooth-max operator introduced in [10] and applied, for instance, in [12] to
some cognitive modelling problems such as over-extension and dominance of features. Here,
the tooth-max is a concept description that collects all those individuals in a given model that
obtain the maximal possible sum of feature weights, that is, that realise some specific value v
which corresponds to the maximal realisable weight in this situation (different selections of
feature combinations might result in this value v). It was shown in [10] that this concept-forming
operation, when taken as a logical operator, is in fact equivalent to the universal modality,
which is known to significantly extend the expressivity of standard DLs or modal logics [13].</p>
      <p>Coming back to the specifics of how we want to compute the scoring function here: the
maximum score smaxP denotes the maximum value of scoreP obtainable from
the weights of a consistent subset of the features of P.2 Formally, given a prototype definition
P(F1 : w1, . . . , Fn : wn), let CP be the set of sets S ⊆ {F1, . . . , Fn} s.t. S ∪ K is consistent.
Then:</p>
      <p>smaxP = max( ∑ { wi | Fi ∈ S } | S ∈ CP )</p>
      <p>The minimal score sminP denotes the sum of the weights of the “unavoidable” features, namely
those that are strictly implied by membership in the prototype concept. Formally:</p>
      <p>sminP = ∑ { wi | K |= P ⊑ Fi }</p>
      <p>A normalised score function nscoreP can be derived from scoreP as:</p>
      <p>nscoreP(c) = (scoreP(c) − sminP) / smaxP</p>
      <p>2We note that the computation of the maximum score is related to the idea of the maximisation operator in [10].</p>
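      <p>These computations can be sketched as follows; the brute-force subset search and the consistency oracle are our own simplifying assumptions, and the shape of the normalisation follows the computation carried out in Example 2:</p>

```python
from itertools import combinations

def s_max(weights, consistent=lambda S: True):
    # Maximum total weight over subsets S of the features with S ∪ K
    # consistent; `consistent` stands in for a call to a DL reasoner.
    feats = list(weights)
    best = 0
    for r in range(len(feats) + 1):
        for S in combinations(feats, r):
            if consistent(set(S)):
                best = max(best, sum(weights[f] for f in S))
    return best

def s_min(weights, implied):
    # Sum of the weights of the "unavoidable" features F with K |= P ⊑ F.
    return sum(w for f, w in weights.items() if f in implied)

def nscore(raw, lo, hi):
    # Normalised score, as computed in Example 2: (score − s_min) / s_max.
    return (raw - lo) / hi

dog = {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}
# No mutually exclusive Dog features here, so s_max is the sum of all weights:
s_max(dog)                 # 110
s_min(dog, {"hasLegs"})    # 11 (Dog ⊑ hasLegs is in the TBox)
nscore(55, 11, 110)        # 0.4
```

The exponential subset search is only for illustration; with independent, positive features the maximum is simply the total weight.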
      <p>The normalised scoring function can then be used to define preferences over models: in
particular, we want to prefer justified CAS-models where the exceptions fall on instances of
the lower-scoring prototypes. This can be encoded as follows:
Definition 10 (Preference). χ1 &gt; χ2 if, for every ⟨C ⊑ D, e⟩ ∈ χ1 ∖ χ2 such that there
exists a ⟨E ⊑ F, e⟩ ∈ χ2 ∖ χ1 with K ∪ {D(e), F(e)} unsatisfiable, it holds that nscoreC(e) &lt;
nscoreE(e).</p>
      <p>The intuition behind the condition that K ∪ {D(e), F(e)} is unsatisfiable is that we want to
compare only those clashing assumptions that are directly in conflict.</p>
      <p>Given two CAS-interpretations ℐ1CAS = ⟨ℐ1, χ1⟩ and ℐ2CAS = ⟨ℐ2, χ2⟩, we say that ℐ1CAS is
preferred to ℐ2CAS (denoted ℐ1CAS &gt; ℐ2CAS) if χ1 &gt; χ2.</p>
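      <p>Definition 10 can be sketched operationally, encoding a clashing assumption ⟨C ⊑ D, e⟩ as a triple and abstracting both the direct-conflict test and the normalised scores behind oracles (the concrete values below are those of the running example):</p>

```python
def preferred(chi1, chi2, nscore, in_conflict):
    """chi1 > chi2 per Definition 10: every assumption (C, D, e) in
    chi1 minus chi2 that directly conflicts with some (E, F, e) in
    chi2 minus chi1 must sit on the strictly lower-scoring prototype."""
    for (C, D, e) in chi1 - chi2:
        for (E, F, e2) in chi2 - chi1:
            if e == e2 and in_conflict(D, F):
                if not nscore(E, e2) > nscore(C, e):
                    return False
    return True

chi1 = {("Wolf", "notTrusted", "balto"), ("Dog", "Trusted", "cerberus")}
chi2 = {("Dog", "Trusted", "balto"), ("Dog", "Trusted", "cerberus")}
scores = {("Wolf", "balto"): 10 / 33, ("Dog", "balto"): 44 / 110}

conflict = lambda d, f: {d, f} == {"Trusted", "notTrusted"}
preferred(chi1, chi2, lambda p, e: scores[(p, e)], conflict)   # True
```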
      <p>Finally, we define the notion of a PKB model as a minimal justified model for the PKB.
Definition 11 (PKB model). An interpretation ℐ is a PKB model of K (denoted ℐ |= K) if:
– K has some justified CAS-model ℐCAS = ⟨ℐ, χ⟩;
– there exists no justified ℐ′CAS = ⟨ℐ′, χ′⟩ that is preferred to ℐCAS.</p>
      <p>The consequence relation over PKB models of K (denoted K |= α) allows us to use the degree of
typicality of instances to decide which of the conflicting prototype axioms should apply.
Example 2. Considering the PKB reported in the example above, assume we have two PKB
interpretations ℐ1 and ℐ2 associated, respectively, with the following two sets of clashing assumptions:
χ1 = {⟨Wolf ⊑ ¬Trusted, balto⟩, ⟨Dog ⊑ Trusted, cerberus⟩} and</p>
      <p>χ2 = {⟨Dog ⊑ Trusted, balto⟩, ⟨Dog ⊑ Trusted, cerberus⟩}.</p>
      <p>We now have two CAS-interpretations, ⟨ℐ1, χ1⟩ and ⟨ℐ2, χ2⟩.
Assuming that they are also CAS-models, we can check whether the two are also
justified. Since the clashing assumptions have the following clashing sets, respectively
{Wolf(balto), Trusted(balto), Dog(cerberus), ¬Trusted(cerberus)} for the clashing
assumptions in χ1 and {Dog(balto), ¬Trusted(balto), Dog(cerberus), ¬Trusted(cerberus)}
for those in χ2, they are both justified.</p>
      <p>In order to decide which model is preferred, we need to compute the prototype scores
for balto and for cerberus: we have scoreWolf(balto) = 14, scoreDog(balto) = 55, and
scoreDog(cerberus) = 11. Then we need to normalise them:
nscoreDog(balto) = (scoreDog(balto) − sminDog) / smaxDog = (55 − 11) / 110 = 0.4
nscoreWolf(balto) = (scoreWolf(balto) − sminWolf) / smaxWolf = (14 − 4) / 33 ≈ 0.3
Consequently nscoreWolf(balto) &lt; nscoreDog(balto) and, since ⟨Dog ⊑ Trusted, cerberus⟩ is
present in both χ1 and χ2 and thus does not influence the preference order, we can conclude that
χ1 &gt; χ2. This means that the preferred model, i.e. the only PKB model, is ℐ1, where balto is an
exception to Wolf ⊑ ¬Trusted and cerberus is an exception to Dog ⊑ Trusted. Consequently,
it holds that K |= Trusted(balto) and K |= ¬Trusted(cerberus).</p>
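      <p>The numbers above can be reproduced end to end in a few lines, under the same simplifying assumptions as before (entailed features given directly as a set, all features jointly consistent):</p>

```python
wolf = {"livesInWoods": 10, "hasLegs": 4, "livesInPack": 8, "Hunts": 11}
dog = {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}
balto = {"livesInWoods", "hasLegs", "isTamed"}   # features entailed for balto

def nscore(weights, feats, implied):
    raw = sum(w for f, w in weights.items() if f in feats)   # score_P(balto)
    lo = sum(w for f, w in weights.items() if f in implied)  # s_min_P
    hi = sum(weights.values())   # s_max_P: all features jointly consistent here
    return (raw - lo) / hi

n_dog = nscore(dog, balto, {"hasLegs"})    # (55 − 11)/110 = 0.4
n_wolf = nscore(wolf, balto, {"hasLegs"})  # (14 − 4)/33 ≈ 0.30
# n_dog exceeds n_wolf, so the exception falls on the Wolf axiom and the
# preferred model makes balto trusted.
```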
      <p>Moreover, we note that for pluto and alberto we can infer Trusted(pluto)
and ¬Trusted(alberto) in the standard way. The reason is that clashing assumptions refer to specific
individuals and, since there are no contradicting assertions for pluto and alberto, there are no
clashing sets that would justify assuming them as exceptions. Therefore, the axioms in T apply to them
standardly. ◇</p>
      <p>We note that, just as PKB models can be related to Answer Sets (different solutions under
different assumptions for exceptions), this kind of ordering on models is akin to the Answer
Set preferences definable with weak constraints or asprin preferences. In fact, this has been
used in the implementation of CKR with justified exceptions in [14, 15].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Related Work</title>
      <p>As mentioned in the introduction, many formalisms for defeasible reasoning have already been
developed in the framework of DLs. An extensive comparison with such approaches is
out of the scope of this initial paper, but we can already draw some relations. Firstly, our work
can be compared to more “classical” approaches like [16, 17]: these approaches are inspired by
the historical work on defeasible reasoning in propositional logic presented in [18, 19], where
formal properties, known as the KLM properties, were introduced as properties that any
non-monotonic logic should satisfy.</p>
      <p>Of particular interest for our work are the formalisms developed starting from [16] which use
weights and define a multi-preferential relation over individuals with respect to the concepts
they are instances of, such as [20, 21]. The interest stems from the commonalities with
our formalism: both exploit weights and introduce preference
relations on the domain which are not absolute.</p>
      <p>Beyond works strictly about defeasibility in DLs, our approach can also be compared to
works that share our interest in using results from cognitive science and philosophy to
develop formal systems in the field of knowledge representation, in particular using the
language of DLs. Examples of such works, particularly interested in the notion of typicality,
are [22, 23].</p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and Conclusions</title>
      <p>We presented an initial formalisation of a non-monotonic extension of DLs with the goal of
satisfying three desiderata extracted from a critical discussion of generics and the prototype
theory of concepts. We note that our formalism meets the desiderata: (D1) the formalisation
is based on the idea that we need to justify an exception to an axiom by looking at how (a)typical
an individual is; in other words, we use typicality to decide to which of the conflicting axioms
(which correspond to generics) the individual is an exception. (D2) we use a graded
notion of typicality: we do not simply have typical and atypical individuals, but we compute
a score which is comparable across prototypes. (D3) the notion of typicality we introduce is
not extensional: by representing it with scores, which depend contextually on the
information in the knowledge base, we rely on a characteristic that goes beyond a static
and extensional set-theoretic treatment.</p>
      <p>In future work, we want to extend the cognitive and ontological study of exceptions sketched
here, also by comparing it in greater detail with other accounts of typicality and defeasibility in
DLs. Regarding our proposed formalisation, we need to further explore and refine the formal
consequences and properties of our approach. In particular, we need to discuss the best
options for computing the scores in order to have a balanced score for every prototype, and how
to extend this computation to roles, possibly following the line of work on counting perceptron
logic presented in [24, 25]. Here, not only may the satisfaction of a certain feature give rise to a
‘weight contribution’ in the prototype, but each instance of a role filler might individually
contribute to the overall weight. Moreover, different readings of the weights could also give
rise to alternative score functions; in particular, the weights need not be added up in a linear
additive way. Another extension of the formalism could involve extending the degree
of typicality from prototypes to single axioms. Finally, we need to better understand how to
allow for more interaction between the concepts used for prototypes and features, for example
by allowing nested definitions of prototypes, using prototype concepts as features, and computing
scores with defeasible features.
[18] S. Kraus, D. Lehmann, M. Magidor, Nonmonotonic reasoning, preferential models and
cumulative logics, Artificial Intelligence 44 (1990) 167–207. doi:10.1016/0004-3702(90)90101-5.
[19] D. Lehmann, M. Magidor, What does a conditional knowledge base entail?,
Artificial Intelligence 55 (1992) 1–60. doi:10.1016/0004-3702(92)90041-U.
[20] L. Giordano, D. Theseider Dupré, Weighted defeasible knowledge bases and a
multipreference semantics for a deep neural network model, in: Logics in Artificial Intelligence:
17th European Conference, JELIA 2021, Springer, 2021, pp. 225–242.
[21] L. Giordano, D. Theseider Dupré, An ASP approach for reasoning on neural networks under
a finitely many-valued semantics for weighted conditional knowledge bases, Theory and
Practice of Logic Programming 22 (2022) 589–605. doi:10.1017/S1471068422000163.
[22] A. Lieto, G. L. Pozzato, et al., What cognitive research can do for AI: a case study, in:
Proceedings of the AIxIA 2020 Discussion Papers Workshop, co-located with the 19th
International Conference of the Italian Association for Artificial Intelligence (AIxIA 2020),
volume 2776, CEUR-WS, 2020, pp. 41–48.
[23] A. Lieto, G. L. Pozzato, A description logic framework for commonsense conceptual
combination integrating typicality, probabilities and cognitive heuristics, Journal of
Experimental &amp; Theoretical Artificial Intelligence 32 (2020) 769–804. doi:10.1080/0952813X.2019.1672799.
[24] P. Galliani, O. Kutz, N. Troquard, Perceptron operators that count, in: M. Homola,
V. Ryzhikov, R. Schmidt (Eds.), Proceedings of the 34th International Workshop on
Description Logics (DL 2021), CEUR Workshop Proceedings, Bratislava, Slovakia, 2021.
[25] P. Galliani, O. Kutz, N. Troquard, Succinctness and Complexity of 𝒜ℒ𝒞 with Counting
Perceptrons, in: Proceedings of the Twentieth International Conference on Principles
of Knowledge Representation and Reasoning (KR 2023), Rhodes, Greece, September 2–8,
2023.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] L. Giordano, V. Gliozzi, A. Lieto, N. Olivetti, G. L. Pozzato, Reasoning about typicality and probabilities in preferential description logics, 2020. URL: https://arxiv.org/abs/2004.09507. doi:10.48550/ARXIV.2004.09507.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] K. Britz, J. Heidema, T. Meyer, Modelling object typicality in description logics, in: A. Nicholson, X. Li (Eds.), AI 2009: Advances in Artificial Intelligence, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009, pp. 506–516.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] S.-J. Leslie, Generics: Cognition and acquisition, Philosophical Review 117 (2008) 1–47.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] G. Sacco, L. Bozzato, O. Kutz, Generics in defeasible reasoning: exceptionality, gradability, and content sensitivity, 2023. Accepted at the 7th CAOS Workshop 'Cognition and Ontologies', 9th Joint Ontology Workshops (JOWO 2023), co-located with FOIS 2023, 19–20 July 2023, Sherbrooke, Québec, Canada.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] G. Sacco, L. Bozzato, O. Kutz, Introducing weighted prototypes in description logics for</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>