CEUR Workshop Proceedings, Vol-3428, short3: https://ceur-ws.org/Vol-3428/short3.pdf
Introducing Weighted Prototypes in Description
Logics for Defeasible Reasoning
Gabriele Sacco1,2 , Loris Bozzato1 and Oliver Kutz2
1
    Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy
2
    Free University of Bozen-Bolzano, Piazza Domenicani 3, 39100, Bolzano, Italy


                                         Abstract
                                         The representation of defeasible information in Description Logics is a well-known issue and many
                                         formal approaches have been proposed, mostly emerging from existing formalisms in non-monotonic
                                         logic. However, in these proposals little attention has been devoted to studying their capability to
                                         capture the interpretation of typicality and exceptions from an ontological and cognitive point of view.
                                         In this regard, we are currently studying defeasible reasoning as discussed in the linguistic and cognitive
                                         literature in order to understand the important desiderata of defeasibility in commonsense reasoning.
                                              In this paper, we provide an initial formalisation of a defeasible semantics for description logics
                                         which aims at fulfilling such desiderata. The solution is based on the idea of weighted prototypes, a
                                         new form of perceptron operator which is used to represent a notion of graded typicality of concept
                                         instances.

                                         Keywords
                                         Description Logics, Weighted Logics, Perceptron Operators, Defeasible Reasoning




1. Introduction
Considering logic-based ontology representation languages, in Description Logics (DLs) many
proposals for defining defeasibility and typicality have been formalised: as a matter of fact, most
of them emerge from existing approaches in non-monotonic logics, as in [1, 2]. On the other
hand, little attention has been devoted to studying the capability of these approaches to capture
the interpretation of typicality and exceptions from the point of view of formal ontology and
cognition. Consequently, the philosophical and cognitive assumptions behind this kind of
reasoning are often overlooked and require a dedicated discussion in order to understand the
capabilities of the current approaches.
   Considering this, we recently initiated this discussion with an analysis of generics [3], sen-
tences reporting a regularity regarding particular facts that can be generalised but tolerate
exceptions. Our analysis (presented in [4]) highlighted three desiderata for non-monotonic
reasoning:

D1. Exceptionality: generics and non-monotonic reasoning both admit exceptions, and much
    of the research effort has been dedicated to explaining and modelling how exceptions

CILC’23: 38th Italian Conference on Computational Logic, June 21–23, 2023, Udine, Italy
$ gsacco@fbk.eu (G. Sacco); bozzato@fbk.eu (L. Bozzato); Oliver.Kutz@unibz.it (O. Kutz)
 0000-0001-5613-5068 (G. Sacco); 0000-0003-1757-9859 (L. Bozzato); 0000-0003-1517-7354 (O. Kutz)
                                       © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
     can be tolerated. We think that another important aspect that should be considered is
     why something is an exception, i.e. how to also include in the formal representation the
     justification or explanation of why an instance is considered exceptional or not.
D2. Gradability: in the case of typicality, normality is a graded notion: instead of dividing
    individuals into typical and atypical ones with respect to some concept, we have individuals
    that are more or less typical. For instance, it would not be possible to divide wolves into
    typical and atypical ones in absolute terms; rather, wolves are more or less typical
    according to the specific features of each individual.
D3. Content sensitivity: non-monotonic reasoning cannot be modelled by using only an exten-
    sional approach. This means that we cannot rely on pure extensional semantics, i.e. seeing
    the relation among concepts only in the light of relationships between sets. We need to
    take into account the semantics of the concepts involved in a broader sense, for example
    by relying on notions like typicality and saliency. The intuition here is that to explain why
    an individual is exceptional, for example, one would need some insights into the meaning
    (or, the content) of the statements of which the individual is an exception.
According to these desiderata, in this paper we sketch a new formal account for non-monotonic
reasoning in DLs based on a graded reading of typicality. Intuitively, in the case of a conflict
between two facts about an individual, we can decide which one should be accepted according
to how typical the individual in question is w.r.t. such facts. For example: we know that
dogs are trusted, whereas wolves are not; we also know that Balto is a wolfdog hybrid; should
we then infer that Balto is trusted or not? In our approach, we want to use the additional
information we have about Balto being a dog and a wolf to see whether he is a more typical
instance of a dog or of a wolf and, accordingly, infer whether he is trusted or not.
   More specifically, our approach is based on two main elements: prototype definitions and a
typicality score. Prototype definitions are inspired by the prototype theory of concepts [5] and
its representation based on the tooth operator as introduced, for example, in [6]. According to
the endorsers of the prototype theory about concepts, being a member of a concept does not
mean to satisfy a precise definition, but rather to satisfy enough features or constituents of that
concept [7]. The second key element is the typicality score for individuals: this is calculated by
inspecting to what extent the individual satisfies the features of the prototype. The aim of the
score is to measure how typical the individual is with respect to the prototype considered: in
case of a conflict on prototype-related properties, the score provides a preference determining
which conclusion should prevail for that specific individual.
   We remark that the current presentation of the formalisation is still an initial proposal and
includes some constraints to simplify its exposition: some of the possible refinements and
extensions are briefly discussed in the conclusions.


2. DLs with Weighted Prototypes
On the basis of the idea above, we distinguish two parts in our knowledge bases: the actual
DL knowledge base, which represents the knowledge of interest and can contain defeasible
axioms and information about features of individuals, and a separate set containing prototype
definitions. In the following we sketch a proposal for a syntax and semantics of such enriched
KBs.

2.1. Syntax
The following definitions are independent from the DL language used for representing the main
knowledge base: we consider a fixed concept language ℒΣ (such as for example 𝒜ℒ𝒞) based on
a DL signature Σ with disjoint and non-empty sets NC of concept names, NR of role names,
and NI of individual names. Furthermore, we identify a subset of the concept names as denoting
prototype names by assuming a subset NP ⊆ NC and a set of feature names NF ⊆ NC with
NP ∩ NF = ∅.

Definition 1 (Features). A basic feature is a concept name 𝐶 ∈ NF. A general feature is a
complex concept in language ℒΣ using only basic features as concept names.

For simplicity, we call general concepts the concepts composed only of concept names in NC ∖ (NP ∪ NF).
   The features associated with prototypes together with the degree of their importance are
given in prototype definitions.1 In particular, to allow for a direct comparison across prototype
scores, we here constrain the weights of features to be in the [0, 1] interval and to add up to 1,
i.e. prototypes are positive and normalised.

Definition 2 (Positive normalised prototype definition). Let 𝑃 ∈ NP be a prototype name,
let 𝐶1 , . . . , 𝐶𝑚 be general features of ℒΣ and let 𝑤 = (𝑤1 , . . . , 𝑤𝑚 ) ∈ R𝑚 be a weight vector,
where for every 𝑖 ∈ {1, . . . , 𝑚} we have 𝑤𝑖 ∈ (0, 1] and ∑𝑖∈{1,...,𝑚} 𝑤𝑖 = 1. Then, the expression

                                             𝑃 (𝐶1 : 𝑤1 , . . . , 𝐶𝑚 : 𝑤𝑚 )

is called a prototype definition for 𝑃 .
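The constraints of Definition 2 can be checked mechanically. The following Python sketch is ours and purely illustrative (the function name and dictionary encoding are not part of the formalism):

```python
def is_positive_normalised(prototype):
    """Check Definition 2: every weight lies in (0, 1] and the weights
    sum to 1 (up to floating-point rounding).

    `prototype` maps feature names to weights, e.g. {"hasLegs": 0.1, ...}.
    """
    weights = list(prototype.values())
    all_positive = all(0 < w <= 1 for w in weights)
    # Tolerance accounts for floating-point rounding in the sum.
    normalised = abs(sum(weights) - 1.0) < 1e-9
    return all_positive and normalised

# The Wolf prototype definition used in Example 1 below satisfies both constraints:
wolf = {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4}
```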

   In the knowledge part of the KB, we can use prototype names in DL axioms to describe
properties of the members of such classes. Here we consider the case in which prototype names
are only used as primitive concepts on the left hand side of concept inclusions.
   In particular, we call a concept inclusion of the type 𝑃 ⊑ 𝐷 a prototype axiom if 𝑃 ∈ NP
and 𝐷 is a (possibly general) concept of ℒΣ . Intuitively, these axioms are not absolute and
can be “overridden” by prototype instances (cf. defeasible axioms in [8]), also depending on
the “degree of membership” of the individual to the given prototype (i.e., the satisfaction of
its features). Prototype axioms can be seen as corresponding to generic sentences, since they
express generalisations that admit exceptions. Such exceptions can thus override the truth of a
prototype axiom for that specific individual.
   As noted above, we consider knowledge bases which can contain prototype axioms and
which are enriched with an accessory KB, the PBox 𝒫 providing prototype definitions.

Definition 3 (Prototyped Knowledge Base, PKB). A prototyped knowledge base, PKB for
short, in language ℒΣ is a triple K = ⟨𝒯 , 𝒜, 𝒫⟩ where:

1
    Note that this definition of prototypes is similar to the definition of concepts by the tooth operator defined in [6].
– 𝒯 = 𝑇𝑃 ⊎ 𝑇𝐶 is a DL TBox consisting of concept inclusion axioms of the form 𝐶 ⊑ 𝐷 and
  partitioned into the disjoint sets 𝑇𝑃 of prototype axioms and 𝑇𝐶 of general concept inclusions
  based on arbitrary concepts;
– 𝒜 = 𝐴𝑃 ⊎ 𝐴𝐶 ⊎ 𝐴𝐹 is a set of ABox assertions of the form 𝐶(𝑎), where 𝑎 ∈ NI is an individual
  name, and partitioned into the disjoint sets 𝐴𝑃 of prototype assertions (where 𝐶 ∈ NP), 𝐴𝐹 of
  basic feature assertions (where 𝐶 ∈ NF) and 𝐴𝐶 of general assertions (where 𝐶 is a general
  concept);
– 𝒫 is a set of prototype definitions, exactly one for each prototype name 𝑃 ∈ NP appearing in
  the prototype TBox 𝑇𝑃 .
Note that a PKB ⟨𝒯 , 𝒜, ∅⟩ can be seen as a standard DL knowledge base.

Example 1. We can now represent the example described in the introduction as a prototyped
knowledge base 𝒦 = ⟨𝒯 , 𝒜, 𝒫⟩ as follows:
                      𝒯 = 𝑇𝑃 = { Dog ⊑ Trusted, Wolf ⊑ ¬Trusted },
             𝒜 = { Dog(balto), Wolf(balto), Dog(pluto), Wolf(alberto),
                    livesInWoods(balto), hasLegs(balto), isTamed(balto),
                    hasCollar(pluto), hasLegs(pluto), isTamed(pluto),
                    hasLegs(alberto), Hunts(alberto) },
    𝒫 = { Wolf(livesInWoods : 0.3, hasLegs : 0.1, livesInPack : 0.2, Hunts : 0.4),
           Dog(hasCollar : 0.3, livesInHouse : 0.2, hasLegs : 0.1, isTamed : 0.4) }
Below we will construct a semantics for this kind of PKB which will entail and justify the conclusion
that balto is a trusted dog which is a wolf, without being inconsistent. Note that in the case of
the instances pluto and alberto no contradiction arises, thus we want the axioms in 𝒯 to apply
to them normally.                                                                          ◇
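The PKB of Example 1 can be written down directly as plain data structures. The Python encoding below is a sketch of ours, not part of the formalism; it only records the three components ⟨𝒯, 𝒜, 𝒫⟩:

```python
# TBox (here T = T_P): prototype axioms P ⊑ D, with a flag marking negated conclusions.
tbox_p = [("Dog", "Trusted", True),     # Dog ⊑ Trusted
          ("Wolf", "Trusted", False)]   # Wolf ⊑ ¬Trusted

# ABox, partitioned as in Definition 3: prototype assertions vs. basic feature assertions.
abox_p = {"balto": {"Dog", "Wolf"}, "pluto": {"Dog"}, "alberto": {"Wolf"}}
abox_f = {"balto": {"livesInWoods", "hasLegs", "isTamed"},
          "pluto": {"hasCollar", "hasLegs", "isTamed"},
          "alberto": {"hasLegs", "Hunts"}}

# PBox: exactly one positive normalised prototype definition per prototype name in T_P.
pbox = {"Wolf": {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4},
        "Dog": {"hasCollar": 0.3, "livesInHouse": 0.2, "hasLegs": 0.1, "isTamed": 0.4}}
```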

2.2. Semantics
The semantics of PKBs is based on standard interpretations for the underlying DL ℒΣ . However,
we need to introduce additional semantic structure to manage exceptions to prototype axioms,
exploiting the prototype definition expressions in 𝒫.
Definition 4 (PKB interpretations). A PKB interpretation is a description logic interpretation
ℐ = ⟨∆ℐ , ·ℐ ⟩ for signature Σ with a non-empty domain, ∆ℐ , 𝑎ℐ ∈ ∆ℐ for every 𝑎 ∈ NI, 𝐴ℐ ⊆ ∆ℐ
for every 𝐴 ∈ NC, 𝑅ℐ ⊆ ∆ℐ × ∆ℐ for every 𝑅 ∈ NR, and where the extension of complex
concepts is defined recursively as usual for language ℒΣ .
Note that we are not giving a DL interpretation to the prototype definition expressions in 𝒫.
   We consider the notion of axiom instantiation and clashing assumptions as defined in [8].
Given an axiom 𝛼 ∈ ℒΣ with FO-translation ∀x.𝜑𝛼 (x), the instantiation of 𝛼 with a tuple e
of individuals in NI, written 𝛼(e), is the specialisation of 𝛼 to e, i.e., 𝜑𝛼 (e), depending on the
type of 𝛼.
Definition 5 (Clashing assumptions and clashing sets). A clashing assumption is a pair
⟨𝛼, e⟩ such that 𝛼(e) is an axiom instantiation of 𝛼, and 𝛼 ∈ 𝑇𝑃 is a prototype axiom.
  A clashing set for ⟨𝛼, e⟩ is a satisfiable set 𝑆 of ABox assertions s.t. 𝑆 ∪ {𝛼(e)} is unsatisfiable.

Intuitively, a clashing assumption ⟨𝑃 ⊑ 𝐷, 𝑒⟩ states that we assume that 𝑒 is an exception to
the prototype axiom 𝑃 ⊑ 𝐷 in a given PKB interpretation. Then, the fact that a clashing set 𝑆
for ⟨𝑃 ⊑ 𝐷, 𝑒⟩ is verified by such an interpretation gives a “justification” of the validity of the
assumption of overriding. This intuition is reflected in the definition of models: we first extend
PKB interpretations with a set of clashing assumptions.

Definition 6 (CAS-interpretation). A CAS-interpretation is a structure ℐCAS = ⟨ℐ, 𝜒⟩ where
ℐ is a PKB interpretation and 𝜒 is a set of clashing assumptions.

Then, CAS-models for a PKB K are CAS-interpretations that verify “strict” axioms in 𝑇𝐶 and
defeasibly apply prototype axioms in 𝑇𝑃 (excluding the exceptional instances in 𝜒).

Definition 7 (CAS-model). Given a PKB K, a CAS-interpretation ℐCAS = ⟨ℐ, 𝜒⟩ is a CAS-
model for K (denoted ℐCAS |= K), if the following holds:
  (i) for every 𝛼 ∈ 𝑇𝐶 ∪ 𝒜 of ℒΣ , ℐ |= 𝛼;
 (ii) for every 𝛼 = 𝑃 ⊑ 𝐷 ∈ 𝑇𝑃 and every 𝑑 ∈ NI, if ⟨𝛼, 𝑑⟩ ∉ 𝜒, then ℐ |= 𝜑𝛼 (𝑑).

Two DL interpretations ℐ1 and ℐ2 are NI-congruent, if 𝑐ℐ1 = 𝑐ℐ2 holds for every 𝑐 ∈ NI. This
extends to CAS interpretations ℐCAS = ⟨ℐ, 𝜒⟩ by considering PKB interpretations ℐ. Intuitively,
we say that a CAS-interpretation is justified if all of its clashing assumptions admit a clashing
set that is verified by the interpretation.

Definition 8 (Justifications). We say that ⟨𝛼, e⟩ ∈ 𝜒 is justified for a CAS-model ℐCAS , if
some clashing set 𝑆⟨𝛼,e⟩ exists such that, for every CAS-model ℐ′CAS = ⟨ℐ′ , 𝜒⟩ of K that is
NI-congruent with ℐCAS , it holds that ℐ′ |= 𝑆⟨𝛼,e⟩ . A CAS-model ℐCAS of a PKB K is justified,
if every ⟨𝛼, e⟩ ∈ 𝜒 is justified in K.

We define the consequence from justified CAS-models: K |=𝐽𝐶𝐴𝑆 𝛼 if ℐCAS |= 𝛼 for every
justified CAS-model ℐCAS of K.
   The main intuition of prototype definitions is that each member of a prototype is associated
with a score which denotes the “degree of typicality” of the instance with respect to the concept
described by the prototype. As in [6], such a degree is computed from the prototype features
that are satisfied by the instances and their score. Ideally, the prototype score of an individual
allows us to determine a preference over models: axioms on prototypes with higher score are
preferred to the ones on lower scoring prototypes. Formally, a simple score function can be
defined as follows:

Definition 9 (Prototype score). Given a prototype definition 𝑃 (𝐶1 : 𝑤1 , ..., 𝐶𝑚 : 𝑤𝑚 ), we
define the score function score 𝑃 : NI → [0, 1] for prototype 𝑃 as:

                                 score 𝑃 (𝑎) = ∑ { 𝑤𝑖 | K |=𝐽𝐶𝐴𝑆 𝐶𝑖 (𝑎), 𝑖 ∈ {1, . . . , 𝑚} }
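Computing the score requires deciding the entailments K |=𝐽𝐶𝐴𝑆 𝐶𝑖 (𝑎). As a rough illustration, the sketch below (our own encoding) approximates these checks by direct membership in the set of asserted basic features, which is adequate only for basic feature assertions such as those of Example 1:

```python
def score(prototype_def, features_of_a):
    """Sum the weights of the prototype features satisfied by the individual.

    `prototype_def` maps feature names to weights; `features_of_a` is the set of
    basic features asserted for the individual. Membership in this set stands in
    for the entailment K |= C_i(a), which in general requires a DL reasoner.
    """
    return sum(w for feature, w in prototype_def.items() if feature in features_of_a)

wolf = {"livesInWoods": 0.3, "hasLegs": 0.1, "livesInPack": 0.2, "Hunts": 0.4}
dog = {"hasCollar": 0.3, "livesInHouse": 0.2, "hasLegs": 0.1, "isTamed": 0.4}
balto = {"livesInWoods", "hasLegs", "isTamed"}

# This reproduces the scores computed in Example 2: 0.4 for Wolf, 0.5 for Dog.
```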
  The scoring function can then be used to define preferences over models: in particular, we
want to prefer justified CAS-models where the exceptions appear on elements of the lower-scoring
prototypes. This can be encoded as follows:

Definition 10 (Preference SP). 𝜒1 > 𝜒2 if, for every ⟨𝑃 ⊑ 𝐷, 𝑒⟩ ∈ 𝜒1 ∖ 𝜒2 such that there
exists a ⟨𝑄 ⊑ 𝐸, 𝑒⟩ ∈ 𝜒2 ∖ 𝜒1 , it holds that score 𝑃 (𝑒) < score 𝑄 (𝑒).

Given two CAS-interpretations ℐ¹CAS = ⟨ℐ¹ , 𝜒1 ⟩ and ℐ²CAS = ⟨ℐ² , 𝜒2 ⟩, we say that ℐ¹CAS is
preferred to ℐ²CAS (denoted ℐ¹CAS > ℐ²CAS ) if 𝜒1 > 𝜒2 .
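The comparison of Definition 10 can likewise be prototyped in code. In this sketch of ours, a clashing assumption ⟨𝑃 ⊑ 𝐷, 𝑒⟩ is encoded simply as a pair (prototype name, individual), and the scores are given as a precomputed table:

```python
def preferred(chi1, chi2, scores):
    """Definition 10: chi1 > chi2 if every assumption in chi1 \\ chi2 that has a
    counterpart on the same individual in chi2 \\ chi1 concerns a strictly
    lower-scoring prototype. `scores` maps (prototype, individual) to a score.
    """
    for (p, e) in chi1 - chi2:
        for (q, e2) in chi2 - chi1:
            if e2 == e and not scores[(p, e)] < scores[(q, e)]:
                return False
    return True

chi1 = {("Wolf", "balto")}  # balto is an exception to Wolf ⊑ ¬Trusted
chi2 = {("Dog", "balto")}   # balto is an exception to Dog ⊑ Trusted
scores = {("Wolf", "balto"): 0.4, ("Dog", "balto"): 0.5}
# chi1 is preferred to chi2, since score_Wolf(balto) < score_Dog(balto).
```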

  Finally, we define the notion of PKB model as a minimal justified model for the PKB.

Definition 11 (PKB model). An interpretation ℐ is a PKB model of K (denoted, ℐ |= K) if

– K has some justified CAS-model ℐCAS = ⟨ℐ, 𝜒⟩;
– there exists no justified ℐ′CAS = ⟨ℐ′ , 𝜒′ ⟩ that is preferred to ℐCAS .

The consequence from PKB models of K (denoted K |= 𝛼) allows us to use the degree of
typicality of instances to verify which of the conflicting prototype axioms should apply.

Example 2. Considering the PKB reported in the example above, assume we have two PKB inter-
pretations ℐ¹ and ℐ² associated respectively with the following two sets of clashing assumptions

         𝜒1 = {⟨Wolf ⊑ ¬Trusted, balto⟩} and 𝜒2 = {⟨Dog ⊑ Trusted, balto⟩}.

We now have two CAS-interpretations corresponding to ⟨ℐ¹ , 𝜒1 ⟩ and ⟨ℐ² , 𝜒2 ⟩. Assuming that they
are also CAS-models, we can check whether the two are also justified. Since the clashing assumptions
have the following clashing sets, respectively {Wolf(balto), Trusted(balto)} for the clashing
assumption in 𝜒1 and {Dog(balto), ¬Trusted(balto)} for that in 𝜒2 , they are both justified.
In order to decide which model is preferred, we need to compute the prototype scores for balto:
we have score 𝑊 𝑜𝑙𝑓 (𝑏𝑎𝑙𝑡𝑜) = 0.4 and score 𝐷𝑜𝑔 (𝑏𝑎𝑙𝑡𝑜) = 0.5, hence score 𝑊 𝑜𝑙𝑓 (𝑏𝑎𝑙𝑡𝑜) <
score 𝐷𝑜𝑔 (𝑏𝑎𝑙𝑡𝑜) and 𝜒1 > 𝜒2 . This means that the preferred model, i.e. the only PKB
model, is ℐ¹, where balto is an exception to Wolf ⊑ ¬Trusted. Consequently, it holds that
K |= Trusted(balto).
   Moreover, we can note that for pluto and alberto we can infer Trusted(pluto) and
¬Trusted(alberto) in the standard way. The reason is that clashing assumptions refer to specific
individuals, and since there are no contradicting assertions for pluto and alberto, there are no
clashing sets that would justify assuming them as exceptions. Therefore, the axioms in 𝒯 apply
to them standardly.                                                                                       ◇


3. Discussion and Conclusions
We presented an initial formalisation for a non-monotonic extension of DLs with the aim of
satisfying three desiderata extracted from a critical discussion on generics and the prototype
theory about concepts. We note that our formalism meets the desiderata: (D1). the formalisation
is based on the idea that we need to justify an exception to an axiom by looking at how typical
it is: in other words, we use typicality to decide with respect to which of the conflicting axioms
(which correspond to generics) the individual is an exception; (D2). we are using a graded
notion of typicality: we do not simply have typical and atypical individuals, but we compute a
score which is comparable across prototypes; (D3). the notion of typicality that we introduce is
not extensional: by using the scores to represent it, we are relying on a characteristic which
goes beyond an extensional set-theoretic treatment.
    In future work, we want to extend the cognitive and ontological study of exceptions also
by comparing it with other accounts for typicality and defeasibility in DLs. Regarding our
formalisation, we need to explore and refine the formal consequences of our approach in greater
detail. In particular, we need to discuss what the best options are for computing the scores, in
order to obtain a balanced score for every prototype, and how to extend this computation to roles, possibly
following some of the ideas outlined in [9, 10], where novel tooth-operators for role-successor
counting are studied. The preference relation can also be refined: for example, comparisons
on clashing assumptions can be restricted to the axioms that are actually incompatible. We
also need to better understand how to allow for more interaction between the concepts used for
prototypes and features, for example by allowing nested definitions of prototypes, using prototype
concepts as features, and computing scores with defeasible features.
    Finally, we need an extensive comparison with related work. On the one hand, we will
compare our approach with existing formalisms for defeasible reasoning in DLs like [11, 12]. Of
particular interest for this purpose are formalisms using weights and having a multi-preferential
relation over the individuals with respect to the concepts they are instances of, as, for instance,
[13, 14]. On the other hand, we will also analyse works that share our approach of taking into
account, in a central way, results coming from cognitive science and philosophy when developing
formal systems in the field of knowledge representation, in particular using the language of DLs.
Examples of such works, particularly interested in the notion of typicality, are [15, 16].


References
 [1] L. Giordano, V. Gliozzi, A. Lieto, N. Olivetti, G. L. Pozzato, Reasoning about typicality and
     probabilities in preferential description logics, 2020. URL: https://arxiv.org/abs/2004.09507.
     doi:10.48550/ARXIV.2004.09507.
 [2] K. Britz, J. Heidema, T. Meyer, Modelling object typicality in description logics, in:
     A. Nicholson, X. Li (Eds.), AI 2009: Advances in Artificial Intelligence, Springer Berlin
     Heidelberg, Berlin, Heidelberg, 2009, pp. 506–516.
 [3] S.-J. Leslie, Generics: Cognition and acquisition, Philosophical Review 117 (2008) 1–47.
 [4] G. Sacco, L. Bozzato, O. Kutz, Generics in defeasible reasoning. Exceptionality, gradability,
     and content sensitivity, 2023. 7th CAOS Workshop ‘Cognition and Ontologies’, 9th Joint
     Ontology Workshops (JOWO 2023), co-located with FOIS 2023, 19-20 July, 2023, Sherbrooke,
     Québec, Canada.
 [5] J. A. Hampton, Concepts as prototypes, volume 46 of Psychology of Learning and Motivation,
     Academic Press, 2006, pp. 79–113.
 [6] P. Galliani, G. Righetti, O. Kutz, D. Porello, N. Troquard, Perceptron connectives in
     knowledge representation, in: C. M. Keet, M. Dumontier (Eds.), Knowledge Engineering
     and Knowledge Management, Springer International Publishing, Cham, 2020, pp. 183–193.
 [7] E. Margolis, S. Laurence, Concepts, in: E. N. Zalta, U. Nodelman (Eds.), The Stanford
     Encyclopedia of Philosophy, Fall 2022 ed., Metaphysics Research Lab, Stanford University,
     2022.
 [8] L. Bozzato, T. Eiter, L. Serafini, Enhancing context knowledge repositories with justifiable
     exceptions, Artif. Intell. 257 (2018) 72–126.
 [9] P. Galliani, O. Kutz, N. Troquard, Perceptron operators that count, in: M. Homola,
     V. Ryzhikov, R. Schmidt (Eds.), Proceedings of the 34th International Workshop on De-
     scription Logics (DL 2021), CEUR Workshop Proceedings, Bratislava, Slovakia, 2021.
[10] P. Galliani, O. Kutz, N. Troquard, Succinctness and Complexity of 𝒜ℒ𝒞 with Counting
     Perceptrons, in: Proceedings of the Twentieth International Conference on Principles
     of Knowledge Representation and Reasoning (KR 2023), Rhodes, Greece, September 2–8,
     2023.
[11] L. Giordano, V. Gliozzi, N. Olivetti, G. Pozzato, Semantic characterization of rational
     closure: From propositional logic to description logics, Artificial Intelligence 226 (2015)
     1–33. doi:10.1016/j.artint.2015.05.001.
[12] K. Britz, G. Casini, T. Meyer, K. Moodley, U. Sattler, I. Varzinczak, Principles of KLM-
     style defeasible description logics, ACM Trans. Comput. Logic 22 (2020). doi:10.1145/
     3420258.
[13] L. Giordano, D. Theseider Dupré, Weighted defeasible knowledge bases and a multipref-
     erence semantics for a deep neural network model, in: Logics in Artificial Intelligence:
     17th European Conference, JELIA 2021, Virtual Event, May 17–20, 2021, Proceedings 17,
     Springer, 2021, pp. 225–242.
[14] L. Giordano, D. Theseider Dupré, An ASP approach for reasoning on neural networks under
     a finitely many-valued semantics for weighted conditional knowledge bases, Theory and
     Practice of Logic Programming 22 (2022) 589–605. doi:10.1017/S1471068422000163.
[15] A. Lieto, G. L. Pozzato, et al., What cognitive research can do for AI: a case study, in:
     Proceedings of the AIxIA 2020 Discussion Papers Workshop co-located with the 19th
     International Conference of the Italian Association for Artificial Intelligence (AIxIA2020),
     volume 2776, CEUR-WS, 2020, pp. 41–48.
[16] A. Lieto, G. L. Pozzato, A description logic framework for commonsense conceptual
     combination integrating typicality, probabilities and cognitive heuristics, Journal of Exper-
     imental & Theoretical Artificial Intelligence 32 (2020) 769–804. doi:10.1080/0952813X.
     2019.1672799.