<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3420258</article-id>
      <title-group>
        <article-title>An ASP Translation for Non-Monotonic Reasoning on DL-Lite with Prototype Descriptions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriele Sacco</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Loris Bozzato</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oliver Kutz</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DiSTA - Università dell'Insubria</institution>
          ,
          <addr-line>Via O. Rossi 9, 21100 Varese</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Fondazione Bruno Kessler</institution>
          ,
          <addr-line>Via Sommarive 18, 38123 Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>Piazza Università 1, 39100, Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>3515</volume>
      <fpage>25</fpage>
      <lpage>27</lpage>
      <abstract>
<p>In Artificial Intelligence, defeasible reasoning has been studied as one of the key features of common-sense reasoning and, consequently, various kinds of non-monotonic logics have been developed to model it formally. We recently developed a non-monotonic logic in the Description Logic (DL) framework based on a combination of ideas from prototype theory, weighted DLs (aka “tooth logic”), and earlier work on justifiable exceptions. A central ingredient in the new framework is the notion of a prototype description: a weighted characterisation of a concept, denoting the typical features of its members. In this paper, we develop an initial ASP translation for this system which allows reasoning on instance level queries in the preferred models of a knowledge base. In particular, under reasonable conditions on the form of the input knowledge base, we show that preference reasoning on answer sets can be encoded via standard ASP constructs. We show that the translation is complete with respect to the preferential semantics of our system.</p>
      </abstract>
      <kwd-group>
        <kwd>Non-monotonic logic</kwd>
        <kwd>Typicality</kwd>
        <kwd>Prototype theory</kwd>
        <kwd>Description Logics</kwd>
        <kwd>Answer Set Programming</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Defeasible reasoning is the capability of a system to reason with generalisations that may not be valid
for all instances: that is, general statements may admit exceptions. Defeasible reasoning is a key feature
of common-sense reasoning, thus its representation in logical formalisms via non-monotonic logics has
been studied in Artificial Intelligence since the early years of the field [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. More recently, approaches
for representing non-monotonicity have also been proposed in the context of Description Logics (DLs)
[
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>Following this direction, in [5] we introduced a formal approach for defeasible reasoning in DLs,
namely DLs with Prototype Descriptions, which satisfies desiderata extracted from the studies on
Generics [6] and insights from the philosophical and cognitive research on phenomena related to notions of
exceptions and defeasibility. From a formal point of view, our work stems from a combination of ideas
from prototype theory, weighted DLs (aka “tooth logic” [7]), and earlier work on defeasible DLs with
Justifiable Exceptions [8, 9]. A central ingredient in the new framework is the notion of a prototype
description: a weighted characterisation of a concept, denoting the typical features of its members.
Intuitively, prototype descriptions provide a way to include in DL knowledge bases similarity-based
numerical characterisations of prototype instances (for example, extracted and learned from data): in
our framework, those (learned) weights together with the ontology axioms are then used to compute a
“typicality score” of instances that is considered in defining a preference over DL models. In [10], we
further presented different options to characterise the preference on models induced by the computation
of the scores on prototypes.</p>
      <p>In this paper, we turn our attention to reasoning methods for DLs with Prototype Descriptions.
Following the direction of the previous work on Justified Exceptions [8], we present an Answer Set
Programming (ASP) encoding for our formalism that enables instance-level reasoning (namely, instance
checking) in the presence of defeasible information in the case of knowledge bases in DL-Liteℛ [11]. The
ASP translation is an extension of the encoding proposed in [9] for reasoning on defeasible DL-Liteℛ
KBs with Justifiable Exceptions. In particular, we show that (under some reasonable assumptions
on the weights in the input knowledge base) the computation of inferences in preferred models can be
encoded in standard ASP with weak constraints. Although the proposed encoding only allows us to
reason on a basic form of model preference, it illustrates a simple way of implementing reasoning for
DLs with prototype descriptions using readily available ASP tools. Moreover, since the rules defining the
model preference are modular with respect to the encoding of models, the proposed ASP translation can
be used as a basis for encoding more complex preferences, possibly by adopting more flexible methods
for ASP preferences [12].</p>
      <p>In the following sections, we first provide a brief review of DLs with Prototype Descriptions
as defined in [10] and introduce a simple model preference. We then introduce the rules and
definitions of our ASP translation: by showing the correspondence between preferred models of
the input knowledge base and optimal answer sets of its encoding, we prove the correctness of the
translation. We close with a discussion of related and future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. DLs with Prototype Descriptions</title>
      <p>In our formalism, we distinguish two parts in knowledge bases: (i) a DL knowledge base representing
the knowledge of interest: the knowledge base can include defeasible TBox axioms about specific base
concepts called prototypes and ABox assertions about prototype instances and their features; (ii) an
additional set containing prototype descriptions, weighted characterisations of prototypes, expressing the
“degree of typicality” of the features of their instances. For example, in a scenario modelling knowledge
about animals, we can have: (i) a DL knowledge base, with TBox axioms involving the prototype Dog and
ABox assertions about pluto being a Dog and its features; (ii) a description of the prototypical Dog,
expressing that, e.g., a typical Dog most likely has a collar and lives in a house.</p>
      <p>In the following, we outline the formal syntax and semantics of such enriched KBs. These
definitions are independent of the DL language of the main knowledge base: we consider a fixed
concept language ℒΣ based on a DL signature Σ with disjoint and non-empty sets NC of concept names,
NR of role names, and NI of individual names. We identify a subset of the concept names NP ⊆ NC as
denoting prototype names. For simplicity, we call general concepts the concepts composed only from
concepts in NC ∖ NP.</p>
      <p>The features associated with prototypes together with the degree of their importance are given in
prototype descriptions.</p>
      <p>Definition 1 (Positive prototype description). Let P ∈ NP be a prototype name, let C1, . . . , Cn be
general concepts of ℒΣ and let w = (w1, . . . , wn) ∈ Qⁿ be a weight vector of rational numbers, where
for every i ∈ {1, . . . , n} we have wi &gt; 0. Then, the expression</p>
      <p>P(C1 : w1, . . . , Cn : wn)
is called a (positive) prototype description for P.</p>
      <p>Note that this description of prototypes is also similar to the definition of concepts with tooth operators
as defined in [13]. Intuitively, the weights associated with features can be combined to compute a score
denoting the degree of typicality of an instance w.r.t. the prototype: for the current definition, weights
are assumed to be positive and features are independent. Note that, to allow for a direct comparison
across scores of different prototypes, these need to be normalised to a common value interval, possibly
with a scoring function that does not depend on the number of features used in defining different
prototypes.</p>
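      <p>To make this intuition concrete, the following is a minimal sketch (in Python; the feature names and weights are illustrative, mirroring the animal scenario used in the examples) of a typicality score computed as the sum of the weights of the satisfied features:</p>

```python
# Sketch: a typicality score as the weighted sum of the prototype
# features an instance satisfies (names and weights are illustrative).
def prototype_score(description, features_of_instance):
    """description: dict mapping a feature name to a positive weight."""
    return sum(w for feat, w in description.items()
               if feat in features_of_instance)

dog = {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}
print(prototype_score(dog, {"hasLegs", "isTamed"}))  # 55
```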
      <p>In the knowledge part of the KB, we can use prototype names in DL axioms to describe properties of
the instances of such concepts. Here we consider the case in which prototype names are only used as
primitive concepts on the left-hand side of concept inclusions.</p>
      <p>Definition 2 (Prototype axiom). A concept inclusion of the form P ⊑ C is a prototype axiom of ℒΣ if
P ∈ NP and C is a general concept of ℒΣ.</p>
      <p>Intuitively, these axioms are not absolute and can be “overridden” by prototype instances (cf. defeasible
axioms in [8]), also depending on the “degree of membership” of the individual to the given prototype
(i.e., the satisfaction of its features).</p>
      <p>As noted, we consider knowledge bases which can contain prototype axioms and which are enriched
with an accessory component, the PBox P, providing prototype descriptions.</p>
      <p>Definition 3 (Prototyped Knowledge Base, PKB). A prototyped knowledge base (PKB) in language ℒΣ
is a triple K = ⟨T, A, P⟩ where:
• T = TP ⊎ TG is a DL TBox consisting of concept inclusion axioms of the form C ⊑ D; T is
partitioned into the set TP of prototype axioms and the set TG of general concept inclusions based on
general concepts;
• A = AP ⊎ AG is a set of ABox assertions; A is partitioned into the set AP of prototype assertions
(of the form P(a) with P ∈ NP and a ∈ NI) and the set AG of ABox assertions for general concepts and
roles;
• P is a set of prototype descriptions, exactly one for each prototype name P ∈ NP appearing in
the prototype TBox TP.</p>
      <p>Note that a PKB ⟨T, A, ∅⟩ can simply be seen as a standard DL knowledge base. We present an example
to clarify the notions and syntax introduced thus far.</p>
      <p>Example 1. Consider the following prototyped knowledge base K = ⟨T, A, P⟩:
T = { Dog ⊑ Trusted, Wolf ⊑ ¬Trusted, Dog ⊑ hasLegs, Wolf ⊑ hasLegs },
A = { Dog(balto), Wolf(balto), Dog(pluto), Wolf(alberto), Dog(cerberus),
livesInWoods(balto), hasLegs(balto), isTamed(balto),
hasCollar(pluto), hasLegs(pluto), isTamed(pluto),
hasLegs(alberto), Hunts(alberto),
¬Trusted(cerberus)},
P = { Wolf(livesInWoods : 10, hasLegs : 4, livesInPack : 8, Hunts : 11),</p>
      <p>Dog(hasCollar : 33, livesInHouse : 22, hasLegs : 11, isTamed : 44) }
Basically, T provides information about the prototype concepts Dog and Wolf: using prototype axioms, we
can state that (generally) dogs are trusted, but wolves are not. The ABox A provides information about
prototype instances and their features: note that pluto and cerberus are dogs, alberto is a wolf, but
balto is both a dog and a wolf. Finally, P provides the weights of features for prototypes Dog and Wolf:
instances satisfying features with larger weights are interpreted as “more typical” members of the prototype
concept. Note that different prototypes might give different importance to the same feature and can use
different value ranges (see, for example, the different values for hasLegs in Wolf and Dog).</p>
      <p>Intuitively, we want to entail and justify the conclusion that balto is a trusted dog which is a wolf,
without being inconsistent, and that cerberus is an exceptional dog with respect to the property of dogs of
being trusted. Moreover, in the case of the instances pluto and alberto no contradiction arises, thus we
want the axioms in T to be applied to them normally.</p>
      <p>Note that the conflict regarding balto has the same structure as the so-called Nixon diamond [14]. ◇
The semantics of PKBs is based on standard interpretations of the underlying DL ℒΣ. In fact,
interpretations of a PKB are DL interpretations for its knowledge base part, as follows:
Definition 4 (Interpretation). A pair ℐ = ⟨Δℐ, ·ℐ⟩ is an interpretation for signature Σ with a non-empty
domain Δℐ and with aℐ ∈ Δℐ for every a ∈ NI, Aℐ ⊆ Δℐ for every A ∈ NC, rℐ ⊆ Δℐ × Δℐ for every
r ∈ NR, and where the extension of complex concepts is defined recursively as usual for language ℒΣ.
Note that we are not giving a DL interpretation to the prototype description expressions in P. However,
we need to introduce additional semantic structure to manage exceptions to prototype axioms in TP,
exploiting the prototype description expressions in P. We consider the notion of axiom instantiation as
defined in [8]: intuitively, for an axiom α ∈ ℒΣ, the instantiation of α with e ∈ NI, written α(e), is the
specialisation of α to e.1
Definition 5 (Exception assumptions and clashing sets). An exception assumption is a pair ⟨α, e⟩ where
α ∈ TP is a prototype axiom and e ∈ NI is an individual name appearing in K such that α(e) is an
axiom instantiation of α.</p>
      <p>A clashing set for ⟨α, e⟩ is a satisfiable set S⟨α,e⟩ of ABox assertions s.t. S⟨α,e⟩ ∪ {α(e)} is unsatisfiable.
Intuitively, an exception assumption ⟨P ⊑ C, e⟩ states that we assume e to be an exception to the
prototype axiom P ⊑ C in a given interpretation. Then, the fact that a clashing set S⟨P⊑C,e⟩ for
⟨P ⊑ C, e⟩ is verified by such an interpretation gives a “justification” of the validity of the assumption
of overriding in terms of ABox assertions. This intuition is reflected in the definition of models: we first
extend interpretations with a set of exception assumptions.</p>
      <p>Definition 6 (χ-interpretation). A χ-interpretation is a structure ℐχ = ⟨ℐ, χ⟩ where ℐ is an
interpretation and χ is a set of exception assumptions.</p>
      <p>Then, χ-models for a PKB K are those χ-interpretations that verify the “strict” axioms in TG ∪ A and
defeasibly apply the prototype axioms in TP (excluding the exceptional instances in χ).
Definition 7 (χ-model). Given a PKB K, a χ-interpretation ℐχ = ⟨ℐ, χ⟩ is a χ-model for K (denoted
ℐχ |= K), if the following holds:
(i) for every α ∈ TG ∪ A of ℒΣ, ℐ |= α;
(ii) for every α = P ⊑ C ∈ TP and every e ∈ NI, if ⟨α, e⟩ ∉ χ, then ℐ |= α(e).</p>
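      <p>To illustrate the defeasible application in condition (ii), the following is a small sketch (in Python, with illustrative names): a prototype axiom is applied to every instance of the prototype except the individuals assumed to be exceptions:</p>

```python
# Sketch: apply prototype axioms to all instances of the prototype
# except the assumed exceptions (Definition 7, condition (ii)).
axioms = {("Dog", "Trusted")}                  # prototype axioms P ⊑ C
instances = {("balto", "Dog"), ("cerberus", "Dog")}
chi = {(("Dog", "Trusted"), "cerberus")}       # exception assumptions

derived = {(e, c) for (p, c) in axioms
           for (e, q) in instances
           if q == p and ((p, c), e) not in chi}
print(derived)  # {('balto', 'Trusted')}
```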
      <p>Two DL interpretations ℐ1 and ℐ2 are NI-congruent if aℐ1 = aℐ2 holds for every a ∈ NI. This
extends to χ-interpretations ℐχ = ⟨ℐ, χ⟩ by considering the interpretations ℐ. Intuitively, we say that a
χ-interpretation is justified if all of its exception assumptions have a clashing set that is verified by the
interpretation.</p>
      <p>Definition 8 (Justifications). We say that ⟨α, e⟩ ∈ χ is justified for a χ-model ℐχ, if some clashing set
S⟨α,e⟩ exists such that, for every χ-model ℐ′χ = ⟨ℐ′, χ⟩ of K that is NI-congruent with ℐχ, it holds that ℐ′ |= S⟨α,e⟩.</p>
      <p>A χ-model ℐχ of a PKB K is justified, if every ⟨α, e⟩ ∈ χ is justified in K.</p>
      <p>We consider the notion of logical consequence from justified χ-models (i.e. axioms that are valid in all
χ-models that are justified): we write K |= α if ℐχ |= α for every justified χ-model ℐχ of K. There can
be more than one justified model, in particular for different valid combinations of exception assumptions
and justifications. As will be shown in the examples, this allows reasoning by cases: scores defined over
prototype descriptions’ values allow us to define a preference over such cases.</p>
      <p>The next part of the semantics takes care of defining a preference over exceptions in case of conflicts
between different prototype axioms. The main intuition of prototype descriptions is that each individual
which is an instance of a prototype is associated with a score which denotes the “degree of typicality”
of the individual with respect to the concept described by the prototype. As in [13], such a degree is
computed from the prototype features that are satisfied by the instance and their weights. Ideally, the
prototype score of an individual allows us to determine preferences over models: for an individual,
axioms on prototypes with a higher score are preferred to the ones on lower scoring prototypes; thus the
measure needs to be comparable across different prototypes.</p>
      <p>Given the set of prototype names NP, a family of prototype score functions {scoreP}P∈NP is composed of
functions scoreP : NI → R, one for each prototype name P ∈ NP, such that every function of the family has
range in a fixed interval [vmin, vmax] ⊆ R.</p>
      <sec id="sec-2-1">
        <title>1 As in [8], α(e) can be formally specified via the FO-translation of α.</title>
        <p>Ideally, these families of functions can then be used to define preferences over models: different
preference criteria can be defined, in particular, by using the results of score functions on the exceptional
individuals in the exception assumption sets χ of χ-interpretations.</p>
        <p>In general, a preference between exception assumption sets is some relation between sets χ for K,
denoted χ1 &gt; χ2. Given two χ-interpretations ℐχ1 = ⟨ℐ1, χ1⟩ and ℐχ2 = ⟨ℐ2, χ2⟩, we then say that ℐχ1
is preferred to ℐχ2 (denoted ℐχ1 &gt; ℐχ2) if χ1 &gt; χ2.</p>
        <p>Finally, we define the notion of PKB model as a minimal justified model for the PKB.
Definition 9 (PKB model). An interpretation ℐ is a PKB model of K (denoted ℐ |= K) if
• K has some justified χ-model ℐχ = ⟨ℐ, χ⟩;</p>
        <p>• there exists no justified χ-model ℐ′χ′ = ⟨ℐ′, χ′⟩ that is preferred to ℐχ.</p>
        <p>The consequence from PKB models of K (denoted K |= α) characterises the “preferred” consequences
of the PKB, on the basis of the degree of typicality of instances.</p>
        <p>The above definitions are provided for any score function and preference: in the following we will
define one specific way of instantiating these definitions (which we later use as a counterpart for the
preference in the ASP encoding). For another way of computing scores and preferences, see [10].</p>
        <p>A simple score function can be defined by considering the features that are inferable from the KB in
a single model:
Definition 10 (Model dependent prototype score). Given a prototype description P(C1 : w1, . . . , Cn :
wn), we define the score function scorePℐχ : NI → R for prototype P and a justified χ-model ℐχ as:
scorePℐχ(e) = Σ { wi | ℐχ |= Ci(e), 1 ≤ i ≤ n }.</p>
This measure, however, depends on the value interval over which the prototype weights have been
defined: in order to compare the score of an individual with scores relative to other prototypes, this
value needs to be normalised. We do so by computing the maximum score  and minimum score
 for all prototypes.</p>
        <p>The maximum score smaxP denotes the maximum value of scoreP obtainable from
the weights of a consistent subset of the features of P. Formally, given a prototype description P(C1 :
w1, . . . , Cn : wn), let S be the set of sets F ⊆ {C1, . . . , Cn} s.t. F ∪ K is consistent. Then: smaxP =
max{ ΣCi∈F wi | F ∈ S }.</p>
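        <p>A sketch of this computation (in Python; the consistency check is passed in as a stub, since deciding consistency of a feature set with the KB requires DL reasoning, which is not modelled here):</p>

```python
# Sketch: smax as the best weight sum over feature subsets that are
# jointly consistent with the KB (consistency is a caller-supplied stub).
from itertools import combinations

def smax(weights, consistent):
    feats = list(weights)
    best = 0
    for r in range(1, len(feats) + 1):
        for subset in combinations(feats, r):
            if consistent(set(subset)):
                best = max(best, sum(weights[f] for f in subset))
    return best

wolf = {"livesInWoods": 10, "hasLegs": 4, "livesInPack": 8, "Hunts": 11}
print(smax(wolf, lambda s: True))  # 33: all features jointly consistent
```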
        <p>The minimal score sminP denotes the sum of the weights for “unavoidable” features, namely those that
are strictly implied by membership in the prototype concept. Formally: sminP = Σ { wi | K |= P ⊑ Ci }.
Now, a normalised score function nscorePℐχ can be derived from scorePℐχ as:
nscorePℐχ(e) = (scorePℐχ(e) − sminP) / (smaxP − sminP).</p>
        <p>By this normalisation we obtain a family of prototype score functions with range in the same interval
[0, 1], allowing for comparison of prototype scores on the same individual.</p>
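        <p>A sketch of the normalised score on the Dog weights of Example 1 (assuming, for illustration, that all features are jointly consistent, so that smax is the total weight, and that only hasLegs is strictly implied by Dog):</p>

```python
# Sketch of nscore = (score - smin) / (smax - smin), with the simplifying
# assumptions that all features are jointly consistent (smax = total
# weight) and that the strictly implied features are given explicitly.
def nscore(weights, satisfied, strict):
    score = sum(w for f, w in weights.items() if f in satisfied)
    smin = sum(w for f, w in weights.items() if f in strict)
    smax = sum(weights.values())
    return (score - smin) / (smax - smin)

dog = {"hasCollar": 33, "livesInHouse": 22, "hasLegs": 11, "isTamed": 44}
# An instance with legs and tamed, where hasLegs is strictly implied:
print(round(nscore(dog, {"hasLegs", "isTamed"}, {"hasLegs"}), 2))  # 0.44
```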
        <p>Simple model-dependent preference. For the definition of a preference based on such scores, we
consider the case in which the scores are equal across all of the justified χ-models of K:
Definition 11 (Stable score). Given the set M = {ℐχ | ℐχ is a justified χ-model of K}, scoreP(e) is a
stable score if for every ℐχ, ℐ′χ ∈ M, it holds that scorePℐχ(e) = scorePℐ′χ(e).</p>
        <p>Intuitively, this condition of stability of scores occurs in the case in which the computation of the
typicality scores does not depend on other prototype axioms: this condition is verified, for example, in
knowledge bases where axioms and assertions involving features do not depend on the satisfaction
of prototype axioms, but only on the “strict” part of the KB.</p>
        <p>Let us consider the case in which all of the prototypes in P have stable scores: in this case, we can
propose a notion of preference based on model dependent scores, which comes as a simplification of
the model dependent preference introduced in [10]:
Definition 12 (Preference SimpleMDP). χ1 &gt; χ2 if, for every ⟨P ⊑ C, e⟩ ∈ χ1 ∖ χ2 such that there
exists a ⟨P′ ⊑ C′, e⟩ ∈ χ2 ∖ χ1, it holds that nscoreP(e) &lt; nscoreP′(e).</p>
        <p>Basically, SimpleMDP prefers justified χ-models where the exceptions occur on elements of the lower
scoring prototypes.</p>
        <p>Example 2. Considering the PKB reported in the example above, assume we have two PKB interpretations
ℐ1 and ℐ2 associated respectively with the following two sets of exception assumptions:
χ1 = {⟨Wolf ⊑ ¬Trusted, balto⟩, ⟨Dog ⊑ Trusted, cerberus⟩}
χ2 = {⟨Dog ⊑ Trusted, balto⟩, ⟨Dog ⊑ Trusted, cerberus⟩}</p>
        <p>We now have two χ-interpretations, ⟨ℐ1, χ1⟩ and ⟨ℐ2, χ2⟩. Assuming that
they are also χ-models, we can check whether the two are also justified. Since the exception
assumptions have the following clashing sets, respectively {Wolf(balto), Trusted(balto), Dog(cerberus),
¬Trusted(cerberus)} for the assumptions in χ1 and {Dog(balto), ¬Trusted(balto),
Dog(cerberus), ¬Trusted(cerberus)} for those in χ2, they are both justified.</p>
        <p>In order to decide which model is preferred, we need to compute the prototype scores for balto
and for cerberus: note that in our PKB the scores are stable. We have: scoreWolf(balto) = 14,
scoreDog(balto) = 55, scoreDog(cerberus) = 11. Then we need to normalise them, getting
nscoreDog(balto) ≈ 0.4, nscoreWolf(balto) ≈ 0.3, nscoreDog(cerberus) = 0. Consequently
nscoreWolf(balto) &lt; nscoreDog(balto) and, since ⟨Dog ⊑ Trusted, cerberus⟩ is present in both χ1
and χ2 and thus does not influence the preference order, we can conclude that χ1 &gt; χ2. This means that
the preferred model, i.e. the PKB model, is ℐ1, where balto is an exception to Wolf ⊑ ¬Trusted and
cerberus is an exception to Dog ⊑ Trusted. Consequently, it holds that K |= Trusted(balto) and
K |= ¬Trusted(cerberus).</p>
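        <p>The comparison above can be sketched as follows (a simplified Python reading of SimpleMDP; the normalised scores are hard-coded to values consistent with the example, and the tuple representation of exception assumptions is illustrative):</p>

```python
# Sketch: SimpleMDP preference between two exception assumption sets.
# chi1 is preferred to chi2 if every exception in chi1 \ chi2 that
# conflicts with one in chi2 \ chi1 on the same individual has a
# strictly lower normalised prototype score.
nscore = {("Wolf", "balto"): 10/29, ("Dog", "balto"): 44/99,
          ("Dog", "cerberus"): 0.0}

chi1 = {("Wolf ⊑ ¬Trusted", "Wolf", "balto"),
        ("Dog ⊑ Trusted", "Dog", "cerberus")}
chi2 = {("Dog ⊑ Trusted", "Dog", "balto"),
        ("Dog ⊑ Trusted", "Dog", "cerberus")}

def preferred(c1, c2):
    for (_, p1, e1) in c1 - c2:
        for (_, p2, e2) in c2 - c1:
            if e1 == e2 and nscore[(p1, e1)] >= nscore[(p2, e2)]:
                return False
    return True

print(preferred(chi1, chi2))  # True: chi1 is preferred
```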
        <p>Moreover, we note that for pluto and alberto we can standardly infer Trusted(pluto) and
¬Trusted(alberto). The reason is that exception assumptions refer to specific
individuals, and since there are no contradicting assertions for pluto and alberto, there are no clashing sets that
justify assuming them as exceptions. Therefore, the axioms in T apply to them standardly. ◇</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Translation to ASP with Preferences</title>
      <sec id="sec-3-1">
        <title>3.1. ASP Translation</title>
        <p>Defeasible reasoning on PKBs can be encoded by means of an ASP translation: the basis of this encoding
is the ASP translation for DL-Liteℛ with defeasible axioms presented in [9].</p>
        <p>If we consider input PKBs in DL-Liteℛ [11], the translation process can be largely defined as in [9]:
the goal of the encoding is to obtain a datalog representation of the input PKB that can then be used to
reason on instance checking queries.</p>
        <p>In the following, we adopt the model dependent scoring function: for simplicity, we further assume
that the scores are stable, already normalised across prototype descriptions, and integer-valued. Although
these assumptions can be seen as limiting, they allow us to show that some simple preference reasoning
can already be obtained using standard ASP constructs; in particular, we use weak constraints and
aggregates to encode the preference rules, adopting the syntax of the clingo ASP solver for such
constructs.</p>
        <p>Strict axioms: for A, B ∈ NC, r, s ∈ NR, a, b ∈ NI:
A(a)
r(a, b)
A ⊑ B
A ⊑ ¬B
A ⊑ ∃r
∃r ⊑ B
r ⊑ s
Dis(r, s)
Inv(r, s)
Irr(r)</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Prototype axioms: for P ∈ NP, C ∈ NC:</title>
      <p>P ⊑ C</p>
      <p>DL-Liteℛ deduction rules (here x, y, y′ range over individual constants, c, d over concept names, and r, s over role names):
(pdlr-instd) instd(x, c) ← insta(x, c).
(pdlr-tripled) tripled(x, r, y) ← triplea(x, r, y).
(pdlr-subc) instd(x, d) ← subClass(c, d), instd(x, c).
(pdlr-supnot) ¬instd(x, d) ← supNot(c, d), instd(x, c).
(pdlr-subex) instd(x, c) ← subEx(r, c), tripled(x, r, y).
(pdlr-supex) tripled(x, r, y) ← supEx(c, r, y), instd(x, c).
(pdlr-subr) tripled(x, s, y) ← subRole(r, s), tripled(x, r, y).
(pdlr-dis1) ¬tripled(x, s, y) ← dis(r, s), tripled(x, r, y).
(pdlr-dis2) ¬tripled(x, r, y) ← dis(r, s), tripled(x, s, y).
(pdlr-inv1) tripled(y, s, x) ← inv(r, s), tripled(x, r, y).
(pdlr-inv2) tripled(x, r, y) ← inv(r, s), tripled(y, s, x).
(pdlr-irr) ¬tripled(x, r, x) ← irr(r), const(x).
(pdlr-nsubc) ¬instd(x, c) ← subClass(c, d), ¬instd(x, d).
(pdlr-nsupnot) ¬instd(x, c) ← supNot(c, d), instd(x, d).
(pdlr-nsubex) ¬tripled(x, r, y) ← subEx(r, c), const(y), ¬instd(x, c).
(pdlr-nsupex) ¬instd(x, c) ← supEx(c, r, y), const(x), all_nrel(x, r).
(pdlr-nsubr) ¬tripled(x, r, y) ← subRole(r, s), ¬tripled(x, s, y).
(pdlr-ninv1) ¬tripled(x, r, y) ← inv(r, s), ¬tripled(y, s, x).
(pdlr-ninv2) ¬tripled(y, s, x) ← inv(r, s), ¬tripled(x, r, y).
(pdlr-allnrel1) all_nrel_step(x, r, y) ← first(y), ¬tripled(x, r, y).
(pdlr-allnrel2) all_nrel_step(x, r, y) ← all_nrel_step(x, r, y′), next(y′, y), ¬tripled(x, r, y).
(pdlr-allnrel3) all_nrel(x, r) ← last(y), all_nrel_step(x, r, y).</p>
      <p>Output translation O
(o-concept) C(a) ↦→ {instd(a, C).}
(o-role) r(a, b) ↦→ {tripled(a, r, b).}
These assumptions can be relaxed, for example, by adopting an encoding of rational numbers in ASP [15] and including (possibly external) computations
for score normalisation; compare also the approach of [16].</p>
      <p>Normal form for DL-Liteℛ. For the DL knowledge base part of our PKBs, we assume that DL-Liteℛ
axioms are in the normal form presented in [9], which allows us to simplify the formulation of the translation
cases. The kinds of axioms included in the normal form are shown in Table 1.</p>
      <p>A set of rules to transform any DL-Liteℛ PKB into this normal form, and a proof of equivalence of the
rewritten PKB, can be given analogously to the original paper. The only notable difference in the case of
PKBs is the fact that the only form of “defeasible” axiom is that of prototype axioms.
(iproto) P(C1 : w1, . . . , Cn : wn) ↦→ { isProto(P). featwt(P, C1, w1). . . . featwt(P, Cn, wn). }</p>
      <p>Deduction rules for score and preference
(dpref-as) addScore(p, x, c, w) ← instd(x, p), instd(x, c), featwt(p, c, w).
(dpref-sc) score(p, x, s) ← instd(x, p), isProto(p), s = #sum{ w, c : addScore(p, x, c, w) }.
(dpref-os) ovrscore(p, c, x, s) ← ovr(p, c, x), score(p, x, s).</p>
      <p>(dpref-wc) ⇝ ovrscore(p, c, x, s). [s, p, c, x]
Translation rules overview. The datalog encoding is composed of different sets of translation rules
(inspired by the materialization calculus in [17]). The complete set of rules in our translation is provided
in Tables 2–4. The encoding has a set of input rules, which translate DL axioms and signature into datalog,
deduction rules, which provide instance level inference by datalog rules, and output rules, which encode as a
datalog fact the ABox assertion to be proved. In our case, the translation includes the following sets of
rules:
DL-Liteℛ input rules: input DL-Liteℛ rules (in Table 2) translate (strict) KB axioms in normal form into
their ASP encoding: for example, an atomic concept inclusion A ⊑ B is encoded by the rule:
A ⊑ B ↦→ {subClass(A, B).}</p>
      <p>DL-Liteℛ deduction rules: deduction rules for DL-Liteℛ (in Table 2) allow instance level reasoning on
the interpretation of encoded axioms: for example, in the case of atomic concept inclusions:
instd(x, d) ← subClass(c, d), instd(x, c).</p>
      <p>DL-Liteℛ output rules: output rules (in Table 2) define the translation of instance queries to the ASP
encoding: for example, for an atomic concept assertion C(a), we have the rule:</p>
      <p>C(a) ↦→ {instd(a, C).}</p>
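      <p>The way such deduction rules operate can be sketched with a naive forward-chaining evaluation (in Python; the predicate names follow the encoding, while the evaluation loop itself is only an illustration of datalog semantics, not the actual solver):</p>

```python
# Naive forward chaining for instd under subClass facts
# (the pdlr-subc rule: instd(x, d) if subClass(c, d) and instd(x, c)).
insta = {("pluto", "dog")}
subClass = {("dog", "hasLegs")}

instd = set(insta)          # pdlr-instd: ABox facts become derived facts
changed = True
while changed:              # iterate pdlr-subc to a fixpoint
    changed = False
    for (c, d) in subClass:
        for (x, c2) in list(instd):
            if c2 == c and (x, d) not in instd:
                instd.add((x, d))
                changed = True

print(sorted(instd))  # [('pluto', 'dog'), ('pluto', 'hasLegs')]
```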
      <p>The previous rules allow us to translate and reason over the strict part of the KB. Other sets of rules provide
the datalog translation and reasoning rules for the defeasible part of the PKB:
Prototype axioms input rules: prototype axioms of the form P ⊑ C can be translated using the input
rules for defeasible axioms (in Table 4) from the original translation: we can encode P ⊑ C by the rule
P ⊑ C ↦→ {def_subclass(P, C).}
Note that, with respect to [9], this is the only form of “defeasible” axiom that is defined in our formalism.
Overriding rules: overriding rules (in Table 4) are used to determine when an overriding of the above
axiom occurs:</p>
      <p>ovr(p, c, x) ← def_subclass(p, c), instd(x, p), ¬instd(x, c).</p>
      <p>Defeasible application rules: the following rule (in Table 4) defines when the prototype axiom can be
applied, leaving out the instances for which an overriding can be proved:</p>
      <p>instd(x, c) ← def_subclass(p, c), instd(x, p), not ovr(p, c, x).</p>
      <p>The translation rules presented so far simply restrict the use of the encoding proposed in
[9]: in fact, they provide an encoding of the standard DL part of a PKB. We now have to introduce an
ASP encoding of prototype descriptions and scores in order to define a preference over the answer
sets (compatible with the preference defined in our semantics).</p>
      <p>Prototype description rule: prototype descriptions can easily be added to the program in the form of facts
with the following rule (in Table 4):</p>
      <p>P(C1 : w1, . . . , Cn : wn) ↦→ { isProto(P). featwt(P, C1, w1). . . . featwt(P, Cn, wn). }
Preference rules: we can then use this information to compute the score associated with the exceptions in
our answer sets. The score for a particular instance of a prototype can be computed using the rules (in
Table 4):</p>
      <p>addScore(p, x, c, w) ← instd(x, p), instd(x, c), featwt(p, c, w).</p>
      <p>score(p, x, s) ← instd(x, p), isProto(p), s = #sum{ w, c : addScore(p, x, c, w) }.
Then, we can associate a score with the overriding on an individual, based on its score for the particular
prototype:</p>
      <p>ovrscore(p, c, x, s) ← ovr(p, c, x), score(p, x, s).</p>
      <p>Using weak constraints, we can prefer the answer sets where overridings occur on the less typical
elements of prototypes:</p>
      <p>⇝ ovrscore(p, c, x, s). [s, p, c, x]
Translation process. Given a PKB K in DL-Liteℛ normal form, a program PK(K) that encodes
query answering for K can be defined as:</p>
      <p>PK(K) = I(K) ∪ ID(K) ∪ IP(K) ∪ Pdlr ∪ PD ∪ Ppref
For completeness of the DL-Liteℛ translation (and in particular to reason on roles for the negation of
existential axioms), PK(K) is completed with a set of supporting facts about constants: for every literal
nom(a) or supEx(c, r, a) in PK(K), const(a) is added to PK(K). Then, given an arbitrary enumeration
a0, . . . , an s.t. each const(ai) ∈ PK(K), the facts first(a0), last(an) and next(ai, ai+1) with 0 ≤
i &lt; n are added to PK(K).</p>
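      <p>The supporting facts for the constant enumeration can be generated mechanically; a sketch (in Python, with the fact names as in the translation; the particular order chosen is irrelevant, as long as it is fixed):</p>

```python
# Generate first/next/last facts for an arbitrary fixed enumeration of
# the constants appearing in const(...) facts.
def enumeration_facts(constants):
    cs = sorted(constants)  # any fixed total order works
    facts = ["first(%s)." % cs[0], "last(%s)." % cs[-1]]
    facts += ["next(%s,%s)." % (a, b) for a, b in zip(cs, cs[1:])]
    return facts

print(enumeration_facts({"balto", "pluto", "alberto"}))
```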
      <p>Query answering K |= α is then obtained by testing whether the (instance) query, translated to
datalog by O(α), is a consequence of PK(K), i.e., whether PK(K) |= O(α) holds.
Example 3. We can encode in ASP the example PKB introduced in Example 1 using the presented translation.
In particular, by applying the DL-Liteℛ input rules I on the knowledge part of the example PKB K (in
normal form, where the fresh concept name ntrusted stands for the complement ¬Trusted) we obtain the following facts:
supNot(trusted, ntrusted).
supNot(ntrusted, trusted).
subClass(dog, hasLegs).
subClass(wolf, hasLegs).</p>
      <p>insta(, ).
insta(,  ).
insta(, ).
insta(,  ).</p>
      <p>insta(, ).
insta(,  ).
insta(, ℎ).
insta(,  ).</p>
      <p>insta(, ℎ).
insta(, ℎ).
insta(,  ).</p>
      <p>insta(, ℎ).
insta(, ℎ).
insta(, ).</p>
      <p>Input rules for prototype axioms D applied to  produce the following facts, encoding the two prototype
axioms of our example:
def_subclass(, ).</p>
      <p>def_subclass(, ).</p>
      <p>Then, input rules  for prototype axioms add facts about the prototypes and prototype description scores
in  to the program (considering here a normalized integer version of the scores):
isProto().
isProto( ).</p>
      <p>featwt(, ℎ, 3).
featwt(, , 2).
featwt(, ℎ, 1).
featwt(,  , 4).</p>
      <p>featwt(,  , 3).
featwt(, ℎ, 1).
featwt(,  , 2).
featwt(, ℎ, 4).</p>
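      <p>The example uses a normalized integer version of the scores. One simple way to obtain such integers from real-valued weights (our own illustrative choice; the translation only assumes that some such normalization has been applied) is to rescale each prototype's weights to a fixed integer range:</p>
      <preformat>
```python
def normalize_weights(weights, scale=4):
    """Rescale positive real-valued feature weights to integers in
    1..scale, preserving their relative order within a prototype."""
    top = max(weights.values())
    return {f: max(1, round(w / top * scale)) for f, w in weights.items()}

# Hypothetical raw weights for one prototype:
print(normalize_weights({"f1": 0.9, "f2": 0.45, "f3": 0.2}))  # {'f1': 4, 'f2': 2, 'f3': 1}
```
      </preformat>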
      <p>The program  () is then completed with the deduction rules  ∪ D ∪  and the supporting facts,
as per its definition above.</p>
      <p>If we solve the resulting program, consistently with what we have shown in the semantics, we obtain two
"candidate" answer sets: 1 that contains ovr(, , ), ovr(, , )
and 2 containing ovr(, , ), ovr(, , ). By applying the rules for
the computation of scores, in 1 we have that ovrscore(, , , 4) and in 2 we have
ovrscore(, , , 5), while both contain ovrscore(, , , 1). Then, as
expected from our preference definition, the weak constraint prefers the answer set with the "less expensive"
set of overridings: thus, the preferred answer set is 1. ◇</p>
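      <p>The effect of the weak constraint in the example can be simulated directly: each candidate answer set pays the sum of the costs of its ovrscore violations, and the optimal answer set minimizes this sum. In the following sketch the names I1 and I2 stand in for the two candidate answer sets of the example:</p>
      <preformat>
```python
# Costs of the ovrscore violations in the two candidate answer sets
# of Example 3: I1 pays 4 + 1, I2 pays 5 + 1.
candidates = {"I1": [4, 1], "I2": [5, 1]}

def optimal(answer_sets):
    """Return the answer set with the minimal total weak-constraint cost."""
    return min(answer_sets, key=lambda s: sum(answer_sets[s]))

print(optimal(candidates))  # I1 (cost 5 vs. cost 6)
```
      </preformat>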
      <sec id="sec-4-1">
        <title>3.2. Correctness</title>
        <p>In the following, we show that the presented ASP encoding provides a sound and complete
materialization calculus for DL-Liteℛ PKBs in normal form (under the assumption of integer, normalized input
weights). As in [9], in the translation we assume UNA on the elements of  and consider named models, i.e.
models restricted to (), the set of all constants that occur in  together with their Skolem constants. We show the
correctness result on the least model of  with respect to an exception assumption set  , denoted by
ℐ^( ).</p>
        <p>Let ℐ = ⟨ℐ,  ⟩ be a justified named  -model: we define the set of overriding assumptions
OVR(ℐ ) = { ovr(, , ) | ⟨ ⊑ , ⟩ ∈  }. Given a  -interpretation ℐ and the input PKB
, we can define a corresponding interpretation  = (ℐ ) for  (): the construction of  extends
the one used in [9] to cover the interpretation of the preference rules.</p>
        <p>(1).  ∈ , if  ∈  ();
(2). instd(, ) ∈ , if ℐ |= () and ¬instd(, ) ∈ , if ℐ |= ¬();
(3). tripled(, , ) ∈ , if ℐ |= (, ) and ¬tripled(, , ) ∈ , if ℐ |= ¬(, );
(4). tripled(, ,  ) ∈ , if ℐ |= ∃() for  =  ⊑ ∃;
(5). all_nrel(, ) ∈  if ℐ |= ¬∃();
(6). ovr(, , ) ∈ , if ovr(, , ) ∈ OVR(ℐ );
(7). {isProto( ), featwt(, 1, 1), . . . , featwt(, , )} ⊆ , if  (1 : 1, . . . ,  : ) ∈ ;
(8). addScore(, , ) ∈ , if ℐ |=  (), () and featwt(, , ) ∈ ;
(9). score(, , ) ∈ , if  = score ();
(10). ovrscore(, , , ) ∈ , if ovr(, , ) ∈  and score(, , ) ∈ .</p>
        <p>The next proposition shows that the least models of  can be represented by the answer sets of the
program  (). Considering that the translation presented for DL-Liteℛ simply restricts the
use of the encoding proposed in [9], and given the analogous interpretation of defeasible axioms and
their justification, we directly inherit the corresponding result (Proposition 7) for our translation.
Lemma 1. Let  be a PKB in DL-Liteℛ normal form with normalized integer scores and assuming that
scores are stable. Then:
(i). for every (named) justified exception assumption set  , the interpretation  = (ℐ^( )) is an answer
set of  ();
(ii). every answer set  of  () is of the form  = (ℐ^( )) where  is a (named) justified exception
assumption set for .</p>
        <p>Proof (Sketch). Intuitively, since the newly added preference rules play a minor role in the computation
of such answer sets (they only add information from  and additional facts about scores), the result
can be proved analogously to [9, Proposition 7] in the original DL-Liteℛ translation. This is obtained by
showing that the answer sets of (the rules part of)  () correspond to the sets  = (ℐ^( )) with 
a justified exception assumption set for , using the construction of  detailed above. In more detail: (i)
is proved by showing that  = (ℐ^( )) built from a justified  satisfies  |=  () and that  is minimal
with respect to the reduct on ovr NAF-literals; on the other hand, (ii) can be shown by considering any
answer set  of  () and building a justified  -model ℐ for  such that  = (ℐ) = (ℐ^( ))
holds.</p>
        <p>The correspondence with PKB models is then obtained by considering the interpretation of preference
on answer sets defined by weak constraints (which minimize the score of overridings) and the definition
of preference SimpleMDP on  -models in the semantics.</p>
        <p>Lemma 2. Let  be a PKB in DL-Liteℛ normal form with normalized integer scores and assuming that
scores are stable.</p>
        <p>Then, ℐ^ is a PKB model of  iff there exists a (named) justified exception assumption set  s.t. (ℐ^( ))
is an optimal answer set of  ().</p>
        <p>Proof. The claim is proved by showing that ℐ^ is a PKB model iff:
(i). there exists a (named) justified exception assumption set  s.t. (ℐ^( )) is an answer set of  ();
(ii). (ℐ^( )) is an optimal answer set of  ().</p>
        <p>Condition (i) is directly verified from Lemma 1 and the definition of PKB model.</p>
        <p>To prove (ii), we have to show the correspondence of the SimpleMDP preference on exception
assumption sets with the order induced by the objective function  ()() on answer sets. In other
words, (ℐ^( )) is optimal if there does not exist a justified  ′ s.t.  ′ &gt;  (with respect to SimpleMDP).
(⇐) In one direction, suppose that  is preferred, that is, there does not exist a justified  ′ s.t.  ′ &gt;  .
This means that every such  ′ is either itself preferred (and thus, intuitively, with the same “cost”
as  ) or satisfies  ′ &lt;  . By the definition of SimpleMDP, in this case we have that for every
⟨ ⊑ , ⟩ ∈  ∖  ′, there exists a ⟨ ⊑ ,  ⟩ ∈  ′ ∖  with score () &lt; score( ). This means that
in  ′ there exists at least one “additional” ⟨ ⊑ ,  ⟩ such that score( ) is larger than any score ()
for the ⟨ ⊑ , ⟩ in  . Considering now the interpretation ′ = (ℐ^( ′)), we can show that it
necessarily has a higher cost with respect to  = (ℐ^( )). First, we note that weak constraints are only
associated with instances of ovrscore atoms: these atoms only depend on the instantiations of the score
atoms, which are computed by the (dpref-as) and (dpref-sc) rules and are the same for all answer sets (by the
stable scores assumption), and on the overriding atoms ovr. The optimization of the answer sets thus only
depends on the minimization of quantities related to the scores of such atoms (which, in turn, are
the scores of the exception assumptions). Since ⟨ ⊑ ,  ⟩ ∈  ′, by construction of ′ we have
that the corresponding ovr(, , ) ∈ ′ and ovrscore(, , , ) ∈ ′ with  = score( ). Thus,
we have an instantiation of the weak constraint rule (dpref-wc) relative to ovrscore(, , , ), which
adds a weak constraint violation to ′ with cost . Similarly, considering the ⟨ ⊑ , ⟩ ∈  above,
the construction of  adds ovr(, , ) ∈  and ovrscore(, , , ) ∈  with  = score (): we
know, by the preference, that  &gt;  holds. Now, considering the definition of the optimization
function  () from [18], since the violation in ′ is at the same level as the violation in , the
cost  of the violation (which is larger than the costs of the corresponding violations in ) ensures
that the cost of the violations in ′ is larger than that of those in . Thus, we have that  ()(′) &gt;  ()().
This shows the optimality of (ℐ^( )).
(⇒) The other direction can be shown by similar reasoning: supposing that  = (ℐ^( )) is optimal,
then for every other ′ = (ℐ^( ′)) we have  ()(′) &gt;  ()(). Thus, by the definition of the
optimization function, there exists at least one violation on an ovrscore(, , , ) causing
a higher cost with respect to . Considering the corresponding exception assumption sets, we can
map this back to the definition of their preference: there must exist a ⟨ ⊑ ,  ⟩ ∈  ′ ∖  with score( )
larger than any score () for ⟨ ⊑ , ⟩ ∈  ∖  ′. This corresponds to the definition of the
SimpleMDP preference: thus,  is preferred, which proves the result.</p>
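        <p>The core of the cost argument can be checked on a toy instance: if one exception assumption set contains an additional overriding whose score exceeds every score in the other, the objective function ranks the corresponding answer set as more costly, in agreement with the SimpleMDP preference. A small Python sketch (the set contents are hypothetical):</p>
        <preformat>
```python
def cost(exception_scores):
    """Total weak-constraint cost of the answer set induced by an
    exception assumption set (sum of its overriding scores)."""
    return sum(exception_scores)

# chi_prime differs from chi by an extra overriding of score 5,
# larger than every score in chi: its answer set must cost more.
chi = [2, 1]
chi_prime = [2, 1, 5]
print(cost(chi_prime) > cost(chi))  # True
```
        </preformat>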
        <p>The correctness of the translation with respect to instance checking is then a direct consequence of the
previous results.</p>
        <p>Theorem 1. Let  be a PKB in DL-Liteℛ normal form with normalized integer scores and assuming that
scores are stable. Let  ∈ ℒΣ s.t. ( ) is defined: then,  |=  iff  () |= ( ).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Related Work</title>
      <p>
        Regarding the formalization of DLs with Prototype Descriptions, our work can be compared to
approaches for non-monotonic DLs like [
        <xref ref-type="bibr" rid="ref3">3, 4</xref>
        ]: these approaches are inspired by the historical work
on defeasible reasoning in propositional logic presented in [19, 20], where formal properties, known
as KLM properties, have been introduced as properties that any non-monotonic logic should satisfy.
Moreover, implicitly or explicitly they rely on a notion of typicality for explaining the defeasibility of
their systems. Of particular interest for our work are formalisms developed starting from [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which use
weights and have a multi-preferential relation over the individuals with respect to the concepts they
are instances of, as, for instance, [21, 22]. Despite these commonalities, differently from such systems,
our approach does not introduce new operators or logical connectives to identify defeasible axioms:
this allows us to apply the non-monotonic mechanism by simply extending the classical DL knowledge
base with the needed prototype descriptions. Moreover, in our approach the preference relations hold
among models and are determined by reasoning on the knowledge, while in the aforementioned works the
ordering concerns the elements of the domain.
      </p>
      <p>Regarding the ASP translation, we note that reasoning by rewriting DLs into ASP programs has also
been adopted in other approaches for non-monotonic DLs like [22], in particular for taking advantage
of the non-monotonic features and declarative definition of answer set programs. The ASP encoding
presented in this paper derives from the ASP rewriting proposed in [8] for defeasible reasoning in
Contextualized Knowledge Repositories (CKR) with Justifiable Exceptions. We note that in [23] this
translation was already extended with weak constraints to model a similar notion of preference:
however, in [23] preference was not defined on “scores” derivable from the KB, but with respect to a
fixed contextual structure. In [24], such answer set preference was further refined, using an extension
of ASP, to allow for reasoning with more general contextual structures.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions and Future Work</title>
      <p>In this paper, we presented an ASP encoding enabling reasoning on DLs with Prototype Descriptions,
in particular in the DL-Liteℛ language. The encoding extends the ASP translation for DL-Liteℛ with
Justifiable Exceptions from [9] in order to take into account the model preference defined by prototype scores. By
considering reasonable assumptions on the input PKB and a basic form of model preference, we have
shown that our ASP translation is complete with respect to instance checking in DL-Liteℛ prototype
knowledge bases. In particular, our contribution shows that an ASP encoding of PKBs based only on
standard ASP constructs (and thus readily usable with standard ASP solvers) already suffices to
reason in our formalism with a simple case of preference induced by prototype scores. Moreover, since the
rules encoding answer set preference are distinct from the rules encoding the contents of the PKB, the
proposed translation provides a foundation for including more complex preferences (or, on the other
hand, for considering more expressive DL languages like, e.g., ℛℐ-RL [8]).</p>
      <p>The next steps in further developing a reasoning method for DLs with Prototype Descriptions will
thus include the extension of our translation to encode more general criteria for preference, such as the
preferences based on model-dependent and model-independent scores presented in [10]. In this regard,
more advanced tools for answer set preferences can be adopted: for example, a similar model preference
has been encoded using Asprin in [24], but more general comparisons across answer sets could also
be obtained with ASP with algebraic measures [25]. Another natural direction for our work is the
implementation of the ASP translation: one easy way to achieve this is to extend the CKRew datalog
rewriter used in [9] for the DL-Liteℛ translation by implementing the rewriting of the rules for the adopted
preference. On the other hand, we are currently continuing the study of the formalization of DLs with
Prototype Descriptions: for example, while in this paper we assumed independence of scores across
features, we are currently studying options for representing dependencies between features and the
impact of such dependencies on the evaluation of weights. Future proposals for reasoning methods will
then need to be adapted to these further developments of the formalism.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>We acknowledge the financial support through the ‘Abstractron’ project funded by the Autonome
Provinz Bozen - Südtirol (Autonomous Province of Bolzano-Bozen) through the Research Südtirol/Alto
Adige 2022 Call.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <sec id="sec-8-1">
        <p>The authors have not employed any Generative AI tools.</p>
        <p>[20] D. Lehmann, M. Magidor, What does a conditional knowledge base entail?, Artificial
Intelligence 55 (1992) 1–60. URL: https://www.sciencedirect.com/science/article/pii/000437029290041U.
doi:10.1016/0004-3702(92)90041-U.
[21] L. Giordano, D. Theseider Dupré, Weighted defeasible knowledge bases and a multipreference
semantics for a deep neural network model, in: Logics in Artificial Intelligence: 17th European
Conference, JELIA 2021, Virtual Event, May 17–20, 2021, Proceedings 17, Springer, 2021, pp.
225–242.
[22] L. Giordano, D. Theseider Dupré, An ASP approach for reasoning on neural networks under a
finitely many-valued semantics for weighted conditional knowledge bases, Theory and Practice
of Logic Programming 22 (2022) 589–605. doi:10.1017/S1471068422000163.
[23] L. Bozzato, L. Serafini, T. Eiter, Reasoning with justifiable exceptions in contextual hierarchies, in:
M. Thielscher, F. Toni, F. Wolter (Eds.), Principles of Knowledge Representation and Reasoning:
Proceedings of the Sixteenth International Conference, KR 2018, Tempe, Arizona, 30 October - 2
November 2018, AAAI Press, 2018, pp. 329–338. URL: https://aaai.org/ocs/index.php/KR/KR18/paper/view/18032.
[24] L. Bozzato, T. Eiter, R. Kiesel, Reasoning on multirelational contextual hierarchies via answer
set programming with algebraic measures, Theory Pract. Log. Program. 21 (2021) 593–609. URL:
https://doi.org/10.1017/S1471068421000284. doi:10.1017/S1471068421000284.
[25] T. Eiter, R. Kiesel, Weighted LARS for quantitative stream reasoning, in: G. D. Giacomo, A. Catalá,
B. Dilkina, M. Milano, S. Barro, A. Bugarín, J. Lang (Eds.), ECAI 2020 - 24th European Conference
on Artificial Intelligence, 29 August - 8 September 2020, Santiago de Compostela, Spain, Including
10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), volume 325 of
Frontiers in Artificial Intelligence and Applications, IOS Press, 2020, pp.
729–736. URL: https://doi.org/10.3233/FAIA200160. doi:10.3233/FAIA200160.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Strasser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Antonelli</surname>
          </string-name>
          ,
          <article-title>Non-monotonic Logic</article-title>
          , in: E. N.
          <string-name>
            <surname>Zalta</surname>
          </string-name>
          (Ed.),
          <source>The Stanford Encyclopedia of Philosophy</source>
          , Summer 2019 ed., Metaphysics Research Lab, Stanford University,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <source>Some Philosophical Problems from the Standpoint of Artificial Intelligence</source>
          , Morgan Kaufmann Publishers Inc., San Francisco, CA, USA,
          <year>1987</year>
          , p.
          <fpage>26</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Giordano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gliozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Olivetti</surname>
          </string-name>
          , G. Pozzato,
          <article-title>Semantic characterization of rational closure: From propositional logic to description logics</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>226</volume>
          (
          <year>2015</year>
          )
          <fpage>1</fpage>
          -
          <lpage>33</lpage>
          . URL: https://www.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>