<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Typicality-based revision for handling exceptions in Description Logics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roberto Micalizio</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gian Luca Pozzato</string-name>
          <email>gianluca.pozzatog@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dip. Informatica - Università di Torino</institution>
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We continue our investigation into how to revise a Description Logic knowledge base when exceptions are detected. Our approach relies on the methodology for debugging a Description Logic terminology, which addresses the problem of diagnosing inconsistent ontologies by identifying a minimal subset of axioms responsible for an inconsistency. In the approach we propose, once the source of the inconsistency has been localized, the identified TBox inclusions are revised in order to obtain a consistent knowledge base that includes the detected exception. We define a revision operator whose aim is to replace inclusions of the form “Cs are Ds” with “typical Cs are Ds”, admitting the existence of exceptions and obtaining a knowledge base in the nonmonotonic logic ALC^R_minT, which corresponds to a notion of rational closure for Description Logics of typicality. We also describe an algorithm implementing such a revision operator.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        The family of Description Logics (DL) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is one of the most important formalisms of
knowledge representation. DLs are reminiscent of the early semantic networks and of
frame-based systems. They offer two key advantages: (i) a well-defined semantics based
on first-order logic, and (ii) a good trade-off between expressivity and computational
complexity. DLs have been successfully implemented by a range of systems, and are
at the basis of languages for the Semantic Web such as OWL. In a DL framework, a
knowledge base (KB) comprises two components: (i) a TBox, containing the definition
of concepts (and possibly roles) and a specification of inclusion relations among them;
and (ii) an ABox, containing instances of concepts and roles, in other words, properties
of and relations between individuals.
      </p>
      <p>
        Recent works [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2–7</xref>
        ] have addressed the problem of diagnosing inconsistent
ontologies by identifying a minimal subset of axioms responsible for the inconsistency. The
idea of these works is that once the source of the inconsistency has been localized, the
ontology engineer can intervene and revise the identified axioms, rewriting or
removing some of them in order to restore consistency. These approaches presuppose that
the ontology has become inconsistent due to the introduction of errors, as may happen,
for instance, when two ontologies are merged.
      </p>
      <p>In this work we continue our investigation into revising a Description Logic
knowledge base when an exception is discovered. Although an exception has the same effect
as an error (i.e., it causes an ontology to become inconsistent), an exception is not an
error. Rather, an exception is a piece of additional knowledge that, although partially
in contrast with what we know, must be taken into account. Thus, on the one hand,
ignoring exceptions would be deleterious, as the resulting ontology would not reflect the
application domain correctly. On the other hand, accommodating exceptions requires
the exploitation of some form of defeasible reasoning that allows us to revise some of
the concepts in the ontology.</p>
      <p>
        In [8] we took a first step in the direction of providing a methodology for
revising a TBox when a new concept is introduced and the resulting TBox is
incoherent, i.e. it contains at least one concept whose interpretation is mapped to an empty set
of domain elements. Here we refine that proposal and take a further step, tackling
the problem of revising a TBox in order to accommodate newly received
information about an exception represented by an ABox individual x. Our approach is
inspired by the weakening-based revision introduced in [9] and relies on the
methodology by Schlobach et al. [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2–4</xref>
        ] for detecting exceptions by identifying a minimal subset
of axioms responsible for an inconsistency. Once the source of the inconsistency has
been localized, the identified axioms are revised in order to obtain a consistent
knowledge base including the detected exception about the individual x. To this aim, we use
a nonmonotonic extension of the DL ALC recently introduced by Giordano and
colleagues in [10]. This extension is based on the introduction of a typicality operator T
in order to express typical inclusions. The intuitive idea is to allow concepts of the
form T(C), whose meaning is that T(C) selects the typical instances of a
concept C. It is therefore possible to distinguish between properties holding for all
instances of a concept C (C ⊑ D) and those holding only for the typical instances of
C (T(C) ⊑ D). For instance, a knowledge base can consistently express that birds
normally fly (T(Bird) ⊑ Fly), but penguins are exceptional birds that do not fly
(Penguin ⊑ Bird and Penguin ⊑ ¬Fly). The T operator is intended to enjoy the
well-established properties of rational logic, introduced by Lehmann and Magidor in
[11] for propositional logic. In order to reason about prototypical properties and
defeasible inheritance, the semantics of this nonmonotonic DL, called ALC^R_minT, is based on
rational models and exploits a minimal-model mechanism based on the minimization
of the rank of the domain elements. This semantics corresponds to a natural extension
to DLs of Lehmann and Magidor’s notion of rational closure [11].
      <p>Given a consistent knowledge base K = (T, A) and a consistent ABox A′ =
{D1(x), D2(x), …, Dn(x)} such that (T, A ∪ A′) is inconsistent, we define a
typicality-based revision of T in order to replace some inclusions C ⊑ D in T with T(C) ⊑ D,
resulting in a nonmonotonic ALC^R_minT TBox T_new such that (T_new, A ∪ A′) is consistent
and such that T_new captures a notion of minimal change. As an example, consider a knowledge base K = (T, A) whose TBox T is as follows:</p>
      <sec id="sec-1-1">
        <title>Professor ⊑ AcademicStaffMember</title>
      </sec>
      <sec id="sec-1-2">
        <title>Professor ⊑ ∃teaches.Course</title>
        <p>representing that professors are members of the academic staff who teach courses,
and whose ABox contains the information that Frank is a professor, namely A =
{Professor(frank)}. If further information A′ about Frank is provided, for instance
that he does not teach any course because he asked for a sabbatical year:</p>
        <p>A′ = {¬∃teaches.Course(frank)}
the resulting knowledge base (T, A ∪ A′) is inconsistent. Our approach, given the
exception that arose in A′, provides a typicality-based revision of T as described by the
following T_new:</p>
      </sec>
      <sec id="sec-1-3">
        <title>Professor ⊑ AcademicStaffMember</title>
      </sec>
      <sec id="sec-1-4">
        <title>T(Professor) ⊑ ∃teaches.Course</title>
        <p>representing that, in normal circumstances, professors teach courses, admitting the
existence of exceptions. The revised TBox is now consistent with all the information in the
ABox, namely the knowledge base (T_new, A ∪ A′) is consistent in the DL of typicality
ALC^R_minT.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2 State of the Art and Motivations</title>
      <p>Our work starts from the contribution in [9], where a weakening-based revision
operator is introduced in order to handle exceptions in an ALC knowledge base. In that work,
the authors consider the following problem: given a consistent ALC knowledge base K,
further information is provided by an additional, consistent ALC knowledge base K′,
but the resulting K ∪ K′ is inconsistent. K′ contains only ABox assertions that,
as a matter of fact, correspond to exceptions. The basic idea of [9] is that conflicts are
solved by adding explicit exceptions to weaken the axioms involved, and then assuming that the
number of exceptions is minimal. A revision operator ∘w is introduced in order to weaken
the initial knowledge base K so as to handle the exceptions in K′: in other words, a
disjunctive knowledge base K ∘w K′ = {K1, K2, …, Kn}, where each Ki is such that Ki ∪ K′ is
consistent, represents the revised knowledge. Each Ki is obtained from K as follows:
if K′ contains information about an exception x, e.g., ¬D(x) ∈ K′ when C(x) and
C ⊑ D ∈ K, then either the inclusion is replaced by C ⊓ ¬{x} ⊑ D or C(x)
is replaced by ⊤(x). The revision operator ∘w then selects the TBoxes minimizing the
degree of the weakened knowledge base, defined as the number of exceptions
introduced. A further refined revision operator, taking into account exceptions introduced
by universal restrictions, is also introduced.</p>
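      <p>The weakening step just described can be sketched concretely. The following is a minimal sketch in Python under our own toy encoding (concepts and individuals as strings; the function names are ours, not those of [9]): each conflict, given by an assertion C(x), an inclusion C ⊑ D, and a new fact ¬D(x), admits exactly the two repairs described above, and a candidate revised knowledge base picks one repair per conflict.</p>

```python
from itertools import product

# Minimal sketch of the weakening step of [9] (our own toy encoding, not the
# authors' implementation). A conflict is a triple (C, D, x) standing for:
# the KB contains C(x) and the inclusion "C sub D", the new ABox says not-D(x).

def repairs_for_conflict(c, d, x):
    # Repair 1: weaken the inclusion to "C and not {x} sub D".
    r1 = ("weaken-inclusion", f"{c} and not {{{x}}} sub {d}")
    # Repair 2: weaken the assertion C(x) to TOP(x).
    r2 = ("weaken-assertion", f"TOP({x})")
    return [r1, r2]

def weakened_kbs(conflicts):
    """All candidate repaired KBs: one repair choice per conflict."""
    options = [repairs_for_conflict(c, d, x) for (c, d, x) in conflicts]
    return [list(choice) for choice in product(*options)]

# Tweety example: K contains "Bird sub Fly" and Bird(tweety); the new ABox
# contains not-Fly(tweety), i.e. a single conflict, yielding two candidates.
candidates = weakened_kbs([("Bird", "Fly", "tweety")])
```

      <p>With the single Tweety conflict this enumeration yields exactly the two alternatives K1 and K2 of the example that follows; the minimal-degree selection of [9] would then keep the candidates introducing the fewest exceptions.</p>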
      <p>Let us consider the following example. Let K contain Bird ⊑ Fly and Bird(tweety).
Let K′ be the newly introduced ABox, containing only the information that Tweety does
not fly (because he is, for instance, a penguin), so K′ = {¬Fly(tweety)}.
The revision operator ∘w introduces the following two weakening-based knowledge
bases:</p>
      <p>K1 = {Bird ⊓ ¬{tweety} ⊑ Fly}</p>
      <p>K2 = {⊤(tweety)}
Let us further consider the well-known example of the Nixon diamond: let K = (T, ∅),
where T is:</p>
      <sec id="sec-2-1">
        <title>Quaker ⊑ Pacifist</title>
      </sec>
      <sec id="sec-2-2">
        <title>Republican ⊑ ¬Pacifist</title>
        <p>and let K′ = {Quaker(nixon), Republican(nixon)}. In this case, we have that K ∘w
K′ = {K1, K2}, where K1 = {Quaker ⊓ ¬{nixon} ⊑ Pacifist, Republican ⊑
¬Pacifist} and K2 = {Quaker ⊑ Pacifist, Republican ⊓ ¬{nixon} ⊑ ¬Pacifist}.
On the contrary, the knowledge base obtained by weakening both inclusions is
discarded, since its degree of exceptionality is higher with respect to the alternatives: in
both K1 and K2 only one exception is introduced, whereas the discarded alternative
weakens two inclusions.</p>
        <p>The above simple examples show the drawbacks of the revision operator introduced
in [9]. In the first example, it can be observed that no further inferences about Tweety can be
performed in the revised knowledge base when Bird(tweety) is weakened to
⊤(tweety), because Tweety is no longer considered a bird: for instance, if K further
contains the inclusion Bird ⊑ Animal, we are no longer able to infer that Tweety
is an animal, i.e. Animal(tweety). Furthermore, in the weakened inclusion Bird ⊓
¬{tweety} ⊑ Fly all exceptions are explicitly enumerated; this implies that
a new revision would be needed whenever further exceptions are discovered. In other words, if we
further learn that cipcip is also a non-flying bird, the resulting knowledge base is again
inconsistent, and we need to revise it by replacing the inclusion with Bird ⊓
¬{tweety} ⊓ ¬{cipcip} ⊑ Fly. A similar problem also occurs in the example of the
Nixon diamond: since a disjunctive knowledge base {K1, K2, …, Kn} is satisfied in a
model M if and only if M is a model of at least one Ki, all the Ki become inconsistent
when another individual that is both a Quaker and a Republican is discovered.</p>
        <p>The approach in [9] is strongly related to well-established work on inconsistency
handling in knowledge bases formalized in propositional and first-order logics. In
particular, its basic ideas are inspired by the DMA (disjunctive maxi-adjustment) approach
proposed in [12] for handling conflicts in stratified propositional knowledge bases.
This approach is extended to first-order knowledge bases in [13], whose basic idea is
similar to that of weakening an inclusion: here a first-order formula ∀x. P(x) →
Q(x) generating a conflict is weakened by dropping the instances originating the
inconsistency, rather than dropping the whole formula. An alternative approach, called
RCMA (Refined Conjunctive Maxi-Adjustment), is proposed in [14]: here an inclusion
is replaced by a cardinality restriction; then, when a conflict is detected, the
corresponding cardinality restriction is weakened by adapting the number of elements
involved. However, when ABox assertions are responsible for the inconsistency, the
solution consists in deleting such assertions.</p>
        <p>
          Several works have been proposed in the literature to handle inconsistencies
in DLs. Most of them tackle the more general problem of revising terminologies
when two consistent sources K and K′ are put together and the result is inconsistent. Due
to space limitations, we mention only the most closely related ones. In [15] the basic idea
is to adopt a selection function for choosing among different consistent subsets of an
inconsistent DL knowledge base. Other works [
          <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2–7</xref>
          ] have addressed the problem of
diagnosing incoherent ontologies by identifying a minimal subset of axioms responsible
for the inconsistency.
        </p>
        <p>As a main difference with all the above-mentioned approaches, in our work we focus
on the specific problem of revising a TBox after discovering an exception, described
by means of ABox information. We tackle the problem of handling exceptions
in ALC knowledge bases by exploiting nonmonotonic Description Logics that allow
representing and reasoning about the prototypical properties of concepts, which hold not for all the
instances of a concept but only for the typical (or most normal) ones. As far as we
know, this is the first attempt to handle exceptions in DLs by exploiting the capabilities
of a nonmonotonic DL.</p>
        <p>In the above example of (non-)flying birds, our approach replaces the inclusion
Bird ⊑ Fly with the typicality-based weakened inclusion T(Bird) ⊑ Fly, which expresses
a property holding only for typical birds. In this case, the knowledge base
obtained by adding the fact ¬Fly(tweety) (Tweety is a non-flying bird) is consistent, and
Tweety will be an exceptional bird, preserving all the other properties possibly ascribed
to birds. In the example of the Nixon diamond, our approach provides the following
revised TBox:</p>
      </sec>
      <sec id="sec-2-3">
        <title>T(Quaker) ⊑ Pacifist</title>
      </sec>
      <sec id="sec-2-4">
        <title>T(Republican) ⊑ ¬Pacifist</title>
        <p>so that, given the information that Nixon is both a Quaker and a Republican, no
conclusion about his being a pacifist is drawn, since Nixon is both an exceptional
Quaker and an exceptional Republican.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 Description Logics of Typicality</title>
      <p>In recent years, a large amount of work has been done to extend the
basic formalism of DLs with nonmonotonic reasoning features. The traditional approach
is to handle defeasible inheritance by integrating some kind of nonmonotonic
reasoning mechanism [16–20]. A simple but powerful nonmonotonic extension of DLs is
proposed in [21]. In this approach, “typical” or “normal” properties can be directly
specified by means of a “typicality” operator T enriching the underlying DL; the
typicality operator T is essentially characterized by the core properties of nonmonotonic
reasoning, axiomatized by the preferential logic P in [22].</p>
      <p>In this work we refer to the most recent approach proposed in [10], where the
authors extend ALC with T by considering rational closure as defined by Lehmann and
Magidor [11] for propositional logic. Here the T operator is intended to enjoy the
well-established properties of the rational logic R. Even though T is a nonmonotonic operator (so
that, for instance, T(Bird) ⊑ Fly does not entail T(Bird ⊓ Penguin) ⊑ Fly),
the logic itself is monotonic. Indeed, in this logic it is not possible to monotonically
infer from T(Bird) ⊑ Fly, in the absence of information to the contrary, that also
T(Bird ⊓ Black) ⊑ Fly. Nor can it be nonmonotonically inferred from Bird(tweety),
in the absence of information to the contrary, that T(Bird)(tweety).
Nonmonotonicity is achieved by adapting to ALC with T the propositional construction of rational
closure. This nonmonotonic extension allows typical subsumptions to be inferred from the
TBox. Intuitively, and similarly to the propositional case, the rational closure
construction amounts to assigning a rank (a level of exceptionality) to every concept; this rank
is used to evaluate typical inclusions of the form T(C) ⊑ D: the inclusion is
supported by the rational closure whenever the rank of C is strictly smaller than the rank
of C ⊓ ¬D. From a semantic point of view, nonmonotonicity is achieved by defining,
on top of ALC with typicality, a minimal model semantics where the notion of
minimality is based on the minimization of the ranks of the domain elements. The problem
of extending rational closure to ABox reasoning is also taken into account: in order to
ascribe typical properties to individuals, the typicality of an individual is maximized.
This is done by minimizing its rank (that is, its level of exceptionality). Let us recall the
resulting extension ALC^R_minT in detail.</p>
      <p>Definition 1. We consider an alphabet C of concept names, an alphabet R of role names, and an
alphabet O of individual constants. Given A ∈ C and R ∈ R, we define:</p>
      <p>CR := A | ⊤ | ⊥ | ¬CR | CR ⊓ CR | CR ⊔ CR | ∀R.CR | ∃R.CR</p>
      <p>CL := CR | T(CR)
A knowledge base is a pair (T, A). T contains a finite set of concept inclusions CL ⊑
CR. A contains assertions of the form CL(a) and R(a, b), where a, b ∈ O.
We define the semantics of the monotonic ALC + TR in terms of rational
models: ordinary models of ALC are equipped with a preference relation &lt; on the
domain, whose intuitive meaning is to compare the “typicality” of domain elements;
that is to say, x &lt; y means that x is more typical than y. The typical members of a concept
C, that is the members of T(C), are the members x of C that are minimal with respect to
this preference relation (i.e., such that there is no other member of C more typical than x).
Definition 2 (Definition 3 in [10]). A model M of ALC + TR is any structure ⟨Δ, &lt;, ·I⟩
where: Δ is the domain; &lt; is an irreflexive, transitive and modular (if x &lt; y then either
x &lt; z or z &lt; y) relation over Δ; ·I is the extension function that maps each concept C
to CI ⊆ Δ, and each role R to RI ⊆ Δ × Δ, in the standard way for ALC concepts:
⊤I = Δ
⊥I = ∅
(¬C)I = Δ \ CI
(C ⊓ D)I = CI ∩ DI
(C ⊔ D)I = CI ∪ DI
(∀R.C)I = {x ∈ Δ | ∀y. (x, y) ∈ RI → y ∈ CI}
(∃R.C)I = {x ∈ Δ | ∃y. (x, y) ∈ RI and y ∈ CI}
whereas for concepts built by means of the typicality operator</p>
      <p>(T(C))I = Min&lt;(CI)
where Min&lt;(S) = {u ∈ S | there is no z ∈ S such that z &lt; u}. Furthermore, &lt;
satisfies the Well-Foundedness Condition, i.e., for all S ⊆ Δ and all x ∈ S, either
x ∈ Min&lt;(S) or there exists y ∈ Min&lt;(S) such that y &lt; x.</p>
      <p>Definition 3 (Definition 4 in [10]). Given an ALC + TR model M = ⟨Δ, &lt;, ·I⟩, we say
that: (i) M satisfies an inclusion C ⊑ D if CI ⊆ DI; (ii) M satisfies
an assertion C(a) if aI ∈ CI, and M satisfies an assertion R(a, b) if (aI, bI) ∈ RI.
Given a knowledge base K = (TBox, ABox), we say that: (i) M satisfies TBox if M
satisfies all inclusions in TBox; (ii) M satisfies ABox if M satisfies all assertions in
ABox; (iii) M satisfies K if it satisfies both its TBox and its ABox.</p>
      <p>Given a knowledge base K, an inclusion CL ⊑ CR, and an assertion CL(a), with a ∈
O, we say that the inclusion CL ⊑ CR is derivable from K, written K ⊨ALC+TR CL ⊑
CR, if CLI ⊆ CRI holds in all models M = ⟨Δ, &lt;, ·I⟩ satisfying K. Moreover, we
say that the assertion CL(a) is derivable from K, written K ⊨ALC+TR CL(a), if aI ∈ CLI
holds in all models M = ⟨Δ, &lt;, ·I⟩ satisfying K.</p>
      <sec id="sec-3-1">
        <title>Definition 4 (Rank of a domain element kM(x), Definition 5 [10]). Given a model</title>
        <p>M = ⟨Δ, &lt;, ·I⟩, the rank kM(x) of a domain element x ∈ Δ is the length of the longest
chain x0 &lt; … &lt; x from x down to a minimal element x0 (i.e. an x0 such that
there is no x′ with x′ &lt; x0).</p>
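        <p>Over a finite domain, both Min&lt; of Definition 2 and the rank kM(x) of Definition 4 can be computed directly; the following is a small illustrative sketch (our own encoding, with the preference relation given extensionally as a set of pairs, not part of [10]):</p>

```python
# Illustrative sketch: typical members and ranks over a finite, extensionally
# given preference relation, encoded as a set of (more_typical, less_typical)
# pairs. Assumes the relation is irreflexive, transitive and modular.

def min_pref(s, pref):
    """Members of s with no strictly more typical member in s (Min of Def. 2)."""
    return {u for u in s if not any((z, u) in pref for z in s)}

def rank(x, domain, pref):
    """Rank of x (Def. 4): length of the longest chain down to a minimal element."""
    below = {y for y in domain if (y, x) in pref}
    if not below:
        return 0
    return 1 + max(rank(y, domain, pref) for y in below)

# Three elements: a typical bird b, a penguin p, and an even odder penguin q.
domain = {"b", "p", "q"}
pref = {("b", "p"), ("b", "q"), ("p", "q")}
typical_birds = min_pref(domain, pref)   # only b is a typical bird
```

        <p>Here b gets rank 0, p rank 1 and q rank 2, mirroring the intuition that the rank measures the level of exceptionality of an element.</p>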
        <p>As already mentioned, although the typicality operator T itself is nonmonotonic (i.e.
T(C) ⊑ D does not imply T(C ⊓ E) ⊑ D), the logic ALC + TR is monotonic: what
is inferred from K can still be inferred from any K′ with K ⊆ K′. This is a clear
limitation in DLs. As a consequence of the monotonicity of ALC + TR, one cannot
deal with irrelevance. For instance, from the knowledge base of birds and penguins,
one cannot derive K ⊨ALC+TR T(Penguin ⊓ Black) ⊑ ¬Fly, even if the property
of being black is irrelevant with respect to flying. In the same way, if we added to
K the information that Tweety is a bird (Bird(tweety)), in ALC + TR one could not
tentatively derive, in the absence of information to the contrary, that T(Bird)(tweety)
and Fly(tweety).</p>
        <p>In order to tackle this problem, in [10] the definition of rational closure introduced
by Lehmann and Magidor [11] for the propositional case has been extended to the DL
ALC + TR. The resulting nonmonotonic logic is called ALC^R_minT. From a semantic
point of view, it is shown in [10] that minimal rational models that minimize the rank
of domain elements can be used to give a semantic reconstruction of this extension of
rational closure. The idea is as follows: given two models of K, one in which a given
domain element x has rank x1 and another in which it has rank x2, with x1 &gt; x2,
the latter is preferred, as in this model the element x is “more normal” than in the
former.</p>
        <p>Definition 5 (Minimal models; Definition 8 in [10]). Given M = ⟨Δ, &lt;, ·I⟩ and M′ =
⟨Δ′, &lt;′, ·I′⟩, we say that M is preferred to M′ (M &lt;FIMS M′) if: (i) Δ = Δ′; (ii)
CI = CI′ for all concepts C; (iii) for all x ∈ Δ it holds that kM(x) ≤ kM′(x),
whereas there exists y ∈ Δ such that kM(y) &lt; kM′(y). Given a knowledge base
K = (TBox, ABox), we say that M is a minimal model of K with respect to &lt;FIMS if it
is a model satisfying K and there is no model M′ satisfying K such that M′ &lt;FIMS
M. Furthermore, we say that M is preferred to M′ with respect to ABox, and we
write M &lt;ABox M′, if, for all individual constants a occurring in ABox, it holds that
kM(aI) ≤ kM′(aI′), and there is at least one individual constant b occurring in ABox
such that kM(bI) &lt; kM′(bI′).</p>
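        <p>The pointwise comparison of Definition 5 is easy to state operationally; the following is a minimal sketch (our own simplification, not code from [10]): a model is abstracted to its rank function, i.e. a dict from domain elements to ranks, and condition (ii) on concept extensions is assumed to hold.</p>

```python
# Sketch of the model preference of Definition 5 (our own simplification:
# a model is abstracted to its rank function over a shared domain, and the
# condition on concept extensions is assumed to hold).

def preferred(km, km_prime):
    """True iff the first model has ranks no higher everywhere and strictly
    lower somewhere, i.e. it is preferred in the sense of FIMS."""
    if set(km) != set(km_prime):
        return False                 # condition (i): same domain
    no_worse = all(km[x] == min(km[x], km_prime[x]) for x in km)
    strictly = any(km[y] != km_prime[y] for y in km)
    return no_worse and strictly

# Two models of the birds-and-penguins KB: in m1 Tweety is less exceptional,
# so m1 is preferred to m2 (and the preference is strict and irreflexive).
m1 = {"tweety": 1, "opus": 0}
m2 = {"tweety": 2, "opus": 0}
```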
        <p>Given a knowledge base K = (T, A), it is shown in [10] that an inclusion C ⊑ D
(respectively, an assertion C(a)) belongs to the rational closure of K if and only if
C ⊑ D (resp., C(a)) holds in all minimal models of K of a “special” kind, named
canonical models. The rational closure construction for ALC is inexpensive, since it
retains the same complexity as the underlying logic, and is thus a good candidate for defining
effective nonmonotonic extensions of DLs. More precisely, the problem of deciding
whether a typical inclusion belongs to the rational closure of T is in EXPTIME, as is
the problem of deciding whether an assertion C(a) belongs to the rational closure
over A.</p>
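        <p>To make the rational closure construction concrete, the following sketch implements the propositional version of Lehmann and Magidor's ranking by brute-force truth tables (our own encoding, for illustration only; formulas are Python predicates over truth assignments, and a conditional “C normally implies D” is a pair of predicates):</p>

```python
from itertools import product

# Propositional rational closure sketch (brute force, small alphabets only).
ATOMS = ("bird", "penguin", "fly")

def assignments():
    for values in product((True, False), repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def entails(conditionals, formula):
    """The material counterparts of the conditionals entail the formula."""
    def materials_hold(v):
        return all((not c(v)) or d(v) for c, d in conditionals)
    return all(formula(v) for v in assignments() if materials_hold(v))

def exceptional(conditionals, c):
    return entails(conditionals, lambda v: not c(v))

def rank(kb, c):
    """Rank of a formula: the first level at which it stops being exceptional."""
    level, current = 0, list(kb)
    while exceptional(current, c):
        reduced = [(a, b) for a, b in current if exceptional(current, a)]
        if reduced == current:   # fixed point: c never stops being exceptional
            return float("inf")
        current, level = reduced, level + 1
    return level

bird = lambda v: v["bird"]
penguin = lambda v: v["penguin"]
fly = lambda v: v["fly"]
KB = [(bird, fly), (penguin, bird), (penguin, lambda v: not fly(v))]

# A typical inclusion "T(C) sub D" is supported whenever rank(C) is strictly
# smaller than rank(C and not D); here rank(bird) is 0 and rank(penguin) is 1.
```

        <p>For instance, T(Penguin) ⊑ ¬Fly is supported because the rank of Penguin (1) is strictly smaller than the rank of Penguin ⊓ Fly (2).</p>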
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4 Typicality-based revision</title>
      <p>Similarly to what is done in [9], we define a notion of revised knowledge base, precisely
a typicality-based revised knowledge base. Given a consistent knowledge base</p>
      <p>K = (T, A)
we have to tackle the problem of accommodating further ABox information A′
describing an individual x that belongs to the extensions of the concepts D1, D2, …, Dn
and that is an exception to the knowledge described by K. Given the consistent ABox</p>
      <p>A′ = {D1(x), D2(x), …, Dn(x)}</p>
      <p>the knowledge base (T, A ∪ A′)
is inconsistent. We revise the TBox T of K by replacing some standard inclusions
C ⊑ D with typicality inclusions T(C) ⊑ D, in such a way that the resulting revised
knowledge base (T_new, A ∪ A′) is consistent in ALC^R_minT. We first define a
typicality-based weakening of a TBox inclusion as follows:</p>
      <sec id="sec-4-1">
        <title>Definition 6 (Typicality-based weakening of C ⊑ D). Given an ALC inclusion C ⊑</title>
        <p>D, we say that either C ⊑ D or T(C) ⊑ D is a typicality-based weakening of C ⊑ D.
Definition 7 (Typicality-based weakened TBox). Let K = (T, A) be a consistent
knowledge base and A′ = {D1(x), D2(x), …, Dn(x)} be a consistent ABox, and
suppose that (T, A ∪ A′) is inconsistent. An ALC^R_minT TBox Ti is a typicality-based
weakening of T if the following conditions hold:
– (Ti, A ∪ A′) is consistent;
– there exists a bijection f from T to Ti such that, for each C ⊑ D ∈ T, f(C ⊑ D)
is a typicality-based weakening of C ⊑ D.</p>
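        <p>Definition 6 makes each inclusion a binary choice, so a TBox with n inclusions has 2^n candidate weakenings, among which Definition 7 keeps those consistent with the enlarged ABox. The following is a minimal enumeration sketch (our own encoding; the consistency check is deliberately a stub, since a real one requires a reasoner for ALC^R_minT):</p>

```python
from itertools import product

# Sketch of Definitions 6 and 7: every inclusion "C sub D" either stays as is
# or becomes "T(C) sub D"; consistency checking is stubbed out, because a real
# check needs a full reasoner for the nonmonotonic logic.

def typicality_weakenings(tbox):
    """All 2**n candidate weakenings of a TBox given as (C, D) pairs."""
    for flags in product((False, True), repeat=len(tbox)):
        yield [f"T({c}) sub {d}" if flag else f"{c} sub {d}"
               for flag, (c, d) in zip(flags, tbox)]

def revision_candidates(tbox, consistent):
    """Rev_{T,A'}(K): the weakenings accepted by the consistency check."""
    return [ti for ti in typicality_weakenings(tbox) if consistent(ti)]

tbox = [("Professor", "AcademicStaffMember"),
        ("Professor", "EXISTS teaches.Course")]

# Stub: accept any weakening in which the inclusion violated by the
# sabbatical exception has been turned into a typicality inclusion.
stub = lambda ti: "T(Professor) sub EXISTS teaches.Course" in ti
candidates = revision_candidates(tbox, stub)
```

        <p>In the professor example only the two candidates that weaken the second inclusion survive; the minimal-change criterion described in the text then prefers the one that leaves the first inclusion untouched.</p>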
        <p>Given K = (T, A), we denote by RevT,A′(K) the set of all typicality-based
weakenings of T given a newly received ABox A′.</p>
        <p>Among all the typicality-based weakenings in RevT,A′(K) we select the one, called
T_new, capturing a notion of the minimal changes needed in order to accommodate the
discovered exception. The resulting ALC^R_minT knowledge base</p>
        <p>K_new = (T_new, A ∪ A′)
is consistent.</p>
        <p>Let us introduce the typicality-based revision approach by means of some examples.
Example 1. Let K = (T, ∅) where T is:</p>
        <sec id="sec-4-1-1">
          <title>Quaker ⊑ Christian</title>
        </sec>
        <sec id="sec-4-1-2">
          <title>Christian ⊑ Pacifist</title>
        </sec>
        <sec id="sec-4-1-3">
          <title>RepublicanPresident ⊑ Republican</title>
        </sec>
        <sec id="sec-4-1-4">
          <title>Republican ⊑ ¬Pacifist</title>
          <p>(the four inclusions above are numbered (1)–(4), in order)
and let A′ = {Quaker(nixon), RepublicanPresident(nixon)}.</p>
          <p>It can be shown that there are eleven different typicality-based weakenings in RevT,A′(K),
but the one chosen as the typicality-based revision T_new considers Nixon an
exceptional Christian and Republican, as follows:</p>
        </sec>
        <sec id="sec-4-1-5">
          <title>Quaker ⊑ Christian</title>
        </sec>
        <sec id="sec-4-1-6">
          <title>T(Christian) ⊑ Pacifist</title>
        </sec>
        <sec id="sec-4-1-7">
          <title>RepublicanPresident ⊑ Republican</title>
        </sec>
        <sec id="sec-4-1-8">
          <title>T(Republican) ⊑ ¬Pacifist</title>
          <p>T_new is a weakening of K given A′; therefore it accommodates the exception
represented by the individual nixon. The chosen solution does not lead to trivially inconsistent
concepts: in Example 1 above, consider a revised TBox in which only
inclusion (4) is weakened, that is, T(Republican) ⊑ ¬Pacifist is the only typicality
inclusion introduced. Of course, such a TBox is a weakening of K, and the resulting
knowledge base is consistent; however, in all its models there are no typical
Republicans that are also Quakers; in other words, the concept T(Republican) ⊓ Quaker is
inconsistent. As a consequence, Nixon is an exceptional Republican (i.e., a non-typical
Republican); however, Nixon is also a Quaker, and then a Christian, and therefore it can
be inferred that he is a pacifist. This is, in general, an unwanted conclusion, at least when
reasoning in a skeptical way.</p>
          <p>Moreover, the discovered exception is not handled in a “lazy” way, by weakening
all the inclusions in the TBox. As a matter of fact, when an inclusion C ⊑ D
is weakened to T(C) ⊑ D, we move from models in ALC where all Cs are Ds
(CI ⊆ DI) to models in ALC^R_minT where we are able to distinguish between typical
Cs and exceptional ones, with x an exceptional C (i.e., x ∉ T(C)I).
However, we prefer a knowledge base whose models minimize the number of concepts
for which x is exceptional. For instance, the following TBox is a weakening of K in
Example 1:</p>
        </sec>
        <sec id="sec-4-1-9">
          <title>T(Quaker) ⊑ Christian</title>
        </sec>
        <sec id="sec-4-1-10">
          <title>T(Christian) ⊑ Pacifist</title>
        </sec>
        <sec id="sec-4-1-11">
          <title>T(RepublicanPresident) ⊑ Republican</title>
        </sec>
        <sec id="sec-4-1-12">
          <title>T(Republican) ⊑ ¬Pacifist</title>
          <p>However, in this case we are introducing the (unneeded) possibility of having
exceptional Quakers that are not Christians, as well as the counterintuitive possibility of having
exceptional Republican presidents that are not Republicans.</p>
          <p>Finally, our approach prefers a revision in which the typicality-based weakenings
are performed on the most general concepts. Consider again Example 1: obviously,
the following</p>
        </sec>
        <sec id="sec-4-1-13">
          <title>T(Quaker) ⊑ Christian</title>
        </sec>
        <sec id="sec-4-1-14">
          <title>Christian ⊑ Pacifist</title>
        </sec>
        <sec id="sec-4-1-15">
          <title>T(RepublicanPresident) ⊑ Republican; Republican ⊑ ¬Pacifist</title>
          <p>is a weakening of K; however, it corresponds to considering Nixon an exceptional
Republican president, which does not allow us to conclude that he is a Republican (Republican(nixon)
is not entailed by the revised knowledge base). Symmetrically, Nixon is considered
an exceptional Quaker, which does not allow us to infer that he is a Christian (again, Christian(nixon)
is not entailed by the revised knowledge base). We prefer T_new, since there the typicality
operator is introduced over the more general concepts Christian and Republican; as
a consequence, from the revised knowledge base one can infer that Nixon is a Christian
and a Republican, but (correctly) no conclusion about his being a pacifist is
drawn (Nixon is both an exceptional Christian and an exceptional Republican).</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5 Computing a Revised TBox</title>
      <p>In this section we propose a possible algorithm that revises a given knowledge base
K = (T, A) according to the typicality-based weakening described in the previous
section. As described above, we are interested in revising K in order to accommodate an
additional observation of the form A′ = {D1(x), …, Dn(x)} such that (T, A ∪
A′) is inconsistent. Intuitively, the individual x is exceptional, as it possesses properties
that, as far as we know from the current K, do not occur together, but rather are in contrast.</p>
      <p>
        The algorithm we propose relies on two main concepts: the computation of
Minimal Unsatisfiability-Preserving Sub-TBoxes (MUPS), which single out the subsets of
inclusions strictly involved in the inconsistency, and the notion of Generalized Qualified
Subconcepts (gqs). Both concepts have been introduced by Schlobach et al. in their
seminal works [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] about the problem of debugging a DL terminology. We briefly
recall these basic notions before presenting our algorithm. Note that the techniques
proposed in [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] are restricted to unfoldable TBoxes, only containing unique, acyclic
definitions. An axiom is called a definition of A if it is of the form A v C, where
A 2 C is an atomic concept. An axiom A v C is unique if the KB contains no other
definition of A. An axiom is acyclic if C does not refer either directly or indirectly (via
other axioms) to A [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Since we rely on the above mentioned works, from now on we
restrict our concern to unfoldable TBoxes.
5.1 Mups, gqs, and specificity ordering
To explain incoherences in terminologies, Schlobach et al. propose a methodology
based on two steps: first, axiom pinpointing excludes axioms which are irrelevant to the
incoherence; second, concept pinpointing provides a simplified definition highlighting
the exact position of a contradiction within the axioms previously selected. In this
paper we are interested in the axiom pinpointing step, which identifies debugging-relevant
axioms. Intuitively, an axiom is relevant for debugging if, when removed, a TBox
becomes consistent, or at least one previously unsatisfiable concept turns satisfiable. The
notion of subset of relevant axioms is captured by the following definition.
Definition 8 (MUPS, Definition 3.1 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]). Let C be a concept which is unsatisfiable in
a TBox T . A set T 0 T is a minimal unsatisfiability-preserving sub-TBox (MUPS) of
T if C is unsatisfiable in T 0, and C is satisfiable in every sub-TBox T 00 T 0.
In the following, mups(T ; C) is used to denote the set of MUPS for a given
terminology T and a concept C. Intuitively, each set of axioms in mups(T ; C) represents a
conflict set; i.e., a set of axioms that cannot all be satisfied. From this point of view, it
is therefore possible to infer a diagnosis for the concept C by applying the Hitting-Set
tree algorithm proposed by Reiter [23]. However, the set mups(T ; C) is sufficient for
our purpose of dealing with exceptions.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], Schlobach also proposes a methodology for explaining concept
subsumptions. The idea is to reduce the structural complexity of the original concepts in order
to highlight the logical interplay between them. To this aim, Schlobach proposes to
exploit the structural similarity of concepts, which can be used to simplify terminological
concepts, and hence the subsumption relations. The structural similarity is based on the
notion of qualified subconcepts, namely, variants of those concepts a knowledge
engineer explicitly uses in the modeling process, where the context of their use (i.e., the sequence of
quantifiers and the number of negations) is kept intact. Schlobach specifies the
notion of qualified subconcepts in two ways: Generalized Qualified Subconcepts (gqs)
and Specialized Qualified Subconcepts (sqs), which are defined by induction as follows:
Definition 9 (Generalized/Specialized Qualified Subconcepts, [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]). Given concepts
A, C and D, we define:
gqs(A) = sqs(A) = {A} if A is atomic
gqs(C ⊓ D) = {C′, D′, C′ ⊓ D′ | C′ ∈ gqs(C), D′ ∈ gqs(D)}
gqs(C ⊔ D) = {C′ ⊔ D′ | C′ ∈ gqs(C), D′ ∈ gqs(D)}
gqs(∃r.C) = {∃r.C′ | C′ ∈ gqs(C)}
gqs(∀r.C) = {∀r.C′ | C′ ∈ gqs(C)}
gqs(¬C) = {¬C′ | C′ ∈ sqs(C)}
sqs(C ⊓ D) = {C′ ⊓ D′ | C′ ∈ sqs(C), D′ ∈ sqs(D)}
sqs(C ⊔ D) = {C′, D′, C′ ⊔ D′ | C′ ∈ sqs(C), D′ ∈ sqs(D)}
sqs(∃r.C) = {∃r.C′ | C′ ∈ sqs(C)}
sqs(∀r.C) = {∀r.C′ | C′ ∈ sqs(C)}
sqs(¬C) = {¬C′ | C′ ∈ gqs(C)}
As Schlobach himself notes, a simple consequence of this definition is that ⊨ C ⊑ C′
for every C′ ∈ gqs(C), and ⊨ D′ ⊑ D for each D′ ∈ sqs(D).
      </p>
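      <p>To make Definition 9 concrete, the following Python sketch (ours, not from [2]) computes gqs and sqs over concepts encoded as nested tuples; the encoding itself is an illustrative assumption.

```python
from itertools import product

# Concepts as nested tuples (an illustrative encoding):
#   ("atom", "A"), ("and", C, D), ("or", C, D),
#   ("exists", "r", C), ("forall", "r", C), ("not", C)

def gqs(c):
    """Generalized qualified subconcepts of Definition 9."""
    tag = c[0]
    if tag == "atom":
        return {c}
    if tag == "and":
        cs, ds = gqs(c[1]), gqs(c[2])
        return cs | ds | {("and", x, y) for x, y in product(cs, ds)}
    if tag == "or":
        return {("or", x, y) for x, y in product(gqs(c[1]), gqs(c[2]))}
    if tag in ("exists", "forall"):
        return {(tag, c[1], x) for x in gqs(c[2])}
    if tag == "not":
        return {("not", x) for x in sqs(c[1])}

def sqs(c):
    """Specialized qualified subconcepts of Definition 9 (dual clauses)."""
    tag = c[0]
    if tag == "atom":
        return {c}
    if tag == "and":
        return {("and", x, y) for x, y in product(sqs(c[1]), sqs(c[2]))}
    if tag == "or":
        cs, ds = sqs(c[1]), sqs(c[2])
        return cs | ds | {("or", x, y) for x, y in product(cs, ds)}
    if tag in ("exists", "forall"):
        return {(tag, c[1], x) for x in sqs(c[2])}
    if tag == "not":
        return {("not", x) for x in gqs(c[1])}

A, B = ("atom", "A"), ("atom", "B")
print(gqs(("and", A, B)))  # contains A, B, and A-and-B
```

For instance, gqs(A ⊓ B) contains exactly A, B and A ⊓ B, matching the clause for ⊓, and every element of gqs(C) indeed subsumes C.</p>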
      <p>We slightly extend the base case of Definition 9 as follows:</p>
      <sec id="sec-5-1">
        <title>Definition 10 (Extended GQS and SQS)</title>
        <p>Given a TBox T, we define sqs(C) and gqs(C) by adding the following clauses to those in Definition 9:
gqs(A) = {A} ∪ {gqs(D) | A ⊑ D ∈ T} if A is atomic
sqs(A) = {A} ∪ {sqs(C) | C ⊑ A ∈ T} if A is atomic
gqs(¬A) = {¬A} ∪ {gqs(D) | ¬A ⊑ D ∈ T} if A is atomic
sqs(¬A) = {¬A} ∪ {sqs(C) | C ⊑ ¬A ∈ T} if A is atomic
Thus, we also take into account the axioms (i.e., concept inclusions) of a given
TBox. This generalization allows us to move upward (by means of gqs) and downward
(by means of sqs) in the hierarchy of concepts defined by a given TBox T. Relying on
the notions of (extended) sqs and gqs, we can define a partial ordering relation between
concepts as follows:
Definition 11. Let C and D be two concepts (C ≠ D) in a given TBox T. We say that
C is more specific than D, denoted C ≺ D, iff at least one of the following relations
holds: (i) C ∈ sqs(D), or (ii) D ∈ gqs(C).</p>
        <p>It is easy to see that ≺ is irreflexive, antisymmetric, and transitive; however, it is only a
partial order, because ≺ is not defined for every pair of concepts; i.e., there may exist two
concepts C and D such that neither C ≺ D nor D ≺ C holds. As we will discuss later,
the methodology we propose for determining which concepts represent properties to be
made “typical” relies on the fact that concepts are considered in order from the most
specific to the most general. In those situations where two concepts are not directly
comparable by means of ≺, either ordering is possible.</p>
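        <p>Restricting attention to atomic concepts and TBoxes of atomic inclusions, the extended gqs/sqs of Definition 10 and the ordering of Definition 11 can be sketched as follows; this is our simplification (the full definitions also cover negated atoms and complex concepts), and all names are illustrative.

```python
def ext_gqs(a, tbox):
    """All concepts reachable upward from atom `a` via TBox inclusions."""
    out, frontier = {a}, [a]
    while frontier:
        c = frontier.pop()
        for lhs, rhs in tbox:
            if lhs == c and rhs not in out:
                out.add(rhs)
                frontier.append(rhs)
    return out

def ext_sqs(a, tbox):
    """All concepts reachable downward from atom `a` via TBox inclusions."""
    return ext_gqs(a, [(r, l) for l, r in tbox])

def more_specific(c, d, tbox):
    """C is more specific than D iff C is in sqs(D) or D is in gqs(C)."""
    return c != d and (c in ext_sqs(d, tbox) or d in ext_gqs(c, tbox))

tbox = [("CheesyToppings", "FatToppings"),
        ("FatToppings", "PizzaToppings")]
print(more_specific("CheesyToppings", "PizzaToppings", tbox))  # True
print(more_specific("PizzaToppings", "CheesyToppings", tbox))  # False
```

On this fragment the ordering is simply reachability in the concept hierarchy, which makes its irreflexivity, antisymmetry (for acyclic TBoxes) and partiality easy to see.</p>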
        <p>
          By extension, we can order two concept inclusions C ⊑ D′ ≺ C ⊑ D″, meaning
that C ⊑ D′ is more specific than C ⊑ D″, whenever D′ ≺ D″.
5.2 Algorithm
We are now in the position to present an algorithmic solution to the problem of
accommodating an exceptional individual x within an existing knowledge base K =
(T, A). A central role is played by the notion of mups, since it allows us to isolate a
portion of T which is minimal and relevant. In the original formulation by Schlobach
et al., however, a mups is computed given a concept which is unsatisfiable in a given
terminology. Since in our problem T is consistent, we need to add to T a new concept
inclusion which, on the one hand, represents the new observation about x and, on the
other hand, provides an unsatisfiable concept from which the computation of a mups can
start. We therefore create a temporary terminology T′ = T ∪ {X ⊑ D1 ⊓ … ⊓ Dn},
where the fake concept X simply reflects the properties of the exceptional individual x.
The set mups(T′, X) can now be computed following the methodology suggested in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
Note that in principle mups(T′, X) is a set of alternative sub-TBoxes; in this paper,
however, we assume that mups(T′, X) contains a single sub-TBox, and leave as
future work the study of how to handle mups containing alternative sub-TBoxes that
possibly lead to different revised knowledge bases.
        </p>
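        <p>The construction of T′ and of a mups can be sketched as follows, against a toy satisfiability test for TBoxes of atomic (possibly negated) inclusions; the deletion-based shrinking stands in for the tableau-based procedure of [3], and all names are our illustrative assumptions.

```python
# A naive MUPS computation by axiom deletion. Axioms are (lhs, rhs) pairs
# over atoms; the negation of an atom A is spelled "~A".

def closure(atoms, tbox):
    """All atoms entailed for an individual satisfying the given conjuncts."""
    out = set(atoms)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in tbox:
            if lhs in out and rhs not in out:
                out.add(rhs)
                changed = True
    return out

def unsatisfiable(conjuncts, tbox):
    """X with the given conjuncts is unsatisfiable iff the closure clashes."""
    up = closure(conjuncts, tbox)
    return any(("~" + a) in up for a in up)

def mups(conjuncts, tbox):
    """Shrink tbox to a minimal sub-TBox keeping X unsatisfiable."""
    kept = list(tbox)
    for ax in list(tbox):
        trial = [a for a in kept if a != ax]
        if unsatisfiable(conjuncts, trial):
            kept = trial
    return kept

# T' = T ∪ {X ⊑ CheesyToppings ⊓ VegetarianToppings}, with the pizza TBox:
tbox = [("FatToppings", "PizzaToppings"),          # ax1
        ("CheesyToppings", "FatToppings"),         # ax2
        ("VegetarianToppings", "~FatToppings")]    # ax3
print(mups(["CheesyToppings", "VegetarianToppings"], tbox))
# ax1 is dropped; ax2 and ax3 form the conflict set
```

Here the extra inclusion for X is represented directly by the conjunct list, and the single surviving sub-TBox plays the role of mups(T′, X).</p>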
        <p>As noted above, we are interested in revising K so that it satisfies a minimality
criterion about the exceptionality of x. Algorithmically, this requires considering all the
possible generalizations of X, computed according to the extended notions of gqs and
sqs given in Definition 10. This structure is called GQS(X) and is defined as follows:
Definition 12 (GQS(X)). Let T′ be the terminology T modified with the addition of
the concept inclusion describing the exceptional, observed individual x, i.e., T′ = T ∪
{X ⊑ D1 ⊓ … ⊓ Dn}, and let mups(T′, X) be the minimal subset of T′ preserving the
inconsistency of X; GQS(X) is a list of concept inclusions, ordered according to
the specificity ordering of Definition 11, such that each element X ⊑ E1 ⊓ … ⊓ Em
in this list satisfies the following conditions:
Condition 1 puts a limit within which the generalizations of X have to be looked for.
The mups, in fact, restricts the concepts of interest to only those that are strictly
involved in the inconsistency of X. Condition 2 discards any inclusion that is not
informative at all because it is inconsistent; observe that inconsistent inclusions can be
generated because the starting terminology T′ is itself inconsistent. Condition 3 discards
any inclusion that contradicts the observations about x, which must always be
considered correct. Note that the computation of GQS(X), obtained as a simple rewriting
of the concepts D1, …, Dn in terms of their respective gqs, always terminates since, on
the one hand, we assume that T is acyclic and unfoldable and, on the other hand, the
set of inclusions and concepts to be considered is limited to those in mups(T′, X).</p>
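        <p>A sketch (ours) of building GQS(X) under the toy atomic representation, anticipating the toppings TBox used in Example 2 below: every subset of the observed conjuncts is generalized through the TBox hierarchy, and candidates violating Condition 2 (trivially inconsistent) or Condition 3 (contradicting the observation) are dropped. All names are illustrative.

```python
from itertools import combinations, product

# Negation of atom A is spelled "~A"; axioms are (lhs, rhs) pairs.

def up(atom, tbox):
    """Atoms reachable upward from `atom` via the TBox inclusions."""
    out, todo = {atom}, [atom]
    while todo:
        c = todo.pop()
        for lhs, rhs in tbox:
            if lhs == c and rhs not in out:
                out.add(rhs)
                todo.append(rhs)
    return out

def build_gqs_x(observed, tbox):
    """Candidate inclusions X subsumed by E1, ..., Em, larger (more specific)
    conjunctions first, filtered as in Conditions 2 and 3 of Definition 12."""
    seen, result = set(), []
    for k in range(len(observed), 0, -1):
        for sub in combinations(observed, k):
            for combo in product(*[sorted(up(d, tbox)) for d in sub]):
                e = frozenset(combo)
                if e in seen:
                    continue
                seen.add(e)
                if any(("~" + a) in e for a in e):          # Condition 2
                    continue
                if any(("~" + d) in e for d in observed):   # Condition 3
                    continue
                result.append(sorted(e))
    return result

tbox = [("CheesyToppings", "FatToppings"),
        ("VegetarianToppings", "~FatToppings")]
gqs_x = build_gqs_x(["CheesyToppings", "VegetarianToppings"], tbox)
print(len(gqs_x))  # 7 candidates, matching g1, ..., g7 of Example 2
```

The inclusion X ⊑ FatToppings ⊓ ¬FatToppings is the one candidate filtered by Condition 2; ordering by decreasing conjunction size roughly approximates the specificity ordering of Definition 11.</p>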
        <p>Since we adopt a skeptical approach, the intuition is that the revised knowledge
base K^new = (T^new, A ∪ A') must be pairwise consistent with each of the inclusions
in GQS(X); that is, K^new must be ready to accommodate any further observation
about x, and hence it must not make assertions about x which have not been directly
observed. For this reason, the revised T^new must differ from the original T only in a
finite number of inclusions weakened by the operator T.</p>
        <p>The algorithm for revising K = (T, A) into K^new is given in Algorithm 1. The
algorithm takes as input the original terminology T and the fake inclusion X ⊑
D1 ⊓ … ⊓ Dn. The first step consists in computing mups(T′, X); the resulting
sub-terminology, denoted T^mups, contains, besides X ⊑ D1 ⊓ … ⊓ Dn, a minimal set of
inclusions of T which are involved in the inconsistency caused by the exceptional
individual x. Since we are interested in revising the inclusions of the original terminology,
we isolate these inclusions in the sub-terminology T^rev. As noted above, we assume that
mups(T′, X) results in a single sub-TBox; if alternative sub-TBoxes existed, we would
select one of them, e.g., heuristically, the one with the least number of inclusions.</p>
        <p>The next step is the computation of GQS(X). After these two preliminary steps,
the algorithm looks for the inclusions in T^rev to be weakened. The pinpointing of these
inclusions proceeds as a pairwise consistency check between T^rev and the concept
inclusions in GQS(X), considered from the most general to the most specific. In this way
the algorithm meets two preference requirements on the revised K^new: (1) the
inclusions to be weakened must be as general as possible with respect to X, and (2)
the number of weakened inclusions must be minimal, matching the desiderata for the
typicality-based revised TBox described in the previous section. In fact, by considering
the inclusions in GQS(X) from the most general, the algorithm finds first those
inclusions in T^rev mentioning concepts that are the greatest generalizations of X, and hence
represent the most general properties of the individual x that make x exceptional. On
the other hand, the weakening of an inclusion C ⊑ C1 ⊓ … ⊓ Ck ∈ T^rev makes
consistent any other concept that belongs to sqs(C), and this guarantees a minimal number
of weakenings. The algorithm terminates with the generation of T^new, which is
simply the original T updated according to the weakened inclusions found in the working
terminology T^rev in line 9.</p>
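        <p>The revision loop just described can be sketched as follows (our simplification of Algorithm 1, under the same toy atomic representation as above: axioms are (lhs, rhs) pairs, "~A" spells ¬A, and weakening wraps an axiom's left-hand side in the typicality operator T; all names are illustrative).

```python
def closure(atoms, tbox):
    """All atoms entailed for an individual satisfying the given conjuncts."""
    out = set(atoms)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in tbox:
            if lhs in out and rhs not in out:
                out.add(rhs)
                changed = True
    return out

def consistent_with(conjuncts, axiom):
    """Pairwise check: is X with these conjuncts satisfiable with one axiom?"""
    up = closure(conjuncts, [axiom])
    return not any(("~" + a) in up for a in up)

def revise(t_rev, ordered_gqs):
    """Weaken every axiom of T^rev clashing with some inclusion of GQS(X),
    scanning GQS(X) from the most general inclusion (tail of the list)."""
    weakened = {}
    for g in reversed(ordered_gqs):
        for ax in t_rev:
            if ax not in weakened and not consistent_with(g, ax):
                lhs, rhs = ax
                weakened[ax] = ("T(" + lhs + ")", rhs)
    return [weakened.get(ax, ax) for ax in t_rev]

t_rev = [("CheesyToppings", "FatToppings"),          # ax2
         ("VegetarianToppings", "~FatToppings")]     # ax3
gqs_x = [["CheesyToppings", "VegetarianToppings"],   # g1 (most specific)
         ["FatToppings", "VegetarianToppings"],      # g4
         ["CheesyToppings", "~FatToppings"],         # g5
         ["CheesyToppings"],                         # g2
         ["VegetarianToppings"],                     # g3
         ["FatToppings"],                            # g6
         ["~FatToppings"]]                           # g7 (most general)
print(revise(t_rev, gqs_x))
# [('T(CheesyToppings)', 'FatToppings'), ('T(VegetarianToppings)', '~FatToppings')]
```

Run on the toppings data of Example 2 below, g5 clashes with ax2 and g4 with ax3, so exactly those two axioms are weakened, reproducing the revised terminology of the example.</p>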
        <p>It can be shown that the knowledge base K^new = (T^new, A^new), where A^new =
A ∪ A' and T^new results from the algorithm Revise in Algorithm 1, is consistent.</p>
        <p>Let us conclude this section with an example: a simplified variant of the Pizza
ontology distributed together with Protégé.</p>
        <p>Example 2. Let us consider K = (T_tops, ∅), where T_tops contains the following
subsumption relations:
ax1: FatToppings ⊑ PizzaToppings,
ax2: CheesyToppings ⊑ FatToppings,
ax3: VegetarianToppings ⊑ ¬FatToppings.</p>
        <p>Now, let us assume that we discover that there also exists tofu cheese, which is made
of curdled soybean milk. Formally, we have a newly received ABox</p>
        <p>A' = {VegetarianToppings(tofu), CheesyToppings(tofu)}.
Obviously, the knowledge base K = (T_tops, A') is inconsistent. We apply the
methodology introduced in this work in order to find the typicality-based revision of the TBox
T_tops: in this case, among the typicality-based weakenings of T_tops, we aim at obtaining
the following T^new_tops:</p>
        <sec id="sec-5-1-1">
          <title>FatToppings ⊑ PizzaToppings,</title>
        </sec>
        <sec id="sec-5-1-2">
          <title>T(CheesyToppings) ⊑ FatToppings,</title>
        </sec>
        <sec id="sec-5-1-3">
          <title>T(VegetarianToppings) ⊑ ¬FatToppings</title>
          <p>Let us see how the algorithm finds such a revised terminology. The first step consists
in constructing T′_tops by adding the following inclusion to T_tops:</p>
          <p>ax4: X ⊑ CheesyToppings ⊓ VegetarianToppings.</p>
          <p>Then we can compute T^mups = {ax2, ax3, ax4}; it follows that T^rev_tops = {ax2, ax3}.
The algorithm proceeds by computing GQS(X) = { g1: X ⊑ CheesyToppings ⊓
VegetarianToppings, g2: X ⊑ CheesyToppings, g3: X ⊑ VegetarianToppings,
g4: X ⊑ FatToppings ⊓ VegetarianToppings, g5: X ⊑ CheesyToppings ⊓
¬FatToppings, g6: X ⊑ FatToppings, g7: X ⊑ ¬FatToppings }. Note that the
inclusion X ⊑ FatToppings ⊓ ¬FatToppings is discarded as it is trivially
contradictory. GQS(X) thus keeps a spectrum of possible generalizations of X that are
non-trivial and consistent with the observation we have about tofu (i.e., being both a
CheesyToppings and a VegetarianToppings). As such, T^rev_tops must be consistent with all of them.
The algorithm thus considers all the inclusions in GQS(X) from the tail of the list
(i.e., from the most general inclusions), and verifies, in a pairwise way, the consistency
of each of them with ax2 and ax3. It is easy to see that, while g7 and g6 are pairwise
consistent with ax2 and ax3, g5 is inconsistent with ax2, and g4 is inconsistent with
ax3. The two inclusions ax2 and ax3 are therefore weakened, obtaining the expected
revised terminology T^new_tops.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6 Conclusions and Future Issues</title>
      <p>
        We have presented a typicality-based revision of a DL TBox in the presence of exceptions.
We exploit techniques and algorithms proposed in [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], which have been extended
to more expressive DLs such as SHOIN in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], corresponding to the ontology language
OWL-DL. We aim at extending our typicality-based revision to such expressive DLs
in future research, by exploiting the results of [24, 25], where the typicality
operator and the rational closure construction have been applied to the logics SHIQ and
SROIQ.
      </p>
      </p>
      <p>
        As mentioned in Section 5, a drawback of the debugging approach in [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] is that
it is restricted to unfoldable TBoxes, containing only unique, acyclic definitions. This
restriction may seem too strong for our objective of representing and reasoning about
defeasible inheritance in a natural way. As an example, a TBox expressing that students
are not tax payers, whereas working students do pay taxes, could be naturally expressed by
the following, non-unfoldable, TBox = {Student ⊑ ¬TaxPayer, Student ⊓ Worker ⊑
TaxPayer}. In order to fill this gap, in [
        <xref ref-type="bibr" rid="ref5">7, 5</xref>
        ] axiom pinpointing is extended to general
TBoxes. A set of algorithms for axiom pinpointing, in particular for computing
the set of MUPS for a given terminology T and a concept A, is also provided. Another
aspect that deserves further investigation is therefore the extension of our approach to revising
non-unfoldable TBoxes. Furthermore, we intend to develop an implementation of the
proposed algorithms, considering the integration with existing tools for manually
modifying ontologies when inconsistencies are detected.
6. Kalyanpur, A., Parsia, B., Sirin, E., Grau, B.C.: Repairing unsatisfiable concepts in owl
ontologies. In: ESWC. (2006) 170–184
7. Parsia, B., Sirin, E., Kalyanpur, A.: Debugging owl ontologies. In: WWW. (2005) 633–640
8. Micalizio, R., Pozzato, G.L.: Revising description logic terminologies to handle exceptions:
a first step. In Giordano, L., Gliozzi, V., Pozzato, G.L., eds.: Proceedings of the 29th Italian
Conference on Computational Logic, Torino, Italy, June 16-18, 2014. Volume 1195 of CEUR
Workshop Proceedings., CEUR-WS.org (2014) 225–240
9. Qi, G., Liu, W., Bell, D.A.: A revision-based approach to handling inconsistency in
description logics. Artificial Intelligence Review 26(1-2) (2006) 115–128
10. Giordano, L., Gliozzi, V., Olivetti, N., Pozzato, G.L.: Semantic characterization of rational
closure: From propositional logic to description logics. Artificial Intelligence 226 (2015) 1–33
11. Lehmann, D., Magidor, M.: What does a conditional knowledge base entail? Artificial Intelligence 55(1) (1992) 1–60
      </p>
      <p>
12. Benferhat, S., Kaci, S., Berre, D.L., Williams, M.: Weakening conflicting information for
iterated revision and knowledge integration. Artificial Intelligence 153(1-2) (2004) 339–371
13. Benferhat, S., Baida, R.E.: A stratified first order logic approach for access control.
International Journal of Intelligent Systems 19(9) (2004) 817–836
14. Meyer, T.A., Lee, K., Booth, R.: Knowledge integration for description logics. In Veloso,
M.M., Kambhampati, S., eds.: Proceedings, The 20th National Conference on Artificial
Intelligence and the 17th Innovative Applications of Artificial Intelligence Conference, July
9-13, 2005, Pittsburgh, Pennsylvania, USA, AAAI Press / The MIT Press (2005) 645–650
15. Huang, Z., van Harmelen, F., ten Teije, A.: Reasoning with inconsistent ontologies. In
Kaelbling, L.P., Saffiotti, A., eds.: IJCAI-05, Proc. of the 19th International Joint Conference
on Artificial Intelligence, Edinburgh, Scotland, UK, July 30-August 5, 2005. (2005) 454–459
16. Giordano, L., Gliozzi, V., Olivetti, N., Pozzato, G.L.: ALC+T: a preferential extension of Description Logics. Fundamenta Informaticae 96 (2009) 1–32</p>
      <p>
17. Baader, F., Hollunder, B.: Embedding defaults into terminological knowledge representation
formalisms. J. Autom. Reasoning 14(1) (1995) 149–180
18. Ke, P., Sattler, U.: Next Steps for Description Logics of Minimal Knowledge and Negation as
Failure. In Baader, F., Lutz, C., Motik, B., eds.: Proceedings of Description Logics. Volume
353 of CEUR Workshop Proceedings., Dresden, Germany, CEUR-WS.org (May 2008)
19. Casini, G., Straccia, U.: Rational Closure for Defeasible Description Logics. In Janhunen,
T., Niemelä, I., eds.: Proceedings of the 12th European Conference on Logics in Artificial
Intelligence (JELIA 2010). Volume 6341 of Lecture Notes in Artificial Intelligence., Helsinki,
Finland, Springer (September 2010) 77–90
20. Casini, G., Straccia, U.: Defeasible Inheritance-Based Description Logics. In Walsh, T.,
ed.: Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI
2011), Barcelona, Spain, Morgan Kaufmann (July 2011) 813–818
21. Giordano, L., Gliozzi, V., Olivetti, N., Pozzato, G.L.: A NonMonotonic Description Logic
for Reasoning About Typicality. Artificial Intelligence 195 (2013) 165–202
22. Kraus, S., Lehmann, D., Magidor, M.: Nonmonotonic reasoning, preferential models and
cumulative logics. Artificial Intelligence 44(1-2) (1990) 167–207
23. Reiter, R.: A theory of diagnosis from first principles. Artificial Intelligence 32(1) (1987) 57–96
24. Giordano, L., Gliozzi, V., Olivetti, N., Pozzato, G.L.: Rational closure in SHIQ. In: DL
2014, 27th International Workshop on Description Logics. Volume 1193 of CEUR Workshop
Proceedings., CEUR-WS.org (2014) 543–555
25. Giordano, L., Gliozzi, V.: Encoding a preferential extension of the description logic SROIQ
into SROIQ. In Esposito, F., Pivert, O., Hacid, M., Ras, Z.W., Ferilli, S., eds.: Proc. of
Foundations of Intelligent Systems - 22nd Int. Symposium, ISMIS 2015, Lyon, France, October
21-23. Volume 9384 of Lecture Notes in Computer Science., Springer (2015) 248–258</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel-Schneider</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <source>The Description Logic Handbook - Theory, Implementation, and Applications</source>
          . Cambridge University Press (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cornet</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Explanation of terminological reasoning: A preliminary report</article-title>
          . In: Description Logics. (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cornet</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Non-standard reasoning services for the debugging of description logic terminologies</article-title>
          .
          <source>In: IJCAI</source>
          . (
          <year>2003</year>
          )
          <fpage>355</fpage>
          -
          <lpage>362</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Diagnosing terminologies</article-title>
          . In: AAAI. (
          <year>2005</year>
          )
          <fpage>670</fpage>
          -
          <lpage>675</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kalyanpur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cuenca-Grau</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sirin</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Axiom pinpointing: Finding (precise) justifications for arbitrary entailments in SHOIN (owl-dl)</article-title>
          .
          <source>Technical report, UMIACS</source>
          ,
          <year>2005</year>
          -
          <fpage>66</fpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>