<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Revising Description Logic Terminologies to Handle Exceptions: a First Step</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roberto Micalizio</string-name>
          <email>roberto.micalizio@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gian Luca Pozzato</string-name>
          <email>gianluca.pozzato@unito.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dipartimento di Informatica - Università degli Studi di Torino</institution>
        </aff>
      </contrib-group>
      <fpage>225</fpage>
      <lpage>240</lpage>
      <abstract>
        <p>We propose a methodology to revise a Description Logic knowledge base when detecting exceptions. Our approach relies on the methodology for debugging a Description Logic terminology, addressing the problem of diagnosing incoherent ontologies by identifying a minimal subset of axioms responsible for an inconsistency. In the approach we propose, once the source of the inconsistency has been localized, the identified axioms are revised in order to obtain a consistent knowledge base including the detected exception. To this aim, we make use of a nonmonotonic extension of the Description Logic ALC based on the combination of a typicality operator and the well-established nonmonotonic mechanism of rational closure, which allows us to deal with prototypical properties and defeasible inheritance.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Exceptions do exist. Intuitively, we can define an exception as an individual (or
a group of individuals) that has, or does not have, a property in contrast with other
individuals of the same class. For instance, a penguin can be considered an
exceptional bird since, although equipped with wings, it is unable to fly. We can
find numerous examples of exceptions in nature, but also in natural languages
(e.g., irregular verbs), medicine (e.g. situs inversus is an anomaly in which the
major visceral organs are reversed or mirrored from their normal positions, so
that the heart is positioned in the right-hand side of the chest), and many other
real-world scenarios.</p>
      <p>
        Dealing with exceptions can be tricky for an ontology engineer: ideally, all
exceptions should be known in advance, i.e., when the ontology is being
designed. Unfortunately, in many real-world scenarios, exceptions are
discovered only while the system is running and the ontology is actually in
use. Whenever exceptions are discovered at runtime, the resulting
ontology becomes incoherent (i.e., at least one concept is mapped to an empty set
of individuals). Some recent works [
        <xref ref-type="bibr" rid="ref13 ref15 ref16 ref17 ref8 ref9">16, 17, 15, 8, 9, 13</xref>
        ] have addressed the problem
of diagnosing incoherent ontologies by identifying a minimal subset of axioms
responsible for the inconsistency. The idea of these works is that once the source
of the inconsistency has been localized, the ontology engineer can intervene and
revise the identified axioms in order to restore the consistency. These approaches
presuppose that the ontology has become incoherent due to the introduction of
errors, as for instance when two ontologies are merged together. It is worth
noting that, albeit an exception has the same effect as an error (i.e., it causes an
ontology to become incoherent), an exception is not an error. An exception is
rather a piece of knowledge that partially contradicts what is so far known about
a portion of the ontology at hand. Thus, on the one hand, ignoring exceptions
would be deleterious, as the resulting ontology would not reflect the application
domain correctly. On the other hand, accommodating exceptions requires the
exploitation of some form of defeasible reasoning that allows us to revise some
of the concepts in the ontology.
      </p>
      <p>
        In this paper we propose a methodology to revise a Description Logic (for
short: DL) knowledge base when detecting exceptions. Our approach relies on the
above-mentioned methodology of [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">16, 17, 15</xref>
        ] for detecting exceptions by
identifying a minimal subset of axioms responsible for an inconsistency. Once the source
of the inconsistency has been localized, the identified axioms are revised in order
to obtain a consistent knowledge base including the detected exception. To this
aim, we use a nonmonotonic extension of the DL ALC recently presented by
Giordano and colleagues in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This extension is based on the introduction of a
typicality operator T in order to express typical inclusions. The idea is
to allow concepts of the form T(C), whose intuitive meaning is that T(C) selects
the typical instances of a concept C. It is therefore possible to distinguish between
properties holding for all instances of a concept C (C v D), and those only
holding for the typical instances of C (T(C) v D). For instance, a knowledge base
can consistently express that birds normally fly (T(Bird ) v Fly ), but penguins
are exceptional birds that do not fly (Penguin v Bird and Penguin v ¬Fly ).
      </p>
      <p>
        The T operator is intended to enjoy the well-established properties of rational
logic, introduced by Lehmann and Magidor in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for propositional logic. In order
to reason about prototypical properties and defeasible inheritance, the semantics
of this nonmonotonic DL, called ALC^R_min T, is based on rational models and
exploits a minimal models mechanism based on the minimization of the rank of
domain elements. This semantics corresponds to a natural extension to DLs of
Lehmann and Magidor’s notion of rational closure [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        The paper is organized as follows: in Section 2 we recall extensions of DLs for
prototypical reasoning, focusing on the approach based on a typicality operator
T and rational closure [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]; in Section 3 we recall and extend the notions
introduced in [
        <xref ref-type="bibr" rid="ref13 ref17">17, 13</xref>
        ] for diagnosing incoherent ontologies; in Section 4 we present
our methodology for revising a DL terminology to handle exceptions by means
of T; we conclude with some pointers to future issues in Section 5.
      </p>
      <p>
        2 Description Logics and Exceptions
The family of Description Logics [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is one of the most important formalisms of
knowledge representation. DLs are reminiscent of the early semantic networks
and of frame-based systems. They offer two key advantages: (i) a well-defined
semantics based on first-order logic and (ii) a good trade-off between
expressivity and computational complexity. DLs have been successfully implemented by a
range of systems, and are at the base of languages for the Semantic Web such as
OWL. In a DL framework, a knowledge base (KB) comprises two components:
(i) a TBox, containing the definition of concepts (and possibly roles) and a
specification of inclusion relations among them; (ii) an ABox, containing instances
of concepts and roles, in other words, properties and relations of individuals.
Since the primary objective of the TBox is to build a taxonomy of concepts, the
need of representing prototypical properties and of reasoning about defeasible
inheritance of such properties easily arises.
      </p>
      <p>
        In recent years, a large amount of work has been done in order to extend
the basic formalism of DLs with nonmonotonic reasoning features. The
traditional approach is to handle defeasible inheritance by integrating some kind of
nonmonotonic reasoning mechanisms [
        <xref ref-type="bibr" rid="ref10 ref2 ref3 ref4 ref5">5, 2, 10, 3, 4</xref>
        ]. A simple but powerful
nonmonotonic extension of DLs is proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In this approach, “typical” or
“normal” properties can be directly specified by means of a “typicality”
operator T enriching the underlying DL; the typicality operator T is essentially
characterized by the core properties of nonmonotonic reasoning, axiomatized by
preferential logic P in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        In this work we refer to the most recent approach proposed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], where
the authors extend ALC with T by considering rational closure as defined by
Lehmann and Magidor [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for propositional logic. Here the T operator is
intended to enjoy the well-established properties of rational logic R . Even if T
is a nonmonotonic operator (so that for instance T(Bird ) v Fly does not entail
that T(Bird u Penguin) v Fly ), the logic itself is monotonic. Indeed, in this logic
it is not possible to monotonically infer from T(Bird ) v Fly , in the absence of
information to the contrary, that also T(Bird u Black) v Fly. Nor can it be
inferred nonmonotonically from Bird(tweety), in the absence of information to
the contrary, that T(Bird)(tweety). Nonmonotonicity is achieved by adapting to
ALC with T the propositional construction of rational closure. This
nonmonotonic extension allows one to infer typical subsumptions from the TBox. Intuitively,
and similarly to the propositional case, the rational closure construction amounts
to assigning a rank (a level of exceptionality) to every concept; this rank is used
to evaluate typical inclusions of the form T(C) v D: the inclusion is supported
by the rational closure whenever the rank of C is strictly smaller than the rank of
C u¬D. From a semantic point of view, nonmonotonicity is achieved by defining,
on the top of ALC with typicality, a minimal model semantics where the notion
of minimality is based on the minimization of the ranks of the domain elements.
The problem of extending rational closure to ABox reasoning is also taken into
account: in order to ascribe typical properties to individuals, the typicality of an
individual is maximized. This is done by minimizing its rank (that is, its level
of exceptionality). Let us recall the resulting extension ALC^R_min T in detail.
Definition 1. We consider an alphabet of concept names C, of role names R,
and of individual constants O. Given A ∈ C and R ∈ R , we define:
CR := A | ⊤ | ⊥ | ¬CR | CR u CR | CR t CR | 8 R.CR | 9 R.CR
      </p>
      <p>CL := CR | T(CR)
A knowledge base is a pair (TBox, ABox). TBox contains a finite set of concept
inclusions CL v CR. ABox contains assertions of the form CL(a) and R(a, b),
where a, b ∈ O .</p>
      <p>We define the semantics of the monotonic ALC + TR, formulated in terms of
rational models: ordinary models of ALC are equipped with a preference relation
&lt; on the domain, whose intuitive meaning is to compare the “typicality” of
domain elements, that is to say x &lt; y means that x is more typical than y.
Typical members of a concept C, that is members of T(C), are the members x
of C that are minimal with respect to this preference relation (s.t. there is no
other member of C more typical than x).</p>
      <p>
        Definition 2 ([
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]). A model M of ALC + TR is any structure ⟨Δ, &lt;, I⟩ where:
Δ is the domain; &lt; is an irreflexive, transitive and modular (if x &lt; y then either
x &lt; z or z &lt; y) relation over Δ; I is the extension function that maps each
concept C to CI ⊆ Δ, and each role R to RI ⊆ Δ × Δ as follows: ⊤I = Δ, ⊥I =
∅, (¬C)I = Δ \ CI , (C u D)I = CI ∩ DI , (C t D)I = CI ∪ DI , (8 R.C)I = {x ∈ Δ
| 8 y.(x, y) ∈ RI → y ∈ CI }, (9 R.C)I = {x ∈ Δ | 9 y.(x, y) ∈ RI and y ∈ CI },
(T(C))I = Min&lt;(CI ), where Min&lt;(S) = {u : u ∈ S and ∄z ∈ S such that z &lt;
u}. Furthermore, &lt; satisfies the Well-Foundedness Condition, i.e., for all S ⊆
Δ, for all x ∈ S, either x ∈ Min&lt;(S) or 9 y ∈ Min&lt;(S) such that y &lt; x.
Given an ALC + TR model M = ⟨Δ, &lt;, I⟩, we say that (i) M satisfies an inclusion
C v D if CI ⊆ DI holds, (ii) M satisfies an assertion C(a) if aI ∈ CI and
(iii) M satisfies an assertion R(a, b) if (aI , bI ) ∈ RI . Given K = (TBox, ABox),
we say that M satisfies TBox if M satisfies all inclusions in TBox, M satisfies
ABox if M satisfies all assertions in ABox, and M satisfies K if it satisfies both
its TBox and its ABox.
      </p>
      <p>Given a knowledge base K, an inclusion CL v CR and an assertion CL(a),
with a ∈ O , we say that the inclusion CL v CR is derivable from K, written
K |=ALCRT CL v CR, if CLI ⊆ CRI holds in all models M = ⟨Δ, &lt;, I⟩ satisfying
K. Moreover, we say the assertion CL(a) is derivable from K, written K |=ALCRT
CL(a), if aI ∈ CLI holds in all models M = ⟨Δ, &lt;, I⟩ satisfying K.
As already mentioned, although the typicality operator T itself is nonmonotonic
(i.e. T(C) v D does not imply T(C u E) v D), the logic ALC + TR is monotonic:
what is inferred from K can still be inferred from any K′ with K ⊆ K′. This is a
clear limitation in DLs. As a consequence of the monotonicity of ALC + TR, one
cannot deal with irrelevance, for instance. So, from the knowledge base of birds
and penguins, one cannot derive that K |=ALCRT T(Penguin u Black) v ¬Fly,
even if the property of being black is irrelevant with respect to flying. In the same
way, if we added to K the information that Tweety is a bird (Bird(tweety)), in
ALC + TR one cannot tentatively derive, in the absence of information to the
contrary, that T(Bird)(tweety) and Fly(tweety).</p>
      <p>
        In order to tackle this problem, in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] the definition of rational closure
introduced by Lehmann and Magidor [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for the propositional case has been extended
to the DL ALC + TR. Due to space limitations, we omit the definitions of the
rational closure of a DL knowledge base, referring the reader to [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        From a semantic point of view, in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] it is shown that minimal rational models
that minimize the rank of domain elements can be used to give a semantical
reconstruction of this extension of rational closure. The rank kM(x) of a domain
element x is the length of the longest chain x0 &lt; · · · &lt; x from x to a minimal x0
(i.e., an x0 such that there is no x′ with x′ &lt; x0). The idea is as follows: given two
models of K, one in which a given domain element x has rank k1 and another in
which it has rank k2, with k1 &gt; k2, then the latter is preferred, as in this model
the element x is “more normal” than in the former.
      </p>
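      <p>The rank-based construction described above can be illustrated on the running bird/penguin example. The following is a minimal propositional sketch of the Lehmann–Magidor ranking (our own toy encoding over four atoms, not the paper's construction for ALC): a concept is exceptional with respect to a set of defeasible inclusions when it is unsatisfiable together with the strict axioms and the material versions of those inclusions, and a typical inclusion T(C) v D is supported whenever the rank of C is strictly smaller than the rank of C u ¬D.</p>

```python
from itertools import product

ATOMS = ["bird", "penguin", "fly", "black"]

def satisfiable(*preds):
    """True iff some truth assignment over ATOMS satisfies every predicate."""
    for vals in product([False, True], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if all(p(m) for p in preds):
            return True
    return False

# Strict axioms: Penguin v Bird and Penguin v ¬Fly, materialized as predicates.
STRICT = [
    lambda m: (not m["penguin"]) or m["bird"],
    lambda m: (not m["penguin"]) or (not m["fly"]),
]

# Defeasible inclusions T(C) v D, as (C, D) pairs of predicates.
DEFEASIBLE = [(lambda m: m["bird"], lambda m: m["fly"])]  # T(Bird) v Fly

def exceptional(concept, defeasible):
    """C is exceptional w.r.t. E iff C is unsatisfiable together with the
    strict axioms and the material counterparts of the inclusions in E."""
    material = [lambda m, c=c, d=d: (not c(m)) or d(m) for c, d in defeasible]
    return not satisfiable(concept, *STRICT, *material)

def rank(concept):
    """The first level E_i of the construction at which the concept stops
    being exceptional (infinity if it never does)."""
    e, i = list(DEFEASIBLE), 0
    while True:
        if not exceptional(concept, e):
            return i
        nxt = [(c, d) for c, d in e if exceptional(c, e)]
        if len(nxt) == len(e):  # fixpoint: the concept has no finite rank
            return float("inf")
        e, i = nxt, i + 1

def rc_supports(c, d):
    """T(C) v D is in the rational closure iff rank(C) < rank(C u ¬D)."""
    return rank(c) < rank(lambda m: c(m) and not d(m))

bird, penguin, fly = (lambda m: m["bird"]), (lambda m: m["penguin"]), (lambda m: m["fly"])
print(rc_supports(bird, fly))                                # typical birds fly
print(rc_supports(penguin, lambda m: not m["fly"]))          # typical penguins do not
print(rc_supports(lambda m: m["bird"] and m["black"], fly))  # blackness is irrelevant
```

Here rank(Bird) = 0 while rank(Penguin) = rank(Bird u ¬Fly) = 1, matching the intuition that penguins are exceptional birds, and the third query shows how the construction handles irrelevance.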
      <p>
        Given a knowledge base K=(TBox,ABox), in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] it is shown that an
inclusion C v D (respectively, an assertion C(a)) belongs to the rational closure of
K if and only if C v D (resp., C(a)) holds in all minimal models of K of a
“special” kind, named canonical models. The rational closure construction for ALC
is inexpensive, since it retains the same complexity as the underlying logic, and is
thus a good candidate to define effective nonmonotonic extensions of DLs. More
precisely, the problem of deciding whether a typical inclusion belongs to the
rational closure of the TBox is in ExpTime as well as the problem of deciding
whether an assertion C(a) belongs to the rational closure over the ABox.
      </p>
      <p>
        3 Explaining Incoherent Terminologies
In this paper, we propose a methodology to deal with exceptions that builds
on the methodology for debugging a DL terminology proposed by Schlobach et
al. since their seminal work [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>Let us first introduce the notion of an incoherent terminology. Given a TBox
T , we say that T is coherent if there is no unsatisfiable concept, in other words,
if there is at least one model of T in which the extensions of all concepts are
non-empty.</p>
      <p>To explain incoherences in terminologies, Schlobach et al. propose a
methodology based on two steps: first, axiom pinpointing excludes axioms which are
irrelevant to the incoherence; second, concept pinpointing provides a simplified
definition highlighting the exact position of a contradiction within the axioms
previously selected. In this paper we are interested in the axiom pinpointing step,
which identifies debugging-relevant axioms. Intuitively, an axiom is relevant for
debugging if, when removed, a TBox becomes coherent, or at least one
previously unsatisfiable concept becomes satisfiable. The notion of subset of relevant
axioms is captured by the following definition.</p>
      <p>
        Definition 3 (MUPS, Definition 3.1 [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]). Let C be a concept which is
unsatisfiable in a TBox T . A set T ′ ⊆ T is a minimal unsatisfiability-preserving
sub-TBox (MUPS) of T if C is unsatisfiable in T ′, and C is satisfiable in every
sub-TBox T ′′ ⊂ T ′.
      </p>
      <p>
        In the following, mups(T , C) is used to denote the set of MUPS for a given
terminology T and a concept C. Intuitively, each set of axioms in mups(T , C)
represents a conflict set; i.e., a set of axioms that cannot all be satisfied. From
this point of view, it is therefore possible to infer a diagnosis for the concept
C by applying the Hitting-Set tree algorithm proposed by Reiter [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. However,
the set mups(T , C) is sufficient for our purpose of dealing with exceptions.
      </p>
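      <p>Definition 3 can be turned into a naive procedure directly: enumerate subsets of the TBox by increasing size, keep those in which the concept is still unsatisfiable, and use the minimality condition to discard supersets of conflict sets already found. The sketch below is our own illustration, not one of the cited algorithms; the unsatisfiability oracle stands in for a real DL reasoner, and the toy oracle is a propositional rendering of the student/worker TBox used later in the paper.</p>

```python
from itertools import combinations, product

def mups(tbox, concept, unsatisfiable):
    """All minimal sub-TBoxes of `tbox` (a list of axioms) in which
    `concept` is still unsatisfiable (Definition 3), by brute force."""
    found = []
    for size in range(1, len(tbox) + 1):
        for subset in combinations(tbox, size):
            # minimality: skip supersets of an already-found MUPS
            if any(set(m).issubset(subset) for m in found):
                continue
            if unsatisfiable(concept, list(subset)):
                found.append(subset)
    return found

def toy_unsat(concept, axioms):
    """Propositional stand-in for a DL reasoner: `concept` is unsatisfiable
    under `axioms` iff no assignment over s(tudent), w(orker), t(axpayer)
    makes the concept and every axiom true."""
    for vals in product([False, True], repeat=3):
        m = dict(zip(["s", "w", "t"], vals))
        if concept(m) and all(a(m) for a in axioms):
            return False
    return True

ax1 = lambda m: (not m["s"]) or (not m["t"])          # Student v ¬TaxPayer
ax2 = lambda m: (not (m["s"] and m["w"])) or m["t"]   # Student u Worker v TaxPayer
sw = lambda m: m["s"] and m["w"]                      # Student u Worker
print(len(mups([ax1, ax2], sw, toy_unsat)))  # one conflict set: {ax1, ax2}
```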
      <p>
        A drawback of the debugging approach in [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ] is that it is restricted to
unfoldable TBoxes, containing only unique, acyclic definitions. An axiom is called
a definition of A if it is of the form A v C, where A ∈ C is an atomic concept. An
axiom A v C is unique if the KB contains no other definition of A. An axiom
is acyclic if C does not refer, either directly or indirectly (via other axioms),
to A [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This restriction is too strong for our objective of representing and
reasoning about defeasible inheritance in a natural way. As an example, the TBox
expressing that students are not tax payers, but working students do pay taxes,
can be naturally expressed by the following, not unfoldable, TBox = {Student v
¬TaxPayer , Student u Worker v TaxPayer }. In order to overcome this limitation, in
[
        <xref ref-type="bibr" rid="ref13 ref8">13, 8</xref>
        ] axiom pinpointing is extended to general TBoxes, more suitable to our
purposes. As a further difference, Schlobach and colleagues limit their attention
to the basic ALC, whereas Parsia and colleagues in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] are able to deal also with
the more expressive SHOIN , corresponding to OWL-DL. A set of algorithms
for computing axiom pinpointing, in particular to compute the set of MUPS for
a given terminology T and a concept A, is also provided.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], Schlobach also proposes a methodology for explaining concept
subsumptions. The idea is to reduce the structural complexity of the original
concepts in order to highlight the logical interplay between them. To this aim,
Schlobach proposes to exploit the structural similarity of concepts, that can be
used to simplify terminological concepts, and hence the subsumption relations.
The structural similarity is based on the notion of qualified subconcepts; namely,
variants of those concepts a knowledge engineer explicitly uses in the
modeling process, and where the context (i.e., sequence of quantifiers and number of
negations) of this use is kept intact. Schlobach specifies the notion of qualified
subconcepts in two ways: Generalized Qualified Subconcepts (gqs) and
Specialized Qualified Subconcepts (sqs), which are defined by induction as follows:
Definition 4 (Generalized and Specialized Qualified Subconcepts, [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]).
Given concepts A, C and D, we define:
gqs(A) = sqs(A) = {A} if A is atomic
gqs(C u D) = {C′, D′, C′ u D′ | C′ ∈ gqs(C), D′ ∈ gqs(D)}
gqs(C t D) = {C′ t D′ | C′ ∈ gqs(C), D′ ∈ gqs(D)}
gqs(9 r.C) = {9 r.C′ | C′ ∈ gqs(C)}
gqs(8 r.C) = {8 r.C′ | C′ ∈ gqs(C)}
gqs(¬C) = {¬C′ | C′ ∈ sqs(C)}
sqs(C u D) = {C′ u D′ | C′ ∈ sqs(C), D′ ∈ sqs(D)}
sqs(C t D) = {C′, D′, C′ t D′ | C′ ∈ sqs(C), D′ ∈ sqs(D)}
sqs(9 r.C) = {9 r.C′ | C′ ∈ sqs(C)}
sqs(8 r.C) = {8 r.C′ | C′ ∈ sqs(C)}
sqs(¬C) = {¬C′ | C′ ∈ gqs(C)}
As Schlobach himself notes, a simple consequence of this definition is that |=
C v C′ for every C′ ∈ gqs(C), and |= D′ v D for each D′ ∈ sqs(D).
      </p>
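      <p>The induction of Definition 4 can be transcribed almost literally. In the sketch below (our own tuple encoding of concepts, not the paper's), conjunction, disjunction, the quantifiers and negation each follow the corresponding clause, with gqs and sqs swapping roles under negation.</p>

```python
# Concept encoding (ours): ("atom", name), ("and", C, D), ("or", C, D),
# ("not", C), ("exists", role, C), ("all", role, C).

def gqs(c):
    tag = c[0]
    if tag == "atom":
        return {c}
    if tag == "and":  # {C', D', C' u D'}
        gc, gd = gqs(c[1]), gqs(c[2])
        return gc | gd | {("and", x, y) for x in gc for y in gd}
    if tag == "or":   # {C' t D'}
        return {("or", x, y) for x in gqs(c[1]) for y in gqs(c[2])}
    if tag in ("exists", "all"):
        return {(tag, c[1], x) for x in gqs(c[2])}
    if tag == "not":  # negation flips to sqs
        return {("not", x) for x in sqs(c[1])}

def sqs(c):
    tag = c[0]
    if tag == "atom":
        return {c}
    if tag == "and":  # {C' u D'}
        return {("and", x, y) for x in sqs(c[1]) for y in sqs(c[2])}
    if tag == "or":   # {C', D', C' t D'}
        sc, sd = sqs(c[1]), sqs(c[2])
        return sc | sd | {("or", x, y) for x in sc for y in sd}
    if tag in ("exists", "all"):
        return {(tag, c[1], x) for x in sqs(c[2])}
    if tag == "not":  # negation flips to gqs
        return {("not", x) for x in gqs(c[1])}

student, worker = ("atom", "Student"), ("atom", "Worker")
print(gqs(("and", student, worker)))  # the two conjuncts plus the conjunction
```

On Student u Worker this yields exactly the three generalizations {Student, Worker, Student u Worker}, in line with the consequence noted after Definition 4.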
      <p>We slightly extend the base case of Definition 4 as follows:
Definition 5 (Generalized and Specialized Qualified Subconcepts,
extended). We define sqs(C) and gqs(C) by adding to the clauses of Definition 4 the
following ones:
gqs(A) = {A} ∪ { gqs(D) | A v D ∈ TBox} if A is atomic
sqs(A) = {A} ∪ { sqs(C) | C v A ∈ TBox} if A is atomic
gqs(¬A) = {¬A} ∪ { gqs(D) | ¬A v D ∈ TBox} if A is atomic
sqs(¬A) = {¬A} ∪ { sqs(C) | C v ¬A ∈ TBox} if A is atomic
Thus, we also take into account the axioms (i.e., concept inclusions) defined in
a given TBox. This generalization allows us to move upward (by means of gqs),
and downward (by means of sqs) in the hierarchy of concepts defined by a given
TBox T . Relying on the notions of sqs and gqs, we can define a partial ordering
relation between concepts as follows:
Definition 6. Let C and D be two concepts in a given TBox T ; we say that
C is more specific than D, denoted as C ≺ D, iff at least one of the following
relations holds: (i) C ∈ sqs(D), or (ii) D ∈ gqs(C).</p>
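      <p>For atomic concepts, the TBox-extended base case of Definition 5 combined with Definition 6 gives a simple membership test: C is more specific than D when D shows up among the generalizations of C. The fragment below is our own sketch, restricted to that atomic base case, with "~" standing for negation and TBox axioms given as pairs of concept names.</p>

```python
# TBox axioms as (lhs, rhs) pairs of concept names; "~A" stands for ¬A.
# Hypothetical names, matching the student example of the next sections.
TBOX = [("Student", "~TaxPayer"), ("Student&Worker", "TaxPayer")]

def gqs_atomic(a, seen=frozenset()):
    """Definition 5, base case only: {A} plus gqs(D) for every axiom A v D.
    The `seen` set guards against cyclic TBoxes."""
    out = {a}
    for lhs, rhs in TBOX:
        if lhs == a and rhs not in seen:
            out |= gqs_atomic(rhs, seen | {a})
    return out

def more_specific(c, d):
    """Definition 6 for atoms: C is more specific than D iff D is in gqs(C)."""
    return c != d and d in gqs_atomic(c)

print(more_specific("Student", "~TaxPayer"))        # True
print(more_specific("Student&Worker", "TaxPayer"))  # True
print(more_specific("Worker", "TaxPayer"))          # False
```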
      <p>It is easy to see that ≺ is irreflexive, antisymmetric, and transitive; however, it
is only partial, because ≺ is not defined for every pair of concepts; i.e., there may
exist two concepts C and D such that neither C ≺ D nor D ≺ C holds. As we
will discuss later, the methodology we propose for determining which concepts
represent properties to be made “typical” relies on the fact that concepts are
considered in order from the most specific to the most general. In those situations
where two concepts are not directly comparable with one another by means of ≺,
we need some further tools. For this reason, we introduce the notions of Concept
Network and depth of concepts. First of all, let S be the set of concepts (and
subconcepts) occurring in a given TBox.</p>
      <p>Definition 7 (Concept Network). Given a TBox T , a concept network for T
is a graph CN (T ) = ⟨V, E⟩ where V is a set of vertices, each of which corresponds
to a concept C ∈ S , and E is a set of oriented edges of the form ⟨C, D⟩, where
C and D belong to V ; an edge ⟨C, D⟩ is in E iff C ≺ D.</p>
      <p>Definition 8 (Depth of a concept). Given a TBox T , its corresponding
concept network CN (T ), and a concept C ∈ S , the depth of a concept C,
denoted as depth(C), corresponds to the length of the longest path from ⊤ to C.
In principle, any two concepts C and D at the same depth can be considered by
our methodology in either ordering, C ≺ D or D ≺ C. However, when both C
and ¬C are at the same depth, we use a further criterion of preference based on
the distance between concepts.</p>
      <p>Definition 9 (Distance between concepts). Given a TBox T , its
corresponding concept network CN (T ), and two concepts C, D ∈ S , the distance
between C and D in T , denoted as dist(C, D), is given by the length of the shortest
path in CN (T ) from C to D, if C ≺ D; ∞, otherwise.</p>
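      <p>Definitions 7–9 can be sketched over an explicit graph. The toy encoding below is ours: edges are oriented from the more general to the more specific concept, with ⊤ as the single source, which is one possible reading of the definitions; depth then becomes the longest ⊤-to-C path and distance a shortest-path search. The graph anticipates the Tstudent example discussed next.</p>

```python
from collections import deque

# Concept network for the Tstudent example (our encoding, "~" for ¬):
# Student&Worker below Student below ~TaxPayer, Student&Worker below
# TaxPayer, with ⊤ ("TOP") above every maximal concept.
EDGES = {
    "TOP": ["TaxPayer", "~TaxPayer", "Worker"],
    "~TaxPayer": ["Student"],
    "TaxPayer": ["Student&Worker"],
    "Student": ["Student&Worker"],
    "Worker": [],
    "Student&Worker": [],
}

def depth(c):
    """Definition 8: length of the longest path from TOP to c (the graph
    is a DAG, so plain recursion over predecessors terminates)."""
    preds = {v: [u for u, ws in EDGES.items() if v in ws] for v in EDGES}
    def longest(v):
        if v == "TOP":
            return 0
        lens = [longest(u) for u in preds[v]]
        lens = [n for n in lens if n != float("-inf")]
        return 1 + max(lens) if lens else float("-inf")
    return longest(c)

def dist(c, d):
    """Definition 9: shortest path from d down to its more specific
    concept c, by BFS (infinity when c is not reachable from d)."""
    queue, seen = deque([(d, 0)]), {d}
    while queue:
        v, n = queue.popleft()
        if v == c:
            return n
        for w in EDGES[v]:
            if w not in seen:
                seen.add(w)
                queue.append((w, n + 1))
    return float("inf")

print(depth("Student&Worker"))              # 3
print(dist("Student&Worker", "TaxPayer"))   # 1
print(dist("Student&Worker", "~TaxPayer"))  # 2
```

The computed values reproduce the depths and distances claimed for Tstudent below: Student u Worker is the deepest concept, and it is closer to TaxPayer than to ¬TaxPayer.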
      <p>Example 1. Let us consider a simple example in which a TBox Tstudent contains
the following concept definitions:</p>
      <p>Student v ¬TaxPayer
Student u Worker v TaxPayer
Of course, Tstudent is consistent only if there are no working students, and in
the following we will show how it can be repaired by means of the typicality
operator. For the time being, we are just interested in determining the depth of
the concepts in Tstudent. By applying Definition 6 we obtain:
Student u Worker ≺ Student ≺ ¬TaxPayer; Student u Worker ≺ TaxPayer.</p>
    </sec>
    <sec id="sec-3">
      <p>It is easy to see that Student u Worker is the deepest concept in our terminology.
In fact, we have that both TaxPayer and ¬TaxPayer are at depth
1, as well as Worker; Student is at depth 2; whereas Student u Worker is at
depth 3. However, since dist(Student u Worker, TaxPayer) &lt; dist(Student u
Worker, ¬TaxPayer), we will prefer to consider TaxPayer before ¬TaxPayer.
Definition 5 above does not take into account that the terminology may be
inconsistent, and hence some of the concepts generated via the generalization
(specialization) rules might be trivially contradictory (e.g., having the form D u ¬D).
Moreover, the concepts within the gqs (sqs) of a given concept C are
not necessarily consistent with one another. For instance, let us consider again
Tstudent; it is easy to see that gqs(Student u Worker) is the set:
{Student u Worker, Student, TaxPayer, ¬TaxPayer, ¬(Student u Worker),
Worker, ¬TaxPayer u Worker, ¬(Student u Worker) u Worker, TaxPayer}.</p>
      <p>Both TaxPayer and ¬TaxPayer are members of gqs(Student u Worker), but
only one of them will be correct; the other will have to be pruned. As we have
already mentioned above, TaxPayer will be preferred as it is closer to Student u
Worker. In the next section we propose a pruning technique that discards those
concepts that, albeit they generalize a concept C (i.e., they belong to gqs(C)), contradict
the terminology T . Our pruning technique relies on a compact representation of
gqs. That is, rather than explicitly computing all concepts in gqs(C) by
recursively unfolding every gqs(C′) for each C′ ∈ gqs(C), we consider gqs(C) as a list
of concepts (not necessarily atomic), and gqss.</p>
      <p>For example, gqs(Student u Worker) can be compactly represented as the list
of elements {Student u Worker, gqs(Student) u gqs(Worker), gqs(Student),
gqs(Worker)}. Intuitively, gqs(Student u Worker) is given by the union of the
concepts that are directly mentioned in the list (e.g., Student u Worker), or that
belong to one of the gqss mentioned in the list itself (e.g., gqs(Student)).</p>
      <p>Note that we keep the list associated with gqs(C) ordered according to the
depth (and distance, when necessary) of the mentioned concepts. Also in this case,
the deepest concepts come first. However, we have to pay some attention in dealing
with gqss. In fact, if D ≺ E, then also gqs(D) ≺ E, and hence gqs(D) ≺ gqs(E).
However, if both D and gqs(D) are in gqs(C), then D ≺ gqs(D). On the other
hand, if D and F have the same depth, we put first the concept which is closer
to C. In case D and F are at the same distance from C, their relative
ordering is arbitrary. The idea is that gqs(D) abstracts a set of concepts which
are at least as specific as D, but possibly more general than D. Therefore, any
concept more specific than D is also more specific than gqs(D). Vice versa, if
D ≺ E, then also gqs(D) ≺ E, as gqs(D) contains at least D.
4 Revising Terminologies Including Exceptions
In this section we present the methodology for automatically modifying a TBox
T whenever a detected inconsistency can be treated as an exception. The basic
idea of our proposal is first to isolate a subset of T which is relevant for the
inconsistency, and for this purpose we will adopt the notion of MUPS introduced
in Definition 3. Then we reconsider all the concepts within a MUPS and try to
identify which concepts describe characterizing properties to be made “typical”.
The intuition is that all the properties labeled as typical hold for all “normal”
members of a given class C, but not necessarily for all members of C. That is,
we admit that a subset of individuals in C do not present the typical properties
of the class, and hence are considered exceptional.</p>
      <p>
        Let us start by considering a coherent terminology T , and assume we
discover a new subsumption relation newC v D that, once added to T , introduces
an incoherence. As mentioned above, the first step consists in restricting our
attention to only those concepts that are relevant for the incoherence. These
concepts are identified by mups(T , newC) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. For the sake of discussion, we
will assume that mups(T , newC) contains a single set of incoherent axioms;
the approach can be easily extended to consider the more general situation in
which mups(T , newC) is a set of sets.
      </p>
      <p>In principle, we have to identify which concept subsumptions in mups(T , newC)
are to be modified by introducing the T operator on the left-hand side of an
axiom. The main issue in this process is that the terminology T might hide implicit
knowledge (i.e., implicit subsumptions), and these implicit subsumptions are still
hidden in mups(T , newC). Inferring implicit subsumptions is therefore essential
in order to correctly identify the most specific properties to be made typical.</p>
      <p>To reach this result, we propose a search process that is based on the
following intuition: Given a concept C in mups(T , newC), all the implicit
subsumptions we are looking for are already contained in gqs(C). In other words,
for any concept D 2 gqs(C) (either atomic or not), the subsumption C v D
must hold by definition of gqs. However, we have to be cautious when dealing
with gqs(C) as it potentially contains subsumptions which are irrelevant, or even
incorrect, for the specific case under consideration. First of all, gqs(C)
considers all the possible concept definitions in T , but we are just interested in those
concepts mentioned within mups(T , newC). Thus, in considering the concepts
mentioned within gqs(C) we have to restrict ourselves only to those concepts
which are also mentioned in mups(T , newC); any other concept in gqs(C) will be
ignored as irrelevant. In addition, not all the generalizations suggested by gqs(C)
are consistent with the concepts in mups(T , newC). As we have already noted,
mups(T , newC) might contain the subsumption relation C v D; at the same
time, the concept ¬D may appear in gqs(C), implying that C v ¬D. Of course, this
second subsumption is in direct contrast with a subsumption in mups(T , newC);
we say that C v ¬D syntactically contradicts C v D. (Note that such a
contradiction is just syntactic, and hence it can be checked by inspecting concepts in
mups(T , newC).)</p>
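      <p>Because the contradiction is purely syntactic, checking it is cheap: it suffices to compare a candidate subsumption, with its right-hand side negated, against the axioms mentioned in the MUPS. A minimal sketch of this check (our own string encoding, with "~" for negation; not an algorithm from the paper):</p>

```python
def neg(c):
    """Syntactic negation: ~X for X, and X for ~X."""
    return c[1:] if c.startswith("~") else "~" + c

def contradicts(candidate, mups_axioms):
    """A candidate C v ¬D syntactically contradicts the MUPS when the
    axiom C v D (the candidate with its right-hand side negated) is
    already among the MUPS axioms."""
    lhs, rhs = candidate
    return (lhs, neg(rhs)) in mups_axioms

# Axioms as (lhs, rhs) name pairs, for the student example of Section 3.
MUPS = [("Student", "~TaxPayer"), ("Student&Worker", "TaxPayer")]
print(contradicts(("Student&Worker", "~TaxPayer"), MUPS))  # True: prune it
print(contradicts(("Student&Worker", "TaxPayer"), MUPS))   # False: keep it
```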
      <p>
        Another critical issue is that we are interested in finding the most specific
properties that have to be considered as typical. This is also the reason why we
look for implicit subsumptions: we want to discover what properties are inherited
by the concepts in mups(T , newC). Thus, when we look for inclusions, we have to
proceed from the most specific concepts in mups(T , newC) to the most general
ones. In this way, we can progressively build a new set of subsumption relations
S, in which the typicality operator is added to only those specific properties that
are relevant for dealing with the exception introduced by newC.
4.1 Algorithms
In this section we give a detailed description of the methodology we propose for
automatically managing, by means of the typicality operator, a concept newC
with exceptional properties D. We take for granted that mups(T , newC) has
already been computed by means of one of the calculi proposed in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Thus,
mups(T , newC) identifies a minimal subset of concept inclusions that are
relevant for the inconsistency arising when T is extended with newC ⊑ D.
Algorithm 1 FindSubsumptionsMain(mups(T , newC))
FindSubsumptionsMain To restore the consistency in T , we propose to
rewrite the inclusion relations in mups(T , newC) by characterizing some of them
with the typicality operator. As said above, we reach this goal by considering
all concepts mentioned in mups(T , newC) in depth ordering, starting from the
deepest ones up to the most generic concepts. Algorithm 1 simply shows this
loop over the concepts in mups(T , newC). At each iteration, a set S of
inclusions is incrementally extended with the new relations discovered by function
FindSubsumptions. In case several concepts Ci have the same depth, a further
consistency check among their corresponding sets Si is performed to deal with
inconsistent properties, coming from different sets Si, that are inherited by a given
concept F. For instance, let us assume that F ⊑ C ⊓ D ∈ S, and that C and D have the same
depth and distance from F; thus we cannot prefer one to the other. In addition,
by invoking FindSubsumptions over C and D we get, respectively, C ⊑ E ∈ SC
and D ⊑ ¬E ∈ SD. Since F inherits both E and ¬E, we do not conclude
anything about F having, or not, property E; however, to build a coherent S,
T(C) ⊑ E and T(D) ⊑ ¬E are added to S.
      </p>
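      <p>Since the body of Algorithm 1 is only referenced above, its loop can be illustrated as follows. This is a hypothetical skeleton, not the authors' code: concept depths are assumed precomputed, the find_subsumptions argument stands in for Algorithm 2, and the same-depth consistency check is reduced to a comment:</p>

```python
from itertools import groupby

def find_subsumptions_main(concept_depths, find_subsumptions):
    """Illustrative skeleton of Algorithm 1 (FindSubsumptionsMain).

    concept_depths: list of (concept, depth) pairs for the concepts
    mentioned in mups(T, newC); find_subsumptions(Ci, S) plays the role
    of Algorithm 2 and returns the set Si of inclusions found for Ci.
    """
    S = set()
    # visit concepts from the deepest (most specific) to the most generic
    ordered = sorted(concept_depths, key=lambda cd: cd[1], reverse=True)
    for depth, group in groupby(ordered, key=lambda cd: cd[1]):
        partial = [find_subsumptions(c, S) for c, _ in group]
        # here a consistency check among the sets Si of same-depth concepts
        # would add typicality to clashing inherited properties (omitted)
        for s_i in partial:
            S |= s_i
    return S

# toy run: each concept just contributes one inclusion towards Top
S = find_subsumptions_main([("Student⊓Worker", 2), ("Student", 1)],
                           lambda c, S: {(c, "Top")})
```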
      <p>FindSubsumptions Algorithm 2 sketches the main steps of function
FindSubsumptions: it takes as inputs a concept Ci mentioned in mups(T , newC), the
corresponding gqs(Ci) given as a list in compact form, and the set S of inclusions
found so far. The intuition is that all implicit inclusions we are looking for have
the form Ci ⊑ D, where D ∈ gqs(Ci).</p>
      <p>First of all, the algorithm initializes some local structures: Si will contain
the subsumptions discovered in this invocation; BlackList will contain the
gqss of concepts that have been pruned: future occurrences of concepts belonging
to these gqss have to be discarded; Queue will contain either concept nodes or
complex nodes: a concept node represents a concept (not necessarily atomic),
whereas a complex node represents a structure where at least one gqs is mentioned
(i.e., a complex node is an abstraction of a number of concepts). All nodes in
Queue are kept ordered from the most specific to the most general according to
the depth of the concepts they contain. At the beginning of the algorithm (see
lines 3-12), Queue is initialized with the elements in gqs(Ci).</p>
      <p>After these preliminary steps, the algorithm loops over the elements in Queue.
At each iteration, the first node n is removed from the head of Queue. The
algorithm then checks whether n is a concept node or a complex node.
Handling concept nodes (lines 15-28). If n is a concept node, for instance
representing concept D, the algorithm has discovered a possible inclusion Ci ⊑ D
to be added to Si. However, if Ci ⊑ ¬D is already in S ∪ Si, the relation Ci ⊑ D
has to be discarded. Moreover, since we know that D is not acceptable,
any other concept which generalizes D is not acceptable either; thereby we remove
from Queue any node e belonging to gqs(D) (line 18), and then add gqs(D) to
BlackList (line 19). Otherwise (Ci ⊑ ¬D is not in S ∪ Si), Ci ⊑ D represents a
new piece of knowledge that we can add to Si. If S ∪ Si ∪ {Ci ⊑ D} is coherent,
the new relation is added to Si as it is (line 22). Conversely, we have discovered
one of the most specific properties of Ci that has to be made typical. That is,
in order to restore the consistency in T when we also consider the exceptional
properties of newC, the inclusion Ci ⊑ D has to be rewritten as T(Ci) ⊑ D
(line 24).</p>
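      <p>The concept-node branch (lines 15-28) can be mirrored in code. In this sketch (an illustration, not the paper's implementation) the coherence test is an assumed oracle coherent, in practice a DL reasoner; concepts are strings with a "¬" prefix for negation, and T(Ci) is encoded by tagging the left-hand side:</p>

```python
def neg(c: str) -> str:
    """Syntactic negation on string-encoded concepts."""
    return c[1:] if c.startswith("¬") else "¬" + c

def handle_concept_node(ci, d, S, Si, gqs, queue, blacklist, coherent):
    """Sketch of lines 15-28 of Algorithm 2 for a concept node D."""
    if (ci, neg(d)) in (S | Si):
        # Ci ⊑ ¬D already known: discard D and every generalization of D
        queue[:] = [e for e in queue if e not in gqs.get(d, [])]   # line 18
        blacklist.append(gqs.get(d, []))                           # line 19
        return
    if coherent(S | Si | {(ci, d)}):
        Si.add((ci, d))                    # line 22: keep Ci ⊑ D as it is
    else:
        Si.add(("T(%s)" % ci, d))          # line 24: rewrite as T(Ci) ⊑ D
    # either way, generalizations of ¬D would now be inconsistent
    queue[:] = [e for e in queue if e not in gqs.get(neg(d), [])]  # line 26
    blacklist.append(gqs.get(neg(d), []))                          # line 27

# Example 2 flavour: Student ⊑ ¬TaxPayer clashes with S, so it is typicalized
S = {("Student⊓Worker", "TaxPayer"), ("Student⊓Worker", "Student")}
Si, queue, blacklist = set(), [], []
handle_concept_node("Student", "¬TaxPayer", S, Si, {}, queue, blacklist,
                    coherent=lambda ax: ("Student", "¬TaxPayer") not in ax)
print(Si)   # {('T(Student)', '¬TaxPayer')}
```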
      <p>Note that, since we add either Ci ⊑ D or T(Ci) ⊑ D, we can conclude that
any other concept in gqs(¬D) will not be acceptable, as it would introduce an
inconsistency. Thus, we can remove from Queue any node e which is a
generalization of ¬D (line 26), and then add gqs(¬D) to BlackList (line 27).
Handling complex nodes (lines 29-32). In case the head node n is a complex
node, then it must mention at least one gqs, which abstracts a number of
concepts. Dealing with a complex node means generating a number of child nodes
by expanding one of the gqss mentioned in n. The ExpandComplexNode
function is shown in Algorithm 3. The set NewNodes of new nodes returned by
ExpandComplexNode is enqueued in Queue according to the specificity ordering.
Algorithm 2 FindSubsumptions(Ci, gqs(Ci), S)
Algorithm 3 ExpandComplexNode(n, BlackList)
1: List ← ∅
2: Let gqs(D) be one of the most specific gqss mentioned in n
3: for all elements e ∈ gqs(D) such that e ∉ gqs(F), ∀ gqs(F) ∈ BlackList do
4:    create a node n′ by substituting gqs(D) with e in n
5:    if n′ is an inconsistent concept then
6:       discard n′
7:    else
8:       put n′ in List
9:    end if
10: end for
11: return List
ExpandComplexNode The expansion of a complex node is outlined in
Algorithm 3. It takes as inputs the complex node n that has to be expanded, and
the BlackList containing the gqss of those concepts that have been pruned off
so far. The algorithm returns a list of new nodes, either concept or complex.
After having initialized List (line 1), the algorithm selects one of the most
specific gqss in n. In fact, since n is a complex node, it must mention at
least one gqs (line 2). Let us assume that gqs(D) has been selected; remember
that gqs(D) is compactly encoded as a list of concepts or gqss. The algorithm
therefore loops over the elements e in the list gqs(D): if e belongs to some gqs(F) ∈
BlackList, then e can be pruned off, as it has already been determined that F,
and hence any other concept in gqs(F), is not consistent with some axioms in
S (line 3). Otherwise, e can be used to create a new node n′. More precisely,
n′ is obtained by substituting gqs(D) with e in n. Of course, n′ can be either
a concept node or a complex node (line 4). As noted above, the generalizations
of a given concept D might contain contradictory concepts, since the initial set
of axioms in mups(T , newC) is inconsistent. Thus, before adding n′ to List,
we first check whether n′ is contradictory, in which case we discard it (line 5).
Otherwise, n′ is added to List (line 8). The algorithm terminates by returning
the, possibly empty, list of new nodes (line 11).</p>
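      <p>Algorithm 3 translates almost line by line into executable form. In this sketch (again an illustration under assumptions), a complex node is a frozenset of conjuncts in which the placeholder ('gqs', C) abstracts the generalizations of C; the choice of the most specific gqs is simplified to "the first placeholder found", and inconsistency of a node is an assumed oracle:</p>

```python
def expand_complex_node(node, blacklist, gqs, is_inconsistent):
    """Sketch of Algorithm 3 (ExpandComplexNode)."""
    result = []                                            # line 1
    # line 2: pick a gqs placeholder mentioned in the node (simplified:
    # any placeholder, rather than the most specific one)
    placeholder = next(e for e in node
                       if isinstance(e, tuple) and e[0] == "gqs")
    d = placeholder[1]
    banned = {c for pruned in blacklist for c in pruned}
    for e in gqs.get(d, []):                               # line 3
        if e in banned:
            continue                  # e already pruned via BlackList
        child = (node - {placeholder}) | {e}               # line 4
        if is_inconsistent(child):                         # lines 5-6
            continue
        result.append(child)                               # line 8
    return result                                          # line 11

node = frozenset({"Student", ("gqs", "Worker")})
gqs = {"Worker": ["Person", "TaxPayer"]}
# "Person" is blacklisted, so only one child node is generated
children = expand_complex_node(node, [["Person"]], gqs, lambda n: False)
```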
      <p>Let newC ⊑ D be the subsumption relation that, once added to a TBox T ,
generates an inconsistency, and let mups(T , newC) be the minimal subset of axioms
in T preserving the inconsistency; then we can prove the following properties.
Proposition 1. The set S of concept inclusions generated by Algorithm 1 with
the invocation FindSubsumptionsMain(mups(T , newC)) is coherent.
Intuitively, S is only modified either in line 7 or in line 11 of algorithm
FindSubsumptionsMain. In the first case, we already know that all the sets Si inferred
for all concepts at a given depth l are altogether coherent. In the second case,
instead, we know that an inconsistency has been detected; thereby we “correct”
the axioms in the sets Si by means of T, yielding the consistency of S. In fact, by the
properties of ALC^R_minT in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], since T is nonmonotonic, we have that S is coherent.</p>
      <p>
Proposition 2. Let S be the set of concept inclusions generated by Algorithm
1; then either newC ⊑ D or T(newC) ⊑ D belongs to S.
      </p>
      <p>This follows directly from the fact that newC ⊑ D has generated an inconsistency
in T , and hence mups(T , newC) necessarily includes such a relation. Algorithm
1 simply reconsiders all the axioms in mups(T , newC), and builds a new set S
of axioms which contains, at least, the same axioms as in mups(T , newC), but
with at least one of them modified by the typicality operator.</p>
      <p>Theorem 1. Let T ′ be a new terminology obtained as T ′ = (T \ mups(T , newC)) ∪ S.
The new terminology T ′ is coherent.</p>
      <p>This allows us to conclude that once we have computed S, we have a means for
correcting the initially incoherent terminology T . The simplest way to do that is
to substitute the axioms in mups(T , newC) with S. Further details and proofs
are omitted due to space limitations.</p>
      <p>Example 2. Let us consider again the terminology Tstudent presented in
Example 1. Algorithm 1 considers the concepts in the ordering: Student ⊓ Worker,
Student, TaxPayer, ¬TaxPayer, Worker. The first invocation of
FindSubsumptions (Algorithm 2) is therefore over the inputs: Student ⊓ Worker,
gqs(Student ⊓ Worker), and ∅ (i.e., at the beginning S is empty). The result of this first
invocation is the set S = {Student ⊓ Worker ⊑ Student; Student ⊓ Worker ⊑
TaxPayer; Student ⊓ Worker ⊑ Worker}. The second invocation of
FindSubsumptions is over the inputs: Student, gqs(Student), S. In this case, the
algorithm will find the subsumption Student ⊑ ¬TaxPayer, which is
incoherent with the subsumptions already in S; thus, ¬TaxPayer is considered
as the typical property of Student; in other words, S is modified by adding
T(Student) ⊑ ¬TaxPayer. It is easy to see that no further subsumptions
are added to S when FindSubsumptions is invoked over the remaining concepts
TaxPayer, Worker, and ¬TaxPayer. In conclusion, the original terminology
Tstudent can be rewritten as T ′student:</p>
      <sec id="sec-3-1">
        <title>Student ⊓ Worker ⊑ Student</title>
        <p>Student ⊓ Worker ⊑ Worker</p>
      </sec>
      <sec id="sec-3-2">
        <title>Student ⊓ Worker ⊑ TaxPayer T(Student) ⊑ ¬TaxPayer</title>
        <p>By the properties of the DL ALC^R_minT, from the resulting knowledge base above
we are able to reason about defeasible inheritance. For instance, if we have
Student(franco) (Franco is a student), we are able to infer ¬TaxPayer(franco)
(Franco does not pay taxes); however, this conclusion is retracted if we
further discover that Worker(franco) (Franco is also a worker), from which we
obtain TaxPayer(franco). We are also able to deal with irrelevance: for instance,
T(Student ⊓ Fat) ⊑ ¬TaxPayer can be inferred, and the above conclusions still
hold even if we further know Fat(franco) (Franco is fat).</p>
        <p>Example 3. Let us consider a simplified variant of the Pizza ontology distributed
together with Protégé. In particular, let us assume that a TBox Ttops contains
the following subsumption relations:
ax1 : FatToppings ⊑ PizzaToppings,
ax2 : CheesyToppings ⊑ FatToppings,
ax3 : VegetarianToppings ⊑ ¬FatToppings.</p>
        <p>Now, let us assume that we discover that there also exists tofu cheese, which
is made of curdled soybean milk. Thus, we change Ttops by adding the following:
ax4 : CheesyVegetarianToppings ⊑ CheesyToppings ⊓ VegetarianToppings.
Let the ABox Atops contain CheesyVegetarianToppings(tofu). The resulting
knowledge base, i.e., the one obtained by adding CheesyVegetarianToppings(tofu)
to the initial ABox, is inconsistent, since FatToppings and VegetarianToppings
are disjoint concepts, whereas CheesyVegetarianToppings falls in their
intersection.</p>
        <p>The tofu cheese is an exception, and the first step to tackle it is to compute
mups(Ttops, CheesyVegetarianToppings) = {ax2, ax3, ax4}. Concept inclusions
(even implicit ones) are taken into account: possibly, some of them will be “corrected”
by means of the typicality operator in order to accommodate the exceptional
concept CheesyVegetarianToppings. In particular, the inclusions discovered are:
Stops = {
CheesyVegetarianToppings ⊑ CheesyToppings,
CheesyVegetarianToppings ⊑ VegetarianToppings,
T(CheesyToppings) ⊑ FatToppings,
T(VegetarianToppings) ⊑ ¬FatToppings
}
The concept inclusions in mups(Ttops, CheesyVegetarianToppings) can now be
substituted in Ttops with the set Stops. Theorem 1 ensures that the new Ttops
so obtained is coherent and correctly addresses the exceptional concept
CheesyVegetarianToppings. By the properties of ALC^R_minT, from the resulting
knowledge base we will be able to reason about defeasible inheritance: for
instance, if we know CheesyToppings(brie) (Brie is a cheesy topping), we conclude
that FatToppings(brie) (Brie is a fat topping), whereas for tofu we say nothing
about its being a fat topping or not. We are also able to deal with irrelevance:
for instance, it can be inferred that, normally, cheesy potato toppings are fat
toppings, i.e. T(Cheesy ⊓ Potatoes) ⊑ FatToppings.</p>
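        <p>To see what the revised terminology buys us, one can script a naive version of this defeasible reading. The toy code below is only an illustration under simplifying assumptions, not the minimal-model semantics of ALC^R_minT in [7]: strict inclusions propagate unconditionally, while typical properties are inherited only when they do not clash:</p>

```python
# revised S_tops, split into strict and typical inclusions (assumed encoding)
strict = {("CheesyVegetarianToppings", "CheesyToppings"),
          ("CheesyVegetarianToppings", "VegetarianToppings")}
typical = {("CheesyToppings", "FatToppings"),
           ("VegetarianToppings", "¬FatToppings")}

def strict_supers(c):
    """c plus every concept reachable from c via strict inclusions."""
    seen, todo = {c}, [c]
    while todo:
        cur = todo.pop()
        for a, b in strict:
            if a == cur and b not in seen:
                seen.add(b)
                todo.append(b)
    return seen

def defeasible_props(c):
    """Typical properties inherited by c; clashing pairs cancel out,
    a deliberately naive approximation of rational closure."""
    props = {d for a, d in typical if a in strict_supers(c)}
    neg = lambda x: x[1:] if x.startswith("¬") else "¬" + x
    return {d for d in props if neg(d) not in props}

print(defeasible_props("CheesyToppings"))             # brie-like: typically fat
print(defeasible_props("CheesyVegetarianToppings"))   # tofu-like: no conclusion
```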
        <sec id="sec-3-2-1">
          <title>Conclusions and Future Issues</title>
          <p>
            We have presented some preliminary results in defining a methodology for the
automated revision of a DL terminology in the presence of exceptions. We exploit
techniques and algorithms proposed in [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ], which are also extended to more
expressive DLs such as SHOIN , corresponding to the ontology language OWL-DL.
On the one hand, we aim at extending our approach to such expressive DLs. On
the other hand, we intend to develop an implementation of the proposed
algorithms, by considering the integration with existing tools for manually modifying
ontologies when inconsistencies are detected.
          </p>
        </sec>
        <sec id="sec-3-2-2">
          <title>Acknowledgements</title>
          <p>The authors are supported by the twin projects ODIATI#1 and ODIATI#2:
“Ontologie, DIAgnosi e TIpicalità nelle logiche descrittive” of the local research
funds 2013 by the Università degli Studi di Torino - part B, supporting young
researchers.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel-Schneider</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>The Description Logic Handbook - Theory, Implementation, and Applications, 2nd edition</article-title>
          . Cambridge (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baader</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hollunder</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Embedding defaults into terminological knowledge representation formalisms</article-title>
          .
          <source>J. Autom. Reasoning</source>
          <volume>14</volume>
          (
          <issue>1</issue>
          ),
          <fpage>149</fpage>
          -
          <lpage>180</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Casini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Straccia</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Rational Closure for Defeasible Description Logics</article-title>
          . In: Janhunen,
          <string-name>
            <surname>T.</surname>
          </string-name>
          , Niemelä, I. (eds.)
          <source>Proceedings of the 12th European Conference on Logics in Artificial Intelligence (JELIA</source>
          <year>2010</year>
          ).
          <source>Lecture Notes in Artificial Intelligence</source>
          , vol.
          <volume>6341</volume>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>90</lpage>
          . Springer, Helsinki, Finland (
          <year>September 2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Casini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Straccia</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Defeasible Inheritance-Based Description Logics</article-title>
          . In: Walsh,
          <string-name>
            <surname>T</surname>
          </string-name>
          . (ed.)
          <source>Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI</source>
          <year>2011</year>
          ). pp.
          <fpage>813</fpage>
          -
          <lpage>818</lpage>
          . Morgan Kaufmann, Barcelona,
          <source>Spain (July</source>
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Giordano</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gliozzi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Olivetti</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pozzato</surname>
            ,
            <given-names>G.L.</given-names>
          </string-name>
          :
          <article-title>ALC+T: a preferential extension of Description Logics</article-title>
          .
          <source>Fundamenta Informaticae</source>
          <volume>96</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>32</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Giordano</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gliozzi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Olivetti</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pozzato</surname>
            ,
            <given-names>G.L.</given-names>
          </string-name>
          :
          <article-title>A NonMonotonic Description Logic for Reasoning About Typicality</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>195</volume>
          ,
          <fpage>165</fpage>
          -
          <lpage>202</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Giordano</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gliozzi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Olivetti</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pozzato</surname>
            ,
            <given-names>G.L.</given-names>
          </string-name>
          :
          <article-title>Minimal Model Semantics and Rational Closure in Description Logics</article-title>
          . In: Eiter,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Glimm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Kazakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Krötzsch</surname>
          </string-name>
          , M. (eds.)
          <source>Informal Proceedings of the 26th International Workshop on Description Logics (DL</source>
          <year>2013</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>1014</volume>
          , pp.
          <fpage>168</fpage>
          -
          <lpage>180</lpage>
          . Ulm, Germany (July
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kalyanpur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cuenca-Grau</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sirin</surname>
          </string-name>
          , E.:
          <article-title>Axiom pinpointing: Finding (precise) justifications for arbitrary entailments in SHOIN (owl-dl)</article-title>
          .
          <source>Technical report, UMIACS</source>
          ,
          <year>2005</year>
          -
          <fpage>66</fpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kalyanpur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sirin</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grau</surname>
            ,
            <given-names>B.C.</given-names>
          </string-name>
          :
          <article-title>Repairing unsatisfiable concepts in owl ontologies</article-title>
          .
          <source>In: ESWC</source>
          . pp.
          <fpage>170</fpage>
          -
          <lpage>184</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ke</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Next Steps for Description Logics of Minimal Knowledge and Negation as Failure</article-title>
          . In: Baader,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Lutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <surname>B</surname>
          </string-name>
          . (eds.)
          <source>Proceedings of Description Logics. CEUR Workshop Proceedings</source>
          , vol.
          <volume>353</volume>
          . CEUR-WS.org, Dresden, Germany (May
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Magidor</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Nonmonotonic reasoning, preferential models and cumulative logics</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>44</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>167</fpage>
          -
          <lpage>207</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Magidor</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>What does a conditional knowledge base entail?</article-title>
          <source>Artificial Intelligence</source>
          <volume>55</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>60</lpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sirin</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalyanpur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Debugging owl ontologies</article-title>
          .
          <source>In: WWW</source>
          . pp.
          <fpage>633</fpage>
          -
          <lpage>640</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Reiter</surname>
          </string-name>
          , R.:
          <article-title>A theory of diagnosis from first principles</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>32</volume>
          (
          <issue>1</issue>
          ),
          <fpage>57</fpage>
          -
          <lpage>96</lpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Diagnosing terminologies</article-title>
          .
          <source>In: AAAI</source>
          . pp.
          <fpage>670</fpage>
          -
          <lpage>675</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cornet</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Explanation of terminological reasoning: A preliminary report</article-title>
          . In: Description Logics (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cornet</surname>
          </string-name>
          , R.:
          <article-title>Non-standard reasoning services for the debugging of description logic terminologies</article-title>
          .
          <source>In: IJCAI</source>
          . pp.
          <fpage>355</fpage>
          -
          <lpage>362</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Schlobach</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cornet</surname>
          </string-name>
          , R., van
          <string-name>
            <surname>Harmelen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Debugging incoherent terminologies</article-title>
          .
          <source>Journal of Automated Reasoning</source>
          <volume>39</volume>
          (
          <issue>3</issue>
          ),
          <fpage>317</fpage>
          -
          <lpage>349</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>