Brave and Cautious Reasoning in EL (CEUR-WS Vol-1193, paper 8: https://ceur-ws.org/Vol-1193/paper_8.pdf)
          Brave and Cautious Reasoning in EL⋆

                        Michel Ludwig and Rafael Peñaloza

                Theoretical Computer Science, TU Dresden, Germany
                     Center for Advancing Electronics Dresden
                   {michel,penaloza}@tcs.inf.tu-dresden.de



        Abstract. Developing and maintaining ontologies is an expensive and
        error-prone task. After an error is detected, users may have to wait for
        a long time before a corrected version of the ontology is available. In
        the meantime, one might still want to derive meaningful knowledge from
        the ontology, while avoiding the known errors. We introduce brave and
        cautious reasoning and show that it is hard for EL. We then propose
        methods for improving the reasoning times by precompiling informa-
        tion about the known errors and using proof-theoretic techniques for
        computing justifications. A prototypical implementation shows that our
        approach is feasible for large ontologies used in practice.


1     Introduction
Description Logics (DLs) [3] have been successfully used to model many appli-
cation domains, and they are the logical formalism underlying the standard on-
tology language for the semantic web OWL [30]. Consequently, more and larger
ontologies are being built using these formalisms. Ontology engineering is expen-
sive and error-prone; the combination of knowledge from multiple experts, and
misunderstandings between them and the knowledge engineers may lead to errors
that are hard to detect. For example, several iterations of SNOMED CT [13, 29]
classified amputation of finger as a subclass of amputation of hand [7, 8].
    Since domain knowledge is needed for correcting an unwanted consequence,
and its causes might not be obvious, it can take long before a corrected version
of an ontology is released. For example, new versions of SNOMED are released
every six months; one should then expect to wait at least that amount of time
before an error is resolved. During that time, users should still be able to derive
meaningful consequences from the ontology, while avoiding known errors.
    A related problem is inconsistency-tolerant reasoning, based on consistent
query answering from databases [1, 9], where the goal is to obtain meaningful
consequences from an inconsistent ontology O. Inconsistency is clearly an un-
wanted consequence from an ontology, but it is not the only one. We generalize
the idea of inconsistency-tolerant reasoning to error-tolerant reasoning in which
other unwanted consequences, beyond inconsistency, are considered.
    We study brave and cautious semantics. Intuitively, cautious semantics refer
to consequences that follow from all the possible repairs of O; this guarantees
⋆
    Partially supported by DFG within the Cluster of Excellence ‘cfAED’.
                      Table 1. Syntax and semantics of EL.
                  Syntax   Semantics
                  >        ∆I
                  C uD     C I ∩ DI
                  ∃r.C     {x ∈ ∆I | ∃y ∈ ∆I : (x, y) ∈ rI ∧ y ∈ C I }

that, however the ontology is repaired, the consequence will still follow. For some
consequences, one might only be interested in guaranteeing that it follows from
at least one repair; this defines the brave semantics. As usual in inconsistency-
tolerant reasoning, the repairs are maximal subontologies of O that do not entail
the unwanted consequence. We also consider the IAR semantics, proposed in [21]
as a means to efficiently approximate cautious reasoning; see also [11, 28].
    In this paper, we focus on subsumption between concepts w.r.t. a TBox
in EL, which is known to be polynomial [12]. As every EL TBox is consistent,
considering inconsistency-tolerant semantics makes no sense in this setting. On
the other hand, SNOMED CT and other large-scale ontologies are written in
tractable extensions of this logic, and being able to handle errors in them
is a relevant problem for knowledge representation and ontology development.
    We show that error-tolerant reasoning in EL is hard. More precisely, brave
semantics is NP-complete, and cautious and IAR semantics are coNP-complete.
These results are similar to the complexity of inconsistency-tolerant semantics
in inexpressive logics [10, 28]. We also show that hardness does not depend only
on the number of repairs: there exist errors with polynomially many repairs, for
which error-tolerant reasoning requires super-polynomial time (unless P = NP).
    To improve the time needed for error-tolerant reasoning, we propose to pre-
compute the information on the causes of the error. We first annotate every ax-
iom with the repairs to which it belongs. We then use a proof-theoretic approach,
coupled with this annotated ontology, to derive error-tolerant consequences. We
demonstrate the practical applicability of our approach for brave and cautious
reasoning by applying a prototype-implementation on large ontologies used in
practice.


2   Error-Tolerant Reasoning in EL
We first briefly recall the DL EL. Given two disjoint sets NC and NR of concept-,
and role-names, respectively, concepts are constructed by C ::= A | C uC | ∃r.C,
where A ∈ NC and r ∈ NR . A TBox is a finite set of GCIs of the form C v D,
where C, D are concepts. The TBox is in normal form if all its GCIs are of
the form A v ∃r.B, ∃r.A v B, or A1 u . . . u An v B with n ≥ 1 and
A, A1 , . . . , An , B ∈ NC ∪ {>}. The semantics of EL is defined through inter-
pretations I = (∆I , ·I ), where ∆I is a non-empty domain and ·I maps each
A ∈ NC to a set AI ⊆ ∆I and every r ∈ NR to a binary relation rI over ∆I .
This mapping is extended to arbitrary concepts as shown in Table 1. The inter-
pretation I is a model of the TBox T if C I ⊆ DI for every C v D ∈ T . The
main reasoning problem is to decide subsumption [2, 12]: C is subsumed by D
w.r.t. T (denoted C vT D) if C I ⊆ DI holds for every model I of T . HL is
the sublogic of EL that does not allow existential restrictions; it is a syntactic
variant of Horn logic: every Horn clause with at least one positive literal can be
seen as an HL GCI. An HL TBox is a core TBox if all its axioms are of the
form A v B with A, B ∈ NC .
    Error-tolerant reasoning refers to the task of deriving meaningful conse-
quences from a TBox that is known to contain errors. In the case of EL, an
erroneous consequence refers to an error in a subsumption relation. If a TBox T
entails an unwanted subsumption C vT D, then we are interested in finding the
ways in which this consequence can be avoided.
Definition 1 (repair). Let T be an EL TBox and C vT D. A repair of T w.r.t.
C v D is a maximal (w.r.t. set inclusion) subset R ⊆ T such that C 6vR D.
The set of all repairs of T w.r.t. C v D is denoted by RepT (C v D).
We will usually consider a fixed TBox T , and hence say that R is a repair w.r.t.
C v D, or even simply a repair, if the consequence is clear from the context.
Example 2. The repairs of T = {A v ∃r.X, ∃r.X v B, A v Y , Y v B, A v B 0 }
w.r.t. the consequence A v B are the sets Ri := T \ Si , 1 ≤ i ≤ 4, where
S1 = {A v ∃r.X, A v Y }, S2 = {A v ∃r.X, Y v B}, S3 = {∃r.X v B, A v Y },
and S4 = {∃r.X v B, Y v B}.
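To make Definition 1 concrete, the following Python sketch recomputes the repairs of Example 2 by brute force. The tuple encoding of normal-form axioms, the name Bp for B 0 , and the simplified completion rules are our own conventions for illustration, not the paper's implementation:

```python
from itertools import combinations

# Example 2 in a hypothetical tuple encoding of normal-form axioms
# (role names lowercase, concept names uppercase, by our convention):
#   ("atom", X, Y)           stands for  X ⊑ Y
#   ("exists", X, r, Y)      stands for  X ⊑ ∃r.Y
#   ("from_exists", r, X, Y) stands for  ∃r.X ⊑ Y
T = [("exists", "A", "r", "X"),       # A ⊑ ∃r.X
     ("from_exists", "r", "X", "B"),  # ∃r.X ⊑ B
     ("atom", "A", "Y"),              # A ⊑ Y
     ("atom", "Y", "B"),              # Y ⊑ B
     ("atom", "A", "Bp")]             # A ⊑ B'

def entails(tbox, c, d):
    """Decide c ⊑_T d for concept names by a simplified EL completion."""
    names = {x for ax in tbox for x in ax[1:] if not x.islower()} | {c, d}
    S = {a: {a} for a in names}   # S[a]: known atomic subsumers of a
    R = set()                     # (a, r, b): a ⊑ ∃r.b is entailed
    changed = True
    while changed:
        changed = False
        for ax in tbox:
            if ax[0] == "atom":
                _, x, y = ax
                for a in names:
                    if x in S[a] and y not in S[a]:
                        S[a].add(y); changed = True
            elif ax[0] == "exists":
                _, x, r, y = ax
                for a in names:
                    if x in S[a] and (a, r, y) not in R:
                        R.add((a, r, y)); changed = True
            else:  # from_exists
                _, r, x, y = ax
                for (a, rr, b) in list(R):
                    if rr == r and x in S[b] and y not in S[a]:
                        S[a].add(y); changed = True
    return d in S[c]

def repairs(tbox, c, d):
    """Maximal subsets of tbox that do not entail c ⊑ d (brute force)."""
    found = []
    for k in range(len(tbox), -1, -1):
        for sub in combinations(tbox, k):
            if not entails(list(sub), c, d) and \
               not any(set(sub) <= set(r) for r in found):
                found.append(sub)
    return found

reps = repairs(T, "A", "B")   # exactly the four repairs R1, ..., R4
```

Running this on Example 2 yields the four size-3 repairs obtained by removing S1 , . . . , S4 from T .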
The number of repairs w.r.t. a consequence may be exponential, even for core
TBoxes [26]. Each of these repairs is a potential way of avoiding the unwanted
consequence; however, there is no way of knowing a priori which is the best one
to use for further reasoning tasks. One common approach is to be cautious and
consider only those consequences that follow from all repairs. Alternatively, one
can consider brave consequences: those that follow from at least one repair.
Definition 3 (cautious, brave). Let T be a TBox, C vT D, and C 0 , D0 be
two concepts. C 0 is bravely subsumed by D0 w.r.t. T and C v D if there is a
repair R ∈ RepT (C v D) such that C 0 vR D0 ; C 0 is cautiously subsumed by
D0 w.r.t. T and C v D if for every repair R ∈ RepT (C v D) it holds that
C 0 vR D0 . If T or C v D are clear from the context, we usually omit them.
Example 4. Let T , R1 , . . . , R4 be as in Example 2. A is bravely but not cautiously
subsumed by Y u B 0 w.r.t. T and A v B since A vR2 Y u B 0 but A 6vR1 Y u B 0 .
In the context of inconsistency-tolerant reasoning, other kinds of semantics which
have better computational properties have been proposed [11, 21, 28]. Among
these are the so-called IAR semantics, which consider the consequences that
follow from the intersection of all repairs. Formally, C 0 is IAR subsumed by D0
w.r.t. T and C v D if C 0 vQ D0 , where Q := ⋂_{R∈RepT (CvD)} R.
Example 5. Let T and R1 , . . . , R4 be as in Example 2. Then A is IAR subsumed
by B 0 w.r.t. T and A v B as A v B 0 ∈ ⋂_{i=1}^{4} Ri .
A notion dual to repairs is that of MinAs, or justifications [7, 17]. A MinA for
C vT D is a minimal (w.r.t. set inclusion) subset M of T such that C vM D.
We denote as MinAT (C v D) the set of all MinAs for C vT D. There is a close
connection between repairs and MinAs for error-tolerant reasoning.
Theorem 6. Let T be a TBox, C, C 0 , D, D0 concepts with C vT D. Then
  (i) C 0 is cautiously subsumed by D0 w.r.t. T and C v D iff for every repair
      R ∈ RepT (C v D) there is an M0 ∈ MinAT (C 0 v D0 ) with M0 ⊆ R; and
 (ii) C 0 is bravely subsumed by D0 w.r.t. T and C v D iff there is a repair
      R ∈ RepT (C v D) and a MinA M0 ∈ MinAT (C 0 v D0 ) with M0 ⊆ R.


3   Complexity
We show that deciding cautious and IAR subsumptions is intractable already for
core TBoxes. Deciding brave subsumptions is intractable for EL, but tractable for
HL. We first prove the latter claim using directed hypergraphs, which generalize
graphs by connecting sets of nodes, rather than just nodes.
    A directed hypergraph is a pair G = (V, E), where V is a non-empty set of
nodes, and E is a set of directed hyperedges e = (S, S 0 ), with S, S 0 ⊆ V. Given
S, T ⊆ V, a path from S to T in G is a set of hyperedges {(Si , Ti ) ∈ E | 1 ≤ i ≤ n}
such that for every 1 ≤ i ≤ n, Si ⊆ S ∪ ⋃_{j=1}^{i−1} Tj , and T ⊆ ⋃_{i=1}^{n} Ti hold. The
reachability problem in hypergraphs consists in deciding the existence of a path
from S to T in G. This problem is decidable in polynomial time on |V| [15].
    Recall that HL concepts are conjunctions of concept names; we can represent
C = A1 u · · · u Am as its set of conjuncts SC = {A1 , . . . , Am }. Each GCI C v D
yields a directed hyperedge (SC , SD ) and every HL-TBox T forms a directed
hypergraph GT . Then C vT D iff there is a path from SC to SD in GT .
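The reachability test underlying this correspondence can be sketched by simple forward chaining; the Python encoding below (hyperedges as pairs of frozensets) is our own illustration:

```python
def hyper_reachable(hyperedges, S, T):
    """Forward chaining: a hyperedge (tail, head) fires once its whole tail
    has been reached. Each edge fires at most once per pass, so this runs in
    time polynomial in the number of nodes and hyperedges."""
    reached = set(S)
    changed = True
    while changed:
        changed = False
        for tail, head in hyperedges:
            if tail <= reached and not head <= reached:
                reached |= head
                changed = True
    return set(T) <= reached

# The HL TBox {A ⊓ B ⊑ C, C ⊑ D} as a directed hypergraph:
edges = [(frozenset({"A", "B"}), frozenset({"C"})),
         (frozenset({"C"}), frozenset({"D"}))]
```

Here A ⊓ B is subsumed by D (both hyperedges fire from {A, B}), while A alone does not reach D.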
Theorem 7. Brave subsumption in HL can be decided in polynomial time on
the size of the TBox.
Proof. Let T be an HL TBox, and C, C 0 , D, D0 be HL concepts. C 0 is bravely
subsumed by D0 w.r.t. T and C v D iff there is a path from SC 0 to SD0 in GT
that does not contain any path from SC to SD . If no such path exists, then
(i) every path from SC 0 to SD0 passes through SD , and (ii) every path from SC 0
to SD passes through SC . We need to verify whether either of these two statements
is violated. The existence of a path that does not pass through a given set is
decidable in polynomial time. □
However, for EL this problem is NP-complete. To prove this we adapt an idea
from [25] for reducing the NP-hard more minimal valuations (mmv) problem [7,
14]: deciding, for a monotone Boolean formula ϕ and a set V of minimal valua-
tions satisfying ϕ, if there are other minimal valuations V ∉ V satisfying ϕ.
Theorem 8. Brave subsumption in EL is NP-complete.
   We now show that the cautious and IAR semantics are intractable already for
core TBoxes. This is a consequence of the intractability of the following problem.
Definition 9 (axiom relevance). The axiom relevance problem consists in
deciding, given a core TBox T , A v B ∈ T , and A0 vT B0 , whether there is a
repair R of T w.r.t. A0 v B0 such that A v B ∉ R.
Lemma 10. Axiom relevance is NP-hard.

Proof. We reduce the NP-hard path-via-node problem [20]: given a directed
graph G = (V, E) and nodes s, t, m ∈ V, decide if there is a simple path from s
to t in G that goes through m. Given an instance of the path-via-node problem,
we introduce a concept name Av for every v ∈ (V \ {m}) ∪ {m1 , m2 }, and build
the core TBox

    T := {Av v Aw | (v, w) ∈ E, v, w 6= m} ∪ {Av v Am1 | (v, m) ∈ E, v 6= m} ∪
         {Am2 v Av | (m, v) ∈ E, v 6= m} ∪ {Am1 v Am2 }.

There is a simple path from s to t in G through m iff there is a repair R of T
w.r.t. As v At with Am1 v Am2 ∉ R. □
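As an illustration of this reduction, the following Python sketch builds the core TBox for a small hypothetical graph and checks that dropping Am1 v Am2 yields a repair. The pair encoding of atomic GCIs and all names are ours:

```python
def reduction_tbox(edges, m):
    """Core TBox of Lemma 10, encoded as pairs (X, Y) meaning A_X ⊑ A_Y;
    node m is split into m1 (incoming edges) and m2 (outgoing edges)."""
    T = {("m1", "m2")}
    for v, w in edges:
        if v != m and w != m:
            T.add((v, w))
        elif w == m and v != m:
            T.add((v, "m1"))
        elif v == m and w != m:
            T.add(("m2", w))
    return T

def entails_atomic(tbox, a, b):
    """a ⊑ b over atomic GCIs is plain graph reachability."""
    reach, frontier = {a}, [a]
    while frontier:
        x = frontier.pop()
        for (u, v) in tbox:
            if u == x and v not in reach:
                reach.add(v); frontier.append(v)
    return b in reach

# Hypothetical graph with a simple s-t path through m and a direct edge:
G = {("s", "a"), ("a", "m"), ("m", "t"), ("s", "t")}
T = reduction_tbox(G, "m")
# Removing ("s","t") and ("m1","m2") breaks every path from s to t while
# the remainder is maximal, witnessing the relevance of A_m1 ⊑ A_m2.
R = T - {("s", "t"), ("m1", "m2")}
```

Adding either removed axiom back restores the entailment As v At, so R is indeed a repair not containing Am1 v Am2 .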

Theorem 11. Cautious subsumption and IAR subsumption w.r.t. core, HL or
EL TBoxes are coNP-complete.

Proof. If C is not cautiously subsumed by D, we can guess a set R and verify in
polynomial time that R is a repair and C 6vR D. If C is not IAR subsumed by D,
we can guess a set Q ⊆ T , and for every GCI Ci v Di ∉ Q a set Ri such that
Ci v Di ∉ Ri . Verifying that each Ri is a repair and C 6vQ D is polynomial.
Thus both problems are in coNP. To show hardness, note that for a GCI C v D ∈ T ,
there is a repair R such that C v D ∉ R iff C 6vR D iff C is neither cautiously
nor IAR subsumed by D. By Lemma 10 both problems are coNP-hard. □

The hardness of error-tolerant reasoning is usually attributed to the fact that
there can exist exponentially many repairs for a given consequence. However,
this argument is incomplete. For instance, brave reasoning is polynomial in HL,
although consequences may have exponentially many repairs already in this logic.
We show now that cautious and brave subsumption are also hard on the number
of repairs; i.e., they are not what we call repair-polynomial.

Definition 12 (repair-polynomial). An error-tolerant problem w.r.t. a TBox
T and a consequence C v D is repair-polynomial if it can be solved by an
algorithm that runs in polynomial time on the size of both T and RepT (C v D).1

Theorem 13. Unless P = NP, cautious and brave subsumption of C 0 by D0
w.r.t. T and C v D in EL are not repair-polynomial.

     The proof adapts the construction from Theorem 8 to reduce the problem
of enumerating maximal valuations that falsify a formula to deciding cautious
subsumption. The number of repairs obtained from the reduction is polyno-
mial on the number of maximal valuations that falsify the formula. Since this
enumeration cannot be solved in time polynomial on the number of maximal fal-
sifiers, cautious reasoning can also not be performed in time polynomial on the
1
    Notice that the repairs are not part of the input. This is closely related to output
    complexity measures [16].
Algorithm 1 Repairs entailing C 0 v D0
Input: Unwanted consequence C vT D, concepts C 0 , D0
Output: R ⊆ RepT (C v D): repairs entailing C 0 v D0
  R ← RepT (C v D)
  for each R ∈ RepT (C v D) do
     if C 0 6vR D0 then
         R ← R \ {R}
  return R


number of repairs. An analogous argument is used for brave reasoning. Thus,
error-tolerant reasoning is hard even if only polynomially many repairs exist;
i.e., there are cases where |RepT (C v D)| is polynomial on |T |, but brave and
cautious reasoning require super-polynomial time. The culprit for hardness is
therefore the computation of the repairs and not their number per se.
     We now propose a method for improving the reasoning times, by precomputing
a data structure based on the set of all repairs.


4   Precompiling Repairs
A naïve solution for deciding brave or cautious subsumptions would be to
enumerate all the repairs and check which of them entail the consequence that
should be checked (Algorithm 1). C 0 is then bravely or cautiously subsumed by
D0 iff R 6= ∅ or R = RepT (C v D), respectively. Each test C 0 vR D0 requires
polynomial time on |R| ≤ |T | [12], and exactly |RepT (C v D)| such tests are
performed. The for loop in the algorithm thus needs polynomial time on the sizes
of T and RepT (C v D). From Theorem 13 it follows that the first step, namely
the computation of all the repairs, must be expensive. In particular, these re-
pairs cannot be enumerated in output-polynomial time; i.e., in time polynomial
on the input and the output [16].
Corollary 14. The set of repairs for an EL TBox T w.r.t. C v D cannot be
enumerated in output-polynomial time, unless P = NP.
For any given error, one would usually try to decide whether several brave or
cautious consequences hold. It thus makes sense to improve the execution time
of these reasoning tasks by avoiding a repetition of the first, expensive, step.
    The set of repairs can be computed in exponential time on the size of T ; this
bound cannot be improved in general since (i) there might exist exponentially
many such repairs, and (ii) they cannot be enumerated in output polynomial
time. However, this set only needs to be computed once, when the error is found,
and can then be used to improve the reasoning time for all subsequent subsump-
tion relations. Once RepT (C v D) is known, Algorithm 1 computes R, and hence
decides brave and cautious reasoning, in time polynomial on |T |·|RepT (C v D)|.
    Clearly, Algorithm 1 does more than merely deciding cautious and brave
consequences. Indeed, it computes the set of all repairs that entail C 0 v D0 . This
information can be used to decide more complex reasoning tasks. For instance,
Algorithm 2 Decide cautious and brave subsumption
Input: Labelled TBox T , concepts C 0 , D0
  procedure is-brave(T , C 0 , D0 )
     for each M ∈ MinAT (C 0 v D0 ) do
        if lab(M) 6= ∅ then
            return true
     return false
  procedure is-cautious(T , C 0 , D0 )
     ν←∅
     for each M ∈ MinAT (C 0 v D0 ) do
        ν ← ν ∪ lab(M)
        if ν = {1, . . . , n} then
            return true
     return false


one may be interested in knowing whether the consequence follows from most,
or at least k repairs, to mention just two possible inferences. IAR semantics can
also be decided in polynomial time on T and RepT (C v D): simply compute
Q = ⋂_{R∈RepT (CvD)} R, and test whether C 0 vQ D0 holds. The first step needs
polynomial time on RepT (C v D) while the second is polynomial on Q ⊆ T .
     As we have seen, precompiling the set of repairs already yields an improve-
ment on the time required for deciding error-tolerant subsumption relations.
However, there are some obvious drawbacks to this idea. In particular, storing
and maintaining a possibly exponential set of TBoxes can be a challenge in it-
self. Moreover, this method does not scale well for handling multiple errors that
are found at different time points. When a new error is detected, the repairs
of all the TBoxes need to be computed, potentially causing the introduction of
redundant TBoxes that must later be removed. We improve on this solution by
structuring all the repairs into a single labelled TBox.
     Let RepT (C v D) = {R1 , . . . , Rn }. We label every GCI E v F ∈ T with
lab(E v F ) = {i | E v F ∈ Ri }. Conversely, for every subset I ⊆ {1, . . . , n} we
define the TBox TI = {E v F ∈ T | lab(E v F ) = I}. A set I is a component if
TI 6= ∅. Every axiom belongs to exactly one component and hence the number of
components is bounded by |T |. One can represent these components using only
polynomial space and all repairs can be read from them and a directed acyclic
graph expressing dependencies between components. For simplicity we keep the
representation as subsets of {1, . . . , n}.
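A minimal Python sketch of this labelling, using the repairs of Example 2 with hypothetical axiom names a1..a5:

```python
def label_axioms(tbox, repairs):
    """lab(ax) = 1-based indices of the repairs that contain ax."""
    return {ax: frozenset(i for i, r in enumerate(repairs, 1) if ax in r)
            for ax in tbox}

def components(lab):
    """Partition axioms by their label; every axiom lies in exactly one
    component, so there are at most |T| components."""
    comp = {}
    for ax, l in lab.items():
        comp.setdefault(l, []).append(ax)
    return comp

# Example 2: axioms a1..a5 and repairs R1..R4 (complements of S1..S4):
tbox = ["a1", "a2", "a3", "a4", "a5"]
reps = [{"a2", "a4", "a5"},   # R1 = T \ S1
        {"a2", "a3", "a5"},   # R2 = T \ S2
        {"a1", "a4", "a5"},   # R3 = T \ S3
        {"a1", "a3", "a5"}]   # R4 = T \ S4
lab = label_axioms(tbox, reps)
```

In this example every axiom carries a distinct label, so there are five components; the axiom a5 (A v B 0 ) is labelled with all four repairs.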
     The labelled TBox has full information on the repairs, and on their rela-
tionship with each other. For S ⊆ T , lab(S) := ⋂_{EvF ∈S} lab(E v F ) yields all
repairs containing S. If M is a MinA for C 0 v D0 , lab(M) is a set of repairs
entailing this subsumption. Moreover, ν(C 0 v D0 ) := ⋃_{M∈MinAT (C 0 vD0 )} lab(M)
is the set of all repairs entailing C 0 v D0 . Thus, C 0 is bravely subsumed by D0
iff ν(C 0 v D0 ) 6= ∅ and is cautiously subsumed iff ν(C 0 v D0 ) = {1, . . . , n}. The
set ν(C 0 v D0 ) is the boundary for the subsumption C 0 v D0 w.r.t. the labelled
TBox T [4]. Several methods for computing the boundary exist. Since we are only
interested in deciding whether this boundary is empty or equal to {1, . . . , n}, we
can optimize the algorithm to stop once this decision is made. This optimized
method is described in Algorithm 2. The algorithm first computes all MinAs for
C 0 vT D0 , and their labels iteratively. If one of these labels is not empty, then the
subsumption is a brave consequence; the procedure is-brave then returns true.
Alternatively, is-cautious accumulates the union of all these labels in a set ν
until this set contains all repairs, at which point it returns true.
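The two procedures of Algorithm 2 can be sketched in Python as follows, again over the labelled axioms of Example 2; the axiom names and the dictionary encoding of lab are our own:

```python
def lab_of_mina(mina, lab):
    """lab(M): the repairs containing every axiom of the MinA M."""
    out = None
    for ax in mina:
        out = set(lab[ax]) if out is None else out & set(lab[ax])
    return out if out is not None else set()

def is_brave(minas, lab):
    """Brave: some MinA survives in at least one repair."""
    return any(lab_of_mina(m, lab) for m in minas)

def is_cautious(minas, lab, n):
    """Cautious: the union of the MinA labels covers all n repairs."""
    nu = set()
    for m in minas:
        nu |= lab_of_mina(m, lab)
        if nu == set(range(1, n + 1)):
            return True
    return False

# Labels from Example 2 (axioms a1..a5, repairs R1..R4):
lab = {"a1": {3, 4}, "a2": {1, 2}, "a3": {2, 4}, "a4": {1, 3},
       "a5": {1, 2, 3, 4}}
```

For instance, A v Y u B 0 has the single MinA {a3, a5} with label {2, 4}, so it is brave but not cautious, matching Example 4; the unwanted A v B itself, with MinAs {a1, a2} and {a3, a4}, is not even brave.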
    IAR semantics can be solved through one subsumption test, and hence in
polynomial time on the size of T , regardless of the number of repairs.
Theorem 15. Let n = |RepT (C v D)|. Then C 0 is IAR-subsumed by D0 iff
C 0 vTJ D0 , where J = {1, . . . , n}.
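Under the same labelling, Theorem 15 amounts to selecting the axioms whose label is the full set {1, . . . , n}; a sketch using the hypothetical labels of Example 2:

```python
def iar_tbox(tbox, lab, n):
    """T_J for J = {1, ..., n}: the axioms contained in every repair."""
    full = set(range(1, n + 1))
    return [ax for ax in tbox if set(lab[ax]) == full]

# Labels from Example 2 (axioms a1..a5, repairs R1..R4):
lab = {"a1": {3, 4}, "a2": {1, 2}, "a3": {2, 4}, "a4": {1, 3},
       "a5": {1, 2, 3, 4}}
Q = iar_tbox(["a1", "a2", "a3", "a4", "a5"], lab, 4)
```

Only a5 (A v B 0 ) survives, so a single subsumption test over {A v B 0 } decides IAR subsumption, as in Example 5.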
This shows that precompiling all repairs into a labelled ontology can help reduc-
ing the overall complexity and execution time of reasoning. In the next section,
we exploit the fact that the number of MinAs for consequences in ontologies used in
practice is relatively small and compute them using a saturation-based approach.


5     Implementation and Experiments
We ran two separate series of experiments. The goal of the first series was to
investigate the feasibility of error-tolerant reasoning in practice. We implemented
a prototype tool in Java that checks whether a concept subsumption C v D is
brave or cautious w.r.t. a given TBox T and a consequence C 0 v D0 . The tool
uses Theorem 6 and the duality between MinAs and repairs, i.e. the repairs for
C 0 v D0 w.r.t. T are obtained from the MinAs for C 0 v D0 by consecutively
removing the minimal hitting sets [27] of the MinAs from T . The tool first
computes all the MinAs for both inclusions C v D and C 0 v D0 w.r.t. T , and
then verifies whether some inclusions between MinAs for C v D and C 0 v
D0 hold to check for brave or cautious subsumptions. For the computation of
the MinAs we used a saturation-based approach based on a consequence-based
calculus [18]. More details regarding the computation of MinAs can be found
in [23].
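The MinA/repair duality used by the tool can be sketched as follows; the brute-force hitting-set computation and the placeholder axiom names are our own, not the Boolean algebraic algorithm of [22]:

```python
from itertools import combinations

def minimal_hitting_sets(family):
    """All minimal hitting sets of a family of sets, by brute force over
    subsets of the union, smallest first."""
    universe = sorted(set().union(*family))
    found = []
    for k in range(1, len(universe) + 1):
        for cand in map(set, combinations(universe, k)):
            if all(cand & s for s in family) and \
               not any(h <= cand for h in found):
                found.append(cand)
    return found

def repairs_from_minas(tbox, minas):
    """Duality: each repair is the TBox minus one minimal hitting set
    of the MinAs."""
    return [set(tbox) - h for h in minimal_hitting_sets(minas)]

# Example 2: the MinAs for A ⊑ B are {a1, a2} and {a3, a4}
reps = repairs_from_minas({"a1", "a2", "a3", "a4", "a5"},
                          [{"a1", "a2"}, {"a3", "a4"}])
```

The four minimal hitting sets of the two MinAs are exactly S1 , . . . , S4 of Example 2, so their complements are the four repairs, each of which keeps a5.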
    We selected three ontologies that are expressed mainly in EL and are typically
considered to pose different challenges to DL reasoners. These are the January
2009 international release of SNOMED, version 13.11d of the NCI thesaurus,2
and the GALEN-OWL ontology.3 All non-EL axioms (including axioms involving
roles only, e.g. role inclusion axioms) were first removed from the ontologies. The
number of axioms, concept names, and role names in the resulting ontologies is
shown in Table 2.
    For every ontology T we selected a number of inclusion chains of the form
A1 vT A2 vT A3 vT A4 , which were then grouped into
Type I inclusions, where A2 vT A4 was set as the unwanted consequence, and
Type II inclusions, where A2 vT A3 was defined as the unwanted consequence.
2
    http://evs.nci.nih.gov/ftp1/NCI_Thesaurus
3
    http://owl.cs.manchester.ac.uk/research/co-ode/
             Table 2. Metrics of the ontologies used in the experiments

              Ontology         #axioms     #concept names       #role names
              GALEN-OWL          45 499           23 136                404
              NCI               159 805          104 087                 92
              SNOMED            369 194          310 013                 58


For the NCI and SNOMED ontologies we chose inclusions A2 v A4 (for Type I)
and A2 v A3 (for Type II) that were not entailed by the consecutive version of
the considered ontology, i.e. those that can be considered to be “mistakes” fixed
in the consecutive release (the July 2009 international release of SNOMED and
version 13.12e of the NCI Thesaurus). 500 inclusions of each type were found for
SNOMED, but only 26 Type-I inclusions and 36 Type-II inclusions were detected
in the case of NCI. For the GALEN-OWL ontology 500 inclusions chains of each
type were chosen at random. For every Type-I chain, we then used our tool to
check whether the inclusion A1 v A3 is a brave or cautious consequence w.r.t.
A2 v A4 . Similarly, for every Type-II inclusion we checked whether A1 v A4 is
a brave or cautious consequence w.r.t. A2 v A3 .
    All experiments were conducted on a PC with an Intel Xeon E5-2640 CPU
running at 2.50GHz. An execution timeout of 30 CPU minutes was imposed
on each problem in this experiment series. The results obtained are shown in
Table 3. The first two columns indicate the ontology that was used and the
inclusion type. The next three columns show the number of successful computa-
tions within the time limit, and the number of brave and cautious subsumptions,
respectively. The average and the maximal number of MinAs over the considered
set of inclusions are shown in the next two columns. The left-hand side of each
of these columns refers to the MinAs obtained for the consequence for which its
brave or cautious entailment status should be checked, and the right-hand side
refers to the unwanted consequence. The last column shows the average CPU
time needed for the computations over each considered set of inclusions.
    The number of successful computations was the lowest for the experiments
involving SNOMED, whereas no timeouts were incurred for NCI. Moreover, the
highest average number of MinAs was found for Type-II inclusions for SNOMED
with a maximal number of 54. GALEN-OWL required the longest computation
times, which could be a consequence of the fact that the (full) GALEN ontology
is generally seen as being difficult to classify by DL reasoners. The shortest com-
putation times were reported for experiments involving NCI. All the successful
computations required at most 11 GiB of main memory.

Table 3. Experimental results obtained for checking brave and cautious subsumption

ontology   type #succ. comp. #brave #cautious avg. #MinAs max #MinAs avg. time (s)
GALEN   I        498 / 500      495        39   1.707 | 1.663      4|4        335.680
       II        500 / 500      268        48   2.068 | 1.388      6|2        331.823
NCI    I          26 / 26        26         2   1.269 | 1.154      2|3         13.465
       II         36 / 36        16         8   3.111 | 1.111      7|3         15.338
SNOMED I         302 / 500      296        17   1.652 | 1.656     42 | 12     161.471
       II        314 / 500      154        34   3.908 | 1.879     54 | 54     150.566
[Figure 1 plot: computation time in seconds (left y-axis, 0 to 4000) per problem
instance, and number of repairs (right y-axis, logarithmic scale up to 10^8);
series: Precompilation, Naive, Nr of Repairs.]

                              Fig. 1. Comparison of approaches for error-tolerant reasoning.

    In a second series of experiments we evaluated the advantages of performing
precompilation when checking the brave and cautious entailment status of several
inclusions w.r.t. an unwanted consequence. We implemented a slightly improved
version of Algorithm 1 which iterates over all the repairs for the unwanted conse-
quence and determines whether a consequence that should be checked is brave or
cautious by using the conditions from Definition 3. The implemented algorithm
stops as quickly as possible, e.g. when a non-entailing repair has been found, we
conclude immediately that the consequence is not cautious. The computation
of the repairs is implemented using the duality between MinAs and repairs as
described above. The minimal hitting sets were computed using the Boolean
algebraic algorithm from [22]. We used ELK [19] to check whether a given inclu-
sion follows from a repair. In the following we refer to this improved algorithm
as the naı̈ve approach.
    For comparing the performance of Algorithms 1 and 2 in practice, we selected
226 inclusions between concept names from SNOMED having more than 10
MinAs, with a maximum number of 223. For each inclusion A v B we randomly
chose five inclusions A0i v Bi0 entailed by SNOMED, and tested whether A0i v Bi0
is a brave or cautious subsumption w.r.t. A v B for every i ∈ {1, . . . , 5} using
the naı̈ve approach and Algorithm 2. In this series of experiments we allowed
each problem instance to run for at most 3600 CPU seconds, and 4 GiB of heap
memory, and 16 GiB of main memory in total, were allocated to the Java VM.
   The results obtained are depicted in Figure 1. The problem instances A v B
are sorted ascendingly along the x-axis according to the number of repairs for
A v B. The required computation times for each problem instance (checking
whether the five subsumptions are brave or cautious entailments w.r.t. the un-
wanted consequence) are shown along the y-axis on the left-hand side of the
graph. If no corresponding y-value is shown for a given problem instance, the
computation either timed out or ran out of memory. The number of repairs for
the unwanted consequences is depicted with a solid line, and the corresponding
values can be read off the y-axis on the right-hand side in logarithmic scale.
    One can see that a relatively small number of MinAs can lead to several thou-
sands (up to over 14 million) of repairs. Also, if the number of repairs remains
small, i.e. below 400, the naïve approach performs fairly well, even outperforming
the precompilation approach on a few problem instances. For larger numbers
of repairs, however, none of the computations for the naïve approach succeeded.
The time required to perform reasoning with ELK outweighs the computation
times of all the MinAs for the precompilation approach. In total 115 instances
could be solved by the precompilation approach, whereas only 39 computations
finished when the naı̈ve approach was used. In our experiments the computation
of the MinAs was typically the most time consuming part; the computation of
the repairs once all the MinAs were available could be done fairly quickly.


6   Conclusions
We introduced error-tolerant reasoning, inspired by inconsistency-tolerant
semantics from DLs and by consistent query answering over inconsistent
databases. The main difference is that we allow for a general notion of error
beyond inconsistency. We studied brave, cautious, and IAR reasoning, which
depend on the class of repairs from which a consequence can be derived.
Although we focused on subsumption w.r.t. EL TBoxes, these notions easily
extend to any kind of monotonic consequence in a logical language.
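Given the set of all repairs and an entailment oracle, the three semantics are straightforward to state. The following Python sketch illustrates them on the HL fragment of atomic subsumptions A ⊑ B, where a TBox is encoded as a set of pairs and entailment reduces to graph reachability; this encoding is our assumption, chosen purely for illustration.

```python
def entails(axioms, query):
    """Toy entailment check for a TBox of atomic subsumptions A ⊑ B,
    encoded as pairs (A, B): the subsumption holds iff B is reachable
    from A in the directed subsumption graph."""
    a, b = query
    if a == b:
        return True
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        for (l, r) in axioms:
            if l == x and r not in seen:
                if r == b:
                    return True
                seen.add(r)
                stack.append(r)
    return False


def classify(repairs, query):
    """Decide whether the query is a brave consequence (entailed by some
    repair), a cautious one (entailed by every repair), or an IAR
    consequence (entailed by the intersection of all repairs)."""
    brave = any(entails(r, query) for r in repairs)
    cautious = all(entails(r, query) for r in repairs)
    iar = entails(frozenset.intersection(*repairs), query)
    return brave, cautious, iar
```

For instance, for the TBox {A ⊑ B, B ⊑ C, A ⊑ D} with unwanted consequence A ⊑ C, the two repairs are {B ⊑ C, A ⊑ D} and {A ⊑ B, A ⊑ D}; then A ⊑ D is a cautious (and IAR) consequence, while A ⊑ B is only a brave one.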
    Our results show that error-tolerant reasoning is hard in general for EL,
although brave reasoning remains polynomial for some of its sublogics.
Interestingly, IAR semantics, which was introduced to regain tractability of
inconsistency-tolerant query answering in lightweight DLs, is coNP-hard even
for the basic logic HL with core axioms. Moreover, the number of repairs is
not the only culprit for the hardness of these tasks: unless P = NP, no
algorithm can decide brave or cautious reasoning in time polynomial in the
size of T and the number of repairs.
    To overcome the complexity issues, we propose to compile the repairs into a
labeled ontology. While the compilation step may require exponential time, after
its execution IAR semantics can be decided in polynomial time, and brave and
cautious semantics become repair-polynomial. Surprisingly, the idea of precom-
puting the set of all repairs to improve the efficiency of reasoning seems to have
been overlooked in inconsistency-tolerant reasoning.
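The effect of this compilation can be sketched as follows (our illustration, with repairs identified by indices and axioms by hashable values): each axiom is labeled with the set of repairs containing it, and since a repair entails a consequence exactly when it contains some MinA of that consequence in its entirety, the label of the consequence is the union, over its MinAs, of the intersections of its axioms' labels.

```python
def axiom_labels(tbox, repairs):
    # Precompilation: annotate every axiom with the indices of the
    # repairs that contain it.
    return {ax: frozenset(i for i, r in enumerate(repairs) if ax in r)
            for ax in tbox}


def consequence_label(minas, labels):
    # A repair entails the consequence iff it fully contains some MinA,
    # so we take the union over MinAs of the intersected axiom labels.
    lab = frozenset()
    for mina in minas:
        lab |= frozenset.intersection(*(labels[ax] for ax in mina))
    return lab


def error_tolerant_status(minas, labels, n_repairs):
    # Brave: some repair entails it; cautious: every repair does.
    lab = consequence_label(minas, labels)
    return bool(lab), len(lab) == n_repairs
```

Once the labels are computed, deciding the status of a consequence only requires its MinAs and set operations on the labels, with no further reasoner calls.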
    To investigate the feasibility of error-tolerant reasoning in practice, we
developed a prototype that computes all MinAs and annotates each axiom with
the repairs it belongs to. Our experiments show that, despite their theoretical
complexity, brave and cautious reasoning can be performed successfully in many
practical cases, even for large ontologies. Our saturation-based procedure can
detect a large number of MinAs for some consequences in a fairly short amount
of time. There is a close connection between error-tolerant reasoning and axiom
pinpointing [6, 7]; our labeled-ontology method also relates to context-based
reasoning [4]. Techniques developed for those areas, such as automata-based
pinpointing methods [5], could be useful in this setting.
    It is known that for some inexpressive DLs, all MinAs can be enumerated in
output-polynomial time [24, 25]; to the best of our knowledge, the complexity
of enumerating their repairs has not been studied. We will investigate whether
enumerating repairs is also possible in output-polynomial time in those logics,
which would imply that error-tolerant reasoning is repair-polynomial.
References

 1. Arenas, M., Bertossi, L., Chomicki, J.: Consistent query answers in inconsistent
    databases. In: Proceedings of the 18th ACM SIGMOD-SIGACT-SIGART sympo-
    sium on Principles of Database Systems (PODS 1999). pp. 68–79. ACM (1999)
 2. Baader, F.: Terminological cycles in a description logic with existential restric-
    tions. In: Gottlob, G., Walsh, T. (eds.) Proceedings of the 18th International Joint
    Conference on Artificial Intelligence (IJCAI’03). pp. 325–330. Morgan Kaufmann
    (2003)
 3. Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D., Patel-Schneider, P.F.
    (eds.): The Description Logic Handbook: Theory, Implementation, and Applica-
    tions. Cambridge University Press, 2nd edn. (2007)
 4. Baader, F., Knechtel, M., Peñaloza, R.: Context-dependent views to axioms and
    consequences of semantic web ontologies. Journal of Web Semantics 12–13, 22–40
    (2012), available at http://dx.doi.org/10.1016/j.websem.2011.11.006
 5. Baader, F., Peñaloza, R.: Automata-based axiom pinpointing. Journal of Auto-
    mated Reasoning 45(2), 91–129 (August 2010)
 6. Baader, F., Peñaloza, R.: Axiom pinpointing in general tableaux. Journal of Logic
    and Computation 20(1), 5–34 (2010)
 7. Baader, F., Peñaloza, R., Suntisrivaraporn, B.: Pinpointing in the description logic
    EL+ . In: Proceedings of the 30th German Conference on Artificial Intelligence
    (KI2007). LNAI, vol. 4667, pp. 52–67. Springer, Osnabrück, Germany (2007)
 8. Baader, F., Suntisrivaraporn, B.: Debugging SNOMED CT using axiom pinpoint-
    ing in the description logic EL+ . In: Proceedings of the 3rd Knowledge Repre-
    sentation in Medicine (KR-MED’08): Representing and Sharing Knowledge Using
    SNOMED. CEUR-WS, vol. 410 (2008)
 9. Bertossi, L.: Database repairing and consistent query answering. Synthesis Lectures
    on Data Management 3(5), 1–121 (2011)
10. Bienvenu, M.: On the complexity of consistent query answering in the presence
    of simple ontologies. In: Proceedings of the 26th National Conference on Artificial
    Intelligence (AAAI 2012) (2012)
11. Bienvenu, M., Rosati, R.: Tractable approximations of consistent query answering
    for robust ontology-based data access. In: Rossi, F. (ed.) Proceedings of the 23rd
    International Joint Conference on Artificial Intelligence (IJCAI’13). AAAI Press
    (2013)
12. Brandt, S.: Polynomial time reasoning in a description logic with existential re-
    strictions, GCI axioms, and - what else? In: de Mántaras, R.L., Saitta, L. (eds.)
    Proceedings of the 16th European Conference on Artificial Intelligence, (ECAI
    2004). pp. 298–302. IOS Press (2004)
13. Côté, R., Rothwell, D., Palotay, J., Beckett, R., Brochu, L.: The systematized
    nomenclature of human and veterinary medicine. Tech. rep., SNOMED Interna-
    tional, Northfield, IL: College of American Pathologists (1993)
14. Eiter, T., Gottlob, G.: Identifying the minimal transversals of a hypergraph and
    related problems. Tech. Rep. CD-TR 91/16, Christian Doppler Laboratory for
    Expert Systems, TU Vienna (1991)
15. Gallo, G., Longo, G., Pallottino, S.: Directed hypergraphs and applications. Dis-
    crete Applied Mathematics 42(2), 177–201 (1993)
16. Johnson, D.S., Yannakakis, M., Papadimitriou, C.H.: On generating all maximal
    independent sets. Information Processing Letters 27(3), 119–123 (1988)
17. Kalyanpur, A., Parsia, B., Horridge, M., Sirin, E.: Finding all justifications of OWL
    DL entailments. In: Proceedings of the 6th International Semantic Web Conference
    and 2nd Asian Semantic Web Conference, ISWC 2007, ASWC 2007. LNCS, vol.
    4825, pp. 267–280. Springer (2007)
18. Kazakov, Y.: Consequence-driven reasoning for Horn SHIQ ontologies. In:
    Boutilier, C. (ed.) Proceedings of the 21st International Joint Conference on Arti-
    ficial Intelligence (IJCAI’09). pp. 2040–2045 (2009)
19. Kazakov, Y., Krötzsch, M., Simančík, F.: The incredible ELK: From polynomial
    procedures to efficient reasoning with EL ontologies. Journal of Automated Rea-
    soning (2013), to appear
20. Lapaugh, A.S., Papadimitriou, C.H.: The even-path problem for graphs and
    digraphs. Networks 14(4), 507–513 (1984), http://dx.doi.org/10.1002/net.
    3230140403
21. Lembo, D., Lenzerini, M., Rosati, R., Ruzzi, M., Savo, D.F.: Inconsistency-tolerant
    semantics for description logics. In: Hitzler, P., Lukasiewicz, T. (eds.) Proceedings
    of the 4th International Conference on Web Reasoning and Rule Systems (RR’10).
    LNCS, vol. 6333, pp. 103–117. Springer (2010)
22. Lin, L., Jiang, Y.: The computation of hitting sets: Review and new algorithms.
    Information Processing Letters 86(4), 177–184 (2003)
23. Ludwig, M.: Just: a tool for computing justifications w.r.t. EL ontologies. In: Pro-
    ceedings of the 3rd International Workshop on OWL Reasoner Evaluation (ORE
    2014) (2014)
24. Peñaloza, R., Sertkaya, B.: Complexity of axiom pinpointing in the DL-Lite family
    of description logics. In: Coelho, H., Studer, R., Wooldridge, M. (eds.) Proceedings
    of the 19th European Conference on Artificial Intelligence, (ECAI 2010). Frontiers
    in Artificial Intelligence and Applications, vol. 215, pp. 29–34. IOS Press (2010)
25. Peñaloza, R., Sertkaya, B.: On the complexity of axiom pinpointing in the EL family
    of description logics. In: Lin, F., Sattler, U., Truszczynski, M. (eds.) Proceedings of
    the Twelfth International Conference on Principles of Knowledge Representation
    and Reasoning (KR 2010). AAAI Press (2010)
26. Peñaloza, R.: Axiom-Pinpointing in Description Logics and Beyond. Ph.D. thesis,
    Dresden University of Technology, Germany (2009)
27. Reiter, R.: A theory of diagnosis from first principles. Artificial Intelligence 32(1),
    57–95 (1987)
28. Rosati, R.: On the complexity of dealing with inconsistency in description logic on-
    tologies. In: Walsh, T. (ed.) Proceedings of the 22nd International Joint Conference
    on Artificial Intelligence (IJCAI’11). pp. 1057–1062. AAAI Press (2011)
29. Spackman, K.: Managing clinical terminology hierarchies using algorithmic calcu-
    lation of subsumption: Experience with SNOMED-RT. Journal of the American
    Medical Informatics Association (2000), Fall Symposium Special Issue
30. W3C OWL Working Group: OWL 2 web ontology language document overview.
    W3C Recommendation (2009), http://www.w3.org/TR/owl2-overview/