    Goal-Directed Tracing of Inferences in EL Ontologies

                            Yevgeny Kazakov and Pavel Klinov

                             The University of Ulm, Germany
                        {yevgeny.kazakov, pavel.klinov}@uni-ulm.de


       Abstract. EL is a family of tractable Description Logics (DLs) that is the basis
       of the OWL 2 EL profile. Unlike for many expressive DLs, reasoning in EL can
       be performed by computing a deductively-closed set of logical consequences of
       some specific form. In some ontology-based applications, e.g., for ontology de-
       bugging, knowing the logical consequences of the ontology axioms is often not
       sufficient. The user also needs to know from which axioms and how the conse-
       quences were derived. Although it is possible to keep track of all inferences ap-
       plied during reasoning, this is usually not done in practice to avoid the overheads.
       In this paper, we present a goal-directed method that can generate inferences for
       selected consequences in the deductive closure without re-applying all rules from
       scratch. We provide an empirical evaluation demonstrating that the method is fast
       and economical for large EL ontologies. Although the main benefits are demon-
       strated for EL reasoning, the method can be easily extended to other procedures
       based on deductive closure computation using fixed sets of rules.


1    Introduction and Motivation
The majority of existing DL reasoners are based on optimized (hyper)tableau-based
procedures which essentially work by trying to construct counter-models for the en-
tailment [19, 15, 20]. If the reasoner cannot find a counter-model after trying all alternatives, it declares that the entailment holds. It is not easy to use such procedures to
generate an explanation for the entailment, or even to determine which axioms are re-
sponsible for the entailment, because the axioms that were used to construct the models
are not necessarily the ones causing the clash. Recently, another kind of reasoning procedure, which works by deriving logical consequences of ontology axioms directly, has become popular. Such consequence-based procedures were first introduced for
the EL family of tractable ontology languages [1], and later the same principle has
been extended to more expressive (non-tractable) languages such as Horn-SHIQ and
ALCH [9, 18]. The consequence-based procedures work by computing the closure un-
der the rules by forward chaining. The inference rules make sure that the result is com-
plete—all entailed conclusions of interest can be simply read off of the closure.
    It is easy to extend any consequence-based procedure so that for each derived con-
clusion, it also records the inferences that have produced it. This way, one can easily
generate proofs for the entailed conclusions. Unfortunately, saving all applied infer-
ences during reasoning is not practical, as each conclusion could be derived in many
ways and storing all inferences requires a lot of memory. In practice, one usually does
not need to retrieve all inferences, but just inferences for some particular (e.g., unex-
pected) consequences. In this paper, we demonstrate how these inferences can be traced
in a goal-directed way using the pre-computed set of conclusions. The main idea is to split the conclusions into small partitions so that most inferences apply within each individual partition. It is then possible to re-compute the inferences for conclusions within each partition by forward chaining, using conclusions from other partitions as a set of support. A similar idea has recently been used for incremental reasoning in EL+ [10]. We demonstrate empirically that only a small fraction of inferences is produced when generating proofs for EL+ consequences and that the inferences can be computed in just a few milliseconds even for ontologies with hundreds of thousands of axioms. Some omitted details, e.g., proofs, can be found in the technical report [11].


2     Related Work
In the context of ontologies, most research relevant to the issue of explanations has
revolved around justifications—the minimal subsets of the ontology which entail the
result [5, 8, 3, 7, 16]. For simple ontologies, justifications are often small and can be
computed using generic axiom pinpointing procedures, e.g., based on model-based diagnosis [17], that use the decision procedure for the target (monotonic) logic as a black box. While these procedures can significantly narrow down the set of relevant axioms,
it is still up to the user to understand how to obtain the result from the justifications.
     The fact that justifications are minimal subsets is useful to the user but also makes
them intractable to compute [16]. Much research has gone into developing optimiza-
tions to cope with this complexity in practical cases. For example, employing ontol-
ogy modularity techniques for black-box methods can be seen as a step towards goal-
directed behavior [5]. The so-called locality-based modules [6] are well-defined frag-
ments of the ontology which, first, can be computed fast using syntactic analysis and,
second, contain all justifications for a given entailment. Modules bound the search for justifications to an (often small) set of axioms. Using our goal-directed tracing procedure, one can limit the sets of relevant axioms even further: it is sufficient to take all
axioms used in side conditions of the computed inferences to obtain all axioms that
could be used in proofs. On the other hand, the glass-box methods (e.g., [3, 4]) tackle
the problem by using the properties of the underlying reasoning procedure (instead of
using it as a black-box). They are, however, not goal-directed and work similarly to what we call ‘full tracing’ below. Our method, in contrast, exploits specific properties of the inference system while also performing its computations in a goal-directed way.


3     Preliminaries
3.1   The Description Logic EL+
The syntax of EL+ is defined using a vocabulary consisting of countably infinite sets
of (atomic) roles and atomic concepts. EL+ concepts are defined using the grammar
C ::= A | > | C1 u C2 | ∃R.C, where A is an atomic concept, R an atomic role, and
C, C1 , C2 ∈ C. An EL+ axiom is either a concept inclusion C1 v C2 for C1 , C2 ∈ C,
a role inclusion R v S, or a role composition R1 ◦ R2 v S, where R, R1 , R2 , S are
role names. An EL+ ontology O is a finite set of EL+ axioms. Entailment of axioms by an ontology is defined in the usual way; a formal definition can be found in [11]. A concept C is subsumed by D w.r.t. O if O |= C v D. In this case, we call C v D an entailed subsumption (w.r.t. O). The ontology classification task requires computing all entailed subsumptions between atomic concepts occurring in O.


3.2        Inferences, Inference Rules, and Proofs

Let Exp be a fixed countable set of expressions. An inference over Exp is an object inf which is assigned a finite set of premises inf.Premises ⊆ Exp and a conclusion inf.conclusion ∈ Exp.1 When inf.Premises = ∅, we say that inf is an initialization inference. An inference rule R over Exp is a countable set of inferences over Exp; it is an initialization rule if all these inferences are initialization inferences. In this paper, we view an inference system as a single inference rule R representing all of its inferences.
    We say that a set of expressions Exp ⊆ Exp is closed under an inference inf if
inf.Premises ⊆ Exp implies inf.conclusion ∈ Exp. Exp is closed under an inference
rule R if Exp is closed under every inference inf ∈ R. The closure under R is the
smallest set of expressions Closure(R) that is closed under R. Note that Closure(R) is
always empty if R does not contain initialization inferences.
    We will often restrict inference rules to subsets of premises. Let Exp ⊆ Exp be a set of expressions, and R an inference rule. By R(Exp) (respectively R[Exp] and R⟨Exp⟩) we denote the rule consisting of all inferences inf ∈ R such that inf.Premises ⊆ Exp (respectively Exp ⊆ inf.Premises, and inf.conclusion ∈ Exp). These filters can be combined arbitrarily: for example, R[Exp1](Exp2)⟨Exp3⟩ consists of those inferences in R whose premises contain all expressions from Exp1 and are a subset of Exp2, and whose conclusion is in Exp3. The order of the filters is not relevant: R[Exp1](Exp2)⟨Exp3⟩ is the same as, e.g., R(Exp2)⟨Exp3⟩[Exp1]. For simplicity, we write R(), R[], R(exp), R[exp], and R⟨exp⟩ instead of R(∅), R[∅], R({exp}), R[{exp}], and R⟨{exp}⟩ respectively. Note that R[] = R⟨Exp⟩ = R and R() consists of all initialization inferences in R.
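For a rule R given as an explicit finite set of inferences, these filters can be computed directly. Below is a small Python sketch (our own encoding, not from the paper): an inference is a (premises, conclusion) pair, and the three filters are ordinary set comprehensions.

```python
def f_paren(R, Exp):
    """R(Exp): inferences whose premises are all contained in Exp."""
    return {inf for inf in R if inf[0] <= Exp}

def f_bracket(R, Exp):
    """R[Exp]: inferences whose premises contain all of Exp."""
    return {inf for inf in R if Exp <= inf[0]}

def f_angle(R, Exp):
    """R<Exp>: inferences whose conclusion is in Exp."""
    return {inf for inf in R if inf[1] in Exp}

# A toy rule with three inferences; R() is the set of initialization inferences.
R = {
    (frozenset(), "a"),             # initialization inference
    (frozenset({"a"}), "b"),
    (frozenset({"a", "b"}), "c"),
}
```

Since each filter tests an independent condition on an inference, applying them in any order yields the same result, matching the remark above.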
    A proof (in R) is a sequence of inferences p = inf1, . . . , infn (infi ∈ R, 1 ≤ i ≤ n) such that infj.Premises ⊆ {infi.conclusion | 1 ≤ i < j} for each j with 1 ≤ j ≤ n. If exp = infn.conclusion then we say that p is a proof for exp. A proof p = inf1, . . . , infn for exp is minimal if no strict sub-sequence of inf1, . . . , infn is a proof for exp. Note that in this case infi.conclusion ≠ infj.conclusion whenever i ≠ j (1 ≤ i, j ≤ n).
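These definitions can be checked mechanically. The following Python sketch (our own encoding of an inference as a (premises, conclusion) pair; not from the paper) verifies the proof condition and, by brute force, minimality:

```python
from itertools import combinations

def is_proof(p, exp):
    """p = inf_1, ..., inf_n is a proof for exp if each inference uses only
    premises that are conclusions of strictly earlier inferences, and the
    last inference concludes exp."""
    derived = set()
    for premises, conclusion in p:
        if not premises <= derived:
            return False
        derived.add(conclusion)
    return len(p) > 0 and p[-1][1] == exp

def is_minimal_proof(p, exp):
    """Minimality by definition: no strict sub-sequence of p proves exp
    (exponential brute force, fine for tiny proofs)."""
    return is_proof(p, exp) and not any(
        is_proof([p[i] for i in idx], exp)
        for r in range(len(p)) for idx in combinations(range(len(p)), r)
    )

# A two-step proof: an initialization inference, then one rule application.
proof = [(frozenset(), "C v C"), (frozenset({"C v C"}), "C v D")]
```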


3.3        The Reasoning Procedure for EL+

The EL+ reasoning procedure works by applying inference rules to derive subsumptions between concepts. In this paper, we use a variant of the rules that does not require normalization of the input ontology [13]. The rules for EL+ are given in Fig. 1; each rule lists its premises (if any), its conclusion, and, after the colon, possible side conditions that restrict the expressions to which the rule is applicable. For example, rule R+u contains one inference inf for each C, D1, D2 such that D1 u D2 occurs in O, with inf.Premises = {C v D1, C v D2} and inf.conclusion = C v D1 u D2.1 The side conditions of rules R∃ and R◦ use the closure v∗O of role inclusion axioms, that is, the smallest reflexive transitive relation such that R v S ∈ O implies R v∗O S. Note that the axioms in the ontology O are used only in the side conditions of the rules and never as premises.

   1 There can be different inferences with the same sets of premises and the same conclusion.

  R0 : conclude C v C : C occurs in O
  R> : conclude C v > : C and > occur in O
  Rv : from C v D conclude C v E : D v E ∈ O
  R−u : from C v D1 u D2 conclude C v D1 and C v D2
  R+u : from C v D1 and C v D2 conclude C v D1 u D2 : D1 u D2 occurs in O
  R∃ : from E v ∃R.C and C v D conclude E v ∃S.D : ∃S.D occurs in O and R v∗O S
  R◦ : from E v ∃R1.C and C v ∃R2.D conclude E v ∃S.D : S1 ◦ S2 v S ∈ O, R1 v∗O S1, R2 v∗O S2

                 Fig. 1. The inference rules for reasoning in EL+
    The rules in Fig. 1 are complete for deriving subsumptions between the concepts
occurring in the ontology. That is, if O |= C v D for C and D occurring in O, then
C v D can be derived using the rules in Fig. 1 [13]. Therefore, in order to classify
the ontology, it is sufficient to compute the closure under the rules and take the derived
subsumptions between atomic concepts. This procedure is polynomial since it derives
only subsumptions of the form C v D and C v ∃R.D where C and D occur in
O. Example 1 illustrates the application of rules in Fig. 1 for deriving the entailed
subsumption relations.
Example 1. Consider the EL+ ontology O consisting of the following axioms:
A v ∃R.B, B v ∃S.A, ∃H.B v C, ∃S.C v C, R v H.
Then subsumption B v C has the following proof using the rules in Fig. 1. We show
the premises used in the inferences in parentheses and side conditions after the colon:
      AvA           by R0 () : A occurs in O,                                         (1)
      BvB           by R0 () : B occurs in O,                                         (2)
      A v ∃R.B      by Rv (A v A) : A v ∃R.B ∈ O,                                     (3)
      B v ∃S.A      by Rv (B v B) : B v ∃S.A ∈ O,                                     (4)
      A v ∃H.B      by R∃ (A v ∃R.B, B v B) : ∃H.B occurs in O, R v∗O H,              (5)
      AvC           by Rv (A v ∃H.B) : ∃H.B v C ∈ O,                                  (6)
      B v ∃S.C      by R∃ (B v ∃S.A, A v C) : ∃S.C occurs in O, S v∗O S,              (7)
      BvC           by Rv (B v ∃S.C) : ∃S.C v C ∈ O.                                  (8)
The inferences producing the subsumptions (1)–(8) are shown graphically in Fig. 2.
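The derivations above can be reproduced with a naive fixpoint computation. The following Python sketch uses our own encoding (atomic concepts as strings, ∃R.C as a ('exists', R, C) tuple) and implements only the rules R0, Rv, and R∃ actually used in Example 1; it is an illustration, not ELK's implementation:

```python
def ex(r, c):
    """Encode the concept ∃r.c as a tuple."""
    return ("exists", r, c)

# Ontology O of Example 1: A v ∃R.B, B v ∃S.A, ∃H.B v C, ∃S.C v C, R v H
told = {"A": [ex("R", "B")], "B": [ex("S", "A")],
        ex("H", "B"): ["C"], ex("S", "C"): ["C"]}
roles = ["R", "S", "H"]
sub_role = {("R", "R"), ("S", "S"), ("H", "H"), ("R", "H")}   # the closure v*_O
occurs = {"A", "B", "C", ex("R", "B"), ex("S", "A"), ex("H", "B"), ex("S", "C")}

def classify():
    """Naive fixpoint over rules R0, Rv and R∃ from Fig. 1."""
    S = set()
    while True:
        new = {(c, c) for c in occurs}                  # R0
        for (c, d) in S:                                # Rv: D v E ∈ O
            for e in told.get(d, ()):
                new.add((c, e))
        for (e, rc) in S:                               # R∃
            if isinstance(rc, tuple):
                _, r, c = rc
                for (c2, d) in S:
                    for s in roles:
                        if c2 == c and (r, s) in sub_role and ex(s, d) in occurs:
                            new.add((e, ex(s, d)))
        if new <= S:                                    # fixpoint reached
            return S
        S |= new

closure = classify()
```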


  A v A  --Rv-->  A v ∃R.B  --R∃-->  A v ∃H.B  --Rv-->  A v C
  B v B  --Rv-->  B v ∃S.A  --R∃-->  B v ∃S.C  --Rv-->  B v C
  (A v A and B v B are produced by R0; the R∃ inference deriving A v ∃H.B
  also uses the premise B v B, and the R∃ inference deriving B v ∃S.C also
  uses the premise A v C)

                 Fig. 2. The inference diagram for the proof in Example 1

3.4    Computing the Closure under Inference Rules

Computing the closure under inference rules, such as those in Fig. 1, can be performed using the well-known forward chaining procedure detailed in Algorithm 1. The algorithm derives consequences by applying the inferences in R; the conclusions to which all inferences have already been applied are collected in a set S (the saturation), and the remaining conclusions are kept in a queue Q. The algorithm first initializes Q with the conclusions of the initialization inferences R() ⊆ R (lines 2–3), and then, in a loop (lines 4–9), repeatedly takes the next expression exp ∈ Q, if any, inserts it into S if it does not occur there, and applies all inferences inf ∈ R[exp](S) having this expression as one of the premises and the other premises from S. The conclusions of such inferences are then inserted back into Q. Note that every inference in R(S) is applied by the algorithm exactly once: this happens in line 8 exactly when the last premise exp of the inference is added to S.

  Algorithm 1: Computing the inference closure by saturation
    saturation(R):
    input : R: a set of inferences
    output : S = Closure(R)
 1  S, Q ← ∅;
 2  for inf ∈ R() do                                  /* initialize */
 3      Q.add(inf.conclusion);
 4  while Q ≠ ∅ do                                    /* close */
 5      exp ← Q.takeNext();
 6      if exp ∉ S then
 7          S.add(exp);
 8          for inf ∈ R[exp](S) do
 9              Q.add(inf.conclusion);
10  return S;
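Algorithm 1 transcribes almost literally into Python. In the sketch below (our naming, not from the paper), the abstract rule R is supplied as two callables: an iterable of the conclusions of R() and a function enumerating the conclusions of R[exp](S). The demo rule computes the transitive closure of a binary relation:

```python
from collections import deque

def saturation(initial_conclusions, match_premise):
    """Algorithm 1: compute the closure by forward chaining.
    `initial_conclusions` iterates over the conclusions of R();
    `match_premise(exp, S)` iterates over the conclusions of R[exp](S)."""
    S = set()
    Q = deque(initial_conclusions)
    while Q:
        exp = Q.popleft()
        if exp not in S:
            S.add(exp)
            for concl in match_premise(exp, S):
                Q.append(concl)
    return S

# Demo rule: from (x, y) and (y, z) derive (x, z) (transitivity).
edges = [("a", "b"), ("b", "c"), ("c", "d")]

def match_transitivity(exp, S):
    x, y = exp
    for (u, v) in S:
        if u == y:
            yield (x, v)
        if v == x:
            yield (u, y)

closure = saturation(edges, match_transitivity)
```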


4       Tracing of Inferences
In this section we describe algorithms for computing the inferences in R deriving given
expressions in the closure S under R. Depending on a particular application, one might
be interested in computing all inferences for every given expression or just one inference
per expression. One might also be interested in unfolding such inferences recursively,
that is, computing for each inference also (some or all) inferences deriving its premises.
This can be useful for generation of full proofs or for interactive debugging.
    Our algorithms are based on ‘tracing of inferences’, that is, we record the relevant inferences during the forward chaining application of the rules. This has the advantage that one can reuse the existing forward chaining infrastructure without requiring additional indexes, e.g., for identifying matching premises of the rules by their conclusions.
  Algorithm 2: Goal-directed tracing of inferences
    goalDirectedTracing(R, S, Exp):
    input : R: a set of inferences; S, Exp ⊆ S ∩ Closure(R(S)): sets of expressions
    output : M: a multimap from expressions to inferences such that M.Keys = Exp and for
             each exp ∈ Exp we have that M(exp) = R(S)⟨exp⟩
 1  M, Q ← ∅;
 2  for inf ∈ R(S \ Exp)⟨Exp⟩ do                      /* initialize */
 3      Q.add(inf);
 4  while Q ≠ ∅ do                                    /* close */
 5      inf ← Q.takeNext();
 6      exp ← inf.conclusion;
 7      if exp ∉ M.Keys then
 8          M.add(exp ↦ inf);
 9          for inf ∈ R[exp](M.Keys ∪ (S \ Exp))⟨Exp⟩ do
10              Q.add(inf);
11      else
12          M.add(exp ↦ inf);
13  return M;



4.1    Full Tracing
It is relatively straightforward to adapt Algorithm 1 so that all inferences applied in the computation are saved (cf. [11] for a formal presentation). Instead of collecting the expressions derived by the inferences in S, we collect the inferences themselves and store them in a multimap M. A multimap is like a map, except that it can store several values (in our case inferences) for the same key (in our case the conclusion of the inferences). The newly found inferences are also processed through the queue Q. If the conclusion of an inference has not been derived before by any other inference, it is added to M, and all inferences that use this conclusion together with previously derived conclusions are produced. Otherwise, we just store the new inference and do nothing else. The algorithm can easily be adapted to store only one traced inference per conclusion: for that it suffices to make M an ordinary map. Like Algorithm 1, closure computation with full tracing generates every inference at most once and thus retains polynomial complexity.
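A hypothetical Python sketch of this full-tracing variant (our naming): the multimap M maps each conclusion to the list of all inferences that produced it, and rules are re-applied only the first time a conclusion appears. Inferences are (premises, conclusion) pairs; the demo reuses the transitivity rule:

```python
from collections import deque

def saturation_full_tracing(initial_inferences, match_premise):
    """Like Algorithm 1, but M records, for each conclusion, every inference
    that produced it. `initial_inferences` enumerates R() and
    `match_premise(exp, S)` enumerates R[exp](S)."""
    M = {}                                    # conclusion -> [inferences]
    Q = deque(initial_inferences)
    while Q:
        inf = Q.popleft()
        premises, concl = inf
        if concl not in M:
            M[concl] = [inf]
            for new_inf in match_premise(concl, set(M)):
                Q.append(new_inf)
        else:
            M[concl].append(inf)              # alternative derivation only
    return M

# Demo: the transitivity rule, now yielding whole inferences.
edges = [("a", "b"), ("b", "c")]
init = [(frozenset(), e) for e in edges]

def match_transitivity(exp, S):
    x, y = exp
    for (u, v) in S:
        if u == y:
            yield (frozenset({exp, (u, v)}), (x, v))
        if v == x:
            yield (frozenset({(u, v), exp}), (u, y))

M = saturation_full_tracing(init, match_transitivity)
```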

4.2    Goal-directed Tracing of Inferences
It might not be practical to store all inferences used in the computation of the closure, because this can require much more memory than storing just the conclusions of the inferences. In practice, one usually does not require all inferences, but just the inferences for some conclusions of interest (e.g., those resulting from modeling errors). While it is possible to modify full tracing to record only the inferences of interest, one can also avoid applying many unnecessary inferences if the closure S under R has already been computed.
    A goal-directed method for computing R(S)⟨Exp⟩, that is, the inferences in R(S) that derive expressions from the given set Exp, is detailed in Algorithm 2. The algorithm first generates only those inferences in R(S) that do not use any premises from Exp (lines 2–3), and then repeatedly generates inferences by applying the rules to their conclusions that are in Exp, similarly to full tracing. Specifically, the rules are applied only to conclusions of the inferences saved so far (stored in M) and to the expressions in S that are not in Exp (see line 9). This is needed to ensure that every inference is generated at most once: an inference inf is generated only when its last premise from Exp is computed. Note that in Algorithm 2 we do not require S to be the closure under R. We just require that Exp is a subset of both S and Closure(R(S)), which means that every expression exp ∈ Exp must be derivable using only inferences from R restricted to premises in S. The following theorem asserts correctness of the algorithm.
Theorem 1 (Correctness of Algorithm 2). Let R be a set of inferences, let S and Exp ⊆ S ∩ Closure(R(S)) be sets of expressions, and let M be the output of Algorithm 2 on R, S, and Exp. Then M.Keys = Exp and for each exp ∈ Exp we have M(exp) = R(S)⟨exp⟩.
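Algorithm 2 can be sketched along the same lines (our naming; the rule is again supplied as callables, now additionally filtered by the target set Exp). In the demo, pairs (x, y) of a precomputed transitive closure play the role of S, and Exp is the 'partition' of pairs starting with a:

```python
from collections import deque

def goal_directed_tracing(init_infs, match_premise, S, Exp):
    """Algorithm 2: compute R(S)<exp> for every exp in Exp.
    `init_infs(S, Exp)` enumerates R(S \\ Exp)<Exp>: inferences with premises
    in S but outside Exp and conclusion in Exp.
    `match_premise(exp, allowed, Exp)` enumerates R[exp](allowed)<Exp>."""
    M = {}
    Q = deque(init_infs(S, Exp))
    while Q:
        inf = Q.popleft()
        premises, concl = inf
        if concl not in M:
            M[concl] = [inf]
            allowed = set(M) | (S - Exp)          # M.Keys ∪ (S \ Exp)
            for new_inf in match_premise(concl, allowed, Exp):
                Q.append(new_inf)
        else:
            M[concl].append(inf)
    return M

# Demo: transitivity over the closure of a -> b -> c -> d; conclusions
# (x, y) are partitioned by x, and we trace the partition of "a".
edges = {("a", "b"), ("b", "c"), ("c", "d")}
S = {("a", "b"), ("b", "c"), ("c", "d"), ("a", "c"), ("b", "d"), ("a", "d")}
Exp = {p for p in S if p[0] == "a"}

def init_infs(S, Exp):
    for e in edges & Exp:                          # initialization inferences
        yield (frozenset(), e)

def match_premise(exp, allowed, Exp):
    x, y = exp
    for (u, v) in allowed:
        if u == y and (x, v) in Exp:
            yield (frozenset({exp, (u, v)}), (x, v))
        if v == x and (u, y) in Exp:
            yield (frozenset({(u, v), exp}), (u, y))

M = goal_directed_tracing(init_infs, match_premise, S, Exp)
```

Note how the pair ("a", "d") ends up with two recorded inferences (via ("a", "c") and via ("b", "d")), while conclusions outside the traced partition are never re-derived.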

4.3   Tracing of Inferences using Partitions
The reader may wonder why we could not simply iterate over inf ∈ R(S)⟨Exp⟩ directly, especially since we could iterate over inf ∈ R(S \ Exp)⟨Exp⟩ in line 2 of Algorithm 2. In general, iteration over R(S0)⟨Exp⟩ for S0 ⊆ S can be performed by iterating over all inferences inf ∈ R(S0) using a forward chaining procedure similar to full tracing and checking whether inf.conclusion ∈ Exp. In this case, of course, Algorithm 2 does not give any advantage over full tracing, since both algorithms enumerate all inferences in R(S). Algorithm 2, however, gives an advantage if R(S \ Exp)⟨Exp⟩ is small and can be computed more effectively. For example, in Sect. 5.1, we demonstrate that one can select some non-trivial subsets Exp such that R(S \ Exp)⟨Exp⟩ = R()⟨Exp⟩, that is, only initialization inferences can produce expressions in Exp from expressions not in Exp. If the set of all expressions Exp can be partitioned into such subsets, it is possible to develop a more efficient inference tracing algorithm.
     Specifically, let Pts be a fixed countable set of partition identifiers (partitions for short), and assume that each expression exp ∈ Exp is assigned exactly one partition exp.partition ∈ Pts. Further, let Exp[pt] = {exp ∈ Exp | exp.partition = pt}. Then one can use Algorithm 3 to compute R(S)⟨Exp⟩ for arbitrary Exp ⊆ Exp. Essentially, Algorithm 3 performs tracing of inferences using Algorithm 2 for the partitions assigned to expressions in Exp. This way, one can still enumerate many inferences that do not derive expressions from Exp, but if the partitions are small enough, this overhead is not significant, and the method is still more practical than full tracing.
     Algorithm 3 can be easily modified to trace just one inference per expression and/or
to unfold the inferences recursively. For the former, it is sufficient to use the appropriate
modification of Algorithm 2 as discussed in Sect. 4.2. For the latter, it is sufficient to
repeatedly add the premises for each computed inference with the conclusion in Exp as
additional inputs of the algorithm until a fixpoint is reached.
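The recursive unfolding just described can be sketched as a fixpoint over traced partitions. In the following Python sketch (our naming), trace_partition stands for a call to Algorithm 2; for the demo it is replaced by a brute-force enumeration of all inferences of a partition within a precomputed closure:

```python
def unfold_proofs(goal_exps, partition_of, trace_partition):
    """Repeatedly trace the partitions of all expressions reachable through
    premises of the computed inferences, until a fixpoint is reached."""
    traced = {}                      # partition -> multimap exp -> [infs]
    proofs = {}                      # exp -> [inferences deriving exp]
    todo = set(goal_exps)
    while todo:
        exp = todo.pop()
        pt = partition_of(exp)
        if pt not in traced:
            traced[pt] = trace_partition(pt)
        proofs[exp] = traced[pt][exp]
        for premises, _ in proofs[exp]:
            todo |= {p for p in premises if p not in proofs}
    return proofs

# Demo: transitive closure of a -> b -> c; the partition of (x, y) is x.
edges = {("a", "b"), ("b", "c")}
S = {("a", "b"), ("b", "c"), ("a", "c")}

def trace_partition(pt):
    """Brute force: all inferences in R(S) deriving expressions of pt."""
    out = {}
    for (x, z) in S:
        if x != pt:
            continue
        infs = [(frozenset(), (x, z))] if (x, z) in edges else []
        infs += [(frozenset({(x, y1), (y2, z)}), (x, z))
                 for (x1, y1) in S for (y2, z2) in S
                 if x1 == x and y1 == y2 and z2 == z]
        out[(x, z)] = infs
    return out

proofs = unfold_proofs({("a", "c")}, lambda e: e[0], trace_partition)
```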

5     Tracing of Inferences in EL+
In this section we apply Algorithm 3 for goal-directed tracing of inferences in EL+ ontologies. To this end, we assume that the closure S under the rules in Fig. 1 has been computed, and that we are given a set of subsumptions Exp ⊆ S for which the inferences that have derived them in S should be found. We first describe our strategy of partitioning the derived subsumptions, then discuss some issues related to optimizations, and, finally, present an empirical evaluation of tracing performance on existing ontologies.

  Algorithm 3: Goal-directed tracing of inferences using partitions
    goalDirectedPartitionTracing(R, S, Exp):
    input : R: a set of inferences; S ⊆ Closure(R(S)), Exp ⊆ S: sets of expressions
    output : M: Pts → Exp → 2^R(S), a two-level multimap such that for each exp ∈ Exp,
             we have M(exp.partition)(exp) = R(S)⟨exp⟩
 1  M ← ∅;
 2  for exp ∈ Exp do
 3      pt ← exp.partition;
 4      if pt ∉ M.Keys then
 5          M(pt) ← goalDirectedTracing(R, S, S[pt]);   /* by Algorithm 2 */
 6  return M;

  partition A:  A v A  --Rv-->  A v ∃R.B  --R∃-->  A v ∃H.B  --Rv-->  A v C
  partition B:  B v B  --Rv-->  B v ∃S.A  --R∃-->  B v ∃S.C  --Rv-->  B v C
  (the R∃ inference in partition A also uses the premise B v B; the R∃
  inference in partition B also uses the premise A v C)

Fig. 3. The inferences applied for tracing partition B (lower chain) and for tracing partition A (upper chain) using Algorithm 2 for the closure S containing subsumptions (1)–(8).

5.1    Partitioning of Derived EL+ Subsumptions
To partition the set Exp of concept subsumptions C v D derived by the rules R in Fig. 1, we use the partitioning function used for incremental reasoning in EL+ [10]. Specifically, the set of partition identifiers Pts is the set of all EL+ concepts, and every subsumption exp = C v D is assigned the partition exp.partition = C ∈ Pts. It is easy to see that each conclusion of a rule in Fig. 1 has the same left-hand side as one of the premises of the rule, unless it is rule R0 or R>. Thus, for the set Exp = S[C] of subsumptions C v D ∈ S that belong to partition C, we have R(S \ Exp)⟨Exp⟩ = R()⟨Exp⟩. Therefore, tracing of inferences for expressions in S[C] can be performed efficiently using Algorithm 2. Note that, according to Algorithm 3, to find all inferences for exp = C v D, it is sufficient to trace just the inferences for partition C.

Example 2. Consider subsumptions (1)–(8) derived in Example 1. These subsumptions
are assigned to two different partitions: A and B. To compute the inferences for B v C, it is sufficient to apply Algorithm 2 to partition B (Exp = S[B]) using the precomputed closure S containing (1)–(8). It is easy to see that only the inferences for (2),
(4), (7), and (8) will be produced by this algorithm (compare Fig. 3 with Fig. 2). During
initialization (line 2), Algorithm 2 applies only rule R0 deriving B v B. During closure
(line 4), the algorithm applies only inferences to the derived subsumptions in partition
B, and keeps only those inferences that produce subsumptions in the same partition.
For example, the inference by R∃ producing (5) should be ignored, even though it uses
a premise B v B from partition B. Note that these inferences would not be computed
without S: to produce B v ∃S.C we use the premise A v C from S. But we do not
need to compute the inferences for A v C first. Thus, the precomputed set S is used in
our procedure as a set of support to reach the conclusions of interest as fast as possible.
    Now, if we want to unfold the inferences backwards to obtain the full proof for
B v C, we need to iterate over the premises that were used to derive B v C and
trace their partitions. In our case, B v C was derived from B v ∃S.C, for which the
partition B was already traced, but continuing further to the premise A v C will bring
us to the partition A. When tracing partition A, we produce the remaining inferences for
(1), (3), (5), and (6) used in the proof (see Fig. 3). Note that it is not necessary to trace
partition A to find the proof, e.g., for B v ∃S.A because no expression in partition A
was used to derive B v ∃S.A.

5.2   Optimizations
In this section we describe how our tracing algorithms can co-exist with known opti-
mizations for EL saturation and present one specific optimization for Algorithm 2 to
avoid tracing of unnecessary partitions.
     Forward redundancy: Existing implementations of the EL+ procedure based on the rules in Fig. 1 employ a number of additional optimizations to reduce the number of operations. In particular, the reasoner ELK avoids applications of some redundant inferences, that is, inferences that are not necessary for obtaining the required consequences. For example, it is not necessary to apply rule R−u to conclusions of rule R+u, and it is not necessary to apply rules R∃ and R◦ if their first premise was derived by rule R∃ [13]. Since the rules are applied by Algorithm 1 only when the conclusion is produced
for the first time, the result of the saturation can be different if the rules are applied
in a different order. Consequently, if we apply only non-redundant inferences during
tracing, due to a potentially different order of rule applications (which may well happen
when tracing only partitions), we may miss some of the inferences that were originally
applied, and may even fail to derive the expressions to be traced. So, to ensure that all
original inferences are traced, we also have to apply the redundant inferences (but if the
conclusion of the inference is not in the closure, we do not need to keep it).
     Cyclic inferences: As mentioned in Sect. 4.3, Algorithm 3 for tracing inferences using partitions can be extended to unfold the proofs recursively by repeating the tracing procedure for the premises of relevant inferences. In practice, one is usually interested in minimal proofs, i.e., proofs without repetitions. However, not every inference that is obtained by our procedure can be used in a minimal proof. If an inference is cyclic, that is, some premise of the inference could only be derived from its conclusion, it can always be removed from the proof. Disregarding cyclic inferences can result in fewer traced partitions when unfolding inferences recursively. Note that if we only save the first inference per expression in Algorithm 2, we never create cyclic inferences.
     We can avoid many, but not all, cyclic inferences by using a modification of Algorithm 2 in which we save an inference inf in M only if every premise of inf is derived by some inference that does not use the conclusion of inf, i.e., we look only at short cycles. More details and an empirical evaluation can be found in the technical report [11].

Table 1. Number of axioms, conclusions, and inferences in test ontologies, as well as running time (in ms) and memory consumption (in MB) with and without full tracing during classification.

 Ontology        Number of axioms  Redun.  Number of     Number of     No tracing      Full tracing
                 (partitions)      elim.   conclusions   inferences    time    mem.    time     mem.
 GO              87,492            on       2,183,400     3,614,034     1,309   643     2,698   1,066
 (extended)      (46,846)          off      2,450,505     5,715,611     1,877   546     5,004   1,102
 GALEN           36,547            on       1,479,450     2,015,877       935   690     1,830     711
 (EL+ version)   (25,965)          off      1,922,549     3,944,921     1,326   722     3,786     765
 SNOMED CT       297,327           on      16,673,337    24,338,895    12,738   900    51,603   3,010
 (July 2013)     (371,642)         off     19,963,726    54,553,961    16,968   924       OOM     OOM
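The short-cycle condition can be sketched as a simple filter (a hypothetical Python fragment, our naming): an inference is saved only if each of its premises already has, in the multimap M, at least one derivation that does not use the inference's own conclusion. Premises absent from M (e.g., taken from the set of support) are accepted unconditionally:

```python
def passes_short_cycle_check(inf, M):
    """Save inf only if every premise that is itself traced in M has at
    least one derivation not using inf's own conclusion. Inferences are
    (premises, conclusion) pairs; M maps conclusions to inference lists."""
    premises, concl = inf
    return all(
        p not in M or any(concl not in prem_inf[0] for prem_inf in M[p])
        for p in premises
    )

# Demo: "b" is derivable only from "c" and vice versa (a short cycle),
# while "a" has an initialization inference.
M = {
    "a": [(frozenset(), "a")],
    "b": [(frozenset({"c"}), "b")],
    "c": [(frozenset({"b"}), "c")],
}
```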

5.3       Experimental Evaluation
We have implemented Algorithm 3 and the optimizations described in Sects. 5.1 and 5.2 in the EL+ reasoner ELK2 and evaluated their performance. We used three large OWL EL ontologies which are often used in evaluations of EL reasoners [2, 12, 14]: a version of the Gene Ontology (GO),3 an EL+-restricted version of the GALEN ontology,4 and the July 2013 release of SNOMED CT.5 Classification for each experiment was performed with the redundancy optimizations enabled and disabled (the latter eliminates non-determinism and computes all conclusions used in proofs). We used a PC with an Intel Core i5-2520M 2.50GHz CPU, with Java 1.6 and 4GB RAM available to the JVM.
    Full tracing overhead: Our first experiment evaluates the overhead of full tracing (cf. Sect. 4.1 and [11] for more details) compared to pure saturation (Algorithm 1). Each ontology was classified 10 times with and without full tracing and the results were averaged (excluding 5 warm-up runs). The results in Table 1 show that there is roughly a 2x to 4x time overhead, as well as a 3x memory overhead on SNOMED CT, when redundancy elimination is enabled.6 When redundant inferences are applied, the overhead is bigger, such that full tracing of SNOMED CT runs out of memory (OOM).
    Goal-directed tracing: Next we evaluate the performance of goal-directed tracing (Algorithm 3). The experimental setup is as follows: each ontology was classified, and then each direct subsumption between concept names was traced with recursive unfolding. Each subsumption was traced independently of the others. We separately report results for tracing all inferences for each conclusion or just the first inference. The former is useful for generating all proofs, whereas the latter can be used to produce one proof.
   2 http://elk.semanticweb.org
   3 It is a richer version with concept equivalences, role hierarchies and chains. We thank Chris Mungall from the Lawrence Berkeley Lab for providing it. It is accessible at http://elk.semanticweb.org/ontologies/go_ext.owl.
   4 http://www.co-ode.org/galen/
   5 http://www.ihtsdo.org/snomed-ct/
   6 With the smaller ontologies the difference is less observable because of the relatively high (4GB) memory limit set for the JVM; the difference is better observable with lower memory limits.
Table 2. Evaluation results for goal-directed tracing of all atomic subsumptions in the ontology
when all (resp. the first) inferences for each conclusion are unfolded.

 Ontology             Redun.  # of traced  # of traced    # of inferences  # of used   Time
 (Num. subsumptions)  elim.   partitions   inferences     used in proofs   axioms      (in ms.)
 GO                   on      2.5 (1.4)    241.3 (149.0)  52.1 (6.4)       17.7 (2.5)  0.9 (0.6)
 (73,270)             off     3.7 (1.4)    456.2 (244.1)  94.9 (6.4)       18.6 (2.5)  2.0 (1.2)
 GALEN                on      1.6 (1.5)    210.9 (188.9)  12.2 (9.9)       4.9 (4.4)   0.8 (0.7)
 (38,527)             off     1.9 (1.5)    414.4 (350.9)  21.7 (10.1)      5.2 (4.4)   1.9 (0.8)
 SNOMED CT            on      2.9 (1.6)    187.8 (136.6)  49.6 (12.7)      8.9 (3.7)   0.7 (0.5)
 (443,402)            off     8.6 (1.6)    788.3 (332.9)  385.9 (12.7)     9.7 (3.7)   10.1 (1.9)




     The results are presented in Table 2. First, they demonstrate that the proposed
method is very fast, as it usually takes around a millisecond to trace a subsumption (the
only exception is SNOMED CT without redundancy elimination, which is likely due
to cyclic inferences not detected by the optimization from Sect. 5.2). This is the effect
of its goal-directed nature: it only traces inferences which belong to the same partition
as an inference used by some proof of the target subsumption. The performance results
thus follow from the low number of traced partitions and traced inferences. Second, the
algorithm traces more inferences when all proofs are needed, but the difference is not
substantial and the running times are usually close. This is because alternative proofs
overlap, and the algorithm rarely needs to trace new partitions when unfolding more than
one inference per conclusion. Third, the difference between the number of traced and
used inferences shows the granularity of partitioning in EL+ (inferences not used in proofs
can be traced only if they happen to be in the same partition as used inferences). Finally,
since the results are averaged over all subsumptions, the reader may wonder if the good
performance is because most of them can be proved trivially. Results in the technical
report [11], which are separately aggregated over subsumptions that are provable from
at least 10 axioms, show that performance stays at the same level.
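The partition-based behavior discussed above can be illustrated with a small sketch: to trace a goal conclusion, rules are re-applied only for conclusions in the goal's partition, while the previously computed closure serves as a set of support for premises from other partitions. The `context_of` partitioning function and the other names are assumptions for illustration, not ELK's actual implementation:

```python
# Sketch of goal-directed tracing over one partition. The closure is
# already saturated, so re-applying rules within the partition recovers
# all inferences deriving its conclusions. Names are illustrative.

def trace_partition(goal, context_of, closure, apply_rules):
    """Re-apply rules only for conclusions in the goal's partition,
    recording the inferences that derive conclusions of that partition.
    Premises from other partitions are taken from the closure (the set
    of support) without being traced themselves."""
    partition = context_of(goal)
    trace = {}  # conclusion -> list of (rule, premises) pairs
    for conclusion in closure:
        if context_of(conclusion) != partition:
            continue  # not re-derived; available only as side premises
        for new_conclusion, rule, premises in apply_rules(conclusion, closure):
            if context_of(new_conclusion) == partition:
                trace.setdefault(new_conclusion, []).append((rule, premises))
    return trace
```

Because only one (or a few) partitions are re-saturated per goal, the cost of tracing stays proportional to the partition size rather than to the size of the whole closure.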


6   Summary and Future Research

In this paper we have presented a simple, efficient, and generalizable method for goal-
directed tracing of inferences in EL+ . Depending on the application, the inferences
can be used to recover proofs or compute justifications. The method is goal-directed
in the sense that it re-applies only a limited number of inferences, using the previously
computed conclusions as a set of support. It does not require storing additional indexing
information or any sort of bookkeeping during normal classification. The method is
based on the same granularity property of reasoning in EL+ as was previously used for
concurrent and incremental reasoning [12, 10]. Specifically, concept subsumers in EL+
can most of the time be computed independently of each other. This enables efficient
partitioning of all derived expressions, so that tracing a particular expression requires
re-applying inferences in only a few partitions, as shown empirically. The same property
can be further exploited for other tasks, for example, distributed reasoning in EL+ .
References
 1. Baader, F., Brandt, S., Lutz, C.: Pushing the EL envelope. In: Proc. 19th Int. Joint Conf. on
    Artificial Intelligence (IJCAI’05). pp. 364–369 (2005)
 2. Baader, F., Lutz, C., Suntisrivaraporn, B.: Efficient reasoning in EL+ . In: Proc. 2006 Int.
    Workshop on Description Logics (DL’06). vol. 189. CEUR-WS.org (2006)
 3. Baader, F., Peñaloza, R.: Automata-based axiom pinpointing. J. of Automated Reasoning
    45(2), 91–129 (2010)
 4. Baader, F., Peñaloza, R., Suntisrivaraporn, B.: Pinpointing in the description logic EL. In:
    Proc. 2007 Int. Workshop on Description Logics (DL’07). vol. 250. CEUR-WS.org (2007)
 5. Baader, F., Suntisrivaraporn, B.: Debugging SNOMED CT using axiom pinpointing in the
    description logic EL+ . In: KR-MED. vol. 410. CEUR-WS.org (2008)
 6. Cuenca Grau, B., Horrocks, I., Kazakov, Y., Sattler, U.: Modular reuse of ontologies: Theory
    and practice. J. of Artificial Intelligence Research 31, 273–318 (2008)
 7. Horridge, M., Parsia, B., Sattler, U.: Laconic and precise justifications in OWL. In: Proc. 7th
    Int. Semantic Web Conf. (ISWC’08). LNCS, vol. 5318, pp. 323–338. Springer (2008)
 8. Kalyanpur, A., Parsia, B., Horridge, M., Sirin, E.: Finding all justifications of OWL DL
    entailments. In: Proc. 6th Int. Semantic Web Conf. (ISWC'07). LNCS, vol. 4825, pp. 267–
    280. Springer (2007)
 9. Kazakov, Y.: Consequence-driven reasoning for Horn SHIQ ontologies. In: Proc. 21st Int.
    Joint Conf. on Artificial Intelligence (IJCAI’09). pp. 2040–2045. IJCAI (2009)
10. Kazakov, Y., Klinov, P.: Incremental reasoning in OWL EL without bookkeeping. In: Proc.
    12th Int. Semantic Web Conf. (ISWC’13). LNCS, vol. 8218, pp. 232–247. Springer (2013)
11. Kazakov, Y., Klinov, P.: Goal-directed tracing of inferences in EL ontologies. Tech. rep.,
    University of Ulm (2014), available from http://elk.semanticweb.org/
    publications/elk-tracing-trdl-2014.pdf
12. Kazakov, Y., Krötzsch, M., Simančík, F.: Concurrent classification of EL ontologies. In:
    Proc. 10th Int. Semantic Web Conf. (ISWC’11). LNCS, vol. 7032, pp. 305–320. Springer
    (2011)
13. Kazakov, Y., Krötzsch, M., Simančík, F.: The incredible ELK: From polynomial procedures
    to efficient reasoning with EL ontologies. J. of Automated Reasoning (2013), to appear
14. Lawley, M.J., Bousquet, C.: Fast classification in Protégé: Snorocket as an OWL 2 EL rea-
    soner. In: Proc. 6th Australasian Ontology Workshop (IAOA’10). vol. 122, pp. 45–49 (2010)
15. Motik, B., Shearer, R., Horrocks, I.: Hypertableau reasoning for description logics. J. of
    Artificial Intelligence Research 36, 165–228 (2009)
16. Peñaloza, R., Sertkaya, B.: On the complexity of axiom pinpointing in the EL family of
    description logics. In: Proc. 12th Int. Conf. on Principles of Knowledge Representation and
    Reasoning (KR’10). pp. 280–289. AAAI Press (2010)
17. Reiter, R.: A theory of diagnosis from first principles. Artificial Intelligence 32(1), 57–95
    (1987)
18. Simančík, F., Kazakov, Y., Horrocks, I.: Consequence-based reasoning beyond Horn ontolo-
    gies. In: Proc. 22nd Int. Joint Conf. on Artificial Intelligence (IJCAI’11). pp. 1093–1098.
    AAAI Press/IJCAI (2011)
19. Sirin, E., Parsia, B., Grau, B.C., Kalyanpur, A., Katz, Y.: Pellet: A practical OWL-DL rea-
    soner. J. of Web Semantics 5(2), 51–53 (2007)
20. Tsarkov, D., Horrocks, I.: FaCT++ description logic reasoner: System description. In: Proc.
    3rd Int. Joint Conf. on Automated Reasoning (IJCAR’06). LNCS, vol. 4130, pp. 292–297.
    Springer (2006)