Bridging the Gap between Tableau and Consequence-Based Reasoning. CEUR Workshop Proceedings Vol-1193, https://ceur-ws.org/Vol-1193/paper_10.pdf
              Bridging the Gap between Tableau and
                  Consequence-Based Reasoning

                           Yevgeny Kazakov and Pavel Klinov

                             The University of Ulm, Germany
                        {yevgeny.kazakov, pavel.klinov}@uni-ulm.de



       Abstract. We present a non-deterministic consequence-based procedure for the
       description logic ALCHI. Just like the similar-style (deterministic) procedures
       for EL and Horn-SHIQ, our procedure explicitly derives subsumptions between
       concepts, but due to non-deterministic rules, not all of these subsumptions are
       consequences of the ontology. Instead, the consequences are only those subsump-
       tions that can be derived regardless of the choices made in the application of the
       rules. This is similar to tableau-based procedures, for which an ontology is incon-
       sistent if every expansion of the tableau eventually results in a clash. We report
       on a preliminary experimental evaluation of the procedure using a version of
       SNOMED CT with disjunctions, which demonstrates some promising potential.


1   Introduction and Motivation

Consequence-based and tableau-based methods are two well-known approaches to rea-
soning in Description Logics (DLs). The former work by deriving logical consequences
of axioms in the ontology using inference rules while the latter by building countermod-
els for conjectures. Historically, consequence-based methods have been applied to Horn
DLs, most prominently, the EL family [1, 10] and Horn-SHIQ [6], for which they are
more efficient than tableau, both theoretically and empirically. Tableau-based methods,
in turn, continued to dominate for non-Horn logics and very expressive fragments of the
DL family, e.g., SROIQ. This was mostly because, first, reasoning in expressive log-
ics requires various forms of case analysis, for which designing complete goal-directed
inference rule systems has not been easy (but tableau can handle cases via backtrack-
ing), and second, the tableau algorithms have been deemed easier to extend to new
constructors, because they follow more closely their semantics.
     The most straightforward attempt to combine the best from both worlds is to im-
plement both a consequence-based and a tableau-based procedure and use one or an-
other depending on the ontology. Indeed, several tableau reasoners, including Pellet and
Konclude, switch to a consequence-based algorithm for EL inputs. A more advanced
approach is to automatically split the ontology into parts and use a consequence-based
algorithm for the deterministic part (which often covers nearly all of the ontology) and
tableau for the rest. Such reasoners as MORe [11] and Chainsaw [14] use ontology
modularity techniques for such splitting and then combine the results.
     Another line of research is to extend consequence-based calculi to deal with non-
determinism. One notable approach has been developed for ConDOR, a reasoner for
full ALCH [13]. ConDOR's procedure deals with disjunctions using deterministic
inference rules akin to ordered resolution [3]. The calculus is complete, retains the opti-
mal worst-case complexity, and still enables the “one pass” classification. It shows im-
pressive performance speed-up over the existing (hyper)tableau-based reasoners [13].
    In this paper we present another consequence-based procedure for a non-Horn logic,
this time for ALCHI, but instead of resolution we incorporate some of the tableau fea-
tures into our inference rule system. Specifically, we present non-deterministic rules
with alternative conclusions to deal with non-determinism (instead of deriving disjunc-
tions) and prove soundness, completeness, and termination of the corresponding satura-
tion procedure. We show that our procedure is another step towards combining the best
from the consequence-based and tableau-based worlds: It retains the optimal worst-case
complexity, goal directedness, and the “granularity” property of consequence-based
reasoning (i.e. computation of subsumers for different concepts can be done with lim-
ited interaction which enables concurrent [9] and incremental reasoning [7]). At the
same time, the non-deterministic rules enable tableau-style backtracking to be used
during consequence-based reasoning which has advantages over the resolution-style
procedure, for example, can re-use memory instead of storing all derived disjunctions.
    Finally, we present results of a preliminary evaluation using Horn ontologies and
the same disjunctive extension of the anatomical part of SNOMED CT which was used
to evaluate ConDOR (SCT-SEP) [13]. The results demonstrate that the procedure improves over ConDOR on SCT-SEP, is comparable with ELK [10] on large EL+ ontologies, and is several times faster than the fastest available tableau reasoner, Konclude [5].


2   Preliminaries
The syntax of ALCHI is defined using a vocabulary consisting of countably infinite
sets of atomic roles and atomic concepts. We use the letters R, S for roles, C, D for
concepts, and A, B for atomic concepts. Complex concepts are defined by the grammar

            C(i) ::= > | ⊥ | A | C1 u C2 | C1 t C2 | ¬C | ∃R.C | ∀R.C,

where R is either an atomic role or its inverse (written R− ). Semantics is defined in the
standard way; formal details can be found in the technical report [8].
    An ALCHI ontology O is a finite set of concept inclusion and role inclusion axioms
of the form C v D and R v S, respectively. The ontology classification task requires
computing all entailed subsumptions between atomic concepts occurring in O.
    An ALCHI ontology O is Horn if for every axiom C v D, the concepts C and D
satisfy, respectively, the following grammar definitions:

                 C(i) ::= > | ⊥ | A | C1 u C2 | C1 t C2 | ∃R.C,
                 D(i) ::= > | ⊥ | A | D1 u D2 | ∃R.D | ∀R.D | ¬C.

That is, negations and universal restrictions should not occur on the left-hand side of
concept inclusions, and disjunctions should not occur on the right-hand side.
    A consequence-based procedure for Horn-ALCHI can be presented using the rules
in Fig. 1. These rules are similar to the rules for Horn-SHIQ [6]; the main difference
    R0 :   ----------  : H = K u C
              H v C

    R+> :  ----------  : > ∈ sub(O)
              H v >

              H v C
    Rv :   ----------  : C v D ∈ O
              H v D

            H v C1 u C2
    R−u :  ----------------
            H v C1   H v C2

            H v C   H v ¬C
    R−¬ :  ----------------
                H v ⊥

            H v ∃R.K   K v ⊥
    R←⊥ :  -------------------
                  H v ⊥

            H v C1   H v C2
    R+u :  ------------------  : C1 u C2 ∈ sub(O)
              H v C1 u C2

            H v C1 | H v C2
    R+t :  ------------------  : C1 t C2 ∈ sub(O)
              H v C1 t C2

            H v ∃R.K   K v ∀S.C
    R←∀ :  ----------------------  : R v∗O S−
                   H v C

            H v ∃R.K   K v C
    R←∃ :  -------------------  : ∃S.C ∈ sub(O), R v∗O S
                H v ∃S.C

            H v ∃R.K   H v ∀S.C
    R→∀ :  ----------------------  : R v∗O S
               H v ∃R.(K u C)

            H v ∃R.K   H v C
    R→∃ :  ---------------------  : ∃S.C ∈ sub(O), R v∗O S−
             H v ∃R.(K u ∃S.C)

                 Fig. 1. The consequence-based rules for reasoning in Horn-ALCHI


is that they can operate directly on complex concepts without requiring the ontology to
be normalized. Here we denote by sub(O) the set of concepts occurring in the ontology
O (possibly as subconcepts). The rules derive subsumptions of the form H v U with

                                     U ::= ⊥ | C | ∃R.K,                                    (1)
                                     H, K ::= ⊓i∈I Ci ,                                     (2)

where Ci ∈ sub(O) (i ∈ I) and C ∈ sub(O). The premises of each rule (if any)
are given above the horizontal line, the conclusion below, and the side conditions that
determine when each rule is applicable (in terms of O) after the colon.¹ Note that the
concept inclusion axioms in O are used not as premises of the rules but as side conditions
of Rv . The side conditions of the rules R←∀ , R←∃ , R→∀ , and R→∃ use the precomputed
role hierarchy v∗O , which is the smallest reflexive transitive relation on roles such that
R v S ∈ O implies R v∗O S and R− v∗O S− . It can be shown (and follows from
the more general result of this paper) that the inference system in Fig. 1 is sound and
complete for deriving all entailed subsumptions with concepts occurring in the ontology,
namely that for every C ∈ sub(O) and every H of the form (2), we have O |= H v C
iff either H v ⊥ or H v C is derivable by the rules in Fig. 1 for O.
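As a small illustration, the role hierarchy v∗O can be precomputed by closing the role inclusions of O under inverses, reflexivity, and transitivity. The sketch below assumes a hypothetical encoding of a role as a (name, inverted) pair; it is not taken from the paper:

```python
def inverse(role):
    """R ↦ R−, and R− ↦ R."""
    name, inv = role
    return (name, not inv)

def role_hierarchy(role_inclusions, roles):
    """role_inclusions: pairs (R, S) for axioms R v S; roles: all roles of interest.
    Returns the smallest reflexive transitive relation containing every R v S
    together with R− v S−, i.e. the role hierarchy v∗O."""
    edges = set()
    for r, s in role_inclusions:
        edges.add((r, s))
        edges.add((inverse(r), inverse(s)))   # R v S implies R− v∗O S−
    closure = {(r, r) for r in roles}         # reflexivity
    closure |= edges
    changed = True
    while changed:                            # naive transitive closure
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure
```

For example, from R v S and S v T the closure contains both R v∗O T and R− v∗O T−.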


3       Non-Deterministic Consequence-Based Reasoning in ALCHI

Extending consequence-based procedures beyond Horn DLs is difficult because dealing
with non-deterministic constructors, such as disjunction, requires reasoning by case. In
tableau-style procedures, which construct (representations of) models, dealing with
non-deterministic constructors is straightforward: if a domain element is an instance of
B t C, then it should be an instance of B or C. In consequence-based procedures that
derive entailed subsumptions directly, this kind of principle is not valid: if A v B t C
is entailed, then it is not true that either A v B or A v C is entailed.

    ¹ Rule R+t stands for two rules with single premises H v C1 and H v C2 , respectively.

    The consequence-based procedure for (non-Horn) ALCH [13] solves this problem
by retaining disjunctions and recombining them using resolution-style inference rules.
For example, if ontology O contains B v D and C v D, then from A v B t C using
B v D ∈ O one can derive A v D t C, from which, similarly “resolving” on
C v D ∈ O, one obtains A v D t D, which is factorized to A v D by merging
duplicate disjuncts.
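To illustrate this resolution-style treatment (as a toy propositional rendering, not ConDOR's actual data structures), one can store each derived disjunctive subsumption A v B1 t · · · t Bn as a frozenset of disjuncts, so that factorization of duplicate disjuncts happens automatically:

```python
# Toy sketch of the "resolve one disjunct against an axiom" step described above.
# A derived subsumption A v B1 t ... t Bn is a frozenset of disjunct names, so
# D t D and D are the same object and need no explicit factoring.

def resolve(disjunction, axioms):
    """Replace one disjunct B by D for each axiom B v D; yield new disjunctions."""
    for b in disjunction:
        for (lhs, rhs) in axioms:
            if lhs == b:
                yield frozenset(disjunction - {b} | {rhs})

axioms = [('B', 'D'), ('C', 'D')]          # B v D and C v D
derived = {frozenset({'B', 'C'})}          # A v B t C
frontier = set(derived)
while frontier:                            # saturate under the resolution step
    new = {d for disj in frontier for d in resolve(disj, axioms)} - derived
    derived |= new
    frontier = new
# derived now also contains frozenset({'D'}), i.e. A v D, obtained via
# A v D t C and A v D t D (the latter collapses to A v D by set semantics).
```

This also hints at the drawback discussed next: all intermediate disjunctions remain stored in `derived`.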
    One of the well-known disadvantages of resolution-based procedures compared to
search-based (DPLL-style) procedures, is that resolution can produce many long dis-
junctions which can occupy a considerable amount of space and make inferences slow
because merging disjunctions is not a trivial operation. In this paper we, thus, reconsider
the former idea of splitting disjunctions using a non-deterministic rule:
                                      H v C1 t C2
                           R−t :  -------------------                                  (3)
                                   H v C1 | H v C2
Just like in tableau-based procedures, this rule creates two branches in which subsump-
tions can be independently derived. As discussed above, it is not true that all subsump-
tions derived in one of the branches will be entailed by the ontology. However, it can
be the case for the subsumptions that are derived on every branch, i.e., regardless of
the non-deterministic choices made. For example, if we split subsumption A v B t C
on A v B and A v C, then in both branches we can derive A v D using axioms
B v D ∈ O and C v D ∈ O respectively. This idea of considering common con-
clusions in all non-deterministic branches is somewhat reminiscent of the Stålmarck’s
procedure for propositional logic (see, e.g., [12]). Unfortunately, adding just the rule
R−t to the system in Fig. 1 is not sufficient for obtaining a sound inference system:

Example 1. Consider the ontology O containing the following axioms:
A v ∃R.A, A v B t C, ∃S.C v B, R v S.
Then the subsumers of A can be computed using two branches after the splitting rule:

              A v A               by R0 (),
              A v ∃R.A            by Rv (A v A): A v ∃R.A ∈ O,
              A v B t C           by Rv (A v A): A v B t C ∈ O,
        A v B       |       A v C            by R−t (A v B t C),
                            A v ∃S.C         by R←∃ (A v ∃R.A, A v C): R v∗O S,
                            A v B            by Rv (A v ∃S.C): ∃S.C v B ∈ O.
As can be seen, A v B can be derived in both of these branches, but O ⊭ A v B.
    To understand the problem in Example 1, we first need to understand how to interpret
rules like (3). Given an interpretation I = (∆I , ·I ), we say that an element a ∈ ∆I
satisfies a subsumption C v D in I (or C v D is satisfied in I by a) if a ∈ C I implies
a ∈ DI . So, rule R−t can be understood as follows: for each a ∈ ∆I , if a satisfies the
premise H v C1 t C2 then it satisfies one of the conclusions H v C1 or H v C2 .
Clearly, this is true for this rule (and every I); however, this property fails for rules R←⊥ ,
R←∀ , and R←∃ in Fig. 1, even if I |= O. For a counter-example, take I = (∆I , ·I ) with
∆I = AI = {a, b}, B I = {b}, C I = {a}, and S I = RI = {⟨a, b⟩, ⟨b, a⟩}. Clearly, I
is a model of O in Example 1, and a satisfies both A v ∃R.A and A v C, but a does
not satisfy the conclusion A v ∃S.C obtained from these premises by R←∃ . Hence, the
inference by R←∃ in Example 1 is unsound.
    To “fix” rule R←∃ , we use a similar observation as in ConDOR [13]. Specifically, if
a satisfies H v ∃R.K in I but does not satisfy H v ∃S.C, and I |= R v S, then a
should satisfy H v ∃R.(K u ¬C). Thus, we can generalize rule R←∃ as follows:

                        H v ∃R.K   K v C
              R←∃ :  ----------------------------  : ∃S.C ∈ sub(O), R v∗O S            (4)
                      H v ∃S.C | H v ∃R.(K u ¬C)
Example 2. Consider the ontology O from Example 1. Using the updated rule (4), we
can now obtain the following inferences for A:

              A v A               by R0 (),
              A v ∃R.A            by Rv (A v A): A v ∃R.A ∈ O,
              A v B t C           by Rv (A v A): A v B t C ∈ O,
        A v B       |       A v C                     by R−t (A v B t C),
                     A v ∃S.C  |  A v ∃R.(A u ¬C)     by R←∃ (A v ∃R.A, A v C),
                     A v B                            by Rv (A v ∃S.C): ∃S.C v B ∈ O.

Note that in the rightmost branch, we derive the subsumption A v ∃R.(A u ¬C) to
which rule R←∃ can potentially be applied. In order to check if this rule is applicable,
we need to derive subsumptions for A u ¬C:

              A u ¬C v A          by R0 (),
              A u ¬C v ∃R.A       by Rv (A u ¬C v A): A v ∃R.A ∈ O,
              A u ¬C v B t C      by Rv (A u ¬C v A): A v B t C ∈ O,
     A u ¬C v B     |     A u ¬C v C         by R−t (A u ¬C v B t C),
                          A u ¬C v ⊥         by R⊥ (A u ¬C v C).

If we now consider the rightmost branch in the derivation for A and the leftmost branch
for A u ¬C we obtain a set of subsumptions for which all inferences are applied but
neither A v B nor A v ⊥ is derived. Hence, we can conclude that O ⊭ A v B.
  Figure 2 presents a generalization and extension of the rules in Fig. 1 to (full)
ALCHI using the idea similar to rule (4). The new rules now derive subsumptions
    R0 :   ----------  : H = K u C
              H v C

              H v C
    R⊥ :   ----------  : H = K u ¬C
              H v ⊥

    R+> :  ----------  : > ∈ sub(O)
              H v >

              H v C
    Rv :   ----------  : C v D ∈ O
              H v D

            H v C1 u C2
    R−u :  ----------------
            H v C1   H v C2

            H v C1 t C2
    R−t :  ------------------
            H v C1 | H v C2

            H v C   H v ¬C
    R−¬ :  ----------------
                H v ⊥

            H v ∃R.K   K v ⊥
    R←⊥ :  -------------------
                  H v ⊥

            H v C1   H v C2
    R+u :  ------------------  : C1 u C2 ∈ sub(O)
              H v C1 u C2

            H v C1 | H v C2
    R+t :  ------------------  : C1 t C2 ∈ sub(O)
              H v C1 t C2

    R+¬ :  ------------------  : ¬C ∈ sub(O)
            H v ¬C | H v C

    R+∀ :  -----------------------  : ∀R.C ∈ sub(O)
            H v ∀R.C | H v ∃R.¬C

              H v ∃R.K   K v ∀S.C
    R←∀ :  -----------------------------  : R v∗O S−
            H v C | H v ∃R.(K u ¬∀S.C)

              H v ∃R.K   K v C
    R←∃ :  ----------------------------  : ∃S.C ∈ sub(O), R v∗O S
            H v ∃S.C | H v ∃R.(K u ¬C)

            H v ∃R.K   H v ∀S.C
    R→∀ :  ----------------------  : R v∗O S
               H v ∃R.(K u C)

            H v ∃R.K   H v C
    R→∃ :  ---------------------  : ∃S.C ∈ sub(O), R v∗O S−
             H v ∃R.(K u ∃S.C)

             Fig. 2. The non-deterministic consequence-based rules for reasoning in ALCHI


of the form H v U , where U is of the form (1), and

                              H, K ::= ⊓i∈I Ci u ⊓j∈J ¬Dj ,                                 (5)

where Ci ∈ sub(O) (i ∈ I), and Dj ∈ sub(O) (j ∈ J). Note that the rule R←⊥ has
remained the same, and is still formally unsound under our new interpretation. This rule
will be dealt with by our procedure in a special way, which does not affect soundness.
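One possible concrete representation of these derived subsumptions (our assumption; the paper does not prescribe data structures) keeps H as a pair of frozensets of concept names, which makes conjunctions of the form (5) hashable and thus usable as keys for per-H bookkeeping:

```python
from typing import NamedTuple, Union

class Conjunction(NamedTuple):
    """A conjunction of the form (5): ⊓ Ci u ⊓ ¬Dj."""
    positive: frozenset   # the Ci ∈ sub(O)
    negative: frozenset   # the Dj ∈ sub(O), stored without the leading ¬

class Existential(NamedTuple):
    """A consequent ∃R.K with K again a conjunction."""
    role: str
    filler: Conjunction

# A consequent U of the form (1): '⊥', a concept name from sub(O), or ∃R.K.
Consequent = Union[str, Existential]

class Subsumption(NamedTuple):
    lhs: Conjunction
    rhs: Consequent

# Example: the subsumption A u ¬C v ∃R.A from Example 2.
H = Conjunction(frozenset({'A'}), frozenset({'C'}))
s = Subsumption(H, Existential('R', Conjunction(frozenset({'A'}), frozenset())))
```

Since `Conjunction` values are immutable and hashable, the derived subsumptions can be grouped into a separate local set per H, matching the per-H branches used by the procedure below.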


3.1        The Rule Application Strategy

Just like for other consequence-based procedures [1, 6, 13], every set closed under the
rules in Fig. 2 contains all “relevant” subsumptions entailed by the ontology, that is, our
rules are complete. Specifically, we say that a set M of subsumptions is closed under a
rule R if, whenever all premises of R belong to M , all its conclusions (respectively one
of the conclusions if R is non-deterministic) belong to M .

Theorem 1 (Completeness). Let O be an ALCHI ontology, H a conjunction of form
(5), D a concept occurring in O such that O |= H v D, and M a set of subsumptions
closed under all rules in Fig. 2. Then either H v D ∈ M or H v ⊥ ∈ M .

   The proof of Theorem 1 (cf. [8]) works along the same lines as for EL [1] and
Horn-SHIQ [6], i.e., by constructing a canonical model of O from M .

Remark 1. Just like for EL, if the set of goal subsumptions H v D for which entail-
ment should be checked is known in advance (e.g., for classification), it is possible to
sharpen Theorem 1 by relaxing the requirement on the closure of M [10]. Specifically,
it is necessary to produce H v U by rules only if there is a goal subsumption H v D
for some D, or if K v ∃R.H has been derived for some K earlier. Furthermore, the
rules R+> , R+u , R+t , R+¬ , R+∀ , and R←∃ introducing new concepts in sub(O) (see the side
conditions) need to be applied only if the new concept is a subconcept of the left-hand
side of some concept inclusion in O or of some D in the right-hand side of goal sub-
sumptions. Both optimizations can be particularly useful to mitigate “blind guessing”
in rules R+¬ and R+∀ , which do not have any premises, and thus apply to every H.

    It is possible to prove the converse of Theorem 1, namely that if O ⊭ H v D,
then there exists a set of subsumptions M closed under the rules in Fig. 2 that contains
neither H v D nor H v ⊥. This result, however, is not very useful in practice because
in order to check whether O |= H v D using this result, one would need to enumerate
all M closed under the rules in Fig. 2. Just like tableau-based procedures do not need to
enumerate all completion graphs, we do not need to enumerate all such closed sets M .
In particular, subsumptions derived for different conjunctions H of the form (5) can be
in most cases considered independently.
    The following concept satisfiability procedure (short CSAT) checks satisfiability of
conjunctions H of the form (5) using the rules in Fig. 2:

 1. We apply the inference rules in Fig. 2 (introducing conjunctions H as necessary ac-
    cording to Remark 1) and record every derived subsumption H v U in a separate
    local branch for each H, just like in Example 2.
 2. If a subsumption H v U was derived using a non-deterministic rule, we also re-
    member the alternative conclusion of this rule in a branching point.
 3. If we derive a local clash H v ⊥, we remove all conclusions on the branch for
    H starting from the last non-deterministically derived conclusion, and produce the
    alternative conclusion, which we now consider as deterministically derived.
 4. If local clash H v ⊥ is derived and there are no non-deterministically derived
    conclusions on the branch for H anymore, we mark H as inconsistent.
 5. Rule R←⊥ is applied only if, for the right premise K v ⊥ of the rule (a local clash
    in the branch for K), the conjunction K is marked as inconsistent.
 6. The procedure terminates when no more rules can be applied and for each derived
    local clash H v ⊥ (on the branch for H), H is marked as inconsistent.
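The backtracking behaviour of the steps above can be sketched on a deliberately simplified propositional analogue (our own toy encoding, not the calculus of Fig. 2): deterministic inclusions A v B, binary splits A v B t C, and disjointness pairs that produce a local clash. Chronological backtracking over the two alternatives is modeled by trying the left alternative and falling back to the right one on a clash:

```python
def csat(goal, det, split, disjoint):
    """Return True if some branch for `goal` can be saturated without a clash.
    det: pairs (A, B) for A v B; split: pairs (A, (B, C)) for A v B t C;
    disjoint: pairs (A, B) meaning A u B derives a local clash."""

    def expand(derived):
        changed = True
        while changed:                     # apply deterministic rules exhaustively
            changed = False
            for a, b in det:
                if a in derived and b not in derived:
                    derived.add(b); changed = True
        if any(a in derived and b in derived for a, b in disjoint):
            return False                   # local clash: backtrack
        for a, (b, c) in split:
            if a in derived and b not in derived and c not in derived:
                # try the left alternative; on a clash, take the right one
                return expand(derived | {b}) or expand(derived | {c})
        return True                        # saturated without a clash

    return expand({goal})
```

For instance, with A v B t C, A v E, and both B and C disjoint with E, every branch for A clashes, so A is marked inconsistent; dropping A v E makes the first branch succeed.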

    Note that even though we apply rule R←⊥ in a restricted way, because of the last
condition, the resulting set M of subsumptions will be closed under all rules in Fig. 2.
Also note that if some conclusion H v U was derived using a premise K v V with
K ≠ H (using rules R←⊥ , R←∀ , or R←∃ ), we do not remove K v V when we backtrack
H v U , as we remove only conclusions on the local branch for H. Likewise, if we
remove the premise K v V , we do not need to remove the conclusion H v U derived
from this premise. In other words, premises from other local branches can never result in
inconsistency of H, and so, deriving and backtracking of conclusions can be performed
for each local branch independently.
    By Theorem 1, each H that is not marked as inconsistent by our procedure must
be satisfiable since H v ⊥ ∉ M for the resulting closure M . We will also prove the
converse, namely that only unsatisfiable H can be marked as inconsistent by CSAT.
   The rules in Fig. 2 can be used not only for checking satisfiability of H, but also for
computing its entailed subsumers. The following branch exploration procedure (short
BEXP) continues after CSAT, and computes the set S(H) of concepts D such that
H v D appears on every branch of H that does not contain a local clash:
 1. For each H not marked as inconsistent, create a set S(H) of candidate subsumers
    consisting of those D ∈ sub(O) such that H v D is on the branch for H.
 2. Remove all conclusions on the branch for H starting from the last non-deterministic
    conclusion, and produce the alternative conclusion.
 3. Apply the rules like in CSAT (this may extend other branches), backtracking if nec-
    essary, but do not mark H as inconsistent when no backtracking in H is possible.
 4. After all rules are reapplied and there is no local clash in H, remove from S(H) all
    D for which H v D no longer occurs on the branch.
 5. Repeat from Step 2 until no non-deterministic conclusions are left on the branch.
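The outcome of this computation can be emulated on a simplified propositional analogue (our toy encoding: deterministic inclusions A v B, splits A v B t C, and disjointness pairs producing a clash) by enumerating all branches and intersecting the subsumers derived on the open ones:

```python
def bexp_subsumers(goal, det, split, disjoint):
    """Intersect the derived subsumers over all open (clash-free) branches;
    return None if every branch for `goal` closes, i.e. goal is inconsistent."""
    open_branches = []

    def expand(derived):
        changed = True
        while changed:                      # deterministic saturation
            changed = False
            for a, b in det:
                if a in derived and b not in derived:
                    derived.add(b); changed = True
        if any(a in derived and b in derived for a, b in disjoint):
            return                          # closed branch: contributes nothing
        for a, (b, c) in split:
            if a in derived and b not in derived and c not in derived:
                expand(derived | {b})       # explore both alternatives
                expand(derived | {c})
                return
        open_branches.append(derived)       # fully expanded open branch

    expand({goal})
    if not open_branches:
        return None
    return set.intersection(*open_branches)

# Axioms A v B t C, B v D, C v D: both branches derive D.
common = bexp_subsumers('A', [('B', 'D'), ('C', 'D')], [('A', ('B', 'C'))], [])
# common = {'A', 'D'}: exactly the subsumptions shared by the two branches
```

Unlike the real procedure, this sketch re-explores branches eagerly instead of backtracking in place, but the resulting intersection is the same set S(H) of common subsumers.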
    It follows from Theorem 1 that S(H) contains all concepts D ∈ sub(O) such that
O |= H v D, since D is removed from S(H) only when we find M closed under
the rules in Fig. 2 such that H v D ∉ M . In the next section we prove the converse,
namely that if O ⊭ H v D then there is a branch of H that does not contain H v D.

3.2   Soundness, Termination, and Complexity
To prove soundness of the procedures CSAT and BEXP described in the previous sec-
tion, we need to formally define our non-deterministic derivation strategy.
Definition 1 (Tableau, Local Tableau). A tableau (for O) is a triple T = (H, ≺, t),
where H is a set of conjunctions of the form (5), ≺ is a partial order on H, and t is a
function that assigns to every H ∈ H a tree t(H) called the local tableau for H whose
nodes are labeled by subsumptions H v U (with this H), U of the form (1), such that:
 1. For each node n of t(H), there is a rule in Fig. 2 for O such that the label of n is a
    conclusion of this rule and all premises of the form H v U (with this H) are labels
    of some ancestors of n (the nodes on the path from the root of t(H) to n without n);
    if the inference is by R←⊥ , the second premise K v ⊥ must be such that K ≺ H.
 2. If the label is obtained by a non-deterministic rule (R−t , R+¬ , R+∀ , R←∀ , R←∃ ), the
    node should have a sibling node labeled by the alternative conclusion of the rule.
 3. All labels of the nodes on the same branch in t(H) must be different.
 4. If K ≺ H then all branches of t(K) must contain the local clash K v ⊥.
    Intuitively, each local tableau t(H) for H describes the set of branches that are
encountered for H by procedure BEXP described in Sect. 3.1. For example, the two
trees in Example 2 are local tableaux respectively for H = A and H = A u ¬C. The
partial order ≺ is the order in which the conjunctions H are marked as inconsistent.
According to the last condition, we set K ≺ H if K is marked as inconsistent and H is
not. Thus, rule R←⊥ derives H v ⊥ from K v ⊥ by our procedure only if K ≺ H.

Theorem 2 (Soundness). Let T = (H, ≺, t) be a tableau for O. Then for each H ∈ H
and each concept D (not necessarily occurring in O) such that O ⊭ H v D, there
exists a branch in t(H) that contains neither H v ⊥ nor H v D.
Proof. The proof is by induction on ≺. Assume that the claim holds for each K ≺ H,
and prove it for H. Since O ⊭ H v D, there is a model I = (∆I , ·I ) of O and
a ∈ ∆I such that a does not satisfy H v D in I, i.e., a ∈ H I \ DI . We prove that
there is a branch in t(H) such that every label H v U on this branch is satisfied in I by
a, that is a ∈ U I . This implies that neither H v ⊥ nor H v D occurs on this branch.
    It is sufficient to show that for every node n in t(H), if a satisfies the labels of
all ancestors of n then a satisfies the label of n or of its sibling node (if there is one).
According to Definition 1, these labels are obtained by some rule in Fig. 2 in which
all premises of the form H v U must be the labels of some ancestors of n. That is,
all premises of the rule with the left-hand side H are satisfied in I by a. We will show
that in this case, the conclusion of the rule (respectively, one of the non-deterministic
conclusions if the rule is non-deterministic) is also satisfied in I by a:
 – Rules R0 , R⊥ , R+> , Rv , R−u , R+u , R−¬ , R+t , R→∀ , R→∃ : These are deter-
   ministic rules that contain only premises with the left-hand side H. It is easy to see
   that for all these rules the conclusion is a logical consequence of the premises and
   O. Since I |= O, a satisfies the conclusion because it satisfies all premises.
 – Rules R−t , R+¬ , R+∀ , R←∀ , R←∃ : These are non-deterministic rules. For each rule we
   show how to choose one of the conclusions that is satisfied in I by a.
   For R−t , since a ∈ (C1 t C2 )I , we have either a ∈ C1I or a ∈ C2I . Hence a
   satisfies either H v C1 or H v C2 . The proofs for R+¬ and R+∀ are similar.
   For R←∀ , we have a ∈ (∃R.K)I . That is, there exists b ∈ K I such that ⟨a, b⟩ ∈
   RI . Since R v∗O S − , we have ⟨b, a⟩ ∈ S I . If a ∈ C I , then a satisfies the left
   conclusion H v C. Otherwise, a ∉ C I and so b ∉ (∀S.C)I since ⟨b, a⟩ ∈ S I .
   Hence b ∈ (K u ¬∀S.C)I and a ∈ (∃R.(K u ¬∀S.C))I . So, a satisfies the right
   conclusion H v ∃R.(K u ¬∀S.C). The proof for R←∃ is similar.
 – Rule R←⊥ : This is a deterministic rule, but for its right premise K v ⊥ we may have
   K ≠ H. By Definition 1, however, we must have K ≺ H, and so, every branch of
   t(K) contains K v ⊥. Since K ≺ H, by the induction hypothesis, O |= K v ⊥ (if
   O ⊭ K v ⊥ there would be a branch in t(K) without K v ⊥). So, this case is not
   possible since a must satisfy the premise H v ∃R.K, but a ∉ (∃R.K)I = ∅.

Remark 2. Note that the second premises of rules R←∀ and R←∃ were not used in the
proof of Theorem 2. That is, these premises do not have any impact on soundness.
The purpose of these premises is to restrict the non-deterministic “guessing” made by
these rules to the cases that are sufficient for completeness. Note also that if the second
premises of these rules are entailed by the ontology, e.g., if they are derived deterministi-
cally, then the right conclusion is equivalent to H v ⊥, and thus it cannot be satisfied by a
that does not satisfy H v D. Hence the right conclusion is not necessary in this case,
and so, these rules could be applied deterministically as in Fig. 1. This strategy makes
sure that our procedure is deterministic when applied to Horn-ALCHI ontologies.

   To combine Theorems 1 and 2 and obtain soundness and completeness for our pro-
cedure, we need to define when the tableau is fully expanded under inferences.

Definition 2 (Fully Expanded Tableau). Let T = (H, ≺, t) be a tableau. A branch of
t(H) is closed if H v ⊥ occurs on this branch. Otherwise, the branch is called open. A
branch is fully expanded if there exists a set M of subsumptions closed under the rules
in Fig. 2 such that all subsumptions of the form H v U in M (with this H) occur on
this branch. T is fully expanded if every open branch of every t(H) is fully expanded.
    Note that each time all rules are reapplied without producing a local clash at Step
4 of procedure BEXP in Sect. 3.1, we obtain a set M closed under the rules (possibly
different from the previous one). So, we “fully expand” every branch in our procedure.
However, BEXP is far from enumerating all possible sets M closed under the inference
rules. E.g., we do not explore all possible combinations of branches in local tableaux.
Corollary 1 (Correctness). Let T = (H, ≺, t) be a fully expanded tableau for O. Then
for every H ∈ H and every D occurring in O, we have O |= H v D iff the subsumption
H v D is derived in every open branch of t(H).
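The check in Corollary 1 amounts to a simple post-processing step once t(H) is fully expanded: discard the closed branches and intersect the sets of subsumptions derived on the open ones. A minimal Python sketch of this step (the branch representation and the clash marker are illustrative stand-ins, not our actual data structures):

```python
# Sketch of Corollary 1: O |= H v D iff H v D is derived on every *open*
# branch of t(H); branches on which H v ⊥ is derived are closed and ignored.
CLASH = "BOT"  # illustrative stand-in for the subsumption H v ⊥

def entailed_subsumers(branches):
    """Intersect the subsumers derived on all open branches of t(H)."""
    open_branches = [b for b in branches if CLASH not in b]
    if not open_branches:  # every branch is closed: H is unsatisfiable
        return {CLASH}
    return set.intersection(*open_branches)

# Two illustrative branches for some H: the first derives a clash and is
# closed, so only the second (open) branch determines the subsumers.
print(sorted(entailed_subsumers([{"H", "E", CLASH}, {"H", "D"}])))
```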
    It is possible to prove that BEXP runs in exponential time in the size of O.
Informally, the running time is bounded by the sum of the sizes of all local tableaux.
There are at most exponentially many local tableaux (one for each conjunct H), and it
can further be shown that the size of each local tableau is also exponential. See the
technical report [8] for more formal and detailed arguments.

4   Discussion and Preliminary Evaluation

The branch exploration procedure BEXP described in Sect. 3.1 can be readily used for
classification, i.e., computing the entailed subsumptions between atomic concepts in O.
We need to iterate over all singleton conjuncts H of atomic concepts, and for each one
compute the set S(H) of atomic concepts, subsumptions with which occur on every
branch. Another way to compute atomic subsumers for H is to reduce subsumption
entailment to concept satisfiability testing, as done in tableau-based procedures: O |=
H v D iff O |= H u ¬D v ⊥. We can use our first procedure CSAT for this purpose.
The next example demonstrates the key differences between these two approaches.
Example 3 (Long Fork). Consider the ontology O containing the following axioms:
 A v B t C,   B v D,   C v D,   D v D1 u · · · u Dn   (n > 0).
The two branches in the tree for H = A contain the following sets of atomic subsumers
for A: S1 (A) = {A, B, D, D1 , . . . , Dn }, and S2 (A) = {A, C, D, D1 , . . . , Dn }. Hence
the procedure BEXP can find the set of common subsumers S(A) = S1 (A) ∩ S2 (A) =
{A, D, D1 , . . . , Dn } in O(n) steps. If we use CSAT instead and test satisfiability for
each candidate A v Di , (1 ≤ i ≤ n), then we potentially need O(n2 ) steps since for
each Hi = A u ¬Di we can produce O(n) conclusions A u ¬Di v Dj , 1 ≤ j ≤ n.
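For concreteness, the branch computation of this example can be sketched in a few lines of Python (the encoding of axioms as an implication map is purely illustrative; it covers only the deterministic axioms of O, with the single disjunction A v B t C handled by the two explicit branches):

```python
# Long-fork ontology of Example 3 with n = 3: the deterministic axioms
# B v D, C v D, and D v D1 u ... u Dn encoded as an implication map.
n = 3
implications = {
    "B": ["D"],
    "C": ["D"],
    "D": [f"D{i}" for i in range(1, n + 1)],
}

def saturate(derived, implications):
    """Apply the deterministic axioms X v Y until a fixpoint is reached."""
    changed = True
    while changed:
        changed = False
        for x, ys in implications.items():
            if x in derived:
                for y in ys:
                    if y not in derived:
                        derived.add(y)
                        changed = True
    return derived

# A v B t C: one branch assumes B, the other assumes C.
branches = [saturate({"A", choice}, implications) for choice in ("B", "C")]
common = set.intersection(*branches)  # subsumers derived on every branch
print(sorted(common))  # ['A', 'D', 'D1', 'D2', 'D3'], in O(n) steps overall
```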
    This difference between the two strategies can be observed not only on toy examples
but also on real ontologies (cf. [8] for evaluation results).
    Tableau reasoners usually implement further optimizations exploiting a partially
constructed taxonomy that can reduce the number of subsumption tests [2, 4]. If in
Example 3 the reasoner computes the subsumers for D first (using O(n) steps because
they are derived deterministically), then after the first (positive) subsumption test for
A v D, all subsumers for A are completely determined. This optimization can also be
applied to BEXP to reduce the set of candidate subsumers S(H) to be verified.
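The effect of this optimization on Example 3 can be sketched as follows (the subsumption_holds oracle and the test order are hypothetical; in an actual implementation the oracle would be a CSAT/BEXP subsumption test and the order would follow the partially constructed taxonomy):

```python
# Hypothetical sketch of the taxonomy optimization for Example 3 (n = 3):
# the subsumers of D are computed first (deterministically), so a single
# positive test A v? D settles all of D's subsumers for A at once.
known = {"D": {"D", "D1", "D2", "D3"}}  # precomputed part of the taxonomy

def subsumption_holds(sub, sup):
    # Stand-in for a CSAT/BEXP subsumption test; here we hard-code that
    # A is subsumed exactly by D and its subsumers.
    return sup in {"D", "D1", "D2", "D3"}

subsumers_of_a = {"A"}
tests = 0
for candidate in ["B", "C", "D", "D1", "D2", "D3"]:  # D before D1..Dn
    if candidate in subsumers_of_a:
        continue  # already settled by an earlier positive test
    tests += 1
    if subsumption_holds("A", candidate):
        # Reuse the precomputed subsumers of the candidate wholesale.
        subsumers_of_a |= known.get(candidate, {candidate})
print(sorted(subsumers_of_a), tests)  # 3 tests instead of 6
```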
Table 1. Comparison of the performance (in seconds) for classification of ontologies between the
new implementation and other reasoners. ELKN stands for ELK with N parallel worker threads.
Total time includes CSAT, BEXP, and taxonomy construction (transitive reduction).

               Ontology    CSAT  BEXP  Total  ConDOR  Konclude  ELK1  ELK4
               SNOMED CT   22.7     –   29.8    38.2      45.2   25.7  13.1
               GALEN-EL     2.3     –    2.6     3.6       2.7    2.1   1.0
               SCT-SEP     16.9  15.9   37.5    59.4     246.6    N/A   N/A


4.1       Preliminary Experimental Evaluation
We have implemented our non-deterministic procedure, currently for a subset of ALCH
without universal restrictions and negations (but with disjunctions). The implementation supports
transitive roles via the well-known encoding [6]. The main goal is to make a preliminary
comparison of the new non-deterministic procedure with the deterministic resolution-
style procedure of ConDOR [13], which supports this fragment. We have also made a
comparison with Konclude (v.0.5.0), which was the fastest tableau-based reasoner for
our test ontologies, and with ELK (v.0.4.1) on EL+ ontologies. The latter is done to
evaluate the potential overheads (our procedure works deterministically on Horn on-
tologies, so, in theory, it should behave like the EL+ procedure on EL+ ontologies).
    We used the July 2013 release of SNOMED CT,2 an EL+-restricted version of
GALEN, and the version of SNOMED CT with disjunctions mentioned in the intro-
duction (SCT-SEP). Performance is tested on classification and the results are averaged
over 5 runs (excluding 2 warm-ups). We used a PC with Intel Core i5-2520M 2.50GHz
CPU, running Java 1.6 with 4GB of RAM available to the JVM.
    Performance results of classification are presented in Table 1. The procedure works
in two steps as described in Sect. 3.1: satisfiability testing for all atomic concepts using
CSAT, then exploring branches to compute subsumers using BEXP. First, the results
show that our implementation is slightly faster than ConDOR and considerably faster
than Konclude on SCT-SEP. Second, it is comparable to ELK on deterministic EL+
ontologies. The numbers for ELK with 1 and 4 worker threads demonstrate that the
difference is mostly due to concurrency: our implementation does not perform compu-
tations in parallel yet. BEXP is not necessary for this case as everything is deterministic.

4.2       Summary and Future Research Directions
In this paper, we presented a non-deterministic consequence-based reasoning procedure
for ALCHI, proved its correctness, and reported some promising preliminary evaluation
results. The procedure retains many nice properties of previously known consequence-based
methods, such as optimal worst-case complexity, but also employs some tableau-like
features, such as backtracking, without the need for blocking to achieve termination.
     Our future plan is to extend the implementation to fully support ALCHI, integrate
some optimizations used in ELK (including concurrency), and investigate extensions of
the procedure with other constructors, such as number restrictions. We conjecture that
it is easier to support new features using non-deterministic rules rather than resolution.
      2
          http://www.ihtsdo.org/snomed-ct/
References
 1. Baader, F., Brandt, S., Lutz, C.: Pushing the EL envelope. In: Kaelbling, L., Saffiotti, A.
    (eds.) Proc. 19th Int. Joint Conf. on Artificial Intelligence (IJCAI’05). pp. 364–369. Profes-
    sional Book Center (2005)
 2. Baader, F., Hollunder, B., Nebel, B., Profitlich, H.J., Franconi, E.: An empirical analysis of
    optimization techniques for terminological representation systems. J. of Applied Intelligence
    4(2), 109–132 (1994)
 3. Bachmair, L., Ganzinger, H.: Equational reasoning in saturation-based theorem proving. In:
    Bibel, W., Schmitt, P.H. (eds.) Automated Deduction – A Basis for Applications, vol. I,
    chap. 11, pp. 353–397. Kluwer (1998)
 4. Glimm, B., Horrocks, I., Motik, B., Shearer, R., Stoilos, G.: A novel approach to ontology
    classification. J. of Web Semantics 14, 84–101 (2012)
 5. Gonçalves, R.S., Bail, S., Jiménez-Ruiz, E., Matentzoglu, N., Parsia, B., Glimm, B., Kaza-
    kov, Y.: OWL reasoner evaluation (ORE) workshop 2013 results: Short report. In: Proceed-
    ings of the 2nd International Workshop on OWL Reasoner Evaluation (ORE-2013), Ulm,
    Germany. pp. 1–18 (2013)
 6. Kazakov, Y.: Consequence-driven reasoning for Horn SHIQ ontologies. In: Boutilier, C.
    (ed.) Proc. 21st Int. Joint Conf. on Artificial Intelligence (IJCAI’09). pp. 2040–2045. IJCAI
    (2009)
 7. Kazakov, Y., Klinov, P.: Incremental reasoning in OWL EL without bookkeeping. In: Inter-
    national Semantic Web Conference. pp. 232–247 (2013)
 8. Kazakov, Y., Klinov, P.: Bridging the gap between tableau and consequence-based rea-
    soning. Technical report, The University of Ulm, Germany (2014), http://elk.semanticweb.org/publications/alc-tr-2014.pdf
 9. Kazakov, Y., Krötzsch, M., Simančík, F.: Concurrent classification of EL ontologies. In:
    Aroyo, L., Welty, C., Alani, H., Taylor, J., Bernstein, A., Kagal, L., Noy, N., Blomqvist,
    E. (eds.) Proc. 10th Int. Semantic Web Conf. (ISWC’11). LNCS, vol. 7032, pp. 305–320.
    Springer (2011)
10. Kazakov, Y., Krötzsch, M., Simančík, F.: The incredible ELK: From polynomial procedures
    to efficient reasoning with EL ontologies. J. of Automated Reasoning 53(1), 1–61 (2014)
11. Romero, A.A., Grau, B.C., Horrocks, I., Jiménez-Ruiz, E.: MORe: a modular OWL reasoner
    for ontology classification. In: Proceedings of the 2nd International Workshop on OWL Rea-
    soner Evaluation (ORE-2013), Ulm, Germany. pp. 61–67 (2013)
12. Sheeran, M., Stålmarck, G.: A tutorial on Stålmarck’s proof procedure for propositional
    logic. Formal Methods in System Design 16(1), 23–58 (2000)
13. Simančík, F., Kazakov, Y., Horrocks, I.: Consequence-based reasoning beyond Horn ontolo-
    gies. In: Walsh, T. (ed.) Proc. 22nd Int. Joint Conf. on Artificial Intelligence (IJCAI’11). pp.
    1093–1098. AAAI Press/IJCAI (2011)
14. Tsarkov, D., Palmisano, I.: Chainsaw: a metareasoner for large ontologies. In: Proceedings of
    the 2nd International Workshop on OWL Reasoner Evaluation (ORE-2013), Ulm, Germany
    (2012)