<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>DL</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>The Shape of EL Proofs: A Tale of Three Calculi</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christian Alrabbaa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Borgwardt</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philipp Herrmann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Krötzsch</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Theoretical Computer Science, Technische Universität Dresden</institution>
          ,
          <addr-line>01062 Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>38</volume>
      <fpage>3</fpage>
      <lpage>6</lpage>
      <abstract>
<p>Consequence-based reasoning can be used to construct proofs that explain entailments of description logic (DL) ontologies. In the literature, one can find multiple consequence-based calculi for reasoning in the ℰℒ family of DLs, each of which gives rise to proofs of different shapes. Here, we study three such calculi and the proofs they produce on a benchmark based on the OWL Reasoner Evaluation. The calculi are implemented using a translation into existential rules with stratified negation, which had already been demonstrated to be effective for the calculus of the Elk reasoner. We then use the rule engine Nemo to evaluate the rules and obtain traces of the rule execution. After translating these traces back into DL proofs, we compare them on several metrics that reflect different aspects of their complexity.</p>
      </abstract>
      <kwd-group>
        <kwd>Explanations</kwd>
        <kwd>Proofs</kwd>
        <kwd>Rule Engine</kwd>
        <kwd>Nemo</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Consequence-based reasoning for DLs has a long tradition, starting from the first reasoning algorithms
for ℰ ℒ with general concept inclusions [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], all the way up to expressive DLs like ℒℋℐ [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
Due to its favorable computational properties, it is employed in several reasoners, such as Elk, Konclude,
and Sequoia [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ]. Another advantage of consequence-based approaches is the possibility to extract
proofs consisting of step-wise derivations, in order to explain consequences of the ontology to an
ontology engineer. In principle, the relatively simple rules of a reasoning calculus can immediately be
used to construct proofs. However, in practice, this would have to be implemented by each
consequence-based reasoner, and is so far only supported by Elk [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Several approaches have been developed to
extract proofs from reasoners in a black-box fashion [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11</xref>
        ], which do not use a fixed set of rules
and often lead to more complicated proof steps, but smaller proofs overall [12]. Our main research
question is how proofs resulting from consequence-based calculi differ based on the shape of the rules.
Specifically, we explore which calculi may be more appropriate in certain scenarios—for instance,
producing proofs suitable for specific types of visualizations, or ones in which information is introduced
and resolved locally, resulting in a reasoning structure that is easier to follow. We investigate this on
three calculi for the ℰ ℒ family of description logics [
        <xref ref-type="bibr" rid="ref2 ref5 ref13">2, 5, 13</xref>
        ].
      </p>
      <p>We follow an approach to encode DL reasoning into (an extension of) Datalog rules [14, 15, 16]. To
execute these rules, we use Nemo, a Datalog-based rule engine that is easy to use and offers many
advanced features, such as datatypes, existential rules, aggregates, and stratified negation [16]. This has
already been demonstrated to be effective for the reasoning calculus of Elk, including pre-processing of
the ontology in OWL format,1 which essentially implements an ℰℒ⊥+ reasoner by existential rules with
stratified negation [16]. The advantage of this approach is that one can relatively quickly implement
new reasoning procedures with low effort.</p>
      <p>
        In this paper, we also consider two other calculi for ℰ ℒ-based logics from the literature: the original
ℰ ℒ++ calculus, denoted by Envelope [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] (restricted to ℰ ℒ⊥+, i.e., without nominals and concrete
domains), and the ℰ ℒ calculus Textbook [13] (extended to ℰ ℒℋ⊥, i.e., with role hierarchies and ⊥).
The modifications to the calculi allow us to compare them on a dataset of ℰ ℒℋ⊥ reasoning tasks
extracted from the 2015 OWL Reasoner Evaluation (ORE) [17]. After encoding them into rules as for the
Elk calculus, we use the tracing capabilities of Nemo to compute proofs for the derived consequences.
However, since Nemo traces are based on translated rules and may include statements not expressible
as DL axioms (e.g., side conditions), we must still transform them to obtain DL proofs suitable for
inspection by ontology engineers. Finally, we compare the shape of the resulting proofs using several
measures, such as the size, depth, directed cutwidth [18], and cognitive complexity of inference steps [19].</p>
    </sec>
    <sec id="sec-2">
      <title>2. Calculi and Proofs</title>
      <p>Let NC and NR be two disjoint, countably infinite sets of concept and role names, respectively. ℰℒ⊥+
concepts are defined by the grammar C, D ::= ⊤ | ⊥ | A | C ⊓ D | ∃r.C, where A ∈ NC and r ∈ NR.</p>
      <p>ℰℒ⊥+ axioms are either concept inclusions of the form C ⊑ D or complex role inclusions of the form
r1 ∘ · · · ∘ rn ⊑ s, where r1, . . . , rn, s ∈ NR. An ℰℒ⊥+ TBox 𝒯 is a finite set of ℰℒ⊥+ axioms. ℰℒℋ⊥ is the
fragment of ℰℒ⊥+ that only allows simple role inclusions of the form r ⊑ s. We assume the reader to be
familiar with the semantics of these logics, in particular the definition of entailment of an axiom α from
a TBox 𝒯, written 𝒯 |= α [13].</p>
      <p>Calculi. We consider three inference calculi for fragments of ℰℒ⊥+ that are tailored towards classification,
i.e., computing all entailments of the form 𝒯 |= A ⊑ B for A, B ∈ NC.</p>
      <p>
        Figure 1 shows the inference rules used in the reasoner Elk for ℰℒ⊥+ TBoxes. Side conditions are
marked in gray, where “C occurs negatively” means that C occurs within a concept on the left-hand
side of some axiom in 𝒯, and ⊑* denotes the precomputed role hierarchy, i.e., the transitive closure
over simple role inclusions. Furthermore, complex role inclusions are assumed to be in the normal
form r1 ∘ r2 ⊑ s, using only binary role composition. Axioms with longer role compositions can be
normalized with the help of fresh role names. Then, 𝒯 |= A ⊑ B holds if either A ⊑ B or A ⊑ ⊥ can
be derived by these rules from init(A) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Figure 2 shows the Textbook classification rules for ℰℒ from [13] with a slight modification to
rule CR5 and the addition of a variant of R⊥ from Elk, in order to support ℰℒℋ⊥. Here, all input and
derived concept inclusions are required to be of one of the forms A ⊑ B, A1 ⊓ A2 ⊑ B, A ⊑ ∃r.B,
or ∃r.A ⊑ B, where A, A1, A2, B are concept names from 𝒯, ⊤ or ⊥, which is again without loss of
generality. This calculus is correct in the same sense as the Elk calculus above, but, instead of the
init(A) statement, it is initialized with all axioms of the input TBox 𝒯.</p>
      <p>
        Figure 3 shows the Envelope rules for ℰℒ⊥+ from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], omitting rules CR6–CR9 for nominals and
concrete domains. For a consistent representation, we translate the statements “D ∈ S(C)” and
“(C, D) ∈ R(r)” from the original paper into C ⊑ D and C ⊑ ∃r.D, respectively. Note that C and D
must be concept names from 𝒯, ⊤ or ⊥. This calculus requires the concept and role inclusions in 𝒯
to be in normal form, and is correct in the same sense as for the other calculi, but starts only from the
tautologies A ⊑ A and A ⊑ ⊤ for all concept names A from 𝒯.
      </p>
      <p>[Figure 3: the Envelope inference rules CR1–CR5, CR10, and CR11; the figure content is not reproduced here.]</p>
      <p>
        Proofs. Following the notion introduced in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a proof of 𝒯 |= A ⊑ B, where A and B are concept
names, is a finite, acyclic, directed hypergraph, where each vertex v is labeled with an axiom ℓ(v).
Hyperedges represent sound inference steps and are of the form (S, v), where S is a set of vertices and
v is a vertex s.t. {ℓ(w) | w ∈ S} |= ℓ(v); note that the direction of the edge is from the vertices in S to
the vertex v. The leaf vertices (without incoming hyperedges) of a proof are labeled with axioms from
𝒯, and the unique root (without outgoing hyperedges) is labeled with A ⊑ B. Moreover, to obtain
smaller proofs, they are also required to be non-redundant, which means that the same axiom cannot
have multiple subproofs: no two vertices can have the same label, every vertex can have at most one
incoming hyperedge, and a vertex has no incoming hyperedge if it is labeled by an axiom from 𝒯.
Usually, hyperedges are additionally distinguished by a label, which corresponds to the rule name from
a calculus. However, for simplicity, we dispense with hyperedges altogether and simply view proofs as
directed acyclic graphs (with edges pointing towards the root) without rule names. The children of a
vertex v are then the premises of the (unique) inference step that was used to derive ℓ(v). The label
of a vertex without predecessors must then be either from 𝒯 or a tautology that can be derived using
a rule without premises. Moreover, since most proof visualizations [20, 21] are based on tree-shaped
proofs, we only consider the tree-shaped unravelings of these directed acyclic graphs in the following
(see Figure 4). Thus, there can be multiple vertices with the same label, but they must have isomorphic
subproofs (subtrees).
      </p>
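      <p>This unraveling can be sketched in a few lines of Python (our own illustration, not part of the calculi or their implementation), where a non-redundant proof is represented as a map from each axiom to the premises of its unique deriving inference:</p>

```python
# Minimal sketch (our own illustration): unraveling a non-redundant proof
# DAG into a tree-shaped proof. A proof maps each axiom to the premises of
# its unique deriving inference; leaves (TBox axioms) map to [].

def unravel(proof, axiom):
    """Return the tree rooted at `axiom` as a (label, children) pair."""
    return (axiom, [unravel(proof, p) for p in proof[axiom]])

def tree_size(tree):
    """Number of axiom occurrences in the unraveled tree."""
    label, children = tree
    return 1 + sum(tree_size(c) for c in children)

# Hypothetical toy proof: A ⊑ C is derived from the TBox axioms A ⊑ B, B ⊑ C.
proof = {"A ⊑ C": ["A ⊑ B", "B ⊑ C"], "A ⊑ B": [], "B ⊑ C": []}
tree = unravel(proof, "A ⊑ C")
```

      <p>Since every vertex has at most one incoming hyperedge, the unraveling is unique; shared vertices of the DAG simply reappear as isomorphic subtrees.</p>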
      <p>Consider, for example, the TBox = { ⊑ ,  ⊑ ∃.,  ⊑ , ∃. ⊑ ,  ⊑ ,  ⊑ } and
the entailment  |=  ⊑ . Since proofs are to be inspected by ontology engineers, they are restricted
to DL axioms. This means, however, that they cannot include side conditions and statements that are
not DL axioms (e.g., init()). Thus, we have to adapt the calculus rules before we can use them as
inference steps in proofs.</p>
      <p>We treat side conditions of the form C ⊑ D ∈ 𝒯 as ordinary premises C ⊑ D, since they are DL
axioms. However, this does not mean that R⊑ from Elk is now equivalent to CR3 from Textbook,
since the second premise in the former is still restricted to axioms from 𝒯.</p>
      <p>Similarly, we treat side conditions r ⊑* s as premises r ⊑ s. However, to obtain a valid proof, we
also have to derive these axioms from the TBox. For both the Elk and the Textbook calculus, we
therefore add the following two rules: the first derives r ⊑ r under the side condition that r ∈ NR occurs
negatively in 𝒯, and the second derives r ⊑ t from the premise r ⊑ s under the side condition s ⊑ t ∈ 𝒯.</p>
      <p>
        Next, we replace statements of the form C →r D in the Elk calculus by C ⊑ ∃r.D, which preserves
the soundness of the inference rules, and is similar to the treatment of “(C, D) ∈ R(r)” in the Envelope
calculus. We also omit init(C) statements, meaning that R0 and R1 from Elk become similar to CR1
and CR2 from Textbook, but they may only appear in a proof if init(C) was actually derived by the
original rules. We also add CR1 and CR2 to the Envelope calculus to express the initialization step,
which does not correspond to explicit rules in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. For the Elk and Envelope calculi, which are not
explicitly initialized with the TBox axioms, this results in proofs such as the following:
A ⊑ A   A ⊑ B
A ⊑ B
      </p>
      <p>However, this violates non-redundancy since A ⊑ B has multiple non-isomorphic proofs. In such cases,
we keep only a subproof of minimal size (in the example, only the leaf A ⊑ B). This also applies to
other inference rules, for example, R∃−, which now trivially derives C ⊑ ∃r.D from C ⊑ ∃r.D. Thus,
due to the above simplification, this rule will never appear in a proof.</p>
      <p>We also omit the side conditions of the form “C occurs negatively in 𝒯”. To represent them in our
proofs, we could add the axiom in which C occurs negatively as a premise; however, this axiom is not
logically necessary for the inference step, and may be confusing because it has no connection to the
inference other than the occurrence of C. For example, consider the following application of R∃+, which
also requires that ∃r.E occurs negatively in 𝒯:
C ⊑ ∃r.D   D ⊑ E
C ⊑ ∃r.E
C ⊑ F
      </p>
      <p>We could add ∃r.E ⊑ F as a premise to express the side condition, but this axiom is not required to
derive C ⊑ ∃r.E. Moreover, since ∃r.E ⊑ F is also used in the subsequent application of R⊑, having it
appear in two consecutive steps—while only being necessary for one—may be unnecessarily confusing.</p>
      <p>The proofs resulting from all these considerations can be seen in Figure 4, where we draw them as
trees to emphasize their shape, which we analyze in the following. Despite the transformations and
simplifications we applied, the three calculi can still result in substantially different proofs, even for
such a simple entailment.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Measures</title>
      <p>
        We evaluate the shape of proofs by several measures from the literature, which reflect different aspects
of how ontology engineers can inspect a proof in different representations (see Figure 5).
Size. The size of a proof [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] is the number of occurrences of axioms in the proof tree. For example, if
ontology engineers need to inspect the whole proof because they are unfamiliar with the ontology, the
size of the proof influences how much time they will need to inspect it. The size is proportional to the
height of a linear representation of the proof or a nested list visualization as in a file browser, e.g., in
the proof explanation plugin for Protégé [20].
      </p>
      <p>Depth. The depth of a proof [22] is the length of the longest path from a leaf to the root. This is related
to the task of finding an error in the ontology based on an erroneous inference. In this case, ontology
engineers would rather inspect the proof only along the “most suspicious-looking” branch (or a few
such branches), and thus the time they may need in the worst case depends on the length of the longest
branch. The depth is also proportional to the height of a more traditional proof-tree representation, as
used in Evonne [21].</p>
      <p>The width of such a tree visualization, assuming that adjacent subtrees are non-overlapping in the
sense that nodes from one subtree are not positioned vertically above nodes from another subtree,
can be expressed as a function of the number of leaves in the proof, which is also called its justification
size [23]. However, we do not consider the justification size in this paper, since it does not depend on
the reasoning calculus.</p>
      <p>Cutwidth. The (directed) cutwidth of a graph tries to measure how linear it is [18]. Given a linear
vertical arrangement of a proof’s nodes [21], the cutwidth is the maximal number of edges that would
be afected when cutting the graph horizontally between any two consecutive nodes (see Figure 5). The
cutwidth of the graph is then the minimum of the cutwidths of all possible such linear arrangements.
Hence, it is proportional to the maximal number of intermediate axioms an ontology engineer needs to
keep in memory when reading the proof in such an (optimal) linear representation from top to bottom.
Bushiness Score. We developed another score to measure how “bushy” (non-linear) a proof is. It
is computed as the ratio between the size of the proof and its depth (+1 for the root). Hence, it can
be interpreted as the average number of vertices per level (see Figure 5). A completely linear proof in
which all inference steps have only one premise results in a bushiness score of 1 (not bushy at all), and
the full binary tree with five levels gets a score of 31/5 = 6.2 (very bushy).</p>
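      <p>The size, depth, and bushiness measures can be sketched in a few lines of Python (our own illustration; proofs are represented as nested (label, children) pairs):</p>

```python
# Illustrative sketch (our own) of three proof measures on tree-shaped
# proofs, represented as (label, [children]) pairs.

def size(t):
    return 1 + sum(size(c) for c in t[1])

def depth(t):
    return 0 if not t[1] else 1 + max(depth(c) for c in t[1])

def bushiness(t):
    # size divided by the number of levels (depth + 1 for the root)
    return size(t) / (depth(t) + 1)

def full_binary(levels):
    # full binary tree with the given number of levels, used as a test shape
    if levels == 1:
        return ("v", [])
    child = full_binary(levels - 1)
    return ("v", [child, child])

linear = ("a", [("b", [("c", [])])])  # every step has a single premise
```

      <p>For the linear proof the score is 3/3 = 1, and for the full binary tree with five levels it is 31/5 = 6.2, matching the values above.</p>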
      <p>Step Complexity. Finally, we consider a measure based on the cognitive complexity of inference steps,
which has been proposed in the context of justifications, and has been evaluated in user studies [19].
It reflects multiple aspects of the syntactical structure of the argument, such as the depth of involved
concepts, how many constructors and axiom types are used, or whether it uses the triviality of a concept
name (i.e., being equivalent to ⊥ or ⊤). Here, we consider the average of the step complexities of the
individual inference steps in the proof.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Directed Cutwidth on Trees</title>
      <p>Computing cutwidth is NP-complete in general [18], and the general-purpose implementations we tried
could not compute the metric for our complete evaluation set in a feasible time. The problem becomes
polynomial on trees [24], but the algorithm is complicated, and we are not aware of any implementation.
Directed cutwidth on trees, however, admits a far simpler algorithm, which we now introduce and prove
to be correct, since we could not find any publication that establishes this result.</p>
      <p>Consider a tree T = ⟨V, E⟩ with vertices V and directed edges E pointing towards the leaves.2 A
serialization of T is a word w ∈ V* that contains each vertex v ∈ V at a unique position pv ∈ {1, . . . , |V|}
such that ⟨u, v⟩ ∈ E implies pu &lt; pv. For i ∈ {1, . . . , |V| − 1}, the number of edges cut in the ith
gap between vertices in w is cut(i) = |{⟨u, v⟩ ∈ E | pu ≤ i &lt; pv}|. The cutwidth cw(w) of w is
max1≤i&lt;|V| cut(i), and the directed cutwidth cw(T) of T is the minimal cutwidth of any serialization
of T. The out-degree of any vertex in T is a lower bound for its directed cutwidth, though it can be
higher (e.g., for a full binary tree T, cw(T) is depth + 1). We omit “directed” below, since we consider no
other kind of cutwidth on trees.</p>
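      <p>On small trees, these definitions can be checked directly by brute force. The following Python sketch (our own illustration, not part of the paper's tooling) enumerates all serializations and evaluates cut(i) at every gap:</p>

```python
from itertools import permutations

# Brute-force check of the definitions above (our own sketch, feasible only
# for a handful of vertices): enumerate all serializations of a tree with
# edges pointing towards the leaves, and minimize the maximal cut.
def cutwidth(vertices, edges):
    best = None
    for order in permutations(vertices):
        pos = {v: i + 1 for i, v in enumerate(order)}  # 1-based positions
        if any(pos[u] >= pos[v] for u, v in edges):
            continue  # not a serialization: a child precedes its parent
        # cut(i) counts the edges ⟨u,v⟩ with pos[u] <= i < pos[v]
        width = max(sum(1 for u, v in edges if pos[u] <= i < pos[v])
                    for i in range(1, len(order)))
        best = width if best is None else min(best, width)
    return best

# Full binary tree with three levels: every out-degree is 2, but the
# directed cutwidth is depth + 1 = 3, as stated above.
tree_edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
```

      <p>This exhaustive search is factorial in the number of vertices and only serves to validate the definitions on toy inputs.</p>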
      <p>Definition 1. The standard serialization σ(T) of a tree T = ⟨V, E⟩ is defined inductively. If V = {v},
then σ(T) := v. If |V| &gt; 1, then let v be the root of T and let T1, . . . , Tℓ be its direct subtrees, ordered
such that i &lt; j implies cw(σ(Ti)) ≤ cw(σ(Tj)); then σ(T) := v σ(T1) · · · σ(Tℓ).</p>
      <p>For non-singleton trees T, each of the sub-sequences σ(Ti) has ℓ − i “overarching” edges from
the root to the roots of later Tj, j &gt; i (see also Figure 5). Therefore, if the root of T has ℓ children,
cw(σ(T)) = max({ℓ} ∪ {cw(σ(Ti)) + ℓ − i | 1 ≤ i ≤ ℓ}). It is easy to compute σ(T) and cw(σ(T)) in
a bottom-up fashion. To do this in small steps, after computing σ(T′) and cw(σ(T′)) for all child trees
T′ of a vertex v, we can add the children one by one, in an order of non-decreasing cutwidth.
The following lemma is the key to showing that this recursive procedure yields, for each partial subtree
(with only some child trees added yet), the exact cutwidth. Its proof can be found in the extended
version of the paper [25].</p>
      <p>Lemma 1. For i ∈ {1, 2}, let Ti be a tree with root vi and cutwidth ci = cw(Ti) = cw(wi) for an optimal
serialization wi. Assume that S1, . . . , Sℓ are the direct child trees below v1 in an order of non-decreasing
cutwidth, and that c2 ≥ cw(Sj) for all 1 ≤ j ≤ ℓ. Then the tree T obtained from T1 by adding T2 as an
additional direct child tree below v1 has cutwidth cw(T) = max(c1 + 1, c2), and an optimal serialization
is w1w2.</p>
      <p>Now we can perform an easy induction over the structure of T to show that the recursive computation
of cw(σ(T)) produces cw(T). The induction base is given by single-node trees (leaves), and the step is Lemma 1.
Theorem 1. For trees T, cw(T) = cw(σ(T)), which can thus be computed in polynomial time.</p>
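      <p>Definition 1 and the formula above translate into a few lines of Python (a minimal sketch of our own, representing a tree as the list of its direct subtrees):</p>

```python
# Minimal sketch (our own) of computing directed cutwidth via the standard
# serialization: sort the direct subtrees by non-decreasing cutwidth and
# apply cw(σ(T)) = max({ℓ} ∪ {cw(σ(T_i)) + ℓ − i | 1 ≤ i ≤ ℓ}).

def cw(tree):
    if not tree:          # a single vertex has no gaps
        return 0
    widths = sorted(cw(child) for child in tree)  # non-decreasing order
    n = len(widths)
    return max([n] + [w + n - i for i, w in enumerate(widths, start=1)])

leaf = []
binary5 = leaf
for _ in range(4):        # full binary tree with five levels
    binary5 = [binary5, binary5]
```

      <p>On a chain this yields cutwidth 1, and on the full binary tree with five levels it yields depth + 1 = 5, matching the bound stated above.</p>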
    </sec>
    <sec id="sec-5">
      <title>5. Implementation</title>
      <p>We now describe the encoding of the calculi into existential rules with stratified negation in Nemo syntax,
and the subsequent translation of rule-based reasoning traces into DL proofs. We have implemented
this in the DL explanation library Evee.3</p>
      <sec id="sec-5-1">
        <title>5.1. Existential Rules</title>
        <p>The implementation of the Elk calculus in existential rules with stratified negation is split into three
stages [16]. The first stage consists of RDF import and normalization. Here, the input OWL ontology
is translated into facts over several predicates with the prefix nf. These facts serve as the input for
the calculus, e.g., nf:subClassOf(A,B) for A ⊑ B ∈ 𝒯.4 The normalization takes advantage of the
OWL RDF encoding, which already contains a blank node for each complex concept that can be used as
an auxiliary concept name identifying that concept. For example, nf:exists(?C,?R,?D) states that
?C is an auxiliary concept name representing the existential restriction with role ?R and filler concept
?D. All such auxiliary concept names are marked by the predicate auxClass, which allows identifying
the named concepts occurring in the input ontology using the rule</p>
        <p>nf:isMainClass(?X) :- class(?X), ~auxClass(?X) .</p>
        <p>2This edge direction is more intuitive for this section, but our results are readily applied to proof trees with edges oriented the
other way (see Figures 4 and 5). Indeed, directed cutwidth is the same for the dual ordering.
3See https://github.com/de-tu-dresden-inf-lat/evee/tree/nemo-extractor/evee-nemo-proof-extractor
4More precisely, concept names are identified by IRIs, e.g., &lt;http://example.org/iriA&gt;, but we omit them here.</p>
        <p>Auxiliary role names are treated similarly. Since there can be multiple blank nodes denoting the same
complex concept, there are additional existential rules that create unique representatives for these
concepts, in order to avoid redundant computations. Additionally, the normalization stage precomputes
the role hierarchy ⊑* in the predicate nf:subProp.</p>
        <p>The second stage consists of the actual inference rules of the Elk calculus, which derive additional
facts over predicates with the prefix inf. Due to the normalization, the implementation of these rules
is straightforward and close to the original calculus, for example for the rule R⊑:</p>
        <p>inf:subClassOf(?C,?E) :- inf:subClassOf(?C,?D), nf:subClassOf(?D,?E) .</p>
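        <p>To illustrate the fixpoint semantics behind such rules, the following Python sketch (our own, not the Nemo engine) saturates a set of derived subsumptions under the R⊑ rule:</p>

```python
# Our own sketch (not the Nemo engine): naive fixpoint evaluation of the
# R⊑ rule, composing derived subsumptions with TBox axioms until no new
# inf:subClassOf fact can be added.

def saturate(nf_subclass, init):
    inf = set(init)                       # initially derived subsumptions
    changed = True
    while changed:
        changed = False
        for (c, d) in list(inf):
            for (d2, e) in nf_subclass:   # nf:subClassOf(?D,?E)
                if d == d2 and (c, e) not in inf:
                    inf.add((c, e))       # inf:subClassOf(?C,?E)
                    changed = True
    return inf

tbox = {("A", "B"), ("B", "C")}
derived = saturate(tbox, {("A", "A"), ("B", "B"), ("C", "C")})
```

        <p>Starting from the tautologies, the loop first derives A ⊑ B and B ⊑ C, and then A ⊑ C; a real Datalog engine computes the same fixpoint far more efficiently with semi-naive evaluation.</p>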
        <p>In the third stage, the interesting entailments, namely subsumptions between concept names, are
extracted with the help of the inf:subClassOf and nf:isMainClass predicates.
Modifications. We slightly adapted these rules for our purposes. All modifications are confined to the
normalization and extraction; the inference rules of the calculus are not affected.</p>
        <p>
          To the normalization stage, we added an inference rule that derives ⊥ ⊑ A for each concept name A
in the input, which ensures that C ⊑ A is inferred for any concept C with 𝒯 |= C ⊑ ⊥. Such reasoning
with the ⊥-concept is handled outside the calculus of Kazakov et al. [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]; we explicitly included it to
obtain a complete reasoner for classification.
        </p>
        <p>Next, we updated the precomputation of the role hierarchy, which is recursively computed in a
top-down manner, starting with the largest role in a set of connected role inclusions, rather than in a
bottom-up fashion (as in the original rules).</p>
        <p>directSubProp(?R,?S) :- TRIPLE(?R,rdfs:subPropertyOf,?S) .
nf:subProp(?R,?R) :- nf:exists(?C,?R,?D), nf:isSubClass(?C) .</p>
        <p>nf:subProp(?R,?T) :- directSubProp(?R,?S), nf:subProp(?S,?T) .</p>
        <p>For example, starting from r ⊑ s, s ⊑ t, and t ⊑ t, we first derive s ⊑ t and then r ⊑ t. This modification
is necessary, because the original implementation did not compute the role hierarchy correctly. It
started from roles r occurring negatively in 𝒯 (using nf:isSubClass as above), but only computed
the role hierarchy above r, the opposite of what is needed by R∃+.</p>
        <p>We also added support for transitivity and domain axioms by introducing two rules that translate
domain(r, A) into ∃r.⊤ ⊑ A and trans(r) into r ∘ r ⊑ r. Moreover, we support the current OWL 2
encoding of role chains, i.e., owl:propertyChainAxiom, including role chains with more than two
role names. Finally, we added the possibility to prove equivalence axioms A ≡ B directly from A ⊑ B
and B ⊑ A.</p>
        <p>Other Calculi. The same normalization is also utilized in the implementation of the Textbook and
Envelope calculi. Additionally, we added the normalization predicates nf:subClassConj(?C,?D,?E)
for C ⊓ D ⊑ E, nf:subClassEx(?R,?C,?D) for ∃r.C ⊑ D, and nf:supClassEx(?C,?R,?D) for
C ⊑ ∃r.D, according to the normal forms expected by these calculi. The main inference rules of both
calculi are implemented as expected, for example</p>
        <p>inf:subClassOf(?C,?F) :- inf:supClassEx(?C,?R,?D),
inf:subClassOf(?D,?E), nf:subProp(?R,?S), inf:subClassEx(?S,?E,?F) .</p>
        <p>for CR5′ from the Textbook calculus, and</p>
        <p>
          R(?C,?R,?E) :- S(?C,?D), nf:supClassEx(?D,?R,?E) .
for CR3 from the Envelope calculus, where we used the original names S and R [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Translation to OWL Proofs</title>
        <p>Nemo allows us to obtain a trace of how a particular entailment, e.g., inf:subClassOf(A,B), was
obtained using the rules. Such a trace is similar to a DL proof, but the vertices are labeled with ground
facts instead of DL axioms. In the Java class NemoProofParser, we implemented a translation from
such traces into valid DL proofs, which reads a Nemo trace in JSON format into an IProof&lt;String&gt;
object, where each String represents a fact, and outputs an IProof&lt;OWLAxiom&gt; object5 that can be
further processed or again saved in JSON format.</p>
        <p>The abstract class AbstractAtomParser, instantiated for each of the three calculi, translates
facts into DL axioms, e.g., inf:subClassOf(A,B) into A ⊑ B and nf:supClassEx(A,r,B) into
A ⊑ ∃r.B. Any fact that does not have a corresponding DL axiom, e.g., inf:init(A), is translated into
⊥ ⊑ ⊤, which indicates that the atom is skipped and will not appear in the final proof (see Section 2).
In particular, traces contain many normalization steps that use various auxiliary predicates, which do
not show up in the final DL proof.</p>
        <p>Auxiliary concept names that are represented by blank nodes (either from the RDF encoding or
from existential rules) need special treatment, since they would not make sense in the translated proof.
Instead, we replace them by the concepts they denote. Since a fact containing a blank node, such as
inf:subClassOf(A,_:61), does not contain information about the complex concept the blank node
identifies, we need to collect other relevant facts from the trace. For example, nf:exists(_:61,t,D)
encodes that _:61 stands for ∃t.D. If D is also a blank node, then we have to recursively consider more
facts until we can construct the final concept.</p>
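        <p>This recursive replacement can be sketched as follows (our own Python illustration with hypothetical trace facts; the actual translation is part of the Java implementation):</p>

```python
# Our own sketch (not the Java implementation): expanding auxiliary blank
# nodes into the concepts they denote, using collected nf:exists facts.
# exists_facts maps a blank node C to (R, D), meaning C stands for ∃R.D.

def resolve(concept, exists_facts):
    if concept in exists_facts:
        role, filler = exists_facts[concept]
        return f"∃{role}.{resolve(filler, exists_facts)}"
    return concept  # a named concept needs no expansion

# Hypothetical trace facts: _:61 stands for ∃t.(∃s.D).
exists_facts = {"_:61": ("t", "_:62"), "_:62": ("s", "D")}
```

        <p>Conjunctions represented by blank nodes would be handled analogously, recursing into each conjunct.</p>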
        <p>However, this approach fails for complex role inclusions with auxiliary role names. For example, if
r ∘ s ∘ t ⊑ u is normalized into nf:subPropChain(r,s,_:5) and nf:subPropChain(_:5,t,u),
we cannot simply replace _:5 by r ∘ s, as this would yield r ∘ s ⊑ r ∘ s, which is not expressible in
OWL due to r ∘ s on the right-hand side. Still, this axiom is a tautology, and can be omitted without
affecting the correctness of the inference. During reasoning, auxiliary roles can appear in existential
restrictions, e.g., in inf:supClassEx(C,_:5,D), which we translate into multiple nested existential
restrictions: C ⊑ ∃r.∃s.D.</p>
        <p>Lastly, after translating the inference steps in the described manner, we minimize the size of the
resulting OWL proof using the MinimalProofExtractor class, which eliminates redundant inferences
and unnecessary tautologies [22]. In particular, this removes all occurrences of ⊥ ⊑ ⊤. As a result, we
obtain a correct and complete DL proof from the Nemo reasoning trace.</p>
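        <p>The removal of skipped vertices can be sketched as follows (our own simplified Python illustration, not the actual MinimalProofExtractor code): vertices labeled ⊥ ⊑ ⊤ are spliced out, and their premises are attached directly to the consuming inference:</p>

```python
# Our own simplified sketch (not MinimalProofExtractor): removing skipped
# vertices (labeled "⊥ ⊑ ⊤") from a proof tree by splicing in their premises.
SKIP = "⊥ ⊑ ⊤"

def prune(tree):
    label, children = tree
    flat = []
    for child in children:
        pruned = prune(child)
        if pruned[0] == SKIP:
            flat.extend(pruned[1])   # attach the skipped vertex's premises
        else:
            flat.append(pruned)
    return (label, flat)

# Hypothetical proof where a normalization step was marked as skipped:
proof = ("A ⊑ C", [(SKIP, [("A ⊑ B", [])]), ("B ⊑ C", [])])
```

        <p>The actual implementation additionally minimizes the proof size and removes unnecessary tautologies, as described above.</p>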
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Evaluation</title>
      <p>
        We compared the proofs resulting from the three calculi on a benchmark of 1,573 reasoning tasks that
were extracted from the ORE 2015 ontologies [
        <xref ref-type="bibr" rid="ref17 ref11">17, 11</xref>
        ]. Each reasoning task consists of a justification
and the entailed axiom, which avoids the overhead of having to deal with a large ontology and lets us
focus on the structure of the inference steps used to prove the entailments. The benchmark covers all
types of entailments that hold in the ORE ontologies, modulo renaming of concept and role names (and
some timeouts). The resulting proofs and measurements can be found on Zenodo [27]. The extended
version of the paper [25] contains detailed graphs for pairwise comparisons of the calculi.
      </p>
      <p>From the structure of the calculi, we expected proofs of the Textbook calculus to be less linear
and shallower, i.e., have larger directed cutwidth, larger bushiness score and smaller depth, because it
does not restrict any premises of the inference rules to be from the input TBox (see also Figure 5), and
therefore allows for more balanced proof trees. This was confirmed in the experiments, with the directed
cutwidth of Textbook proofs being higher in 1,486 and 1,381 cases compared to Elk and Envelope,
respectively, and never lower. Similarly, the bushiness score is higher for 1,534 and 1,512 proofs, and
lower in only 12 and 23 cases, respectively. Conversely, the depth is lower for 1,503 proofs, and higher
in only 8 cases, compared to both other calculi. We conjecture that the depth of the Textbook proofs is
approximately logarithmic in the depth of the corresponding Elk or Envelope proofs, and that the
relationship for directed cutwidth and bushiness score is exponential. On the same three measures, the
Elk and Envelope proofs behave very similarly, with Envelope proofs having slightly higher directed
cutwidth and bushiness score, but nearly identical depth, compared to the Elk proofs.</p>
      <p>5OWLAxiom is part of the Java OWL API [26].</p>
      <p>We also compared directed cutwidth and bushiness score, since they both try to measure how linear
proofs are. We observe that there is indeed a correlation between them; however, whereas directed
cutwidth is always a natural number, bushiness score allows a more fine-grained comparison of proofs,
and also tends to increase faster than the directed cutwidth.</p>
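      <p>For small proofs, directed cutwidth can be computed exactly by brute force. The sketch below is illustrative only: it assumes directed cutwidth is the minimum, over all topological orderings of the proof graph (edges from premise to conclusion), of the maximum number of edges crossing a gap between consecutive positions. It contrasts a linear chain of inferences with a small balanced proof tree.</p>

```python
from itertools import permutations

def directed_cutwidth(nodes, edges):
    """Brute-force directed cutwidth: minimum over topological orderings
    of the maximum number of edges crossing any gap in the ordering."""
    best = None
    for order in permutations(nodes):
        pos = {v: i for i, v in enumerate(order)}
        if any(pos[u] >= pos[v] for u, v in edges):
            continue  # skip orderings that are not topological
        # edge (u, v) crosses gap i iff u is placed at or before i and v after
        width = max(
            sum(1 for u, v in edges if pos[u] <= i < pos[v])
            for i in range(len(order) - 1)
        )
        best = width if best is None else min(best, width)
    return best

# A pure chain of inferences: each conclusion feeds the next step.
chain_nodes = list("abcd")
chain_edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(directed_cutwidth(chain_nodes, chain_edges))  # 1

# A balanced tree: leaves a-d, inner steps e = (a,b), f = (c,d), root g.
tree_nodes = list("abcdefg")
tree_edges = [("a", "e"), ("b", "e"), ("c", "f"), ("d", "f"),
              ("e", "g"), ("f", "g")]
print(directed_cutwidth(tree_nodes, tree_edges))  # 3
```

      <p>The chain admits an ordering in which at most one pending conclusion must be kept "open" at any point, whereas every topological ordering of the balanced tree must at some point hold three pending edges, which is why bushier proofs exhibit larger directed cutwidth.</p>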
      <p>For the remaining measures of size and average step complexity, the Elk calculus is the outlier, with
both lower size (in 1,025 and 584 cases compared to Envelope and Textbook, respectively) and lower
average step complexity (1,281 and 1,062 proofs, respectively). Here, Envelope and Textbook obtain
very similar results, with slightly lower values for Textbook.</p>
      <p>There is no clear winner across all the measures we considered. However, depending on specific use
cases, proofs generated using certain calculi may be preferable. Overall, proofs generated with
the Elk calculus seem to be the better choice for providing explanations of entailments, since they are
smaller and rely on simpler inferences. Additionally, Elk proofs have lower cutwidth and bushiness
scores, indicating that axioms are used as soon as possible in the proof, which can be an advantage
when reading the proofs in a linear format (see Figure 5). Conversely, the low depth of Textbook
proofs may be a better option for other types of visualizations, since it can reduce the amount of vertical
or horizontal scrolling required, which also allows users to inspect the proof in a more linear manner.
Our experiments do not show a specific advantage for proofs generated using the Envelope calculus
over the other two.</p>
      <sec id="sec-6-1">
        <title>6.1. Limitations</title>
        <p>
          The benchmarks are not fully representative of general ℰℒ⊥ proofs, mainly for two reasons. First, the
benchmark does not contain all possible justifications and entailments from the ORE 2015 ontologies,
since for some cases it was too costly to compute all justifications [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Second, Nemo always computes
only one trace, even if there are different combinations of inference steps that result in the same axiom.
Since this is done by a specific algorithm in Nemo, there is a systematic bias in the resulting proofs. For
example, for the Textbook calculus, instead of very bushy proofs, Nemo could also have returned more
linear proofs, as for the other calculi. In this case, Nemo traces seem to be biased towards smaller depth,
which allows us to observe the differences between the calculi and confirms our initial intuitions. In
future work, we plan to reevaluate this once Nemo is extended to consider all possible traces rather
than just one.
        </p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>
        We compared the proofs obtained from three reasoning calculi for the ℰℒ family of DLs. This was
facilitated by Nemo, a powerful rule engine that allowed us to quickly implement the calculi without
having to develop a dedicated reasoner. As expected, the Textbook calculus [13] indeed produces
bushier and shallower proofs, and it turns out that the Elk calculus [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] generally yields smaller
proofs whose inference steps are on average less complex [19]. This enables us to choose specific
calculi for different purposes, e.g., to show proofs in a visualization format where the screen space
is restricted either horizontally or vertically, or when the goal is to be able to understand individual
inference steps more quickly. In the future, we want to apply this method to consequence-based calculi
for more expressive logics beyond the ℰℒ family.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work was supported by BMFTR and DAAD in project 57616814 (SECAI, School of Embedded and
Composite AI), and by DFG in grant 389792660 (TRR 248: CPEC, https://perspicuous-computing.science).</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT 4 in order to: paraphrase and reword,
and check grammar and spelling. After using this tool, the authors reviewed and edited the content as
needed and take full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-10">
      <title>References</title>
      <p>[12] C. Alrabbaa, S. Borgwardt, T. Friese, A. Hirsch, N. Knieriemen, P. Koopmann, A. Kovtunova, A. Krüger, A. Popovic, I. S. R. Siahaan, Explaining reasoning results for OWL ontologies with Evee, in: P. Marquis, M. Ortiz, M. Pagnucco (Eds.), Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning (KR 2024), 2024. doi:10.24963/KR.2024/67.</p>
      <p>[13] F. Baader, I. Horrocks, C. Lutz, U. Sattler, An Introduction to Description Logic, Cambridge University Press, 2017. URL: http://dltextbook.org/. doi:10.1017/9781139025355.</p>
      <p>[14] M. Krötzsch, Efficient rule-based inferencing for OWL EL, in: T. Walsh (Ed.), Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI'11), IJCAI/AAAI, 2011, pp. 2668–2673. doi:10.5591/978-1-57735-516-8/IJCAI11-444.</p>
      <p>[15] D. Carral, I. Dragoste, M. Krötzsch, Reasoner = logical calculus + rule engine, Künstliche Intell. 34 (2020) 453–463. doi:10.1007/S13218-020-00667-6.</p>
      <p>[16] A. Ivliev, L. Gerlach, S. Meusel, J. Steinberg, M. Krötzsch, Nemo: Your friendly and versatile rule reasoning toolkit, in: P. Marquis, M. Ortiz, M. Pagnucco (Eds.), Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning (KR'24), 2024. doi:10.24963/KR.2024/70.</p>
      <p>[17] B. Parsia, N. Matentzoglu, R. S. Gonçalves, B. Glimm, A. Steigmiller, The OWL reasoner evaluation (ORE) 2015 competition report, J. Autom. Reason. 59 (2017) 455–482. doi:10.1007/S10817-017-9406-8.</p>
      <p>[18] H. L. Bodlaender, M. R. Fellows, D. M. Thilikos, Derivation of algorithms for cutwidth and related graph layout parameters, J. Comput. Syst. Sci. 75 (2009) 231–244. doi:10.1016/J.JCSS.2008.10.003.</p>
      <p>[19] M. Horridge, S. Bail, B. Parsia, U. Sattler, Toward cognitive support for OWL justifications, Knowl. Based Syst. 53 (2013) 66–79. doi:10.1016/j.knosys.2013.08.021.</p>
      <p>[20] Y. Kazakov, P. Klinov, A. Stupnikov, Towards reusable explanation services in Protege, in: A. Artale, B. Glimm, R. Kontchakov (Eds.), Proceedings of the 30th International Workshop on Description Logics (DL'17), volume 1879 of CEUR Workshop Proceedings, CEUR-WS.org, 2017. URL: http://ceur-ws.org/Vol-1879/paper31.pdf.</p>
      <p>[21] J. Méndez, C. Alrabbaa, P. Koopmann, R. Langner, F. Baader, R. Dachselt, Evonne: A visual tool for explaining reasoning with OWL ontologies and supporting interactive debugging, Computer Graphics Forum (2023). doi:10.1111/cgf.14730.</p>
      <p>[22] C. Alrabbaa, F. Baader, S. Borgwardt, P. Koopmann, A. Kovtunova, Finding good proofs for description logic entailments using recursive quality measures, in: A. Platzer, G. Sutcliffe (Eds.), Proceedings of the 28th International Conference on Automated Deduction (CADE'21), volume 12699 of LNCS, Springer, 2021, pp. 291–308. doi:10.1007/978-3-030-79876-5_17.</p>
      <p>[23] C. Alrabbaa, F. Baader, S. Borgwardt, P. Koopmann, A. Kovtunova, On the complexity of finding good proofs for description logic entailments, in: S. Borgwardt, T. Meyer (Eds.), Proceedings of the 33rd International Workshop on Description Logics (DL 2020), volume 2663 of CEUR Workshop Proceedings, CEUR-WS.org, 2020. URL: http://ceur-ws.org/Vol-2663/paper-1.pdf.</p>
      <p>[24] M. Yannakakis, A polynomial algorithm for the min-cut linear arrangement of trees, J. ACM 32 (1985) 950–988. doi:10.1145/4221.4228.</p>
      <p>[25] C. Alrabbaa, S. Borgwardt, P. Herrmann, M. Krötzsch, The shape of ℰℒ proofs: A tale of three calculi (extended version), 2025. doi:10.48550/arXiv.2507.21851.</p>
      <p>[26] M. Horridge, S. Bechhofer, The OWL API: A Java API for OWL ontologies, Semantic Web 2 (2011) 11–21. doi:10.3233/SW-2011-0025.</p>
      <p>[27] C. Alrabbaa, S. Borgwardt, P. Herrmann, M. Krötzsch, The shape of EL proofs: A tale of three calculi - DL25 - Resources, 2025. doi:10.5281/zenodo.16320822.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] S. Brandt, Polynomial time reasoning in a description logic with existential restrictions, GCI axioms, and-what else?, in: R. L. de Mántaras, L. Saitta (Eds.), Proceedings of the 16th European Conference on Artificial Intelligence (ECAI'04), IOS Press, 2004, pp. 298–302. URL: https://www.frontiersinai.com/ecai/ecai2004/ecai04/p0298.html.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] F. Baader, S. Brandt, C. Lutz, Pushing the EL envelope, in: L. P. Kaelbling, A. Saffiotti (Eds.), Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI'05), Professional Book Center, 2005, pp. 364–369. URL: http://ijcai.org/Proceedings/05/Papers/0372.pdf.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] D. Tena Cucala, B. Cuenca Grau, I. Horrocks, Consequence-based reasoning for description logics with disjunction, inverse roles, number restrictions, and nominals, in: J. Lang (Ed.), Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI'18), ijcai.org, 2018, pp. 1970–1976. doi:10.24963/IJCAI.2018/272.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] D. Tena Cucala, B. Cuenca Grau, I. Horrocks, 15 years of consequence-based reasoning, in: C. Lutz, U. Sattler, C. Tinelli, A. Turhan, F. Wolter (Eds.), Description Logic, Theory Combination, and All That: Essays Dedicated to Franz Baader on the Occasion of His 60th Birthday, volume 11560 of LNCS, Springer, 2019, pp. 573–587. doi:10.1007/978-3-030-22102-7_27.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Y. Kazakov, M. Krötzsch, F. Simančík, The incredible ELK: From polynomial procedures to efficient reasoning with ℰℒ ontologies, J. Autom. Reason. 53 (2014) 1–61. doi:10.1007/s10817-013-9296-3.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Steigmiller, T. Liebig, B. Glimm, Konclude: System description, J. Web Semant. 27–28 (2014) 78–85. doi:10.1016/J.WEBSEM.2014.06.003.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] A. Bate, B. Motik, B. Cuenca Grau, D. Tena Cucala, F. Simančík, I. Horrocks, Consequence-based reasoning for description logics with disjunctions and number restrictions, J. Artif. Intell. Res. 63 (2018) 625–690. doi:10.1613/JAIR.1.11257.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Y. Kazakov, P. Klinov, Goal-directed tracing of inferences in EL ontologies, in: P. Mika, T. Tudorache, A. Bernstein, C. Welty, C. A. Knoblock, D. Vrandečić, P. Groth, N. F. Noy, K. Janowicz, C. A. Goble (Eds.), Proceedings of the 13th International Semantic Web Conference (ISWC'14), volume 8797 of LNCS, Springer, 2014, pp. 196–211. doi:10.1007/978-3-319-11915-1_13.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. Horridge, B. Parsia, U. Sattler, Justification oriented proofs in OWL, in: P. F. Patel-Schneider, Y. Pan, P. Hitzler, P. Mika, L. Zhang, J. Z. Pan, I. Horrocks, B. Glimm (Eds.), Proceedings of the 9th International Semantic Web Conference (ISWC'10), volume 6496 of LNCS, Springer, 2010, pp. 354–369. doi:10.1007/978-3-642-17746-0_23.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] S. Schlobach, Explaining subsumption by optimal interpolation, in: J. J. Alferes, J. A. Leite (Eds.), Proceedings of the 9th European Conference on Logics in Artificial Intelligence (JELIA'04), volume 3229 of LNCS, Springer, 2004, pp. 413–425. doi:10.1007/978-3-540-30227-8_35.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] C. Alrabbaa, F. Baader, S. Borgwardt, P. Koopmann, A. Kovtunova, Finding small proofs for description logic entailments: Theory and practice, in: E. Albert, L. Kovács (Eds.), Proceedings of the 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR'20), volume 73 of EPiC Series in Computing, EasyChair, 2020, pp. 32–67. doi:10.29007/nhpp.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>