<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Explicative Inference for OWL</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marvin R.G. Schiller</string-name>
          <email>marvin.schiller@uni-ulm.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Birte Glimm</string-name>
          <email>birte.glimm@uni-ulm.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Artificial Intelligence, University of Ulm</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Automated reasoning in OWL is capable of inferences that are nontrivial for people to understand. We argue that the understanding of inferences would benefit from stepwise explanations. To build a system that supports such explicative inference, we propose a framework based on inference rules and proof tactics for OWL ontologies. In particular, the goal is to present inferences in a suitable and adaptable way to human users, and to predict whether certain inferences are harder to understand than others. This article outlines the conception of this framework and its benefits; its implementation is currently work in progress.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Inferences in OWL vary in how difficult they are for people to understand. This
was recently shown in experiments in which subjects who were considered
competent in OWL, or who considered themselves competent in OWL, were found
to dismiss inferences despite their being logically valid (see [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]). One task
where it is relevant to understand why certain entailments hold (when in fact,
they are undesired or counterintuitive) is ontology debugging, as posited by [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Previous work has argued for reasoning services that present inference steps
in a transparent manner, e.g. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], to facilitate understanding. One problem that
has been addressed is the identification of justifications, that is, making explicit
the minimal sets of axioms within an ontology that are sufficient for a given
entailment to hold. Presenting such a precise justification to the user clearly
has benefits, as demonstrated by [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Further work has argued for making
intermediate reasoning steps explicit, such as [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. McGuinness et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
even consider the presentation of explanations at “different levels of granularity
and focus”, where different segments of a proof for an entailment are rewritten
to provide “an abstraction”. Further techniques that are used in the context of
explanations for entailments are graphical representations of proofs, and natural
language output, usually generated using patterns.
      </p>
      <p>
        An effective explanation in the spirit of the above work presupposes a
conception of what kinds of explanations are understandable to particular groups
of users (or in general), and where the potential difficulties are. That is, the
difficulties of (purported) OWL experts to understand some entailments but
not others should inform the design of such a system, to enable support (and
transparency) where needed. Consequently, such a system should be measured
by the degree to which it makes a difficult task easier. However, the difficulties
that are inherent to the understanding of certain patterns of inference have not
been investigated in sufficient depth. Horridge et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] present a model for the
“cognitive complexity” of entailments. It received some support in a preliminary
experimental evaluation, but more work is clearly required to establish a
theoretical underpinning and a more robust evaluation for such kinds of models.
In a similar vein, in previous work we have investigated suitable levels of detail
and conciseness of proofs in a mathematical assistant system employed in the
context of (general mathematical) proof tutoring [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. It remains to be shown to what
extent this approach can be usefully applied to OWL reasoning.
      </p>
      <p>
        The objective of this article is to outline the design of a system currently
under development to facilitate inferences in OWL that are “explicative”, in
the spirit of the above stipulations. It is based on the assumption that the best
way to generate explanations of entailments in OWL is via rule-based inference
systems. The hypothesis that the stepwise presentation of derivations is, under some
circumstances, more useful than the presentation of “justifications” alone (i.e. the
axioms from which the entailment is derived) is yet to be tested; this is one of the
objectives to be addressed with the completed system. Furthermore, we are interested
not only in formal properties of inference rule sets (such as soundness, of course),
but also in their plausibility for the user, i.e. whether the user would find it natural
or difficult to make a particular inference, or to understand a given inference. A
second hypothesis to be addressed is whether the difficulty of an entailment in
OWL can be predicted more precisely on the basis of derivations in a suitable
rule system than by looking at properties of the axioms and the derived formula
alone (the model in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] makes such a restriction), or by using an off-the-shelf
reasoner, for instance a tableau-based reasoner. A third objective is to enable
inference rules that can be decomposed further into more fine-grained steps, to
result in a hierarchical proof structure. Such rules that encode commonly used
proof patterns are referred to as tactics in the theorem proving community. The
idea to base proof presentations on hierarchical proofs has previously been
implemented in some systems, see [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. The main benefit is that the same proof
can be presented at different levels of detail or conciseness, depending on the
user’s choice [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], or some model of appropriate step size [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. One objective of this
work is to investigate tactics for OWL reasoning, and to establish the benefits
of such an approach for explanation.
      </p>
    </sec>
    <sec id="sec-2">
      <title>A Framework for Explicative Inference</title>
      <p>
        The goal for the presented framework is to provide a suitable representation for
derivations, as the basis for proof presentation, and for using these derivations
as a model for estimating the complexity/effort of making the entailment
represented by the derivation. As the formula language, the presented framework
focuses on the constructs supported by OWL 2 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] by directly making use of
the OWL API [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].1
      </p>
      <sec id="sec-2-1">
        <title>Footnote 1: http://owlapi.sourceforge.net</title>
        <p>
          Fig. 1. Rule R17: (a) from dom(r0, X) and ∀r0.⊥ ⊑ X, conclude ⊤ ⊑ X;
(b) as a sequent-style rule, with premise Γ, ⊤ ⊑ X, dom(r0, X), ∀r0.⊥ ⊑ X ⊢ Δ
and conclusion Γ, dom(r0, X), ∀r0.⊥ ⊑ X ⊢ Δ.
As a representation formalism for explicative inference, we use sequent style
proofs and sequent inference rules. This has the advantage that different sets of
inference rules can be used. On the one hand, this includes a set of sequent rules
for the ALC fragment from [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], proposed specifically for the explanation of
subsumptions in ALC. On the other hand, we interpret the entailments investigated
in an experiment by Nguyen et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] as inference rules. As an example,
consider one of the 51 inference problems considered by Nguyen et al., the following
entailment (as worded in the experiment):
        </p>
        <p>Everything that has a worship leader is a fomorian.</p>
        <p>Everything that has no worship leader at all is a fomorian.</p>
        <p>Everything is a fomorian.</p>
        <p>
          The representation of the above entailment as an inference rule is shown in
Fig. 1, where (a) is the interpretation of the entailment according to [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], and (b)
the interpretation as a sequent style inference rule (here, we only consider the
“left hand side” variant of the rule, i.e. forward direction of inference
application). In Fig. 1 (b), Γ and Δ represent formulae in the antecedent and succedent
that are not manipulated by the inference. As is common, we construct
sequent proofs from the endsequent (the sequent containing all premises and the
conclusion(s) to be shown) “upwards” (in terms of notation), which explains the
different configuration of rules (a) and (b). To avoid the contraction rule2, the
principal formulae of R17(left) are also kept among the side formulae.
        </p>
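        <p>
          To make the forward (“upwards”) reading of R17(left) concrete, the following
sketch applies it to string-encoded formulae. The encoding and function names are
our illustrative assumptions, not the system’s actual implementation (which is
based on the OWL API).
        </p>

```python
# Illustrative sketch: the sequent rule R17(left) applied in the forward
# direction. A sequent is a pair (antecedent, succedent) of frozensets of
# formulae, encoded as plain strings here.
from typing import FrozenSet, Optional, Tuple

Sequent = Tuple[FrozenSet[str], FrozenSet[str]]

def apply_r17(seq: Sequent, r: str, x: str) -> Optional[Sequent]:
    """From Γ, dom(r,X), ∀r.⊥ ⊑ X ⊢ Δ construct the premise sequent
    Γ, ⊤ ⊑ X, dom(r,X), ∀r.⊥ ⊑ X ⊢ Δ. The principal formulae stay in the
    antecedent as side formulae, which avoids the contraction rule."""
    ante, succ = seq
    if f"dom({r},{x})" in ante and f"∀{r}.⊥ ⊑ {x}" in ante:
        return (ante | {f"⊤ ⊑ {x}"}, succ)
    return None  # rule not applicable to this sequent

# Endsequent of the running example (abbreviated).
endsequent: Sequent = (
    frozenset({"dom(hasVL,Movie)", "∀hasVL.⊥ ⊑ Movie"}),
    frozenset({"Person ⊑ ⊥"}),
)
premise = apply_r17(endsequent, "hasVL", "Movie")
```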
        <p>
          The construct dom(r,X ) is an abbreviation3 provided by SHOIQ (cf. e.g.
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]), and is defined as ∃r.⊤ ⊑ X. This construct, instantiated with some
imaginary concepts and roles, e.g. dom(hasWorshipLeader, Fomorian), was verbalized
by Nguyen et al. as “Everything that has a worship leader is a fomorian.” in
the natural language presentation of the entailment. Note that this verbalization
is logically equivalent to dom(hasWorshipLeader, Fomorian) but not literally congruent with it.
        </p>
        <p>
          The entailment quoted above was correctly judged as valid by 39 out of 50
participants in the experiment by Nguyen et al., i.e. 11 participants wrongly
judged the entailment not to hold.
        </p>
        <p>
          Footnote 2: In fact, structural rules (which include the contraction rule) are often
not considered useful for proof explanation, for example see [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          Footnote 3: We write dom(r,X ) instead of domain(r,X ) (the latter being the usual
syntax, e.g. in [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]) to save space in figures.
        </p>
        <p>
          Using the Nguyen et al. rule set to justify entailments obviously results in
proofs different from those obtained with more concise
rule sets such as those presented in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. The relative difficulties of the
entailments/rules investigated by Nguyen et al. are reflected in their facility indices
(the proportion of correct responses from experiment participants, interpreted
as an estimate of how difficult the entailments are for participants to
understand). These can potentially be used to determine whether the inferences
employed in a derivation are to be considered difficult or easy. How these
difficulties combine (whether they are purely additive, or whether their specific
combination matters, as suggested by [
          <xref ref-type="bibr" rid="ref2 ref7">2, 7</xref>
          ]) is not clear, however, and will be
one of the research questions to be addressed in future work with the presented
framework.
        </p>
        <p>
          As an example of the representation of such a proof in our system, consider
an entailment taken from [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Given the following (justification) axioms, it holds
that Person ⊑ ⊥ (i.e. the class Person is unsatisfiable):4
        </p>
        <p>RRated ≡ (∃hasScript.ThrillerScript ⊔ ∀hasVL.High)</p>
        <p>
          Fig. 2 (a) shows the proof tree (for ease of presentation, we use the
Gentzen-style tree, rather than reproducing the full sequents at each step, the latter
being the actual representation used in our system). The proof presentation
is generated via simple pretty printing, and results in a linear proof in Fig. 2
(b). The sequent representation would be a suitable starting point for natural
language generation (beyond the simple pretty printing we currently use) to
facilitate the verbalization of proofs, but this is outside the scope of this paper
(and often it is not necessary, for example in the case of ontology debugging,
where derivations consisting of concise formulae might be more useful for an
ontology engineer to quickly understand the source of a problem than
natural-language text). The graphical representation of proof trees also has its benefits
(see, for example, [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]). So far, we have not connected our framework to a tool
that visually lays out the proof trees, which might be useful in its own right.
        </p>
        <sec id="sec-2-1-1">
          <title>Footnote 4: We write “hasVL” as a shorthand for the role “hasViolenceLevel”.</title>
          <p>(∃hasScript.ThrillerScript ⊔ ∀hasVL.High) ⊑ RRated</p>
          <p>(Fig. 2: proof tree with rule labels R4, ∀-weakening, and R12.)
Tactics represent commonly used proof patterns. In this work, we define a tactic
as a high-level inference rule (represented as a sequent style rule), for which an
alternative derivation exists with the same premises and the same conclusion,
but made up of several inference steps at a lower level. The more detailed version
of the derivation is called the expansion of the tactic, and can be considered an
alternative explanation of the inference.</p>
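          <p>
            A tactic and its expansion can be sketched as follows. The tagged-tuple
encoding and the function names are illustrative assumptions, not the system’s
API; the point is that the high-level rule R17 and the composition of its
expansion steps (Def.Domain followed by ⊤-Cases) derive the same formula.
          </p>

```python
# Antecedent formulae as tagged tuples: ("dom", r, X) for dom(r, X),
# ("forall_bot", r, X) for ∀r.⊥ ⊑ X, ("exists_top", r, X) for ∃r.⊤ ⊑ X,
# and ("top_sub", X) for ⊤ ⊑ X. Encoding is illustrative only.

def def_domain(ante: set) -> set:
    """Def.Domain(left): unfold dom(r, X) into its definition ∃r.⊤ ⊑ X."""
    return ante | {("exists_top", a[1], a[2]) for a in ante if a[0] == "dom"}

def top_cases(ante: set) -> set:
    """⊤-Cases: from ∃r.⊤ ⊑ X and ∀r.⊥ ⊑ X, derive ⊤ ⊑ X."""
    ex = {(a[1], a[2]) for a in ante if a[0] == "exists_top"}
    fa = {(a[1], a[2]) for a in ante if a[0] == "forall_bot"}
    return ante | {("top_sub", x) for (r, x) in ex & fa}

def r17_tactic(ante: set) -> set:
    """High-level rule R17 as a tactic: its expansion is the two lower-level
    steps above, with the same premises and the same conclusion."""
    return top_cases(def_domain(ante))

gamma = {("dom", "hasVL", "Movie"), ("forall_bot", "hasVL", "Movie")}
expanded = r17_tactic(gamma)
```

The intermediate formula ∃hasVL.⊤ ⊑ Movie produced by the expansion is exactly
the formula that may later be dropped again by weakening, as discussed below.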
          <p>
            To represent such hierarchical proofs with several levels, we employ the
generic conception of the proof data structure PDS developed for the
mathematical assistant system Omega [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ] (however, in our work we do not make use
of proof forests at this stage, which are also part of the PDS concept). More
precisely, proofs are represented as directed acyclic graphs, where sequent
formulae are linked to their premises via justification5 nodes. There can be several
justifications for the same sequent, which are ordered hierarchically. As an
illustrative example, we consider the entailment in Fig. 2 again. Rule R17 (as shown
in Fig. 1) is implemented as a tactic, and expansion results in two inference steps
using the inference rules ⊤-Cases and Def.Domain (R17 is part of the catalog of
entailments in [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ], whereas ⊤-Cases and Def.Domain are extra rules that are not
part of Nguyen et al.’s catalog), as shown in Fig. 3. The inference rules that
constitute ⊤-Cases and Def.Domain are presented in Fig. 4. In Fig. 3, rectangular
boxes represent sequents, and rounded boxes the inference rules that “justify”
the derivations, which connect a sequent to its premise sequent(s) (in the figure,
from left to right). For ease of readability, those formulae that are added to a
sequent by the latest rule application are printed in bold. When the last step of
the expansion of a tactic finally links to the premise sequent of that tactic, it
may happen that some of the formulae that were derived intermittently are not
considered (formally, this corresponds to weakening). In our example, this is the
case for ∃hasVL.⊤ ⊑ Movie. However, not much is lost – if a proof is complete
using a particular tactic, of course it remains complete when the expansion is
added in this way. The dashed arrow represents a hierarchical ordering between
the justifying inference rules for the sequent on the left in the figure.
          </p>
          <p>
            In Fig. 3, the sequent dom(hasVL, Movie), ∀hasVL.⊥ ⊑ Movie, . . . ⊢ Person ⊑ ⊥
has two justifications. The justification R17 links it directly to the premise
sequent ⊤ ⊑ Movie, dom(hasVL, Movie), ∀hasVL.⊥ ⊑ Movie, . . . ⊢ Person ⊑ ⊥.
The justification Def.Domain links it to the premise sequent dom(hasVL, Movie),
∃hasVL.⊤ ⊑ Movie, ∀hasVL.⊥ ⊑ Movie, . . . ⊢ Person ⊑ ⊥, which is in turn
justified by ⊤-Cases with the premise sequent ⊤ ⊑ Movie, dom(hasVL, Movie),
∀hasVL.⊥ ⊑ Movie, . . . ⊢ Person ⊑ ⊥.
          </p>
          <p>
            When tactics are used, there is a choice between different levels for presenting
the derivation/proof to the user. In the example, instead of presenting the step
using R17, the two more fine-grained steps using ⊤-Cases and Def.Domain can
be shown alternatively. Any complete proof alternative that can be generated
from this hierarchical representation is called a “view” in the terminology of [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ].
Previous work has explored the usefulness of enabling users to switch between
such views in a mathematical assistant system [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. Another prospect is to use
measures for the difficulty (or “cognitive complexity” [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]) to make an informed
selection of the right level(s) of justifications for a particular user, in the spirit
          </p>
        </sec>
        <sec id="sec-2-1-2">
          <title>Footnote 5: Note that here, “justification” takes a slightly different meaning than in [5]</title>
          <p>A sequent is considered justified when it follows from one or several premise
sequent(s) by means of an inference application.</p>
          <p>Fig. 4. (a) ⊤-Cases:
premise Γ, ⊤ ⊑ X, ∃r.⊤ ⊑ X, ∀r.⊥ ⊑ X ⊢ Δ;
conclusion Γ, ∃r.⊤ ⊑ X, ∀r.⊥ ⊑ X ⊢ Δ.
(b) Def.Domain (left):
premise Γ, ∃r.⊤ ⊑ X, dom(r, X) ⊢ Δ;
conclusion Γ, dom(r, X) ⊢ Δ.</p>
          <p>
of [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. For example, if the model for difficulty/complexity determines that R17 is
understandable to the user (possibly depending on some form of user model, or
the user’s choice, or a general assumption about the typical user of the system),
R17 is shown, whereas if it is considered too difficult, it can be split up into the
two rule applications at the lower level.
The envisaged process for the generation of explanations with the presented
system has three stages. First, for a given entailment and an ontology, a
standard tableau-based reasoner (currently HermiT6) is used to find
justifications (minimal sets of axioms) according to [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ]. Then, in the second stage, a simple proof
search procedure is used that applies the tactics and sequent inference rules to
find a proof as described above. This tandem construction of using two different
inference mechanisms (both tableau-based and sequent rule-based) is used since
a justification is a better starting point for constructing the sequent derivation
than the entire ontology. Further work will investigate in how far the construction
of the sequent-style proofs with tactics can be informed by the tableau proof
obtained in the first step. For instance, Borgida et al. [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ] have shown in the
context of the ALC fragment that the trace of the (efficient) tableau construction
can be used to build sequent-style proofs for the purpose of explanation without
any further search, but this result needs to be extended to the other patterns of
inference available in the system presented here. We consider the work in [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ] a
good basis to further explore this issue.
          </p>
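          <p>
            The second stage can be sketched as a naive, non-heuristic forward
saturation over the rule set; the encoding below (a transitivity rule in the
style of R12, applied to made-up class names) and all names are illustrative
assumptions rather than the system’s implementation.
          </p>

```python
# Sketch of the second stage: forward saturation of a sequent antecedent
# under a list of rules, stopping once the goal formula is derived.
# Formulae are tagged tuples, e.g. ("sub", X, Y) for X ⊑ Y.

def saturate(ante, rules, goal, max_rounds=100):
    """Apply rules until `goal` is derived or a fixpoint is reached.
    Returns the recorded proof steps, or None if no proof was found."""
    ante = set(ante)
    steps = []  # list of (rule_name, newly_derived_formulae)
    for _ in range(max_rounds):
        if goal in ante:
            return steps
        progress = False
        for name, rule in rules:
            new = rule(ante) - ante
            if new:
                ante |= new
                steps.append((name, new))
                progress = True
        if not progress:
            return None  # fixpoint reached without proving the goal
    return None

def r12(ante):
    """Transitivity of subsumption: from X ⊑ Y and Y ⊑ Z, derive X ⊑ Z."""
    subs = {(a[1], a[2]) for a in ante if a[0] == "sub"}
    return ante | {("sub", x, z) for (x, y) in subs for (y2, z) in subs if y == y2}

# A two-axiom "justification" (hypothetical names) and the goal to be shown.
just = {("sub", "Person", "Adult"), ("sub", "Adult", "Human")}
steps = saturate(just, [("R12", r12)], goal=("sub", "Person", "Human"))
```

Starting from a justification rather than the whole ontology keeps the
saturation small, which is precisely the motivation for the tandem construction.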
          <p>The third stage consists in producing readable output from the generated
proof(s). This involves the selection within the hierarchical levels of a proof (i.e.
the selection of the “view”), and the generation of a human-readable presentation.
At present, only a relatively simple pretty-printer is used (resulting in the
output illustrated in Fig. 2), but a graphical interface (to draw the proofs as
graphs, and to visualize hierarchical layers) or a natural language output facility
would be useful additions.</p>
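          <p>
            Selecting the “view” could, for instance, compare a per-rule difficulty
estimate against a user-specific threshold. In the sketch below, only the value
0.78 for R17 is grounded in the text (the facility index 39 of 50); the other
numbers, the threshold mechanism, and all names are made-up assumptions.
          </p>

```python
# Sketch: choosing between a high-level step and its expansion based on a
# per-rule difficulty estimate (facility indices; only R17's 0.78 is taken
# from the experiment discussed in the text, the others are invented).

FACILITY = {"R17": 0.78, "⊤-Cases": 0.90, "Def.Domain": 0.95}

def select_view(step, expansion, threshold):
    """Present the high-level step if its facility index is at or above the
    threshold (i.e. it is judged easy enough for the user); otherwise
    present its expansion."""
    if FACILITY.get(step, 0.0) >= threshold:
        return [step]
    return list(expansion)

easy_user = select_view("R17", ["Def.Domain", "⊤-Cases"], threshold=0.5)
strict = select_view("R17", ["Def.Domain", "⊤-Cases"], threshold=0.8)
```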
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>Footnote 6: www.hermit-reasoner.com/</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Discussion</title>
      <p>This work combines several ideas and hypotheses that have been previously
investigated by related work more or less in isolation:
– that for the purpose of explanations, entailments in OWL are suitably
represented by explicit derivations, rather than pointing to the relevant axioms
alone,
– that different kinds of inference schemata are more difficult to perform
and/or to understand than others,
– that hierarchical proofs offer the flexibility to adapt the level of abstraction
of derivations to the needs of the user.</p>
      <p>
        This echoes efforts in the general community of mathematical assistant
systems and automated reasoning that have sought to facilitate human-oriented
proofs, as opposed to “machine-oriented” proof mechanisms [16]. Such work has
pointed to challenges that are specific to such systems, in particular with respect
to efficiency. So far, the efficiency of our tandem construction for proof generation
(fast search for justifications using a standard tableau-based reasoner, and
relatively slow rule-based proof search with large rule sets on the simplified search
problem) has not been evaluated, an important topic that remains for future
work. In fact, it is to be expected that relatively rich rule sets (such as the
51-rule set generated from the inferences in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) will result in a large branching
factor, and that naive, non-heuristic search will reach the limits of acceptable
performance fairly quickly.
      </p>
      <p>
        While the work presented here argues for the benefits of explicit
derivations to explain entailments, in contrast to the “justification-oriented” approach
(e.g. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]), the question of which one of those competing approaches is the
most suitable is not likely to result in an all-or-nothing answer. For example,
justification-based explanations certainly have an advantage in situations where,
for instance, the logical structure of the problem is of a well-known or repetitive
form (e.g., a chain of subsumptions, or even simple OWL 2 EL justifications with
chains of existential restrictions and atomic subsumptions). In such a case, any
additional information (such as showing a derivation) would most likely be more
harmful than helpful. The question should rather be addressed by a comparison
of the two approaches taking into account the background of the user and the
complexity of the derivations to be presented.
      </p>
      <p>
        A further and important topic for empirical investigation to be addressed in
future work is to determine the advantage of using hierarchical proofs/tactics
(and empirically investigated rule sets such as those from [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) for the explanation
of entailments over previous approaches using sequent-style derivations, such
as [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        Another question is in how far such systems should mirror “human”
reasoning, in particular since there is controversy on the question “how do people
reason” between different schools of thought in cognitive psychology. For
example, some have argued (e.g. [17]) that human reasoning is essentially incomplete,
since humans were found to refuse certain inferences that are logically valid (this
interpretation could be transferred to those inferences that proved to be most
difficult in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]). For some classes of inferences (e.g. syllogisms), competing kinds
of theories exist (for instance, rule-based theories of inference, e.g. [17], vs.
mental models [18]). In our current work we commit to inference rules, since they
provide a middle ground between suitability for automated reasoning and
psychological plausibility as determined by cognitive psychologists.
      </p>
      <p>
        A predominant perspective in previous work on the difficulty of inferences in
ontologies (e.g. [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]) is that of difficulty being an essentially static property of
inference problems, i.e. individual differences in knowledge, mastery, or learning
have not been taken into account so far. Similarly, most experiments on reasoning
in cognitive psychology have been performed with untrained subjects, to
establish the “natural” ability of people to reason logically. Experiments involving
both learning and reasoning are more complex to set up and analyze. However,
most courses in logic are built on the assumption that instruction and training
will increase the learners’ ability over time, therefore the hypothesis that the
difficulty of inferences will decrease with practice seems to be a plausible one.
In research on learning, a process called “rule automatization” [19] is well known. Taking this
into account, future work should not only study whether certain inferences are
generally more difficult for a particular group of people (at a specific level of
expertise) than others (which has been done in cognitive psychology, usually with
the level of expertise being fixed at “naive”), but also study how the specific
difficulty changes over time as a result of experience and learning. The presented
framework will be useful to model hypotheses and generate test problems for
empirical experiments that help to discriminate between different such
hypotheses (for example, whether the relative differences in difficulty between easy and
difficult problems for novices are still to be found in experts, or whether these
problem-specific differences disappear with practice).
      </p>
      <p>Finally, since this article describes a system that is currently work in progress,
future work will examine different aspects of usefulness of such a framework, both
from the viewpoint of practitioners (e.g. in the case of ontology debugging), and
for a deeper investigation of metrics associated with the difficulty of sequences
of inferences in a knowledge representation setting.</p>
      <p>Acknowledgements. We acknowledge the support of the Transregional
Collaborative Research Center SFB/TRR 62 “A Companion-Technology for Cognitive
Technical Systems” funded by the German Research Foundation (DFG). We
thank three anonymous reviewers for their useful comments.</p>
      <p>Semantic Web Conference and the 2nd Asian Semantic Web Conference, Springer,
Berlin, Heidelberg (2007) 267–280</p>
      <p>16. Bundy, A.: A survey of automated deduction. Technical Report EDI-INF-RR-0001,
Edinburgh University (1999)</p>
      <p>17. Rips, L.J.: The psychology of proof: deductive reasoning in human thinking.
MIT Press, Cambridge, MA (1994)</p>
      <p>18. Johnson-Laird, P.: Mental models and human reasoning. Proceedings of the
National Academy of Sciences of the United States of America 107(43) (2010)
18243–18250</p>
      <p>19. Sweller, J.: Cognitive technology: Some procedures for facilitating learning and
problem solving in mathematics and science. Journal of Educational Psychology
81(4) (1989) 457–466</p>
      <p>In the following, we list the inference rules used in the example derivation in
Fig. 2.</p>
      <p>R1a:
Γ, X ⊑ Y, X ≡ Y ⊢ Δ
Γ, X ≡ Y ⊢ Δ</p>
      <p>R4:
Γ, Y ⊑ Z, (X ⊔ Y ) ⊑ Z ⊢ Δ
Γ, (X ⊔ Y ) ⊑ Z ⊢ Δ</p>
      <p>R12(left):
Γ, X ⊑ Z, X ⊑ Y, Y ⊑ Z ⊢ Δ
Γ, X ⊑ Y, Y ⊑ Z ⊢ Δ</p>
      <p>R17(left):
Γ, ⊤ ⊑ X, dom(r, X ), ∀r.⊥ ⊑ X ⊢ Δ
Γ, dom(r, X ), ∀r.⊥ ⊑ X ⊢ Δ</p>
      <p>a We assume that n-ary class-equivalence axioms are normalized into binary
class-equivalence axioms, such that the rule for the binary case suffices.</p>
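      <p>
        Read in the forward direction, the listed rules can be encoded as antecedent
transformers in the same illustrative style used earlier; the tuple encoding is
an assumption for presentation, not the system’s representation.
      </p>

```python
# Forward readings of the listed rules over tagged tuples: ("equiv", X, Y) for
# X ≡ Y, ("sub", X, Y) for X ⊑ Y, ("union_sub", (X, Y), Z) for (X ⊔ Y) ⊑ Z,
# ("dom", r, X) for dom(r, X), ("forall_bot", r, X) for ∀r.⊥ ⊑ X, and
# ("top_sub", X) for ⊤ ⊑ X. Illustrative encoding only.

def r1(ante):
    """R1: from X ≡ Y derive X ⊑ Y (binary case, cf. footnote a)."""
    return ante | {("sub", a[1], a[2]) for a in ante if a[0] == "equiv"}

def r4(ante):
    """R4: from (X ⊔ Y) ⊑ Z derive Y ⊑ Z."""
    return ante | {("sub", a[1][1], a[2]) for a in ante if a[0] == "union_sub"}

def r12(ante):
    """R12(left): from X ⊑ Y and Y ⊑ Z derive X ⊑ Z."""
    subs = {(a[1], a[2]) for a in ante if a[0] == "sub"}
    return ante | {("sub", x, z) for (x, y) in subs for (y2, z) in subs if y == y2}

def r17(ante):
    """R17(left): from dom(r, X) and ∀r.⊥ ⊑ X derive ⊤ ⊑ X."""
    doms = {(a[1], a[2]) for a in ante if a[0] == "dom"}
    fabs = {(a[1], a[2]) for a in ante if a[0] == "forall_bot"}
    return ante | {("top_sub", x) for (r, x) in doms & fabs}

# Hypothetical axioms exercising R1 and R4.
axioms = {("equiv", "A", "B"), ("union_sub", ("C", "D"), "E")}
derived = r4(r1(axioms))
```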
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Power</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piwek</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Williams</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Measuring the understandability of deduction rules for OWL</article-title>
          .
          <source>In: First International Workshop on Debugging Ontologies and Ontology Mappings (WoDOOM12)</source>
          . (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Horridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bail</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>The cognitive complexity of OWL justifications</article-title>
          . In Rosati, R.,
          <string-name>
            <surname>Rudolph</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakharyaschev</surname>
          </string-name>
          , M., eds.
          <source>: Proceedings of the 24th International Workshop on Description Logics (DL</source>
          <year>2011</year>
          ), Barcelona, Spain. Volume
          <volume>745</volume>
          <source>of CEUR Workshop Proceedings</source>
          . (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Horridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Lemmas for justifications in OWL</article-title>
          . In
          <string-name>
            <surname>Cuenca Grau</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motik</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          , eds.:
          <source>Proceedings of the 22nd International Workshop on Description Logics (DL 2009)</source>
          , Oxford, UK. Volume
          <volume>477</volume>
          <source>of CEUR Workshop Proceedings</source>
          . (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>McGuinness</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ding</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glass</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zeng</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Furtado</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Explanation interfaces for the semantic web: Issues and models</article-title>
          .
          <source>In: Proceedings of the 3rd International Semantic Web User Interaction Workshop (SWUI'06)</source>
          . Accessed 19 April
          <year>2013</year>
          from ftp://ftp.ksl.stanford.edu/pub/KSL_Reports/KSL-06-14.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kalyanpur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Debugging and repair of OWL ontologies</article-title>
          .
          <source>PhD thesis</source>
          , University of Maryland (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Horridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Justification oriented proofs in OWL</article-title>
          .
          <source>In: ISWC'10 Proceedings of the 9th International Semantic Web Conference, November</source>
          <year>2010</year>
          , Shanghai, China, Part I, Springer, Berlin,
          <fpage>354</fpage>
          -
          <lpage>369</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Schiller</surname>
            ,
            <given-names>M.R.G.</given-names>
          </string-name>
          :
          <article-title>Granularity analysis for mathematical proofs</article-title>
          .
          <source>Topics in Cognitive Science</source>
          <volume>5</volume>
          (
          <year>2013</year>
          )
          <fpage>251</fpage>
          -
          <lpage>269</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Alexoudi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zinn</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bundy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>English summaries of mathematical proofs</article-title>
          . In
          <string-name>
            <surname>Benzmüller</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Windsteiger</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , eds.: Second International Joint Conference on
          <source>Automated Reasoning - Workshop on Computer-Supported Mathematical Theory Development, Cork, Ireland, July 5</source>
          ,
          <year>2004</year>
          . Number 04-14
          <source>in RISC Report Series</source>
          , RISC Institute, University of Linz (
          <year>2004</year>
          )
          <fpage>49</fpage>
          -
          <lpage>60</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Fiedler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>P.rex: An interactive proof explainer</article-title>
          . In
          <string-name>
            <surname>Goré</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leitsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nipkow</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , eds.:
          <source>Automated Reasoning - Proceedings of the First International Joint Conference (IJCAR 2001)</source>
          , Siena, Italy, June 18-23,
          <year>2001</year>
          . Number 2083 in LNAI, Springer, Berlin, Heidelberg (
          <year>2001</year>
          )
          <fpage>416</fpage>
          -
          <lpage>420</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Cuenca Grau</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Motik</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel-Schneider</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sattler</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>OWL 2: The next step for OWL</article-title>
          .
          <source>Web Semantics: Science, Services and Agents on the World Wide Web</source>
          <volume>6</volume>
          (
          <issue>4</issue>
          ) (
          <year>2008</year>
          )
          <fpage>309</fpage>
          -
          <lpage>322</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Horridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bechhofer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>The OWL API: A Java API for OWL ontologies</article-title>
          .
          <source>Semantic Web Journal</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ) (
          <year>2011</year>
          )
          <fpage>11</fpage>
          -
          <lpage>21</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Borgida</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franconi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Explaining ALC subsumption</article-title>
          . In
          <string-name>
            <surname>Horn</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , ed.:
          <source>Proceedings of the 14th European Conference on Artificial Intelligence (ECAI 2000)</source>
          , IOS Press, Amsterdam (
          <year>2000</year>
          )
          <fpage>209</fpage>
          -
          <lpage>213</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Horridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Justification based explanations in ontologies</article-title>
          .
          <source>PhD thesis</source>
          , University of Manchester (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Autexier</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benzmüller</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dietrich</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meier</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirth</surname>
            ,
            <given-names>C.P.</given-names>
          </string-name>
          :
          <article-title>A generic modular data structure for proof attempts alternating on ideas and granularity</article-title>
          . In
          <string-name>
            <surname>Kohlhase</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , ed.:
          <source>Proceedings of the 5th International Conference on Mathematical Knowledge Management (MKM'05)</source>
          , Bremen, Germany, July 15-17,
          <year>2005</year>
          . Volume 3863 of LNCS, Springer, Berlin, Heidelberg (
          <year>2006</year>
          )
          <fpage>126</fpage>
          -
          <lpage>142</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Kalyanpur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parsia</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sirin</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Finding all justifications of OWL DL entailments</article-title>
          .
          <source>In: ISWC'07/ASWC'07 Proceedings of the 6th International</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>