<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Eve of True Explainability for OWL Ontologies: Description Logic Proofs with Evee and Evonne</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christian Alrabbaa</string-name>
          <email>christian.alrabbaa@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Borgwardt</string-name>
          <email>stefan.borgwardt@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tom Friese</string-name>
          <email>tom.friese@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrick Koopmann</string-name>
          <email>patrick.koopmann@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Julián Méndez</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexej Popovič</string-name>
          <email>alexej.popovic@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Theoretical Computer Science, Technische Universität Dresden</institution>
          ,
          <addr-line>01062 Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Interactive Media Lab Dresden, Technische Universität Dresden</institution>
          ,
          <addr-line>01062 Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>When working with description logic ontologies, understanding entailments derived by a description logic reasoner is not always straightforward. So far, the standard ontology editor Protégé offers two services to help: (black-box) justifications for OWL 2 DL ontologies, and (glass-box) proofs for lightweight OWL EL ontologies, where the latter exploits the proof facilities of the reasoner Elk. Since justifications are often insufficient for explaining inferences, there is otherwise only little tool support for explaining inferences in more expressive DLs. In this paper, we introduce Evee-libs, a Java library for computing proofs for DLs up to 𝒜ℒ𝒞ℋ, and Evee-protege, a collection of Protégé plugins for displaying those proofs in Protégé. We also give a short glimpse of the latest version of Evonne, a more advanced standalone application for displaying and interacting with proofs computed with Evee-libs.</p>
      </abstract>
      <kwd-group>
        <kwd>Proofs</kwd>
        <kwd>Explanation</kwd>
        <kwd>Protégé Plug-in</kwd>
        <kwd>Forgetting</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Description logics (DLs) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] have gained popularity through the standardization of the Web
Ontology Language OWL1 and the development of an OWL Java API [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], editing tools, and
reasoners. This popularity comes with the need to explain description logic reasoning to
domain experts who are not familiar with DLs. We consider here the problem of explaining
the entailment of logical consequences from DL ontologies (as opposed to explaining
nonconsequences, which requires other techniques [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]). Early research on explaining description logic
inferences considered proofs that provide detailed inference steps through
which the consequence can be obtained [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. In many cases, especially in light-weight DLs, it
can be sufficient to compute a justification, a minimal set of ontology axioms that entail the
consequence [
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7, 8, 9</xref>
        ]. However, if justifications become very large or the ontology is formulated
in a more expressive DL, providing intermediate inference steps between a justification and its
consequence may be required for understanding. Generating proofs is achievable via heuristic
search for possible intermediate inferences [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], concept interpolation [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], forgetting [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
or directly using the inference rules underlying a consequence-based reasoner like Elk [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ],
which, however, only supports ℰℒ+. Recently, a plug-in for the ontology editor Protégé [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
was developed that shows proofs generated by Elk in a human-readable form [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Related
work investigated the complexity of finding proofs that are optimal w.r.t. various measures,
e.g. proof size or depth [
        <xref ref-type="bibr" rid="ref12 ref16 ref17">12, 16, 17</xref>
        ]. For example, from a set of instantiated inference rules, e.g.
computed by Elk, one can extract a proof of minimal depth in polynomial time [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        In this paper, we present a collection of tools for proof generation, also for expressive DLs other
than ℰℒ+. Our Java library Evee-libs (EVincing Expressive Entailments) implements various
proof generation methods that form the basis for Evee-protege, a collection of Protégé plug-ins
for interactively inspecting proofs, and Evonne, a more advanced proof visualization tool. This
library extends our previously reported implementations of proof generation algorithms [
        <xref ref-type="bibr" rid="ref12 ref18">12, 18</xref>
        ]
in various ways. One can extract proofs that are minimized w.r.t. size or depth from the output
of Elk, generate elimination proofs [
        <xref ref-type="bibr" rid="ref12 ref18">12, 18</xref>
        ] that can additionally be optimized w.r.t. several
parameters, or generate more detailed proofs directly via the resolution-based calculus used
in the uniform interpolation (UI) tool Lethe [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Elimination proofs can support different
DLs, depending on the UI tool that is used internally in a black-box fashion — our library
currently uses a version of Fame [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] that supports 𝒜ℒ𝒞, and Lethe, which supports 𝒜ℒ𝒞ℋ.
While we discussed and evaluated the different methods for elimination proofs before [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], the
method for detailed proofs is new. We provide a quantitative comparison of this method with
the elimination proofs based on ontologies from BioPortal.
      </p>
      <p>
        To use the proofs in practice to support ontology engineers with explanation services, we
offer two tools to show proofs generated by Evee-libs to users. Evee-protege utilizes PULi
(the Proof Utility Library) and the Protégé proofs plug-in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] to show the proofs directly in
the ontology editor Protégé. This might be the easiest way for users to use our services, and
allowed us to perform a qualitative user study for our different proof methods presented in
this paper. Evonne is a stand-alone web application, early prototypes of which were presented
in [
        <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
        ]. Since then, it has seen many visual improvements and new features, and now offers
many different ways of displaying and interacting with proofs, together with a visualization
of the modular structure of ontologies, and guidance for repairing undesired entailments. A
larger publication discussing and evaluating the design decisions made in Evonne is currently
under preparation. We here just give a brief overview of its proof visualization facilities.
The source code and installation instructions for Evee-libs and Evee-protege can be found
at https://github.com/de-tu-dresden-inf-lat/evee (version 0.1), where the plugins can also be
downloaded. Evonne can be tested online and downloaded under https://imld.de/evonne/.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries</title>
      <p>
        We consider a consequence 𝒪 ⊨ α that is to be explained, where 𝒪 is a DL ontology [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and α
an axiom. A justification for α in 𝒪 is a subset-minimal 𝒥 ⊆ 𝒪 such that 𝒥 ⊨ α [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In the
worst case, there can be exponentially many justifications for an entailment. Following [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
we define proofs of 𝒪 ⊨ α as finite, acyclic, directed hypergraphs, where vertices v are labeled
with axioms ℓ(v) and hyperedges are of the form (S, d), with S a tuple of vertices and d a vertex
such that {ℓ(v) | v ∈ S} ⊨ ℓ(d); the leaves of a proof must be labeled by elements of 𝒪 and the
root by α. We depict hyperedges either using horizontal bars or using graphs
with two kinds of nodes (see e.g. Figure 1). In this paper, all proofs are trees, i.e., no vertex can
appear in the first component of multiple hyperedges. The size of such a proof is the number of
its vertices (this is called tree size in [
        <xref ref-type="bibr" rid="ref12 ref17">12, 17</xref>
        ]), and its depth is the length of the longest path from
a leaf to the root. Given a finite collection of valid inference steps (hyperedges), one can extract
a (tree) proof that is composed from a subset of these steps and has minimal size or minimal
depth in polynomial time [
        <xref ref-type="bibr" rid="ref12 ref17">12, 17</xref>
        ].
      </p>
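      <p>To make the two measures concrete, the following self-contained Java sketch (with illustrative names, not part of Evee-libs, and plain strings standing in for axioms) computes the size and depth of a tree-shaped proof as defined above:</p>

```java
import java.util.List;

// A tree-shaped proof: each vertex carries an axiom label l(v) (a String
// here for simplicity) and the tuple of premise vertices it is derived
// from; leaves have an empty premise list.
public class TreeProof {
    final String axiom;
    final List<TreeProof> premises;

    public TreeProof(String axiom, List<TreeProof> premises) {
        this.axiom = axiom;
        this.premises = premises;
    }

    // Size: the number of vertices in the proof.
    public int size() {
        int s = 1;
        for (TreeProof p : premises) s += p.size();
        return s;
    }

    // Depth: the length of the longest path from a leaf to the root.
    public int depth() {
        int d = 0;
        for (TreeProof p : premises) d = Math.max(d, p.depth() + 1);
        return d;
    }
}
```

      <p>Both measures are recursive over the tree structure, which is what makes the optimization algorithms mentioned below applicable to them.</p>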
    </sec>
    <sec id="sec-3">
      <title>3. Proof Generation with Evee-libs</title>
      <p>Evee-libs implements several (families of) proof generation methods with different advantages.
They all use the Java interface IProofGenerator&lt;OWLAxiom&gt; with a method getProof that
takes as input an OWLAxiom and returns an IProof&lt;OWLAxiom&gt; that can either be serialized
into JSON format using a JsonProofWriter, explored using the method getInferences, or
measured using an IProofEvaluator. For entailments that only depend on axioms in ℰℒ+,
we implemented a method that optimizes proofs generated by Elk using various proof measures
(in the evee-elk-proof-extractor library). For 𝒜ℒ𝒞 and 𝒜ℒ𝒞ℋ, we implemented two other
proof generation methods, elimination proofs (evee-elimination-proofs) and Lethe-based proofs
(evee-lethe-proof-extractor).</p>
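      <p>The interplay of these interfaces can be sketched as follows, using simplified stand-in types (String axioms instead of OWLAxiom, and only the methods mentioned above); the actual Evee-libs signatures may differ in detail:</p>

```java
import java.util.List;

// Hypothetical minimal mirror of the API shape described in the text;
// the real library is generic in the axiom type (e.g. OWLAxiom).
interface IInference<A> {
    A getConclusion();
    List<A> getPremises();
}

interface IProof<A> {
    // Explore the proof step by step.
    List<IInference<A>> getInferences();
}

interface IProofGenerator<A> {
    // Compute a proof for the given entailed axiom.
    IProof<A> getProof(A axiom);
}

interface IProofEvaluator<A> {
    // Measure a proof, e.g. by its tree size or depth.
    double evaluate(IProof<A> proof);
}
```

      <p>A proof obtained from getProof can then either be serialized (in Evee-libs via a JsonProofWriter), explored via getInferences, or scored by an evaluator.</p>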
      <sec id="sec-3-1">
        <title>3.1. Optimized Elk Proofs</title>
        <p>
          We implemented the Dijkstra-like proof search algorithm from [
          <xref ref-type="bibr" rid="ref12 ref16 ref17">12, 16, 17</xref>
          ] that takes as input a
set of instantiated inference rules that prove a target consequence and outputs an optimized
proof. This works for any recursive measure of proof quality—intuitively, measures that can
be computed recursively over the tree structure of the proof. Examples include the (tree) size,
weighted size, and depth of a proof. We apply this algorithm to the output of Elk, providing a
different view on Elk proofs than the existing plug-in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], which may show multiple ways
to prove a DL axiom at the same time. Due to the fast reasoning with Elk, and because our
algorithm runs only on the relatively small input extracted from Elk, this is the fastest proof
generation method in Evee-libs. An experimental comparison of size-minimal Elk proofs and
elimination proofs can be found in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
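        <p>The core of this extraction can be illustrated with the following self-contained sketch. It is a simple fixed-point variant for the size measure (the cited algorithm uses a Dijkstra-style priority queue, but computes the same minimal tree sizes); names and types are illustrative, not Evee's code:</p>

```java
import java.util.*;

public class MinSizeExtractor {
    // One instantiated inference step: the premises entail the conclusion.
    public record Inference(List<String> premises, String conclusion) {}

    // Computes, for every derivable axiom, the minimal tree size of a proof
    // built from the given steps; asserted axioms are leaves of size 1.
    public static Map<String, Integer> minProofSizes(Set<String> asserted, List<Inference> steps) {
        Map<String, Integer> cost = new HashMap<>();
        for (String a : asserted) cost.put(a, 1);
        boolean changed = true;
        while (changed) {                  // iterate until a fixed point is reached
            changed = false;
            for (Inference inf : steps) {
                int sum = 1;               // one vertex for the conclusion itself
                boolean allKnown = true;
                for (String p : inf.premises()) {
                    Integer c = cost.get(p);
                    if (c == null) { allKnown = false; break; }
                    sum += c;
                }
                if (allKnown && sum < cost.getOrDefault(inf.conclusion(), Integer.MAX_VALUE)) {
                    cost.put(inf.conclusion(), sum);
                    changed = true;
                }
            }
        }
        return cost;
    }
}
```

        <p>Once the minimal cost of each axiom is known, a size-minimal tree proof is assembled by picking, for each derived axiom, one inference that attains its cost.</p>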
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Elimination Proofs</title>
        <p>
          For entailments that require more expressive DLs, such as 𝒜ℒ𝒞, we implemented the
forgetting-based approach from [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. We now call those proofs elimination proofs, because they perform
inferences by eliminating names. An inference in an elimination proof derives a conclusion α
from premises α1, . . ., αn such that the eliminated name x does not occur in α; the general idea
is that α is obtained from α1, . . ., αn by performing inferences on x. Since usually the axiom
we want to prove contains fewer names than the axioms it is derived from, this allows for a
step-wise sequence of inferences from the ontology to the entailment. The result is a more
high-level explanation than the other proof types. Elimination proofs do not depend on a
specific logic – provided we have a method to compute elimination steps.
        </p>
        <p>
          We compute elimination proofs by making use of existing tools for forgetting: given an
ontology 𝒪 and a name x, the result of forgetting x from 𝒪 is an ontology 𝒪⁻ˣ entailed
by 𝒪 that does not contain x, and preserves all entailments of 𝒪 that can be expressed without
using x. Forgetting has bad worst-case complexities [23, 24], and may not always be successful,
for instance if 𝒪 contains cycles. Nonetheless, forgetting tools like Fame and Lethe often
perform well in practice when applied to realistic ontologies [
          <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
          ]. Since forgetting is not
always possible, these tools may introduce fresh names to simulate cyclic dependencies in the
result. Fame may also fail to return a result at all because it is not complete.
        </p>
        <p>We implemented three methods for computing elimination proofs using forgetting. All three
start from a justification2 for the entailment 𝒪 ⊨ α to be explained, and then generate
a sequence of ontologies 𝒪 = 𝒪1, . . ., 𝒪n by eliminating the names that do not occur in α
one after the other. For robustness, we internally use a wrapper for the chosen forgetting
method — if the forgetting method “fails” to eliminate x (exceeds a fixed timeout set to 10
seconds by default, contains a definer, or fails for any other reason), we keep x and continue
forgetting the next name. The wrapper also makes sure that forgetting results are cached.
From this sequence of ontologies, the elimination proof is then constructed using justifications:
specifically, for i &gt; 1, we construct an inference step for each α ∈ 𝒪i ∖ 𝒪i−1 by taking an arbitrary3
justification for α from 𝒪i−1. To further optimize the presentation, we added a technique to
merge several inference steps into one: in an inference with premises α1, . . ., αn, we may replace
a premise αi with the premises used in the inference producing αi, provided that the number of
premises in the resulting inference step does not increase. This often leads to easier proofs in
which several names are eliminated at the same time, e.g. using the following inference:</p>
        <p>A ⊑ ∀r.(B1 ⊓ B2 ⊓ B3)    ∃r.(B1 ⊓ B2 ⊓ B3) ⊑ C    ⟹    A ⊓ ∃r.⊤ ⊑ C    (eliminate B1, B2, B3)</p>
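        <p>The robustness wrapper described above can be sketched as follows; the forgetting call itself is passed in as a stand-in function (the real wrapper invokes Lethe or Fame with a timeout), and ontologies are simplified to sets of axiom strings:</p>

```java
import java.util.*;
import java.util.function.BiFunction;

public class EliminationSequence {
    // Builds the sequence O = O1, ..., On by trying to forget one name after
    // another; an empty Optional models a "failed" forgetting call (timeout,
    // leftover definer, exception), in which case the name is kept. Results
    // are cached per (ontology, name) pair, as in the wrapper described above.
    public static List<Set<String>> eliminate(
            Set<String> justification, List<String> names,
            BiFunction<Set<String>, String, Optional<Set<String>>> forget) {
        Map<List<Object>, Optional<Set<String>>> cache = new HashMap<>();
        List<Set<String>> sequence = new ArrayList<>();
        Set<String> current = justification;
        sequence.add(current);
        for (String name : names) {
            final Set<String> ont = current;
            Optional<Set<String>> result =
                    cache.computeIfAbsent(List.of(ont, name), k -> forget.apply(ont, name));
            if (result.isPresent()) {      // success: continue from the new ontology
                current = result.get();
                sequence.add(current);
            }                              // failure: keep the name and move on
        }
        return sequence;
    }
}
```

        <p>The inference steps of the elimination proof are then read off consecutive ontologies in the returned sequence.</p>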
        <p>
          For generating the sequence of ontologies, we employed three strategies. 1) The heuristic
method selects names using the heuristic described in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], 2) the name-optimized method uses
best-first search to optimize the order in which names are eliminated with the aim of finding
a shortest sequence of ontologies (thus, essentially minimizing the number of names being
eliminated), 3) the size-optimized method also uses best-first search, but optimizes the final
proof based on measures other than the length of the sequence: size and weighted size.
Forgetting role names early often leads to very short sequences and small proofs, but also hides
inferences. For this reason, we added the additional constraint that role names can only be
eliminated if they only occur in role restrictions without other names (e.g. ∃r.⊤, ∀r.⊥). An
example of an elimination proof generated using the heuristic approach is shown in Figure 1.
        </p>
        <sec id="sec-3-2-1">
          <p>Footnote 2: This is computed using the black-box justification algorithm [25] with HermiT [26].</p>
          <p>Footnote 3: We also implemented the possibility of looking for the best justification in each case. However, this adversely
affected the running time and did not lead to significantly smaller proofs.</p>
          <p>[Figure 1: an elimination proof generated using the heuristic approach, with inference steps labeled “elim. C1” and “elim. r, C3”.]</p>
          <p>Lethe and Fame may reformulate axioms that do not involve the forgotten name due to
normalization, which results in elimination inferences that do not really eliminate a symbol. For
example, such an inference could produce A ⊑ B from A ≡ B. Note that the inference pattern
above allows this. Because they improve the readability of the proof, we keep those inferences, but
call them “normalization” rather than elimination inferences.</p>
          <p>We note that the version of Fame we use only supports 𝒜ℒ𝒞-TBoxes, while the version of
Lethe supports 𝒜ℒ𝒞ℋ-TBoxes. The generated proofs often look quite different because Lethe
puts more emphasis on “beautifying” the resulting axioms, that is, reformulating them to look
more human-readable. Thus, elimination proofs generated using Lethe may contain axioms
that are easier to read than those generated using Fame. On the other hand, it can happen as a
result of reformulating axioms that the relation between the premises and the conclusion of an
inference step becomes less obvious.</p>
          <p>
            Details and an experimental evaluation of the optimized algorithms can be found in [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Detailed Proofs Extracted from Lethe</title>
        <p>Elimination proofs provide high-level explanations, since the detailed inferences for eliminating
a symbol are hidden. For situations where this representation is too coarse, we provide a
method for generating more detailed proofs, which uses the forgetting tool Lethe in a different
way. When forgetting names in 𝒜ℒ𝒞ℋ TBoxes, Lethe internally uses a consequence-based
reasoning procedure based on a normalized representation of axioms and the calculus shown in
Figure 2 [27]. To explain 𝒪 ⊨ A ⊑ B, we forget all names other than A and B from 𝒪, and
track the inferences performed by Lethe. The final set of clauses will contain a clause from
which A ⊑ B can be straightforwardly deduced.</p>
        <p>We modified the code of Lethe so that it logs all performed inferences, including information
on the side conditions. In addition to applying the inferences from Figure 2, normally Lethe
repeatedly normalizes and denormalizes relevant parts of the ontology, and applies further
simplifications to keep the current representation of the forgetting result small. For the logging
to operate properly, we added a logging mode where all optimizations are turned off, and the
entire ontology is normalized once at the beginning. This leads to slower reasoning performance,
which is okay since we apply the method only on justifications rather than large ontologies.
To avoid complicated inferences using the last rule in Figure 2, we make sure that inferences
on role names are performed at the very end. At this stage, all relevant information for the
unsatisfiability is usually already propagated into a single existential role restriction, which
means that the inference has only one premise and the side conditions are not needed anymore.
Indeed, we found that the resulting inference steps were usually very straightforward.</p>
        <p>[Figure 2: the inference rules of the normalized calculus used by Lethe, with side conditions referring to entailed role inclusions.]</p>
        <p>To produce a proof from the logging information, some additional steps are necessary: 1)
the normalization as well as the reasoning procedure introduce fresh names, which have to be
replaced again by the concepts they represent, 2) in the normal form, axioms are always
represented as concept disjunctions, which is not very user-friendly, and 3) the normalized axioms
still have to be linked to (non-normalized) input axioms, as well as to the conclusion. For 3), we
again use justifications computed using HermiT. For 2), we use the “beautification” functionality
of Lethe to produce more human-friendly forgetting results. This involves applying some
obvious simplifications of concepts (A ⊔ ¬A ≡ A ⊔ ⊤ ≡ ∀r.⊤ ≡ ⊤, A ⊓ ¬A ≡ A ⊓ ⊥ ≡ ∃r.⊥ ≡ ⊥),
pushing negations inside complex concepts, and moving disjuncts to the left-hand side of a GCI
if this allows eliminating further negation symbols (e.g. A ⊑ B ⊔ ∃r.¬C ⇒ A ⊓ ∀r.C ⊑ B).
Applying those simplifications too rigorously, however, may again hide inferences performed
by Lethe. We thus “beautify” the axioms before we replace the introduced names for 1), so that
inferences performed on introduced names are still easily visible. The effect of this can be seen
in the proof in Figure 3, where negations that would otherwise be “beautified away” are still
shown. After transforming the logged inferences in this way, we use our Dijkstra-style proof
search algorithm to extract a single proof.
</p>
        <p>[Figure 4: proof size (left) and run time in ms (right) of the two proof generation methods on the BioPortal tasks.]</p>
        <p>
3.3.1. Comparing Elimination Proofs to Lethe-based Proofs
We compared the elimination proofs constructed using Lethe and the heuristic method (HEUR)
with the detailed proofs extracted directly from Lethe (EXTR) on a collection of proof tasks
from the 2017 snapshot of BioPortal [28] (see also [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]). Up to isomorphism, we extracted 138
distinct 𝒜ℒ𝒞ℋ proof tasks (consisting of an entailed axiom and the union of all its justifications)
that are not fully expressible in ℰℒℋ, representing 327 different entailments within the BioPortal
ontologies. We then computed proofs using the two methods, imposing a timeout of 5 minutes.
This was successful for 325/327 entailments using HEUR. Due to the deactivated optimizations,
EXTR timed out in 24/327 cases, which mostly happened when the signature of the task was large
(more than ~18 symbols). Users should thus use EXTR only if the signature of the justification is
sufficiently small. In Figure 4, we compare the proof size (left) and the run time (right) of both
methods. On average, the proof size of EXTR was 90% higher than for HEUR, but the average
inference step complexity4 was 10% lower, supporting our expectation that EXTR computes
more detailed proofs. Interestingly, the maximum step complexity of EXTR was 29% higher,
indicating that the elimination proofs can hide complex inferences.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Evee Protégé Plugins</title>
      <p>
        The easiest way to use our proof generators is via plug-ins for Protégé, a popular editor for OWL
ontologies [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In Protégé, users can navigate the concept and role hierarchies of the ontology.
When selecting a concept or role name, further information is shown, such as annotations
and asserted or inferred equivalent concepts and superconcepts. To explain reasoning results
in Protégé, one can click on the “?”-button next to an entailment. The standard explanation
      </p>
      <sec id="sec-4-1">
        <p>Footnote 4: This is measured using the justification complexity from [29].</p>
        <p>
          consists of a list of justifications for the axiom, but this has been extended to support proofs via
the protege-proof-explanation5 plug-in, which relies on the proof utility library6 (PULi) [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>Our proof generators are available in several Protégé plug-ins under the umbrella name
Evee-protege, utilizing this existing functionality. The plug-ins can be installed by copying their
JAR-files into Protégé’s plugin folder, where the elimination proof methods are available in
two versions for Lethe and Fame. Each proof generation method is registered as a ProofService
for the protege-proof-explanation plug-in, which can be accessed from a drop-down list in the
explanation dialog of Protégé (see Figure 5). Once selected, a ProofService starts generating the
actual proof and a progress bar shows updates to the user. After proof generation is completed,
the result can be explored interactively in a tree view. By clicking on the grey triangle next to
an axiom, the proof steps that lead to this conclusion can be recursively unraveled.</p>
        <p>Since proof generation can take some time to complete, users can abort the process by clicking
the “Cancel”-button or by closing the progress window. The proof generators that are based on
search will then assemble the best proof found so far, provided they already found one. If this is
the case, a warning message appears to inform the user that the proof may be sub-optimal.</p>
        <p>To facilitate modularity, the common functionalities of the proof services are implemented in
evee-protege-core, which is not a plug-in itself (in the sense that it does not provide any extension
to Protégé). Rather, evee-protege-core exports a collection of Java class files to Protégé that are
required by the actual plug-ins. Each of our plug-ins that provides a ProofService uses these
classes by inheriting from AbstractEveeProofService and instantiating one of our proof generators.</p>
        <p>
          In order to add a ProofService for Protégé, the method getProof needs to be implemented.
This method returns a DynamicProof which in turn needs to implement a method getInferences
to provide the actual inferences to Protégé. Our plug-ins define this method in the class
EveeDynamicProofAdapter of evee-protege-core. The dynamic nature of the proof comes in very
handy as it allows for the proof generation to be executed in a background thread, and to later
update the proof explanation window with the constructed proof. (Footnote 5:
https://github.com/liveontologies/protege-proof-explanation; footnote 6:
https://github.com/liveontologies/puli.) To our knowledge [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], a
dynamic proof was originally intended to be automatically updated after an asserted axiom is
deleted. For this, a DynamicProof utilizes ChangeListeners which can be informed about any
changes made to a proof that is currently shown to the user. Our plug-ins use this feature by
first showing a proof consisting only of a single “placeholder” inference that is displayed when
the proof generation is started. This inference has as its conclusion the axiom for which the
proof is created and as its name a short message indicating that the proof is currently being
computed (see Figure 5). Once the proof generation is complete, the method inferencesChanged
is called on all registered ChangeListeners, which causes Protégé to update the UI with the
actual proof. If no proof can be shown due to a cancellation or if an error occurred during proof
generation, the “placeholder” inference will be updated accordingly.
        </p>
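        <p>The placeholder mechanism can be sketched with simplified stand-in types (plain strings instead of inference objects; the real classes implement the PULi interfaces mentioned above, so names and signatures here are illustrative):</p>

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a dynamic proof: it initially exposes a single "placeholder"
// inference; once background generation finishes, setInferences replaces it
// and all registered listeners are notified via inferencesChanged, prompting
// the UI to redraw the proof.
public class DynamicProofSketch {
    public interface ChangeListener { void inferencesChanged(); }

    private List<String> inferences = List.of("Computing proof ...");
    private final List<ChangeListener> listeners = new ArrayList<>();

    public synchronized List<String> getInferences() { return inferences; }

    public synchronized void addListener(ChangeListener listener) {
        listeners.add(listener);
    }

    // Called from the background thread when proof generation completes
    // (or fails, in which case the result carries an error message).
    public synchronized void setInferences(List<String> result) {
        inferences = result;
        for (ChangeListener l : listeners) l.inferencesChanged();
    }
}
```

        <p>This mirrors the flow described above: the placeholder is shown immediately, and the listener notification swaps in the actual inferences once they are available.</p>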
        <sec id="sec-4-1-1">
          <title>4.1. User Study</title>
          <p>We conducted a small qualitative user study to evaluate the usability of the plug-ins, and to
validate the variety of proof generation options that Evee-protege provides. We held individual
interviews online, where participants installed the plug-ins, performed five tasks, 7 and answered
questions related to these tasks. Each task specified an entailment, for which the participants
were asked to generate two proofs using different methods, and compare them in terms of
ease of understanding (See the extended version [30] for all proofs used in the study). Lastly,
participants could provide feedback about what additional features they would like to have. In
total, we interviewed 10 participants, all of which often work with Description Logics.</p>
          <p>In Tasks 1 and 2, participants were asked to compare proofs generated for an atomic concept
inclusion in the Skin Physiology Ontology (SPO).8 For Task 1, they compared the detailed
Lethe-based proof with the elimination proof generated using Lethe with the heuristic method.
Asked which proof was easier to understand, 60% chose the elimination proof, 10% the
detailed proof, and 30% found no difference between the two. Most of the participants who
found no difference stated that it took them the same amount of effort to understand both proofs.
For the other participants, the structure of proofs and axioms and the type of expressions used
were the reasons to prefer one proof over the other.</p>
          <p>For Task 2, participants compared elimination proofs generated by Lethe and Fame using
the heuristic method. Here, 50% preferred the Fame proof and 30% the Lethe proof in terms of
understandability, while 20% of the participants found no differences between them. Participants
who preferred the Fame proof pointed out a contradiction caused by the interplay between a
universal and an existential restriction. For these participants, inferences like that are easier
to follow. At the same time, the lack of such interplay in the Lethe proof, which used only
existential restrictions, is the reason why most of the other participants preferred this proof.
          <p>
            In Task 3, participants were asked to prove a concept unsatisfiability in a modified version
of the pizza ontology9 using the proofs provided by the original Elk proof plugin [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ] and
the size-optimized elimination proof using Fame. Asked which inference steps were easier to
understand, 70% chose the Elk proof, 20% preferred the elimination proof, and 10% found
no difference. (Footnote 7: tasks and ontologies used in the study:
https://cloud.perspicuous-computing.science/s/cy5Y3kj94AGYSDB; footnote 8:
https://bioportal.bioontology.org/ontologies/SPO; footnote 9:
https://protege.stanford.edu/ontologies/pizza/pizza.owl.) However, when asked if they agree that a large proof with simple inferences is
easier to understand than a smaller proof with intricate inferences, the answers varied. Some
agreed with the statement, some agreed only as long as a certain ratio between the size of a
proof and the difficulty of its inferences is not exceeded. Some pointed out that despite their
agreement they think that people will eventually get used to the more complex inferences and
how they work. On the other hand, there were some participants who totally disagreed with
the statement. The main reason is that a short proof provides a better overview.
          </p>
          <p>For the last two tasks, participants looked into proofs for an atomic concept inclusion in the
BioTop ontology.10 In Task 4, the comparison was between the original Elk plugin and the Elk
proofs optimized for size. In Task 5, the comparison was between the optimized Elk proof and
a detailed proof generated by Lethe. For both tasks, the answers were unanimous: participants
found the optimized Elk proof and the detailed proof, respectively, easier to understand,
because these proofs are shorter and use simpler axioms and inferences.</p>
          <p>The diversity of the answers confirms our assumption that the best method for generating
proofs is often subjective, which justifies offering all the different proof generation methods in
Evee-protege. Some participants commented on the presentation of, and interaction with, the
proofs in Protégé (e.g. the possibility to expand the entire proof directly rather than one inference
at a time, to abbreviate names, and to choose between alternative forms of the same expression).
Some of these comments have already been addressed in our tool Evonne.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Evonne</title>
      <p>
        The proof visualization and interaction techniques in Evee-protege are restricted by the
capabilities of Protégé’s explanation service. More advanced proof exploration is offered by
Evee’s big sister Evonne [
        <xref ref-type="bibr" rid="ref18 ref21 ref22">18, 21, 22</xref>
        ], a web application that can be tried online and installed
locally using Docker.11 Evonne visualizes proofs with various interaction components and
layouts to support the exploration of large proofs. The proof visualization is linked to a second
view showing the context of the proof in the ontology by visualizing a subset (module) of the
ontology that is relevant to the proof. We only give an overview of the proof visualization here.
      </p>
      <p>
        When starting Evonne, users need to create a new project and associate an OWL ontology
file. They can then select an entailed atomic concept inclusion to be explained, and choose
between the different proof generation methods described in Section 3. In addition, users can
choose to provide a set of terms that they are familiar with (a signature) – axioms using only
those terms are then assumed to be known to the user, and consequently are not explained in
the proof view (see [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] for more details on this).
      </p>
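      <p>
        The signature mechanism just described can be sketched in a few lines. The following
Python fragment is a minimal illustration under our own simplified modeling (axioms as nested
tuples of term names, proofs as (conclusion, premises) pairs); the names are illustrative and
not Evonne’s actual API. Any subproof whose conclusion uses only known terms is collapsed,
since the user is assumed to accept it without further explanation.
      </p>

```python
# Minimal sketch of signature-based proof pruning; names are illustrative,
# not Evonne's API.  Axioms are modeled as nested tuples of term names.

def signature(axiom):
    """All terms (concept/role names) occurring in an axiom."""
    if isinstance(axiom, str):
        return {axiom}
    return set().union(*(signature(part) for part in axiom))

def prune(proof, known):
    """A proof is (conclusion, premises).  Collapse any subproof whose
    conclusion uses only known terms: it needs no further explanation."""
    conclusion, premises = proof
    if signature(conclusion) <= known:
        return (conclusion, [])  # shown as a leaf in the proof view
    return (conclusion, [prune(p, known) for p in premises])

# A ⊑ C derived from A ⊑ B and B ⊑ C, where B ⊑ C itself has a subproof.
proof = (("A", "C"), [(("A", "B"), []),
                      (("B", "C"), [(("B", "D"), []), (("D", "C"), [])])])
# With B, C, D known, the subproof of B ⊑ C is collapsed to a leaf.
print(prune(proof, {"B", "C", "D"}))
```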
      <p>
        Proofs are shown as graphs with two kinds of vertices: colored vertices for axioms, gray
ones for inference steps. By default, proofs are illustrated using a tree layout. However, it is
also possible to present proofs in a vertical layout, placing axioms linearly below each other,
with inferences represented through edges on the side (without the inference vertices). It is
possible to automatically re-order vertices to minimize the distance between conclusion and
premises in each step. Another layout option is a bidirectional layout, which is a tree layout
where, initially, the entire proof is collapsed into a magic vertex that links the conclusion directly
to its justification, and from which individual inference steps can be pulled out and pushed
back from both directions.
10https://bioportal.bioontology.org/ontologies/BT
11https://www.docker.com/
In all views, each vertex is equipped with multiple functionalities
for exploring a proof. For proofs generated using Elk, clicking on an inference vertex shows
the inference rule used, and the particular inference with relevant sub-elements highlighted
in different colors (see Figure 6). From each axiom vertex, users can alter the visual structure
of the proof by hiding branches, revealing the previous inference step from the current node,
or showing the entire proof. Each of the different proof layouts has specific controls in this
regard; for example, the vertical layout does not show inference rules as nodes and instead
allows inspecting them on demand. Additionally, trays of buttons above and below nodes can be used
to manipulate the representation of axioms (by shortening names or using natural language),
and to highlight justifications and diagnoses in the ontology view [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
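      <p>
        The two-sorted proof graph described above can be sketched as a small data structure
with axiom and inference vertices. The Python fragment below is an illustrative sketch under
our own simplified modeling (not Evonne’s API); it computes the justification that the collapsed
magic vertex of the bidirectional layout links to the conclusion, i.e. the asserted axioms that
are premises of some inference but never derived themselves.
      </p>

```python
# Hedged sketch of a proof graph with axiom and inference vertices;
# names are illustrative, not Evonne's API.
from dataclasses import dataclass

@dataclass
class Inference:
    rule: str        # name of the inference rule (gray vertex)
    premises: list   # axiom vertices used as premises
    conclusion: str  # axiom vertex derived by this step

def justification(proof):
    """Asserted axioms of the proof: axiom vertices used as premises but
    never derived.  These are what the collapsed 'magic vertex' links
    directly to the final conclusion."""
    derived = {inf.conclusion for inf in proof}
    used = {p for inf in proof for p in inf.premises}
    return sorted(used - derived)

proof = [
    Inference("R1", ["A ⊑ B", "B ⊑ C"], "A ⊑ C"),
    Inference("R2", ["A ⊑ C", "C ⊑ D"], "A ⊑ D"),
]
print(justification(proof))  # ['A ⊑ B', 'B ⊑ C', 'C ⊑ D']
```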
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>
        We have presented a collection of methods for generating DL proofs that can be explored in
various ways. Future work includes adding support for specifying a “known” signature to
Evee-protege, the development of proof generation methods for even more expressive logics
(e.g. using consequence-based reasoners such as Sequoia [31]), and visualization approaches for
explaining missing entailments, such as counter-interpretations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and abduction [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was supported by DFG grant 389792660 as part of TRR 248 – CPEC, and by the
DFG Research Training Group QuantLA, GRK 1763.</p>
      <p>[23] C. Lutz, F. Wolter, Foundations for uniform interpolation and forgetting in expressive
description logics, in: T. Walsh (Ed.), IJCAI 2011, Proceedings of the 22nd International
Joint Conference on Artificial Intelligence, IJCAI/AAAI, 2011, pp. 989–995. doi:10.5591/978-1-57735-516-8/IJCAI11-170.
[24] N. Nikitina, S. Rudolph, (Non-)succinctness of uniform interpolants of general terminologies
in the description logic ℰℒ, Artif. Intell. 215 (2014) 120–140. doi:10.1016/j.artint.2014.06.005.
[25] M. Horridge, B. Parsia, U. Sattler, Explaining inconsistencies in OWL ontologies, in:
L. Godo, A. Pugliese (Eds.), Scalable Uncertainty Management, Third International
Conference, SUM 2009, volume 5785 of Lecture Notes in Computer Science, Springer, 2009,
pp. 124–137. doi:10.1007/978-3-642-04388-8_11.
[26] B. Glimm, I. Horrocks, B. Motik, G. Stoilos, Z. Wang, HermiT: An OWL 2 reasoner,
J. Autom. Reason. 53 (2014) 245–269. doi:10.1007/s10817-014-9305-1.
[27] P. Koopmann, R. A. Schmidt, Forgetting concept and role symbols in 𝒜ℒ𝒞ℋ-ontologies,
in: K. L. McMillan, A. Middeldorp, A. Voronkov (Eds.), Logic for Programming,
Artificial Intelligence, and Reasoning - 19th International Conference, LPAR-19, volume
8312 of Lecture Notes in Computer Science, Springer, 2013, pp. 552–567. doi:10.1007/978-3-642-45221-5_37.
[28] N. Matentzoglu, B. Parsia, Bioportal snapshot 30.03.2017, 2017. doi:10.5281/zenodo.439510.
[29] M. Horridge, S. Bail, B. Parsia, U. Sattler, Toward cognitive support for OWL justifications,
Knowl. Based Syst. 53 (2013) 66–79. doi:10.1016/j.knosys.2013.08.021.
[30] C. Alrabbaa, S. Borgwardt, T. Friese, P. Koopmann, J. Méndez, A. Popovič, On the Eve of
True Explainability for OWL Ontologies: Description Logic Proofs with Evee and Evonne
(Extended Version), 2022. doi:10.48550/ARXIV.2206.07711.
[31] A. Bate, B. Motik, B. C. Grau, F. Simancik, I. Horrocks, Extending consequence-based
reasoning to 𝒮ℛℐ𝒬, in: C. Baral, J. P. Delgrande, F. Wolter (Eds.), Principles of Knowledge
Representation and Reasoning: Proceedings of the Fifteenth International Conference, KR
2016, AAAI Press, 2016, pp. 187–196. URL: http://www.aaai.org/ocs/index.php/KR/KR16/paper/view/12882.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] F. Baader, I. Horrocks, C. Lutz, U. Sattler, An Introduction to Description Logic, Cambridge University Press, 2017. URL: http://dltextbook.org/. doi:10.1017/9781139025355.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] M. Horridge, S. Bechhofer, The OWL API: A Java API for OWL ontologies, Semantic Web 2 (2011) 11–21. doi:10.3233/SW-2011-0025.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] C. Alrabbaa, W. Hieke, A. Turhan, Counter model transformation for explaining non-subsumption in EL, in: C. Beierle, M. Ragni, F. Stolzenburg, M. Thimm (Eds.), Proceedings of the 7th Workshop on Formal and Cognitive Reasoning co-located with the 44th German Conference on Artificial Intelligence (KI 2021), September 28, 2021, volume 2961 of CEUR Workshop Proceedings, CEUR-WS.org, 2021, pp. 9–22. URL: http://ceur-ws.org/Vol-2961/paper_2.pdf.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] F. Haifani, P. Koopmann, S. Tourret, Abduction in ℰℒ via translation to FOL, in: R. A. Schmidt, C. Wernhard, Y. Zhao (Eds.), Proceedings of the Second Workshop on Second-Order Quantifier Elimination and Related Topics (SOQE 2021) associated with the 18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), Online Event, November 4, 2021, volume 3009 of CEUR Workshop Proceedings, CEUR-WS.org, 2021, pp. 46–58. URL: http://ceur-ws.org/Vol-3009/paper2.pdf.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] D. L. McGuinness, Explaining Reasoning in Description Logics, Ph.D. thesis, Rutgers University, NJ, USA, 1996. doi:10.7282/t3-q0c6-5305.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Borgida, E. Franconi, I. Horrocks, Explaining ALC subsumption, in: W. Horn (Ed.), ECAI 2000, Proceedings of the 14th European Conference on Artificial Intelligence, IOS Press, 2000, pp. 209–213. URL: http://www.frontiersinai.com/ecai/ecai2000/pdf/p0209.pdf.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] S. Schlobach, R. Cornet, Non-standard reasoning services for the debugging of description logic terminologies, in: G. Gottlob, T. Walsh (Eds.), IJCAI-03, Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, Morgan Kaufmann, 2003, pp. 355–362. URL: http://ijcai.org/Proceedings/03/Papers/053.pdf.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] F. Baader, R. Peñaloza, B. Suntisrivaraporn, Pinpointing in the description logic EL+, in: J. Hertzberg, M. Beetz, R. Englert (Eds.), KI 2007: Advances in Artificial Intelligence, 30th Annual German Conference on AI, volume 4667 of Lecture Notes in Computer Science, Springer, 2007, pp. 52–67. doi:10.1007/978-3-540-74565-5_7.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. Horridge, Justification Based Explanation in Ontologies, Ph.D. thesis, University of Manchester, UK, 2011. URL: https://www.research.manchester.ac.uk/portal/files/54511395/FULL_TEXT.PDF.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] M. Horridge, B. Parsia, U. Sattler, Justification oriented proofs in OWL, in: P. F. Patel-Schneider, Y. Pan, P. Hitzler, P. Mika, L. Zhang, J. Z. Pan, I. Horrocks, B. Glimm (Eds.), The Semantic Web - ISWC 2010 - 9th International Semantic Web Conference, ISWC 2010, volume 6496 of Lecture Notes in Computer Science, Springer, 2010, pp. 354–369. doi:10.1007/978-3-642-17746-0_23.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] S. Schlobach, Explaining subsumption by optimal interpolation, in: J. J. Alferes, J. A. Leite (Eds.), Logics in Artificial Intelligence, 9th European Conference, JELIA 2004, volume 3229 of Lecture Notes in Computer Science, Springer, 2004, pp. 413–425. doi:10.1007/978-3-540-30227-8_35.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] C. Alrabbaa, F. Baader, S. Borgwardt, P. Koopmann, A. Kovtunova, Finding small proofs for description logic entailments: Theory and practice, in: E. Albert, L. Kovács (Eds.), LPAR 2020: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning, volume 73 of EPiC Series in Computing, EasyChair, 2020, pp. 32–67. doi:10.29007/nhpp.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Y. Kazakov, M. Krötzsch, F. Simancik, The incredible ELK - from polynomial procedures to efficient reasoning with ℰℒ ontologies, J. Autom. Reason. 53 (2014) 1–61. doi:10.1007/s10817-013-9296-3.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] M. A. Musen, The Protégé project: A look back and a look forward, AI Matters 1 (2015) 4–12. doi:10.1145/2757001.2757003.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Y. Kazakov, P. Klinov, A. Stupnikov, Towards reusable explanation services in Protege, in: A. Artale, B. Glimm, R. Kontchakov (Eds.), Proceedings of the 30th International Workshop on Description Logics, volume 1879 of CEUR Workshop Proceedings, CEUR-WS.org, 2017. URL: http://ceur-ws.org/Vol-1879/paper31.pdf.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] C. Alrabbaa, F. Baader, S. Borgwardt, P. Koopmann, A. Kovtunova, On the complexity of finding good proofs for description logic entailments, in: S. Borgwardt, T. Meyer (Eds.), Proceedings of the 33rd International Workshop on Description Logics (DL 2020), volume 2663 of CEUR Workshop Proceedings, CEUR-WS.org, 2020. URL: http://ceur-ws.org/Vol-2663/paper-1.pdf.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] C. Alrabbaa, F. Baader, S. Borgwardt, P. Koopmann, A. Kovtunova, Finding good proofs for description logic entailments using recursive quality measures, in: A. Platzer, G. Sutcliffe (Eds.), Automated Deduction - CADE 28 - 28th International Conference on Automated Deduction, volume 12699 of Lecture Notes in Computer Science, Springer, 2021, pp. 291–308. doi:10.1007/978-3-030-79876-5_17.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] C. Alrabbaa, F. Baader, S. Borgwardt, R. Dachselt, P. Koopmann, J. Méndez, Evonne: Interactive proof visualization for description logics (system description), in: Automated Reasoning - 11th International Joint Conference, IJCAR 2022, Lecture Notes in Computer Science, Springer, 2022. URL: https://lat.inf.tu-dresden.de/research/papers/2022/ALBABODAKOME-IJCAR22.pdf, to appear.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] P. Koopmann, LETHE: Forgetting and uniform interpolation for expressive description logics, Künstliche Intell. 34 (2020) 381–387. doi:10.1007/s13218-020-00655-w.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] Y. Zhao, R. A. Schmidt, FAME: An automated tool for semantic forgetting in expressive description logics, in: D. Galmiche, S. Schulz, R. Sebastiani (Eds.), Automated Reasoning - 9th International Joint Conference, IJCAR 2018, volume 10900 of Lecture Notes in Computer Science, Springer, 2018, pp. 19–27. doi:10.1007/978-3-319-94205-6_2.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] C. Alrabbaa, F. Baader, R. Dachselt, T. Flemisch, P. Koopmann, Visualising proofs and the modular structure of ontologies to support ontology repair, in: S. Borgwardt, T. Meyer (Eds.), Proceedings of the 33rd International Workshop on Description Logics (DL 2020), volume 2663 of CEUR Workshop Proceedings, CEUR-WS.org, 2020. URL: http://ceur-ws.org/Vol-2663/paper-2.pdf.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] T. Flemisch, R. Langner, C. Alrabbaa, R. Dachselt, Towards designing a tool for understanding proofs in ontologies through combined node-link diagrams, in: V. Ivanova, P. Lambrix, C. Pesquita, V. Wiens (Eds.), Proceedings of the Fifth International Workshop on Visualization and Interaction for Ontologies and Linked Data (VOILA 2020), volume 2778 of CEUR Workshop Proceedings, CEUR-WS.org, 2020, pp. 28–40. URL: http://ceur-ws.org/Vol-2778/paper3.pdf.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>