<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Guiding Interactive Ontology Repair Through Prolific and Relevant Axioms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tommaso Zendron</string-name>
          <email>t.zendron@vu.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafael Peñaloza</string-name>
          <email>rafael.penaloza@unimib.it</email>
          <uri xlink:href="https://rpenalozan.github.io/">https://rpenalozan.github.io/</uri>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Milano-Bicocca</institution>
          ,
          <addr-line>Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vrije Universiteit Amsterdam</institution>
          ,
          <addr-line>Amsterdam</addr-line>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>000</volume>
      <fpage>9</fpage>
      <lpage>9</lpage>
      <abstract>
        <p>Knowledge engineering is an error-prone task, and the need for automated tools to help correct existing errors has long been recognised. While many ontology repair methods have been proposed, most ignore a fundamental issue: only a domain expert can state whether an ontology reflects the domain knowledge or not. In this paper, we consider an interactive ontology repair procedure where a domain expert guides the system by signalling, from sets of suspicious axioms, those that are identified as erroneous. To avoid overwhelming the human expert with the number and complexity of information requests, we propose methods to reduce the class of potentially suspicious axioms: first, the expert may signal a class of consequences they desire to preserve in the corrected ontology; second, we introduce a new notion of prolificacy to capture the impact axioms have on the derivation of consequences.</p>
      </abstract>
      <kwd-group>
        <kwd>ontology repair</kwd>
        <kwd>description logics</kwd>
        <kwd>iterative debugging</kwd>
        <kwd>user interaction</kwd>
        <kwd>EL</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Description logics (DLs) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] are a family of knowledge representation formalisms which provide a clear
and simple syntax and a formal semantics based on first-order logic. One of their greatest successes is
the use of the lightweight DLs from the ℰℒ family [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] as the basis for representing large ontologies,
especially within the life sciences, like the medical ontology Snomed [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which uses over half a
million ℰℒ axioms.
      </p>
      <p>There are many reasons why knowledge engineering is an error-prone task: experts may disagree on
some information, details may be lost in the translation from human expertise to logical formalism,
or simply syntactic misunderstandings and typos may occur. The fact is that it is not uncommon to
face errors in an ontology. For instance, the Snomed knowledge base was found to entail that every
amputation of a finger is also an amputation of the hand, which is obviously unwanted in the modelled domain.
Managing these errors, especially in large ontologies, requires the help of specialised automated tools.</p>
      <p>
        Throughout the last two decades, many methods have been proposed to explain [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ] and
repair [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8, 9, 10, 11</xref>
        ] ontologies, and to allow for sound reasoning in the presence of errors [
        <xref ref-type="bibr" rid="ref12 ref13 ref14">12, 13, 14</xref>
        ],
where an error can be any consequence that does not mirror the application domain. These proposals
are not fully adequate from a knowledge engineering point of view. Indeed, error-tolerant reasoning is
a short-term solution to keep using an ontology in production after some error has been identified, but
it is by no means intended to be a final solution to the existence of these errors. On the other hand,
errors may have multiple, mutually dependent, causes. This in particular means that there are many
possible repairs—potentially exponentially many in the size of the ontology. Deciding which one of
them to choose is far from obvious.
      </p>
      <p>
        To date, most proposals for selecting one ontology which corrects the observed errors follow an
idea akin to the minimal-change restriction used in belief revision [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]: select the ontology that is the
“closest” to the original one, for an appropriate notion of closeness. The formal definition of a repair—a
subset-maximal sub-ontology which does not entail the error—is already a step in this direction. To
reduce the number of choices, it has been proposed to consider the repairs of maximum cardinality. On
the other hand, minimal change could also be interpreted as preserving most of the original knowledge,
which could also be implicit. Thus, rather than eliminating axioms, it has been proposed to modify
them through syntactic refinement operators [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] to obtain a weaker ontology [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ], or to compute
semantically optimal repairs [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ] among many others.
      </p>
      <p>While all these methods are adequately motivated and provide some guarantees, they still ignore a
fundamental element of error correction: correctness of an ontology—in the sense of its alignment to the
knowledge domain it is supposed to represent—is a meta-logical property, and no amount of syntactic
or semantic logical manipulation can access this “ground truth.” Indeed, only domain experts can state
whether an ontology correctly reflects the domain knowledge or not. Thus, to fully correct an ontology,
we cannot escape asking for expert feedback.</p>
      <p>
        Since human expert access is limited and expensive, ontology repair requires a delicate balance
between automation and interaction. While too much automation may lead to incorrect ontologies,
human experts may at some point be unable to answer too many or too complex requests for feedback.
Indeed, while in theory one could automatically compute all possible repairs and ask the expert to
choose among them, the number of existing solutions makes this approach infeasible in practice. The
goal is then to build an iterative procedure in which a knowledge engineer (henceforth called the user)
is asked a series of questions designed to yield a correct repair. The idea is somewhat reminiscent of
iterative ontology completion methods [
        <xref ref-type="bibr" rid="ref19 ref20">19, 20</xref>
        ]. The main difference lies in the kinds of questions
asked. In our case, the underlying simple idea is to present the user with one justification at a time
(that is, a set of axioms that produce the error, guaranteeing that at least one of them is erroneous) and
ask them to identify the wrong axiom until a full diagnosis is obtained.
      </p>
      <p>The goal of this paper is to devise a technique to reduce the number and size of the sets presented
to the user for verification, without first computing all possible repairs. To this end, we propose two
complementary techniques to reduce the search space for the “correct” repair. The first is a manual
phase, where the user is allowed to provide a class of consequences which must be preserved. These in
general refer to well-established, uncontroversial, and clear knowledge from the representation domain,
which might be implicitly derivable from the ontology, even if the user does not know exactly from
which ontology axioms it follows. Given this information, the class of potentially wrong axioms is reduced, and
the potential for deriving an answer that does not align with the domain is diminished.</p>
      <p>The second phase is an automated one, which relies on a new notion of prolificacy of axioms. In a
nutshell, prolificacy refers to how many consequences depend on the axiom in question. In the spirit
of minimal change, the most prolific axioms are those that preserve the most original consequences, and
are hence the most desired in a final repair. Interestingly, if restricted to atomic consequences only, the prolificacy
of all axioms in ontologies from the ℰℒ family is computable in polynomial time. In order to decide
how many, and which, prolific axioms to save, we focus on preserving the prolificacy of justifications
(see Algorithm 2 for details).</p>
      <p>These two phases combined allow us to focus on fewer axioms to present to the user, at each step
and in total, and construct a better ordering strategy. We emphasise that for this paper we focus mainly
on these pruning strategies, but the overarching goal is to deal with the full iterative repair process.
In particular, the pruning phase can be repeated every time the user identifies an erroneous
axiom, to further decrease the number of remaining questions to be asked.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries</title>
      <p>
        We first introduce all the preliminaries needed for understanding the rest of this work. Specifically,
we briefly explain the description logic ℰℒ⊥ as a representative of the ℰℒ family, and the notions of
justifications and repairs.
2.1. The Description Logic ℰℒ⊥
ℰℒ⊥ [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is a lightweight member from the family of description logics (DLs) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which is well known
for its polynomial time reasoning problems. The main building blocks of ℰℒ⊥ are concepts (unary
predicates from first-order logic) and roles (binary predicates). Its syntax and semantics are formalised
next.
      </p>
      <p>Consider three countable, mutually disjoint sets N<sub>I</sub>, N<sub>C</sub>, and N<sub>R</sub> of individual names, concept names,
and role names, respectively. ℰℒ⊥ concepts are built through the grammar rule C ::= A | ⊤ | ⊥ |
C ⊓ C | ∃r.C, where A ∈ N<sub>C</sub> and r ∈ N<sub>R</sub>. The semantics is based on interpretations, which are
pairs ℐ = (∆<sup>ℐ</sup>, ·<sup>ℐ</sup>), where ∆<sup>ℐ</sup> is a non-empty (potentially infinite) set called the domain, and ·<sup>ℐ</sup> is the
interpretation function which maps every individual name a ∈ N<sub>I</sub> to an object a<sup>ℐ</sup> ∈ ∆<sup>ℐ</sup>; every concept
name A ∈ N<sub>C</sub> to a set A<sup>ℐ</sup> ⊆ ∆<sup>ℐ</sup>; and every role name r ∈ N<sub>R</sub> to a binary relation r<sup>ℐ</sup> ⊆ ∆<sup>ℐ</sup> × ∆<sup>ℐ</sup>.
This function is extended to arbitrary concepts by setting: ⊤<sup>ℐ</sup> := ∆<sup>ℐ</sup>; ⊥<sup>ℐ</sup> := ∅; (C ⊓ D)<sup>ℐ</sup> := C<sup>ℐ</sup> ∩ D<sup>ℐ</sup>;
and (∃r.C)<sup>ℐ</sup> := {d ∈ ∆<sup>ℐ</sup> | ∃e ∈ C<sup>ℐ</sup>.(d, e) ∈ r<sup>ℐ</sup>}.</p>
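      <p>As a concrete illustration of the extension function, the following Python sketch (our own, not part of the original paper; the tuple encoding of concepts is a hypothetical choice) evaluates an ℰℒ⊥ concept over a finite interpretation:</p>
      <preformat>
```python
# A minimal sketch (not from the paper): computing the extension C^I of an
# EL-bottom concept over a finite interpretation I.
# Concepts are nested tuples: ("top",), ("bot",), ("name", "A"),
# ("and", C, D), ("exists", "r", C).

def extension(concept, domain, names, roles):
    """Set of domain elements in the extension of `concept`.
    `names` maps concept names to sets; `roles` maps role names to sets of pairs."""
    tag = concept[0]
    if tag == "top":
        return set(domain)                       # top^I = the whole domain
    if tag == "bot":
        return set()                             # bottom^I = the empty set
    if tag == "name":
        return set(names.get(concept[1], set()))
    if tag == "and":                             # (C and D)^I = intersection
        left = extension(concept[1], domain, names, roles)
        right = extension(concept[2], domain, names, roles)
        return left.intersection(right)
    if tag == "exists":                          # (exists r.C)^I
        targets = extension(concept[2], domain, names, roles)
        return {x for (x, y) in roles.get(concept[1], set()) if y in targets}
    raise ValueError("unknown constructor")

# Tiny interpretation: domain {1, 2}, A^I = {1}, r^I = {(2, 1)}.
domain = {1, 2}
names = {"A": {1}}
roles = {"r": {(2, 1)}}
print(extension(("exists", "r", ("name", "A")), domain, names, roles))  # {2}
```
      </preformat>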
      <p>The goal of DLs is to represent the knowledge of a domain in a formal way which allows for automated
reasoning. This is achieved by introducing restrictions on the class of interpretations of interest; namely,
those which comply with what is known about the knowledge domain.</p>
      <sec id="sec-2-1">
        <title>Definition 1 (ontology)</title>
        <p>A general concept inclusion (GCI) is an expression of the form C ⊑ D where
C, D are ℰℒ⊥ concepts. A TBox is a finite set of GCIs. An assertion is an expression of the form C(a) or
r(a, b) where C is a concept, r ∈ N<sub>R</sub>, and a, b ∈ N<sub>I</sub>. An ABox is a finite set of assertions. An ontology is
a pair 𝒪 = (𝒜, 𝒯) consisting of an ABox 𝒜 and a TBox 𝒯. For brevity, we will often use the term axiom
to refer generally to GCIs and assertions alike.</p>
      </sec>
      <sec id="sec-2-2">
        <p>The interpretation ℐ satisfies the GCI C ⊑ D if C<sup>ℐ</sup> ⊆ D<sup>ℐ</sup>. It satisfies the assertion C(a) if a<sup>ℐ</sup> ∈ C<sup>ℐ</sup>,
and the assertion r(a, b) if (a<sup>ℐ</sup>, b<sup>ℐ</sup>) ∈ r<sup>ℐ</sup>. ℐ is a model of the ABox, TBox, or ontology 𝒪 (represented as
ℐ |= 𝒪) if it satisfies all the axioms in 𝒪.</p>
        <p>
          In general, we say that an ontology 𝒪 is consistent if it has a model, and that it entails an axiom α (written
𝒪 |= α) if every model of 𝒪 is also a model of {α}. It is well known that consistency and entailment
for ℰℒ⊥ ontologies can be decided in polynomial time through consequence-based algorithms which
make implicit consequences explicit [
          <xref ref-type="bibr" rid="ref2 ref21">2, 21</xref>
          ].
        </p>
        <p>
          When dealing with ℰℒ⊥ it is often useful to assume that the ontology axioms are limited to a kind of
normal form. We say that a TBox is in normal form if all GCIs are of one of the forms
A ⊑ B, A₁ ⊓ A₂ ⊑ B, A ⊑ ∃r.B, ∃r.A ⊑ B,
where A, A₁, A₂ ∈ N<sub>C</sub> ∪ {⊤} and B ∈ N<sub>C</sub> ∪ {⊤, ⊥}. An ABox is in normal form if concept assertions
are atomic (i.e., refer to concept names only). An ontology is in normal form if its TBox and ABox
are both in normal form. It is easy to see that any ontology can be transformed in polynomial time
into another one that preserves the same consequences over the original vocabulary, but is in normal
form [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. In a nutshell, a complex concept C can be substituted with a fresh concept name A, which
is then required to be interpreted as C by adding adequate GCIs to the TBox. Similarly, a complex
assertion C(a) can be substituted by A(a) with A a fresh concept name, by adding A ⊑ C to the
TBox.
        </p>
        <p>Ontologies written in ℰℒ⊥ and other variants from the ℰℒ family have been successfully used to
represent various application domains, particularly in the biomedical fields. The complexity of these
domains, and the simplicity of the language (especially if axioms are presented in normal form), means
that real-world ontologies tend to have tens or even hundreds of thousands of axioms, making
the process of constructing them error-prone, and corrections very difficult. Hence, techniques for
helping users understand and correct unwanted consequences have been developed.</p>
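        <p>To give a feel for how consequence-based reasoning over normalised ontologies works, here is a small Python sketch of TBox classification by rule saturation. It is our own simplified rendering of such algorithms (the axiom encoding and rule set are assumptions of this illustration, and the ABox is ignored):</p>
        <preformat>
```python
# A compact sketch (ours, not the paper's implementation) of consequence-based
# classification for a normalised EL-bottom TBox. Axiom encodings:
#   ("sub", A, B)        for A subsumed-by B
#   ("conj", A1, A2, B)  for A1 conj A2 subsumed-by B
#   ("ex_r", A, r, B)    for A subsumed-by exists r.B  (B must be in `names`)
#   ("ex_l", r, A, B)    for exists r.A subsumed-by B

def classify(tbox, names):
    """Return S with S[A] = the set of subsumers derived for each name A."""
    S = {A: {A, "TOP"} for A in names}          # initialisation rule
    edges = set()                               # (A, r, B): A below exists r.B
    changed = True
    while changed:                              # saturate to a fixpoint
        changed = False
        def add(A, C):
            nonlocal changed
            if C not in S[A]:
                S[A].add(C); changed = True
        for A in names:
            for ax in tbox:
                if ax[0] == "sub" and ax[1] in S[A]:
                    add(A, ax[2])
                if ax[0] == "conj" and ax[1] in S[A] and ax[2] in S[A]:
                    add(A, ax[3])
                if ax[0] == "ex_r" and ax[1] in S[A]:
                    if (A, ax[2], ax[3]) not in edges:
                        edges.add((A, ax[2], ax[3])); changed = True
            for (X, r, B) in list(edges):
                if X != A:
                    continue
                for ax in tbox:
                    if ax[0] == "ex_l" and ax[1] == r and ax[2] in S[B]:
                        add(A, ax[3])
                if "BOT" in S[B]:               # bottom propagates along roles
                    add(A, "BOT")
    return S

# Example: A below B, B below exists r.C, exists r.C below D.
tbox = [("sub", "A", "B"), ("ex_r", "B", "r", "C"), ("ex_l", "r", "C", "D")]
S = classify(tbox, ["A", "B", "C", "D"])
# Saturation derives that D subsumes both A and B.
```
        </preformat>
        <p>Only pairs of names occurring in the TBox can ever be added, so the saturation stops after polynomially many additions, which is the intuition behind the polynomial-time bound mentioned above.</p>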
        <sec id="sec-2-2-1">
          <title>2.2. Justifications and Repairs</title>
          <p>
            Finding the axioms responsible for an error in an ontology among several thousands is infeasible
without the help of some automated deduction tool. In monotonic languages (like all classical DLs,
including ℰℒ⊥), this corresponds to a task known as axiom pinpointing; that is, identifying one or all
minimal sub-ontologies that entail the consequence—also known as justifications [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ].
Definition 2 (justification). Let 𝒪 be an ontology and α an axiom such that 𝒪 |= α. An ontology ℳ is
called a justification for α w.r.t. 𝒪 if (i) ℳ ⊆ 𝒪; (ii) ℳ |= α; and (iii) for all ℳ′ ⊊ ℳ, ℳ′ ̸|= α.
          </p>
          <p>Note that this definition includes justifications for ontology inconsistency, by considering the axiom
⊤ ⊑ ⊥ as the consequence α.</p>
          <p>
            Justifications can be seen as a means to logically “explain” the consequence α by retaining only the
information necessary for deriving it and removing all superfluous knowledge. Yet, justifications are
not unique; in fact, a consequence may have exponentially many justifications w.r.t. a single ontology
𝒪 (in the size of 𝒪) [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
          </p>
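          <p>Definition 2 can be turned directly into a (deliberately naïve, worst-case exponential) enumeration procedure, given any entailment oracle. The sketch below is only illustrative; practical systems use hitting-set trees or SAT-based axiom pinpointing instead:</p>
          <preformat>
```python
from itertools import combinations

# Brute-force sketch of Definition 2: enumerate all subset-minimal
# sub-ontologies that entail a consequence. `entails` is any oracle that
# takes a set of axioms and returns True or False.

def justifications(ontology, entails):
    found = []
    for k in range(len(ontology) + 1):           # smallest subsets first
        for subset in combinations(ontology, k):
            sub = set(subset)
            # minimal because no previously found justification is inside it
            if entails(sub) and not any(j.issubset(sub) for j in found):
                found.append(frozenset(sub))
    return found

# Toy oracle: the "error" follows from axiom c alone, or from a and b together.
def toy_entails(axioms):
    return "c" in axioms or {"a", "b"}.issubset(axioms)

print(justifications(["a", "b", "c"], toy_entails))  # the two justifications {c} and {a, b}
```
          </preformat>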
          <p>
            If the goal is to get rid of an unwanted consequence, it makes sense to rather consider the notion of a
repair; that is, maximal sub-ontologies which do not entail this consequence [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ].
          </p>
          <p>Definition 3 (repair). Let 𝒪 be an ontology and α an axiom such that 𝒪 |= α. An ontology ℛ is a repair
for α w.r.t. 𝒪 if (i) ℛ ⊆ 𝒪; (ii) ℛ ̸|= α; and (iii) for all ℛ ⊊ ℛ′ ⊆ 𝒪, ℛ′ |= α.</p>
          <p>
            Justifications and repairs are dual not only from a definitional point of view; in fact, they can be
computed from each other. Indeed, if J is the class of all justifications, then any minimal hitting set of J
is the complement of a repair (also known as a diagnosis). Dually, the minimal hitting sets of the class
of all diagnoses form exactly the class of justifications. Moreover, the number of diagnoses and repairs can also
be exponential in the size of the ontology. All these are well-known classical results; see e.g. [
            <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
            ].
Intuitively, to guarantee that a consequence cannot be derived any more from an ontology, one must
make sure that no justification is contained in it. This is equivalent to removing at least one axiom from
each justification, a process simulated by the computation of the hitting sets and complementation.
          </p>
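          <p>The hitting-set duality can be checked on a small example. The helper below is a brute-force sketch of ours (real systems use Reiter-style hitting-set trees): applying it to the justifications yields the diagnoses, and applying it again to the diagnoses recovers the justifications.</p>
          <preformat>
```python
from itertools import chain, combinations

# Minimal hitting sets by brute force (exponential; for illustration only).

def minimal_hitting_sets(families):
    universe = sorted(set(chain.from_iterable(families)))
    hits = []
    for k in range(len(universe) + 1):           # smallest candidates first
        for cand in combinations(universe, k):
            c = set(cand)
            meets_all = all(c.intersection(f) for f in families)
            if meets_all and not any(h.issubset(c) for h in hits):
                hits.append(frozenset(c))
    return hits

# Two disjoint justifications of sizes 3 and 2 give 3 x 2 = 6 diagnoses.
justs = [frozenset({"a1", "a2", "a3"}), frozenset({"a4", "a5"})]
diags = minimal_hitting_sets(justs)
print(len(diags))                                # 6
# Duality: the minimal hitting sets of the diagnoses are the justifications.
print(set(minimal_hitting_sets(diags)) == set(justs))  # True
```
          </preformat>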
          <p>Methods for computing justifications and repairs have been studied over the years, along with
techniques for reasoning in the presence of errors or inconsistencies. However, from the point of view
of an ontology life-cycle, we cannot be content with using an ontology which is known to contain
errors. Rather, we want to correct the ontology for future use. In the next section, we provide a brief
overview of existing ontology repair methods.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Ontology Repair</title>
      <p>From the point of view of knowledge engineering, we are not interested in one arbitrary repair or in
all repairs to correct an ontology. We are rather interested in the “right” one, which actually fixes the
issues.</p>
      <p>
        The task of finding an adequate repair is similar to contraction from belief revision [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In this setting,
the goal is to modify an ontology so that a consequence is not entailed any more, while satisfying
some rationality postulates and typically also a minimal change condition: the contraction should be as
close to the original ontology as possible. Different solutions have been proposed depending on the
interpretation given to the notion of “closeness.”
      </p>
      <p>
        Every repair, as in Definition 3, can be considered minimally different from the original ontology,
as no further axioms can be re-introduced without reintroducing the error. It has also been proposed to
consider only repairs of maximum cardinality, as they minimise the number of removed axioms. Yet,
these are not only hard to compute [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], but also suffer from the same limitations as general repairs:
there can be exponentially many of them, and they may still remove too much (desired) knowledge.
      </p>
      <p>
        A different perspective comes from trying to extend the class of models of the ontology, without
necessarily preserving its full syntactic structure. From this point of view, one idea is to weaken (i.e.,
make more general or less restrictive) the axioms in a diagnosis through syntactic refinement operators.
These gentle repairs preserve more of the original consequences and do not damage the original
structure too much, by substituting an axiom with one of its maximally strong weakenings. On the other
hand, so-called optimal repairs aim to semantically preserve as many of the original consequences as
possible, with less regard towards the ontology syntax [
        <xref ref-type="bibr" rid="ref14 ref17">14, 17</xref>
        ].
      </p>
      <p>The main limitation of all these approaches is that they are purely logical, and hence cannot really
take into account the domain knowledge to generate the ontology which best coincides with the intent
of the modeller. Indeed, correctness of an ontology is a meta-logical property, and logic alone cannot
distinguish which of all the possible repairs available best describes the domain knowledge. To find the
“true” repair, the only solution is to ask domain experts, who are able to identify erroneous constraints.
However, the sheer number of options available makes it impossible for any human expert to select
among them. Thus, we propose an iterative approach which relies on user input to identify the best
possible ontology repair. The main difficulties and proposed solutions are described in the next section.</p>
      <p>We recall that the final goal is to repair an ontology; yet, as repairs are almost as big as the original
ontologies (with up to hundreds of thousands of axioms), presenting a user with the choice between
two repairs (or more) is not helpful. Rather, we take advantage of the duality between justifications and
repairs and focus on the computation of the former—which are typically of a much smaller size—and
the diagnoses that arise from them. A naïve idea is to enumerate all the justifications, one at a time, and
ask the user to identify one axiom to be removed, until at least one axiom from each justification has
been deleted. This idea is not only impractical given the potentially exponential number of justifications
available, but might also lead to the deletion of too many axioms if the user was unable to identify that
a diferent choice would “touch” other yet-unseen justifications. Indeed, by studying one justification at
a time, the user cannot grasp the full picture of the repair process, which forces them to think about
individual axioms instead of their global consequences. Note that despite being a domain expert, the
user is not omniscient about all the consequences that are affected by the removal of an axiom.</p>
    </sec>
    <sec id="sec-4">
      <title>4. User-guided Ontology Repair</title>
      <p>We present in detail our novel approach for interactively repairing ontologies. The basic idea is very
simple: ask the knowledge engineer a sequence of questions which allow the correct repair to be identified
among the large number of candidates. However, as human experts are scarce, costly, and
tire easily, we should try to minimise the number of questions and their difficulty, thus maximising the
return on the invested human work. Moreover, the order in which the questions are asked may influence the
quality of the final result.</p>
      <p>The first step towards an adequate user-guided ontology repair is to prune the search space; i.e.,
reduce the number of answers required from the user. We introduce a two-phase procedure. In the first
phase (interactive pruning), the user specifies some consequences and axioms which should be preserved
in the repair. In the second phase (automated pruning) the system identifies and protects structurally
central axioms from the ontology. After pruning, the approach moves into a diagnosis computation phase,
which can be performed either manually, by interacting with the user, or fully automatically, letting the
system use predefined strategies to select the most appropriate diagnosis according to the desired
repair goals. Yet, an important element of our approach is that we do not want to compute the whole
set of justifications or repairs, either a priori or during the interaction. Instead, we try to minimise this
computational effort as well.</p>
      <p>
        To simplify the procedure and its exposition, we assume that the ontology is in normal form (see
Section 2). If this is not the case, we can always first pre-process the ontology through a normalisation
method like the ones suggested in [
        <xref ref-type="bibr" rid="ref1 ref25">1, 25</xref>
        ]. We note that most ideas work, albeit in a much more technical
manner, with general ontologies.
      </p>
      <p>It can be seen that conjuncts appearing in a normalised ontology are necessarily syntactically disjoint;
i.e., they refer to different syntactic (concept) names even if they may have implicit semantic connections
derivable from the ontology. This syntactic disjointness and the simplicity of the normal form allow for
more fine-grained repairs, without removing more information than needed. This property is
guaranteed in gentle repairs, since the maximally strong weakening would preserve as much information
as possible without any need for preprocessing. Nevertheless, the preprocessing is useful in general,
as it allows the system to present smaller axioms to the user and to avoid the problem of having justifications
consisting of very few, but very complex, axioms. Normalisation renders axioms and justifications more
comprehensible for the user, who can manage them more efficiently, leading to an improvement in the
user experience.</p>
      <sec id="sec-4-1">
        <title>4.1. Pruning the Search Space</title>
        <p>As we have seen, the naïve approach to repairing the ontology may lead to a deletion of too many
axioms. The main issue is that presenting one justification or one axiom at a time to the user may
not let them understand the implications of its removal. Thus, our algorithm should select a subset
of relevant questions from all the possible ones, structuring them in a convenient way to let the user
express their preferences and obtain the repair that best fits their needs.</p>
        <p>First note that only axioms which appear in at least one justification will ever need to be removed to
obtain a repair. Hence we start with the set 𝒰 defined as the union of all justifications. Although it is
expensive to compute this set, we can do so once, in the background, before any interaction
with the user. The next step is to identify which axioms are more important than others for reasoning
with the ontology. Once these important axioms are identified, we can prevent them from being excluded
from the repair by removing them from the set 𝒰. This effectively ensures that the repair, whatever
it turns out to be, will maintain such axioms and their consequences. We call this technique of identifying
important axioms that should be retained in the final repair justification pruning. As we will see, this
pruning allows us to overcome many of the limitations of the naïve approach. However, we still have not
formally defined what it means for an axiom to be important. As we explain next, in the context of
interactive repair, it is useful to distinguish between two complementary notions of axiom importance,
which guide the reduction of the repair search space: relevance and prolificacy.</p>
        <p>We call an axiom relevant when it is identified as such by the user. This could be either because
the user explicitly stated its importance for the ontology; because it is frequently used in deployed
reasoning tasks; or because it is required to derive other specific consequences that the user wishes to
preserve. The importance of relevant axioms is thus subjective and driven by the user’s intent. In the
context of our strategy, we identify relevant axioms as those that appear in at least one justification for
a user-specified desired consequence. These axioms are considered important precisely because they
enable the derivation of such a consequence.</p>
        <p>On the other hand, the notion of prolificacy assesses the objective importance of an axiom in the
knowledge base for deducing consequences. Informally, we say that an axiom α₁ is more prolific than
an axiom α₂ if the number of consequences for which α₁ is necessary is greater than the number for
which α₂ is necessary. In this case, we say that α₁’s prolificacy is greater than α₂’s. In our framework,
the most prolific axioms are automatically identified and protected during the automated pruning phase;
see Section 4.3 for details.</p>
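        <p>Under this reading, prolificacy can be approximated by counting, for each axiom, the entailed consequences that are lost when the axiom is removed. The sketch below is our own illustration (the pair encoding of atomic subsumptions and the reachability oracle are assumptions, not the paper’s machinery):</p>
        <preformat>
```python
# Sketch: axiom a is *necessary* for a consequence c when the ontology
# without a no longer entails c; prolificacy counts such consequences.

def prolificacy(ontology, consequences, entails):
    score = {}
    for a in ontology:
        rest = [b for b in ontology if b != a]
        score[a] = sum(1 for c in consequences
                       if entails(ontology, c) and not entails(rest, c))
    return score

def entails(axioms, goal):
    """Toy oracle: axioms and goals are pairs (X, Y) meaning X is subsumed by Y;
    entailment is reachability in the subsumption graph."""
    edges = set(axioms)
    frontier, seen = {goal[0]}, set()
    while frontier:
        x = frontier.pop()
        if x == goal[1]:
            return True
        seen.add(x)
        frontier.update(y for (z, y) in edges if z == x and y not in seen)
    return False

# Chain A below B below C below D: the middle axiom supports the most consequences.
tbox = [("A", "B"), ("B", "C"), ("C", "D")]
cons = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")]
print(prolificacy(tbox, cons, entails))  # ("B", "C") scores 4; the others score 3
```
        </preformat>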
        <p>By combining both perspectives, the repair process benefits from a dual safeguard: relevant axioms,
computed via desired consequences, ensure the preservation of the consequences relevant to the user;
whereas the identification of most prolific axioms avoids removing semantically meaningful knowledge
when less expensive alternatives are available.</p>
        <p>Before going into details, note that in general it seems reasonable to assume that prolific axioms (from
which many consequences can be derived) are also relevant; that is, also subjectively important
to the user. However, this correlation depends on how the ontology is used and on the user’s specific
goals. If this overlap were guaranteed, there would be no need for an interactive approach, and it would
suffice to compute the repair that maximises prolificacy. Since we need to understand the
specific preferences of the user, we implement two distinct strategies for identifying important axioms,
corresponding to two phases of justification pruning: interactive pruning, which preserves axioms
based on user-specified desired consequences and feedback; and automated pruning, which protects the
most prolific axioms in a justification according to objective metrics.</p>
        <p>To order these strategies, we assume that the user preferences are more important than the objective
entailments of single axioms, as they represent the ground truth for the domain knowledge. Thus, the
starting point is to ask the user for desired consequences. In doing so, we establish that axiom relevance
is more important than prolificacy. In particular this means that, whenever it is not possible to preserve
relevance while maximising prolificacy, priority will be given to the former, as exemplified next.</p>
        <p>Example 4. Consider the ontology 𝒪 composed of the ABox 𝒜 := {α₁ : A(a), α₅ : r(a, b)} and the
TBox 𝒯 := {α₂ : A ⊑ B, α₃ : B ⊑ ⊥, α₄ : ∃r.⊤ ⊑ ⊥}, where the prefixes αᵢ are just names given
to the axioms for brevity of presentation. It is easy to see that 𝒪 is inconsistent, and that there are two
justifications for inconsistency; namely, 𝒥₁ := {α₁, α₂, α₃} and 𝒥₂ := {α₄, α₅}; this means that there
are six diagnoses and repairs for this inconsistency.</p>
        <sec id="sec-4-1-2">
          <p>Suppose now that the user expresses that (B ⊓ ∃r.⊤)(a) is a desired consequence, which should be preserved.
Since the only justification for this consequence (i.e., the only set of axioms necessary for deriving it) is {α₁, α₂, α₅},
these three axioms are marked as relevant, and must be protected throughout the repair process.</p>
          <p>At this point, the system moves to the automated pruning phase. Consider that the system identifies α₄
as the most prolific axiom. Protecting it from removal would result either in preserving the full justification 𝒥₂ (as
α₅ is already protected), and hence the inability to repair 𝒪, or in removing α₅, and thus the consequence
requested by the user. As the user preference is prioritised, the system does not protect the axiom α₄. This
illustrates the precedence of interactive pruning over automated pruning: the system should prioritise the
will of the user over automatic choices. In fact, considering this precedence, the system could have avoided
(in this simple example) computing the prolificacy of α₃ and α₄ and concluded that they form the only
available diagnosis that satisfies the user’s request.</p>
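          <p>The arithmetic of this example can be replayed mechanically. The hypothetical helper below computes diagnoses as minimal hitting sets over the unprotected axioms only, showing that interactive pruning shrinks the six candidate diagnoses to the single one compatible with the user’s wishes:</p>
          <preformat>
```python
from itertools import chain, combinations

# Diagnoses as minimal hitting sets of the justifications, restricted to
# axioms that interactive pruning has not protected (illustrative helper).

def diagnoses(justs, protected=frozenset()):
    allowed = sorted(set(chain.from_iterable(justs)) - set(protected))
    out = []
    for k in range(len(allowed) + 1):
        for cand in combinations(allowed, k):
            c = set(cand)
            if all(c.intersection(j) for j in justs) \
               and not any(d.issubset(c) for d in out):
                out.append(frozenset(c))
    return out

justs = [{"a1", "a2", "a3"}, {"a4", "a5"}]      # justifications for inconsistency
print(len(diagnoses(justs)))                     # 6 diagnoses before pruning
# Protecting the relevant axioms a1, a2, a5 leaves only the diagnosis {a3, a4}.
print(diagnoses(justs, protected={"a1", "a2", "a5"}))
```
          </preformat>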
          <p>In brief, justification pruning is a method which allows the system to protect some of the axioms from 𝒰 so
that they cannot appear in any computed diagnosis—and by extension, are guaranteed to appear in the
final repair found. This has the advantage of reducing the number and complexity of questions to be
asked to the user in the repair process, and ultimately reduces the human effort. Note that there is no
pre-defined number of axioms that should be saved in the repair: this number varies depending on the
specific case. However, 𝒰 is always sufficient for computing a diagnosis after pruning.</p>
          <p>Having introduced justification pruning, we can now explore each of the two phases in more detail.
In the following, as before, we use the expressions "save," "preserve," and "protect" to indicate
that some axioms are prevented from being included in the diagnosis. We will sometimes express
the same idea by saying that these axioms are removed from the union of all justifications  . This is
motivated by the fact that diagnoses will be built as subsets of  .</p>
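          <p>The bookkeeping described above can be sketched as follows. This is our own illustration, not the paper's implementation: the axiom names a1–a5 are hypothetical stand-ins for the axioms of Example 4, and justifications are plain Python sets. Protected axioms are dropped from every justification, and a diagnosis is then any hitting set of what remains.</p>

```python
def prune(justifications, protected):
    """Drop protected axioms from every justification.

    A diagnosis is then built as a hitting set of the pruned
    justifications; if a justification loses all of its axioms,
    no repair can preserve every protected axiom.
    """
    pruned = []
    for just in justifications:
        remaining = sorted(a for a in just if a not in protected)
        if not remaining:
            raise ValueError("no repair preserves all protected axioms")
        pruned.append(remaining)
    return pruned

# Toy run mirroring Example 4: two justifications for the inconsistency,
# with a1, a2, a5 protected because they derive the desired consequence.
justs = [{"a1", "a2", "a3"}, {"a4", "a5"}]
print(prune(justs, {"a1", "a2", "a5"}))  # [['a3'], ['a4']]
```

          <p>Here the pruned justifications are singletons, so the only diagnosis left contains the remaining two axioms, matching the discussion of Example 4.</p>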
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Interactive Pruning: Desired Consequences</title>
        <p>As seen, justification pruning is needed to reduce the search space for a repair by reducing the number
of questions to submit to the user. In the interactive pruning phase, we interact with the
user, asking them to freely express desired consequences and axioms that they want to preserve in their
repair. That is, the user expresses desiderata by stating knowledge that they know correctly reflects
the represented domain.</p>
        <p>
          If the user knows that an axiom is important for their purposes, they are free to instruct the system
to protect that axiom, and it will immediately be removed from the set  of axioms from which
a diagnosis is being built. Yet, the user may not want to point to specific axioms, but rather to more
general consequences of the ontology. In this case, we propose that, after a desired consequence 
is indicated by the user, the system removes one justification for  from the set  . This effectively
reduces the number of potential axioms to be verified by the user. Computing one justification for a
consequence requires polynomial time, so it is completely feasible in ℰℒ⊥ [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Note that this is a heuristic,
which allows the computations to be performed efficiently and avoids further interactions with the
user. Effectively, this heuristic guarantees that all desired consequences are preserved in the final repair,
without any need for further verification throughout the process. To guarantee that a solution exists,
we must assume that all such requests from the user are error-free. In practice, we expect the user to
state only a few, simple desiderata referring to the fundamental knowledge to preserve.
Algorithm 1 Modified Black-Box Algorithm for Desired Consequences
Require: Ontology , set  , desired consequence 
Ensure: Set  of axioms to be protected to preserve 
1:  ← ∅
2: if  ∖  ̸|=  then
3: for each axiom  ∈  do
4: if  ∖ {} |=  then
5:  ←  ∖ {}
6: else
7:  ←  ∪ {}
8: if ( ∖  ) ∪  |=  then
9: Break
10: end if
11: end if
12: end for
13: end if
14: return
        </p>
        <p>Pinpointing the desired consequences is a smart strategy to restrict the search space for repairs
with minimal effort from the user, since they do not have to compare different options, but can
directly indicate their preferences on consequences. Indeed, thanks to this idea of specifying desired
consequences, the user is free to input whatever consequence they want (as long as it is correct). In this
sense, we have addressed the problem of structuring questions. Indeed, this design is preferable to
presenting sets of axioms or one justification at a time as in the naïve approach, which does not allow
reasoning about implicit knowledge.</p>
        <p>To compute the justification for  we can use the well-known black-box algorithm that makes a
linear number of calls to a standard reasoner. The algorithm tries to remove one axiom at a time from
the ontology, while  is still entailed. While irrelevant for correctness, the order in which the axioms
are removed influences the justification found. Here, two conflicting forces come into play. On the
one hand, we would like to reduce  and hence, try to keep as many axioms from  as possible in the
justification. On the other hand, if  is entailed already by  ∖  , then we know that  is preserved
in all repairs, meaning that no special handling of  is needed. Under these conditions, we prioritise
preserving more potential repairs, as this helps guarantee that the “correct” one (from the domain
knowledge point of view) can still be computed. This has the added advantage of preserving as many of
the original consequences as possible as well. Intuitively, we want to order the axioms so that those
belonging to  are tested (and potentially excluded from the justification) first.</p>
        <p>As our interest is mainly in the axioms that will be removed from  , and not in the precise
justifications of  , we modify the black-box algorithm to further reduce the number of calls to the
reasoner. First, we verify whether  ∖  |=  . In that case, we know that there is a justification w.r.t.
 ∖  and no axioms need to be protected. Otherwise, we apply the standard black-box algorithm,
but only verifying whether the axioms in  can be removed. This idea is described more formally in
Algorithm 1 and exemplified next.</p>
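        <p>The modified black-box procedure can be sketched as follows. This is a schematic rendering of Algorithm 1 under our own naming: entails is an assumed reasoner callback, here instantiated by a toy reachability check over atomic subsumptions rather than an actual ℰℒ⊥ reasoner.</p>

```python
def entails(axioms, goal):
    """Toy reasoner: axioms and goal are subsumptions (sub, sup);
    entailment is reachability in the subsumption graph."""
    src, dst = goal
    seen, frontier = {src}, {src}
    while frontier:
        step = {b for (a, b) in axioms if a in frontier} - seen
        seen |= step
        frontier = step
    return dst in seen

def protect(ontology, M, gamma):
    """Algorithm 1 sketch: the axioms of M that must be protected
    so that the desired consequence gamma survives every diagnosis."""
    R, rest = set(), set(ontology) - set(M)
    if entails(rest, gamma):
        return R                  # gamma holds without M: nothing to protect
    O = set(ontology)
    for a in sorted(M):           # only axioms of M are tested for removal
        if entails(O - {a}, gamma):
            O.discard(a)          # a is not needed for gamma
        else:
            R.add(a)              # a is necessary: protect it
            if entails(rest | R, gamma):
                break             # the protected axioms already suffice
    return R

O = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
M = {("A", "C"), ("C", "D")}      # union of justifications of the error
print(protect(O, M, ("A", "D")))  # {('C', 'D')}
```

        <p>Testing all desired consequences simultaneously, as discussed later in this section, only requires replacing each entailment check by a conjunction over the whole set of desired consequences.</p>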
        <p>Example 5. Consider the ontology  with an empty ABox and the TBox
 := { 1 :  ⊑ ,  2 :  ⊑ ∃.,  3 :  ⊑ ∃.,  4 : ⊤ ⊑ ,
 5 :  ⊑ ,  6 : ∃. ⊑ ,  7 :  ⊑ ∃.,  8 :  ⊑ ,
 9 :  ⊑ ∃.,  10 : ∃. ⊑ ,  11 : ∃. ⊑ 
}
having the erroneous consequence  :=  ⊑ ∃. . The error  has two justifications; namely,  1 :=
{ 1,  2} and  2 := { 3,  4}. So,  := { 1, . . . ,  4}.</p>
        <p>Suppose that  :=  ⊑  is a desired consequence provided by the user, which can be seen to have two
justifications:  1 := { 1,  5} and  2 := { 6,  7}. Applying Algorithm 1, we see that  ∖  |=  because
2 ⊆  ∖  . Hence,  is preserved in any repair, and no axiom needs to be protected. Note that without
the first check, the black-box algorithm could have computed the justification  2 , thus (unnecessarily)
protecting  1.</p>
        <p>Consider now the desired consequence  :=  ⊑ ∃. whose only justification is  := { 1,  7}.</p>
        <p>Algorithm 1 first verifies that  ∖  ̸|=  and enters the for loop with the axioms in  . In the first iteration,
 ∖ { 1} ̸|=  , and hence it enters the else branch in line 6, updating  to { 1}. At this point, the algorithm verifies
that ( ∖  ) ∪  = { 1,  5, . . . ,  11} |=  , which breaks the for loop and terminates the algorithm
with { 1} as the set of protected axioms.</p>
        <p>If the user wishes to preserve multiple consequences, a simple idea would be to apply Algorithm 1 to
each of them sequentially, updating the set  between calls to take into account the known protected
axioms. Yet, as we see next, this may lead to erroneous results.</p>
        <p>Example 6. Continuing Example 5, suppose that the user specifies as desired consequences  :=  ⊑ ∃.
and  :=  ⊑  . As seen already,  has only one justification  = { 1,  7}. Instead,  has the justifications
 1 := { 1,  8} and  2 := { 2,  9,  10,  11}.</p>
        <sec id="sec-4-2-3">
          <p>At the end of Example 5 we saw that  1 needs to be protected, so it can be removed from  before calling
Algorithm 1 for  . Hence, at the moment of the call,  contains only  2,  3, and  4, and the first check
verifies that  ∖  |=  , so no further axioms need to be protected.</p>
          <p>Note that if we did not remove  1 from  before the call, we would run the risk of computing the justification
2, which would require  2 to be protected. But in that case, we would preserve a justification for the
unwanted consequence  and no repair could be computed.</p>
          <p>However, note that the order in which the desired consequences are analysed influences the final
result. Indeed, in the previous example, if  had been analysed before  , we could have protected  2,
which, as we have seen, would later lead to a wrong path. To solve this, we further extend the algorithm
so that the justification is not computed for a single desired consequence, but for all simultaneously;
that is, instead of considering an input  , it considers the whole set Γ of desired consequences to be
entailed in each test. This guarantees that all requirements are satisfied, minimising the number of
protected axioms.</p>
          <p>Other modifications can be applied to try to improve efficiency. For instance, one may focus on
justifications or diagnoses of minimal cardinality, either for protecting fewer axioms or for diminishing
the potential options remaining. Alternatively, one may consider axioms that appear in the most
justifications, as they allow for smaller hitting sets, and thus smaller diagnoses, as illustrated in the
following example.</p>
          <p>Example 7. Consider the ontology  = ( , ) formed by
 := { ⊑  | 1 ≤  ≤ } ∪ { ⊑ } ∪ { ⊓ ⊑  | 1 ≤  ≤ }
 := {()} ∪ {() | 1 ≤  ≤ }
and suppose that () is an erroneous consequence. The justifications for this error are
{{(), (),  ⊓  ⊑ B} | 1 ≤  ≤ }.</p>
          <p>Suppose now that the user specifies a desired consequence  = (), which has the justifications
 := {(),  ⊑ }, 1 ≤  ≤ , and
+1 := {(),  ⊑ }.</p>
        </sec>
        <sec id="sec-4-2-4">
          <p>Without further instructions, Algorithm 1 may compute +1, thus pruning  by protecting () and
 ⊑ . As a consequence, the diagnoses left for () are all the possible hitting sets of the justifications
for this unwanted consequence which do not contain (). In other words, the diagnosis will necessarily
contain  axioms. On the other hand, if the black-box algorithm is instructed to first remove the axioms
that are shared with other justifications from the set  , then the axioms (),  ⊑  will be protected
for some , making it possible to reach the smallest diagnosis {()}, which has only one axiom to remove
from the ontology to reach a repair.</p>
          <p>According to the definition of justification, it cannot be the case that a justification for an error contains
another justification for the same error. However, this property may no longer hold after the interactive
pruning phase.</p>
        </sec>
        <sec id="sec-4-2-5">
          <p>Example 8. Consider the ontology  with an empty ABox, and a large TBox  containing the axioms.
The justifications for the error  ⊑ ∃. are
1 = { 1,  2,  3},
2 = { 1,  2,  4,  5}.</p>
          <p>Suppose that the user specifies the desired consequence  :  ⊑ ∃. , and the system computes the
justification  = { 3,  60} for  . As a result,  3 is marked as relevant and protected. Looking into 1,
we see that only  1 and  2 could be selected in a diagnosis to compute a repair. Since both of those axioms
also appear in 2, either of these choices is already a hitting set of the justifications and no further analysis
is necessary.</p>
          <p>This example highlights an important effect of interactive pruning: the decision to preserve an axiom
in one justification can directly influence others by narrowing the available options for removal. In
particular, when two justifications share a subset of axioms, and pruning removes one such axiom
in order to maintain a desired consequence, the remaining axioms in the second justification may no
longer need to be considered during the repair process. As a result, some axioms can be automatically
preserved or ignored, further narrowing the set  .</p>
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Automated Pruning: Prolific Axioms</title>
        <p>We now turn our attention to the second stage of axiom pruning, which makes it possible to automatically
prune justifications of size greater than or equal to two. It should be clear that we cannot prune
justifications of size one, as any diagnosis must contain them (otherwise, it would not be a hitting set of
all justifications). As mentioned earlier, we want to prune justifications according to the prolificacy of
their axioms, where prolificacy is a measure of axiom importance for deducing consequences. Defining
and assessing axiom prolificacy is not straightforward. Intuitively, prolificacy refers to the number
of consequences that depend on the axiom; however, as the number of consequences of an axiom
or an ontology is typically infinite, we need a way to restrict to only relevant information. In ℰℒ⊥,
we focus on atomic subsumptions (i.e., GCIs of the form  ⊑  with ,  ∈  ) and concept name
instances (assertions () with  ∈  and  ∈  ) as basic consequences of interest. The class of
these consequences has polynomially many objects in the vocabulary of the ontology. For the rest of
this paper, we use C to refer to the (finite) class of relevant consequences of interest. In addition, one
should also consider the interactions between axioms in the derivation of these consequences.
Example 9. Consider the very simple ontology  := { 1 :  ⊑ ,  2 :  ⊑ ,  3 :  ⊑ }. In
addition to the axioms,  entails only the atomic subsumptions  :  ⊑ ,  :  ⊑ , and  :  ⊑ .</p>
        <p>We see that none of the axioms, by itself, suffices to derive any of the consequences; yet, in pairs and
triples they entail some or all of the consequences:
{ 1,  2} |= { 1,  2, }
{ 1,  3} |= { 1,  3}
This analysis clearly shows how much each axiom contributes to deriving the consequences.</p>
        <p>Assuming that  is the unwanted consequence, it should be excluded from the count of relevant
entailments. We can see that  1 contributes only to the derivation of  (and the error), while  2 and  3
also contribute to the derivation of . Hence, we consider these latter axioms the most prolific of .</p>
        <p>Without considering prolificacy, either of the two diagnoses 1 := { 1} and 2 := { 2} is an equally good
solution, from a logical point of view, for getting rid of the error. Yet, in light of the entailments computed,
we observe that opting for 2, i.e., removing  2 from the ontology, leads to a repair which does not entail  ,
while removing 1 from  still yields  as a consequence. As the goal is to preserve the most consequences,
we prefer to use the diagnosis 1 or, more generally, to remove the least prolific axioms first.</p>
        <p>This example has illustrated that including a prolific axiom in a diagnosis may result in the loss of
relevant knowledge. To avoid this problem, we suggest identifying a set of highly prolific axioms to
preserve in the repair. Moreover, it has shown that to assess the prolificacy of an axiom we should
look at how it interacts with others in the knowledge base. There are different ways to assess the
prolificacy of an axiom: usage statistics (frequency of use in inference tests or user tasks); structural
measures (centrality in the graph); impact of removal (found by simulating removal and measuring the loss
of desired inferences); and many others.</p>
        <p>
          Usage statistics are difficult to obtain, since they require a set of queries submitted by users,
which typically are not available, nor easy to produce. As for structural measures, it
is possible to estimate the importance of an axiom by analysing the dependency graph or the atomic
decomposition of the ontology. In such a graph, the knowledge base is partitioned into modules, each
containing a set of axioms supporting related entailments [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]. The axioms that appear in many or
central modules can be considered prolific, as they contribute to multiple parts of the inferred knowledge.
This solution is valuable since the graph clearly displays not just the single axioms, but also how sets of
axioms interact in a knowledge base, showing the relation of concepts and role assertions. However,
analysing such dependency graphs only considers axioms from the TBox, leaving out
those of the ABox. Instead, our interest is in considering all the axioms. For this reason, we propose
impact of removal as an appropriate way to understand the effects of removing one axiom, since it
computes which entailments are lost when a given axiom is removed from the ontology. Based on this
analysis, we now provide a formal definition of prolificacy, which refers to the number of consequences
that are lost if the axiom is removed from the ontology.
        </p>
        <p>Definition 10 (prolificacy). Let cl() = { ∈ C |  |= } be the closure of  over C; that is, the set of all
the consequences in C entailed by the ontology . The prolificacy of an axiom  ∈  is defined as
 () = |cl()| − |cl( ∖ {})|.</p>
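        <p>Under the same kind of toy reachability reasoner as before (our assumption; any ℰℒ⊥ classifier would serve), Definition 10 can be computed directly by comparing closures over a finite candidate class C. The chain ontology below is in the spirit of Example 9; all names are hypothetical.</p>

```python
from itertools import permutations

def entails(axioms, goal):
    """Toy reasoner: subsumption (sub, sup) entailment as reachability."""
    src, dst = goal
    seen, frontier = {src}, {src}
    while frontier:
        step = {b for (a, b) in axioms if a in frontier} - seen
        seen |= step
        frontier = step
    return dst in seen

def closure(axioms, candidates):
    """cl(O): the candidate consequences entailed by the axioms."""
    return {c for c in candidates if entails(axioms, c)}

def prolificacy(axiom, ontology, candidates):
    """P(a): number of consequences lost when a is removed."""
    return (len(closure(ontology, candidates))
            - len(closure(set(ontology) - {axiom}, candidates)))

# Chain ontology A ⊑ B ⊑ C ⊑ D; C = all atomic subsumptions over its names.
O = {("A", "B"), ("B", "C"), ("C", "D")}
C = set(permutations("ABCD", 2))
print([prolificacy(a, O, C) for a in sorted(O)])  # [3, 4, 3]
```

        <p>The middle axiom is the most prolific: removing it severs the chain and loses four of the six entailed atomic subsumptions, while removing either end axiom loses only three.</p>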
        <p>Note that this definition of P() makes sense only if |cl()| &lt; ∞, which is the reason why we restrict
to relevant consequences. Still, we emphasise that within the logics of the ℰℒ family, it is common to
focus on atomic consequences only. Interestingly, in ℰℒ⊥ the full classification (i.e., computing all the
atomic subsumptions) and all the basic entailed instances can be computed in polynomial time in the
size of the ontology. Thus, overall, it is possible to compute the prolificacy of all axioms in polynomial
time. Yet, to improve efficiency, we focus on only some of these axioms. Recall that the aim of the
prolificacy computation is to reduce the search space for finding a repair. For this reason, we only care
about the prolificacy of the axioms in  , from which diagnoses are being created. However, looking at
prolificacy alone may still lead to errors, especially when some axioms are shared among justifications.
This problem is avoided by verifying that a diagnosis still exists before protecting an axiom from  .
However, there is also the issue that prolificacy may be relative among justifications; that is, the most
prolific axioms in one justification may be the least prolific in another one. In that case, it is not clear
which justification should take precedence in deciding which axioms to protect.
Algorithm 2 Automated Pruning of Prolific Axioms
Require: Justification  of size &gt; 1, threshold 
Ensure:  ′ ⊆  of minimal cardinality such that  ( ′) ≥  ·  ( ).</p>
        <p>1: Sort  into { 1, . . . ,  } such that  ( 1) ≥  ( 2) ≥ · · · ≥  ( )
2:  ′ ← ∅
3:  ← 0
4:  ← 1
5: while  &lt;  ·  ( ) do
6:  ′ ←  ′ ∪ { }
7:  ←  +  ( )
8:  ←  + 1
9: end while
10: return  ′</p>
        <p>To avoid these problems of suboptimal solutions and the impossibility of finding a diagnosis, it would
be more informative and precise to compute the impact of removal of entire diagnoses, and check the
differences among them. However, this would conflict with our aim of not calculating all the diagnoses.
So, the solution we propose is to compute the prolificacy of only the axioms in a justification without
shared axioms. Computing the impact of removal for such axioms is actually a good estimate of the role
they would play in the repair. In doing so, we avoid all the problems caused by shared axioms,
and we reduce the number of axioms for which we compute  (). Next, we describe the algorithm to
prune such justifications.</p>
        <p>An Algorithm for Automated Pruning. Once the system has computed the impact of removal
 () for each axiom  ∈  , it sorts the axioms in descending order of  () . After that, the algorithm
should remove from  the most prolific axioms in a quantity determined by a parameter  as explained
next.</p>
        <p>
          Given a justification  = { 1,  2, . . . ,  } for an error  , let  ( ) = ∑=1  ( ) denote the total
impact of removal of all axioms in  . The algorithm then identifies the smallest subset  ′ ⊆  , taken
in order of decreasing  (), such that
∑∈ ′  () ≥  ·  ( ),
where  is a threshold parameter in the interval [0, 1]. The system protects the axioms in this subset  ′
from being removed in the current pruning step, since they are the most prolific ones, accounting for at
least a fraction  of the justification’s inferential power. All the remaining axioms  ∖  ′ are kept in 
and can be safely considered for possible removal in the subsequent interactive step.
        </p>
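        <p>Concretely, the selection of the protected subset can be sketched as below. This is our rendering of Algorithm 2; the prolificacy values and axiom names are hypothetical, with the prolificacies assumed precomputed and passed in as a mapping.</p>

```python
def protect_prolific(justification, P, theta):
    """Algorithm 2 sketch: smallest prefix of the justification, sorted by
    descending prolificacy P, whose total reaches theta * P(J)."""
    target = theta * sum(P[a] for a in justification)
    protected, acc = [], 0.0
    for a in sorted(justification, key=lambda x: (-P[x], x)):
        if acc >= target:
            break
        protected.append(a)
        acc += P[a]
    return protected

# Hypothetical prolificacy values for a three-axiom justification.
P = {"a1": 5, "a2": 3, "a3": 1}
print(protect_prolific({"a1", "a2", "a3"}, P, 0.5))  # ['a1']
print(protect_prolific({"a1", "a2", "a3"}, P, 0.8))  # ['a1', 'a2']
```

        <p>The axioms not selected (here a3, and a2 at the lower threshold) stay in the set from which diagnoses are built and remain candidates for removal in the interactive step.</p>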
        <p>Note that in Example 9 we said that, when computing the prolificacy of an axiom, we should not
count the error as a consequence. However, there is no need to modify  (), since we are
now calculating the prolificacy only for the axioms in justifications that have no axioms in
common with others. So, when different justifications for the error exist, the unwanted
consequence will not be counted for any of the axioms in the justification; whereas, when
the justification whose axioms we are considering is the only one for the error, the error will be counted as a
consequence in  () for all of its axioms.</p>
        <p>This strategy is a trade-off between simplification and preservation of relevant knowledge, making
the pruning phase both flexible and adaptive to the structure of each justification. In fact, it ensures
that longer justifications are pruned more than shorter ones, reducing the number of axioms in  and
making it easier for the user to compute a diagnosis.</p>
        <p>Again, note that automating the process of finding the prolific axioms is better than asking the user
to autonomously identify all the axioms that are important. This is because the knowledge
engineer may not be completely aware of which axioms of the knowledge base are important; in other
words, they may not see the consequences of keeping or removing a given axiom. The error of keeping
a redundant axiom, which may occur if the knowledge engineer is free to choose, is avoided by using the
impact of removal.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. The Tree: Selection of the Diagnosis</title>
        <p>So far, we have discussed strategies to prune the search space to find the “true” repair, in the sense of
restricting the number of axioms (and questions) to present to the user. This phase has ensured that all
the remaining repairs verify the desired consequences of the user. Moreover, thanks to the identification
of prolific axioms, we have an approximate guarantee that these repairs will be maximal (according to
the desiderata of the user). At this point, most of the issues of the naïve approach have been overcome,
and, as explained, we can continue either in an automatic way, or by presenting questions to the user.
Next, the two options are presented.</p>
        <p>For the manual one, we still need to think of an effective way to present justifications and axioms
to the user. To this end, we will use some heuristics. For the automatic alternative, we will base the
computation of the diagnosis on the assumption that we prefer to apply minimal changes to the ontology,
thus trying to remove the smallest set of axioms from  . To this aim, the algorithm will present the
smallest possible diagnosis, allowing the user to approve or modify it. Recall that the two options
(manual and automatic) are neither mutually exclusive nor fully separate, since the knowledge engineer
is free to stop whenever they want and let the process finish automatically.</p>
        <p>If the knowledge engineer wants to control the construction of a diagnosis, the system should let them
manually choose one axiom from each justification. Since they have already specified their preferences
on consequences, we can safely present one justification at a time, and let the user choose from them.
However, there are still multiple ways of presenting axioms and justifications to the user, each resulting
in a different sequence of decisions. Some of these sequences are shorter than others; thus, presenting
the justifications in a certain order may reduce the number of questions. Also, since after the pruning
phase all the reachable repairs verify the desired consequences, all possible diagnoses that
can be built are considered to be already adequate from the point of view of the user requirements—that is,
from the perspective of preserving the true knowledge of the domain. At this point of the procedure,
since the knowledge engineer has already freely expressed their desiderata on consequences (and
axioms), the user discovers their preferences in the interaction with the system. Thus, the very idea
of presenting justifications in an “optimal” way is flawed. At this stage, what the system can do is
guide the user, and order justifications and axioms with heuristics that help minimise the number
of questions and maximise the utility of the answers.</p>
        <p>In particular, the heuristics should act on the two dimensions that make the number of options
too large from the user’s point of view: the length of each justification, and the number of
justifications.</p>
        <p>The issue of the length of each justification refers to the fact that a single justification may have
too many axioms to be manageable by the user. However, at this point of our procedure,
the problem has mostly been solved thanks to both interactive and automated pruning. It may still
be the case that long justifications have survived, but it is unlikely. In fact, if such justifications exist,
it is because they did not have axioms in common with any of the justifications computed for the
desired consequences. At the same time, they must also be justifications that share axioms with others;
otherwise, they would have been reduced during the automated pruning. As we will see in what
follows, this observation provides us with the solution to the problem.</p>
        <p>As for the other dimension, the number of justifications may indeed be too large. However,
we can try to overcome the problem by suggesting that the user remove axioms that are shared by most
of the justifications. When these axioms are removed from one justification, they are, by the same answer,
also removed from all the justifications in which they appear. Hence, we do not need to present
those justifications to the user, and the number of questions is reduced. Moreover, consider that, in any
case, if the number of questions is too large for the user, they always have the option to let the system
automatically compute the diagnosis.</p>
        <p>In the case of automatic computation, the heuristics help the user orientate among the axioms of the
justifications. Different heuristics may serve different goals, so, before implementing one,
we have to understand what each heuristic is useful for. If our goal is to apply minimal changes to
the ontology, we should try to build the smallest possible diagnosis via shared axioms. If the aim is to
preserve as many consequences as possible, we should start by presenting to the user the least
prolific axioms of each justification. If our aim is to understand as quickly as possible which axiom should be
removed from each justification, we should present the smallest justifications first. These heuristics are
presented in more detail in the next subsections.</p>
        <p>In any case, it is important to remember (and to stress) that, in the end, the decisions of the user
should be the primary consideration. The system defines an order just to help the user, but should not
force them to choose one axiom over another. Also, we assume that the knowledge engineer knows how
the system works, in the sense that they know that the machine orders axioms based on an estimation
of which ones are preferable to remove first. This is why we say that the approach is interactive: it
is based on an efficient human–computer collaboration.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Shared Axioms</title>
        <p>As we have seen, axioms that are shared by different justifications may cause issues in the naïve
approach, since they influence different justifications. We now show how they can be used to the user’s
advantage. The idea is that, in ordering the axioms of the justification that the system presents to the
user, shared axioms should be presented first. In fact, these are the axioms that appear most frequently
in the justifications for the error. If such axioms are removed, they are likely to lead to the
smallest possible diagnosis.</p>
        <p>Since the aim is to reduce the search space for repairs as quickly as possible, the idea behind presenting
shared axioms first is that they allow a diagnosis to be built in the fewest steps. The reason is that, once
selected (and removed) from one justification, they are discarded also from all other justifications in which
they appear. Therefore, there is no need to select other axioms for such justifications, reducing the
number of questions. The extreme case is that of a shared axiom appearing in all justifications: in such
a case, the shared axiom constitutes a diagnosis by itself.</p>
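        <p>The “shared axioms first” strategy corresponds to a greedy hitting-set construction, sketched here with hypothetical axiom names. In the interactive setting, each pick would be a suggestion for the user to confirm rather than an automatic removal.</p>

```python
from collections import Counter

def greedy_diagnosis(justifications):
    """Repeatedly pick the axiom shared by the most still-unhit
    justifications; every justification hit by the pick is dismissed."""
    remaining = [set(j) for j in justifications]
    diagnosis = []
    while remaining:
        counts = Counter(a for j in remaining for a in sorted(j))
        axiom, _ = counts.most_common(1)[0]
        diagnosis.append(axiom)
        remaining = [j for j in remaining if axiom not in j]
    return diagnosis

# 's' hits three justifications at once; only 'z' is left to cover.
justs = [{"s", "x"}, {"s", "y"}, {"s", "w"}, {"z"}]
print(greedy_diagnosis(justs))  # ['s', 'z']
```

        <p>In the extreme case where one axiom appears in every justification, the loop terminates after a single pick, matching the observation above that such an axiom is a diagnosis by itself.</p>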
        <p>The idea of shared axioms can also be applied to, and is particularly useful in, cases where there are
multiple errors in an ontology, for instance when errors arise from merging knowledge bases. In this case
we can call maximally shared axioms those that are the most common among the justifications of the
different errors.</p>
        <p>This strategy has two benefits: it reduces the number of questions the user has to
answer, and it leads to minimal changes to the knowledge base from the syntactic point of
view. Of course, it only works if the shared axioms are actually removed by the user. On the other hand,
if the user wants to keep the axioms, we do not reduce our search space by much, but we do not have
to ask about such axioms again in the next justifications. Otherwise, if the system asked about those
axioms in a following justification, and the user chose to remove the shared axiom, the minimality of
the diagnosis would be lost. This is yet another example of the fact that a greedy approach leads to suboptimal
results, as we have seen for the naïve approach.</p>
        <p>It could also be useful for the user to see the number of justifications left to be answered. This number
shows the worst-case scenario, and should be updated after each axiom is chosen from a justification.</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.6. Revisiting Prolific Axioms</title>
        <p>We have seen that, within the justifications, it is a good idea to present the shared axioms first.
However, what is the system supposed to do with justifications that do not share axioms
with others? Recall that in the automated pruning phase, we ordered such axioms according to their
prolificacy. Hence, we can reuse this previous computation to order the axioms, presenting them from
the least to the most prolific one. Indeed, we want to suggest that the user remove the least prolific
axioms, so that the fewest consequences are lost in the final repair. Moreover, since we have
computed the entailments for each axiom, it is also possible for the user to see the effects of
removing a certain axiom, i.e., which consequences are lost. This is actually much more
informative from the user’s point of view than looking at the single axioms.</p>
      </sec>
      <sec id="sec-4-7">
        <title>4.7. A Heuristic for Ordering Justifications</title>
        <p>Recall that, within an interactive diagnosis process, the order in which justifications are presented
to the user plays a key role in optimising the interaction. In this section we explain why the simple
heuristic of presenting the shortest justifications first is a good strategy for ordering them.</p>
        <p>First, note that shorter justifications contain fewer axioms, which means that the user can analyse
them more quickly and with less effort. So, by prioritising shorter justifications, the system allows the
user to tackle simpler cases first, reducing the risk of early disengagement. This aligns with the goal of
reducing the cognitive load of the knowledge engineer.</p>
        <p>Second, if the user chooses to stop interacting with the system before completing the entire process,
they will have only dealt with simpler and smaller justifications, avoiding the burden of analysing more
complex ones and possibly selecting axioms from more justifications. This ensures that even partial
interaction can contribute meaningfully to refining the repair process. For example, if the user has
energy to analyse just twenty axioms and we present a justification consisting of twenty axioms, they
will ultimately analyse only this single justification. On the other hand, if we present four justifications
consisting of five axioms each, the user will still consider twenty axioms, but will respond to four
justifications.</p>
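        <p>The heuristic, and the twenty-axiom budget example above, can be sketched as follows (a minimal illustration; the function names are ours, not from an actual implementation):</p>

```python
def order_justifications(justifications):
    """Present shortest justifications first: sort by increasing size."""
    return sorted(justifications, key=len)

def answered_within_budget(justifications, axiom_budget):
    """Count how many justifications a user can answer if they have the
    energy to analyse at most `axiom_budget` axioms in total."""
    answered = 0
    for justification in order_justifications(justifications):
        if len(justification) > axiom_budget:
            break
        axiom_budget -= len(justification)
        answered += 1
    return answered

# Four justifications of five axioms each vs. one of twenty axioms:
four_small = [set(range(5 * i, 5 * i + 5)) for i in range(4)]
one_big = [set(range(20))]
print(answered_within_budget(four_small, 20))  # → 4
print(answered_within_budget(one_big, 20))     # → 1
```

        <p>With the same budget of twenty axioms, the user answers four questions in the first case but only one in the second, which is exactly the argument for the shortest-first ordering.</p>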
        <p>In summary, presenting the shortest justifications first balances efficiency and user engagement,
supporting a more user-friendly and practical interactive repair process.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Weakening the Diagnosis</title>
      <p>In the previous sections, our focus was on finding ways to minimise the human effort in the computation
of a repair, by protecting some axioms that are considered correct, or at least wanted in the ontology. We
should bear in mind that the user, when presented with the different questions, is iteratively selecting a
diagnosis: the axioms signalled as erroneous.</p>
      <p>From the point of view of classical repairs, the final step after computing a diagnosis is to remove it
from the ontology, thus obtaining a repair à la Definition 3. However, removing full axioms from the
ontology can be seen as an extreme solution: one should take into account that axioms are not added
arbitrarily, but there is usually a motivation behind them. Moreover, some of the information provided
by the axiom may be correct, even if not all of it. We partially considered all these issues by requiring
the ontology to be in normal form. Yet, even in this simplified scenario, it is worth asking whether
removing full axioms is the best solution.</p>
      <p>
        A proposal to preserve more of the original knowledge is to weaken, rather than delete, the elements
of the diagnosis. This approach is typically known as gentle repair [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Weakening means to make
the axiom less restrictive so that more interpretations satisfy it—which in turn removes some of the
consequences. Obviously, the axioms should be weakened at least enough to make the error disappear.
      </p>
      <p>A common approach for axiom weakening is to apply concept refinement operators to the elements
of the axioms. Briefly, to weaken a GCI of the form 𝐶 ⊑ 𝐷, one may specialise 𝐶 to a concept that
is necessarily subsumed by 𝐶, or generalise 𝐷 to a concept subsuming it. In both cases, the result
is an axiom which has strictly more models than 𝐶 ⊑ 𝐷. Analogously, the assertion 𝐶(𝑎) can be
weakened by generalising 𝐶. Note that if, as assumed, we have an ℰℒ⊥ ontology in normal form,
then syntactic refinement operators are not only quite simple—in fact, concepts on the left-hand side
of a normalised GCI can only specialise to ⊥—but they easily lead to tautologies, which is equivalent to
removing the axiom. Hence, although gentle repairs can be very useful to preserve more consequences,
their impact on normalised ontologies is limited. However, if the ontology is not originally in normal
form, or if the language under consideration is more expressive, they could provide better results.</p>
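      <p>For a normalised ℰℒ⊥ ontology, one weakening step can be sketched as generalising the right-hand side of a GCI along a told subsumption hierarchy. This is an illustrative approximation of concept refinement operators under that assumption; the names below are ours, not from an existing implementation.</p>

```python
def weakenings(axiom, told_subsumers):
    """Yield weaker variants of a normalised GCI, given as a (lhs, rhs) pair,
    by replacing the right-hand side with one of its directly told subsumers.
    Each variant has strictly more models; once the right-hand side reaches ⊤
    the axiom is a tautology, i.e. equivalent to removing it."""
    lhs, rhs = axiom
    for more_general in told_subsumers.get(rhs, set()):
        yield (lhs, more_general)

# Toy told hierarchy: Dog ⊑ Mammal ⊑ Animal ⊑ ⊤
hierarchy = {"Dog": {"Mammal"}, "Mammal": {"Animal"}, "Animal": {"⊤"}}
print(list(weakenings(("Pet", "Dog"), hierarchy)))  # → [('Pet', 'Mammal')]
```

      <p>The short chain from any concept name to ⊤ illustrates why, on normalised ontologies, such syntactic weakening quickly degenerates into axiom removal.</p>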
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>It is well known that ontology engineering is an error-prone task, which has attracted considerable
attention over the years towards the development of methods that automatically correct a given ontology.
Yet, there may be many different ways to correct an error. Many different ideas have been proposed for
selecting the “best” among them, like choosing the largest correct sub-ontology or maximising the set of
preserved consequences. However, these syntactic and semantic approaches ignore one fundamental
issue: the correctness of an ontology—i.e., its expression of the knowledge domain under consideration—
is a meta-logical property, which can only be verified by a domain expert. In other words, there is no
way to guarantee that repairs with minimum change actually reflect the knowledge of the domain.</p>
      <p>On the other hand, the size of the ontology and the potential number of justifications (and diagnoses),
and of the axioms appearing in them, make it impossible for a knowledge engineer to select the most
adequate repair from all the available options without the help of an automated tool. Moreover, human
experts are costly and tire quickly, so we should try to minimise their effort.</p>
      <p>In this paper, we proposed an intermediate path, where the knowledge engineer is iteratively asked
to choose, from each justification, one axiom to be removed as considered wrong, without the need to
compute all justifications or diagnoses at any time. The main novelty of our work is to first include
a pruning phase, which identifies axioms that are considered “correct” and hence safe from removal.
This pruning takes input from the knowledge engineer, who specifies a set of consequences that need
to be preserved, to choose minimal conditions that guarantee this request. It also applies a heuristic
for minimal change, based on a new notion of prolificacy: the impact an axiom has on the number of
derived consequences.</p>
      <p>Our procedure computes a diagnosis, which refers to the class of axioms identified as wrong in the
ontology. At this point, one could decide to simply remove these axioms (yielding a repair) or to weaken
them towards a gentle repair, as suggested in previous work.</p>
      <p>It should be clear that the choice of prolificacy as a way to find axioms to be preserved is a heuristic
based on the centrality (or importance) of the axiom for the concept hierarchy. As mentioned already,
no purely logical, syntactic, or semantic measure can fully characterise the meta-logical property of
correctness for the domain under consideration. Thus, this heuristic may also yield an incorrect
repair. However, we argue that precisely this centrality makes these axioms subject to deeper
scrutiny, and hence less likely to be wrong from the domain point of view.</p>
      <p>
        One could imagine other measures for automated pruning, such as a degree of trust associated with
axioms, depending on their source and use in the ontology. In this way, one can measure the trust in
one or more consequences using techniques from provenance [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ]. However, such approaches
require further input from the knowledge engineer, who must specify those trust degrees, even for
axioms which may seem incomparable. Another recently proposed idea is to learn repair patterns
from the edit history of ontologies [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]; yet, this strategy requires data which is not always
available.
      </p>
      <p>
        For this work we focused on the light-weight DL ℰℒ⊥. The main reason for this is that its
polynomial-time reasoning services allow us to provide polynomial-time algorithms for each of the steps of the
iterative procedure, except for the initial one of computing the union of all justifications, which
requires superpolynomial time (unless P = NP) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. However, the latter can be seen as a
preprocessing step to be executed off-line when the error is detected. Importantly, as we never compute all
diagnoses or justifications, we avoid a potentially exponential computation in the intermediate steps.
Note that the same ideas can be applied to other, more expressive, logics. Indeed, our approach is
designed to be modular, and thus applicable to other scenarios without much modification.
      </p>
      <p>As part of our future work, we expect to build a better understanding of the properties of
prolificacy and to develop more efficient methods for computing it directly. Moreover, we will try to
understand the guarantees that preserving axioms based on their prolificacy provides from the ontology
repair and belief revision points of view. Note also that our approach is parameterised to maintain
a proportion of the prolificacy of a justification, but we did not specify a precise value for this
parameter. From a practical point of view, we want to study the impact of this parameter on the overall
effort and quality of the repair process.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>Part of this work was developed during a research visit by the first author to TU Dresden, funded by
the University of Milan under the programme “Tesi all’estero.” We are thankful to Prof. Franz Baader
for his advice and fruitful discussions during this visit, and to Christian Alrabbaa for clarifications.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. F.</given-names>
            <surname>Patel-Schneider</surname>
          </string-name>
          ,
          <source>The Description Logic Handbook</source>
          , Cambridge University Press,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brandt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <article-title>Pushing the EL envelope</article-title>
          , in: L. P.
          <string-name>
            <surname>Kaelbling</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Saffiotti (Eds.),
          <source>Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05)</source>
          , Professional Book Center,
          <year>2005</year>
          , pp.
          <fpage>364</fpage>
          -
          <lpage>369</lpage>
          . URL: http://ijcai.org/Proceedings/05/Papers/0372.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Suntisrivaraporn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>Spackman</surname>
          </string-name>
          ,
          <article-title>Replacing SEP-Triplets in SNOMED CT using tractable description logic operators</article-title>
          , in: R.
          <string-name>
            <surname>Bellazzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Abu-Hanna</surname>
          </string-name>
          , J. Hunter (Eds.),
          <source>Proceedings of the 11th Conference on Artificial Intelligence in Medicine, AIME</source>
          <year>2007</year>
          , volume
          <volume>4594</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2007</year>
          , pp.
          <fpage>287</fpage>
          -
          <lpage>291</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>540</fpage>
          -73599-1_
          <fpage>38</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schlobach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cornet</surname>
          </string-name>
          ,
          <article-title>Non-standard reasoning services for the debugging of description logic terminologies</article-title>
          ,
          <source>in: Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI</source>
          <year>2003</year>
          ), Morgan Kaufmann Publishers Inc.,
          <year>2003</year>
          , pp.
          <fpage>355</fpage>
          -
          <lpage>360</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Parsia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sirin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalyanpur</surname>
          </string-name>
          ,
          <article-title>Debugging OWL ontologies</article-title>
          ,
          <source>in: Proceedings of the 14th International Conference on World Wide Web, WWW '05</source>
          ,
          Association for Computing Machinery, New York, NY, USA,
          <year>2005</year>
          , pp.
          <fpage>633</fpage>
          -
          <lpage>640</lpage>
          . URL: https://doi.org/10.1145/1060745.1060837. doi:
          <volume>10</volume>
          .1145/1060745.1060837.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Suntisrivaraporn</surname>
          </string-name>
          ,
          <article-title>Debugging SNOMED CT using axiom pinpointing in the description logic ℰℒ, KR-MED</article-title>
          <year>2008</year>
          (
          <year>2008</year>
          )
          <article-title>1</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Axiom pinpointing</article-title>
          , in: G. Cota,
          <string-name>
            <given-names>M.</given-names>
            <surname>Daquino</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. L.</surname>
          </string-name>
          Pozzato (Eds.),
          <source>Applications and Practices in Ontology Design, Extraction, and Reasoning</source>
          , volume
          <volume>49</volume>
          of
          <article-title>Studies on the Semantic Web</article-title>
          , IOS Press,
          <year>2020</year>
          , pp.
          <fpage>162</fpage>
          -
          <lpage>177</lpage>
          . URL: https://doi.org/10.3233/SSW200042. doi:
          <volume>10</volume>
          .3233/SSW200042.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Suntisrivaraporn</surname>
          </string-name>
          ,
          <article-title>Pinpointing in the description logic ℰℒ+</article-title>
          , in: J.
          <string-name>
            <surname>Hertzberg</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Beetz</surname>
          </string-name>
          , R. Englert (Eds.),
          <source>Proceedings of the 30th Annual German Conference on AI, KI</source>
          <year>2007</year>
          , volume
          <volume>4667</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2007</year>
          , pp.
          <fpage>52</fpage>
          -
          <lpage>67</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>540</fpage>
          -74565-
          <issue>5</issue>
          _
          <fpage>7</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Troquard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Galliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Porello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kutz</surname>
          </string-name>
          ,
          <article-title>Repairing ontologies via axiom weakening</article-title>
          , in: S. A.
          <string-name>
            <surname>McIlraith</surname>
            ,
            <given-names>K. Q.</given-names>
          </string-name>
          <string-name>
            <surname>Weinberger</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence</source>
          ,
          <source>(AAAI-18)</source>
          ,
          <source>the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)</source>
          , New Orleans, Louisiana, USA, February 2-
          <issue>7</issue>
          ,
          <year>2018</year>
          , AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>1981</fpage>
          -
          <lpage>1988</lpage>
          . URL: https://doi.org/10.1609/aaai.v32i1.11567. doi:
          <volume>10</volume>
          .1609/AAAI.V32I1.11567.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nuradiansyah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Making repairs in description logics more gentle</article-title>
          , in: M.
          <string-name>
            <surname>Thielscher</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Toni</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Wolter</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning (KR</source>
          <year>2018</year>
          ), AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>328</lpage>
          . URL: https://aaai.org/ocs/index.php/KR/KR18/paper/view/18056.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>U.</given-names>
            <surname>Furbach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schon</surname>
          </string-name>
          ,
          <article-title>Semantically guided evolution of SHI ABoxes</article-title>
          , in: D.
          <string-name>
            <surname>Galmiche</surname>
          </string-name>
          , D. Larchey-Wendling (Eds.),
          <source>Automated Reasoning with Analytic Tableaux and Related Methods</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2013</year>
          , pp.
          <fpage>134</fpage>
          -
          <lpage>148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ludwig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Error-tolerant reasoning in the description logic ℰℒ</article-title>
          , in: E. Fermé, J. Leite (Eds.),
          <source>Proceedings of the 14th European Conference on Logics in Artificial Intelligence (JELIA</source>
          <year>2014</year>
          ), volume
          <volume>8761</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2014</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>121</lpage>
          . URL: https://doi.org/10.1007/978-3-
          <fpage>319</fpage>
          -11558-
          <issue>0</issue>
          _8. doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>319</fpage>
          -11558-
          <issue>0</issue>
          _
          <fpage>8</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bienvenu</surname>
          </string-name>
          ,
          <article-title>On the complexity of consistent query answering in the presence of simple ontologies</article-title>
          , in: J.
          <string-name>
            <surname>Hoffmann</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          Selman (Eds.),
          <source>Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence</source>
          , AAAI Press,
          <year>2012</year>
          , pp.
          <fpage>705</fpage>
          -
          <lpage>711</lpage>
          . doi:
          <volume>10</volume>
          .1609/AAAI.V26I1.8218.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nuradiansyah</surname>
          </string-name>
          ,
          <article-title>Error-tolerant reasoning in the description logic ℰℒ based on optimal repairs</article-title>
          , in: G. Governatori,
          A. Turhan (Eds.),
          <source>Proceedings of the 6th International Joint Conference on Rules and Reasoning (RuleML+RR 2022)</source>
          , volume
          <volume>13752</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2022</year>
          , pp.
          <fpage>227</fpage>
          -
          <lpage>243</lpage>
          . URL: https://doi.org/10.1007/978-3-
          <fpage>031</fpage>
          -21541-4_
          <fpage>15</fpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>031</fpage>
          -21541-4_
          <fpage>15</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>E.</given-names>
            <surname>Fermé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Hansson</surname>
          </string-name>
          ,
          <source>Belief Change: Introduction and Overview</source>
          , Springer,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          ,
          <article-title>Foundations of refinement operators for description logics</article-title>
          , in: H.
          <string-name>
            <surname>Blockeel</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Ramon</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Shavlik</surname>
          </string-name>
          , P. Tadepalli (Eds.),
          <source>Inductive Logic Programming</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2008</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Koopmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <article-title>Optimal repairs in the description logic ℰℒ revisited</article-title>
          ,
          <source>in: Proceedings of the 18th European Conference on Logics in Artificial Intelligence (JELIA</source>
          <year>2023</year>
          ),
          <source>September 20-22</source>
          ,
          <year>2023</year>
          , Dresden, Germany, volume
          <volume>14281</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2023</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>34</lpage>
          . doi:https://doi.org/10.1007/978-3-
          <fpage>031</fpage>
          -43619-
          <issue>2</issue>
          _
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>F.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <article-title>Beyond optimal: Interactive identification of better-than-optimal repairs</article-title>
          ,
          <source>in: Proceedings of the 40th ACM/SIGAPP Symposium On Applied Computing (SAC</source>
          <year>2025</year>
          ), Association for Computing Machinery,
          <year>2025</year>
          . doi:https://doi.org/10.1145/3672608.3707750.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ganter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sertkaya</surname>
          </string-name>
          , U. Sattler,
          <article-title>Completing description logic knowledge bases using formal concept analysis</article-title>
          , in: M. M.
          <string-name>
            <surname>Veloso</surname>
          </string-name>
          (Ed.),
          <source>Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI</source>
          <year>2007</year>
          ),
          <year>2007</year>
          , pp.
          <fpage>230</fpage>
          -
          <lpage>235</lpage>
          . URL: http://ijcai.org/Proceedings/07/Papers/035.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
[20]
<string-name><given-names>B.</given-names> <surname>Sertkaya</surname></string-name>,
<article-title>OntoComP: A Protégé plugin for completing OWL ontologies</article-title>,
in: <string-name><given-names>L.</given-names> <surname>Aroyo</surname></string-name>,
<string-name><given-names>P.</given-names> <surname>Traverso</surname></string-name>,
<string-name><given-names>F.</given-names> <surname>Ciravegna</surname></string-name>,
<string-name><given-names>P.</given-names> <surname>Cimiano</surname></string-name>,
<string-name><given-names>T.</given-names> <surname>Heath</surname></string-name>,
<string-name><given-names>E.</given-names> <surname>Hyvönen</surname></string-name>,
<string-name><given-names>R.</given-names> <surname>Mizoguchi</surname></string-name>,
<string-name><given-names>E.</given-names> <surname>Oren</surname></string-name>,
<string-name><given-names>M.</given-names> <surname>Sabou</surname></string-name>,
<string-name><given-names>E.</given-names> <surname>Simperl</surname></string-name> (Eds.),
<source>Proceedings of the 6th European Semantic Web Conference, ESWC 2009</source>,
volume <volume>5554</volume> of Lecture Notes in Computer Science, Springer,
<year>2009</year>, pp.
<fpage>898</fpage>
-
<lpage>902</lpage>
. doi:10.1007/978-3-642-02121-3_78.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kazakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Krötzsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
<surname>Simančík</surname>
          </string-name>
          ,
<article-title>The incredible ELK - from polynomial procedures to efficient reasoning with EL ontologies</article-title>
          ,
          <source>J. Autom. Reason</source>
          .
          <volume>53</volume>
          (
          <year>2014</year>
          )
          <fpage>1</fpage>
          -
          <lpage>61</lpage>
. doi:10.1007/s10817-013-9296-3.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalyanpur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Parsia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Horridge</surname>
          </string-name>
,
<string-name><given-names>E.</given-names> <surname>Sirin</surname></string-name>,
          <article-title>Finding all justifications of OWL DL entailments</article-title>
, in: <string-name><given-names>K.</given-names> <surname>Aberer</surname></string-name>,
<string-name><given-names>K.-S.</given-names> <surname>Choi</surname></string-name>,
          <string-name>
            <given-names>N.</given-names>
            <surname>Noy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Allemang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nixon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Golbeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maynard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mizoguchi</surname>
          </string-name>
,
<string-name><given-names>G.</given-names> <surname>Schreiber</surname></string-name>,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cudré-Mauroux</surname>
          </string-name>
          (Eds.),
          <source>The Semantic Web</source>
          , Springer Berlin Heidelberg, Berlin, Heidelberg,
          <year>2007</year>
          , pp.
          <fpage>267</fpage>
          -
          <lpage>280</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sertkaya</surname>
          </string-name>
          ,
          <article-title>Understanding the complexity of axiom pinpointing in lightweight description logics</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>250</volume>
          (
          <year>2017</year>
          )
          <fpage>80</fpage>
          -
          <lpage>104</lpage>
. doi:10.1016/j.artint.2017.06.002.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Axiom pinpointing in description logics and beyond</article-title>
          ,
          <source>Ph.D. thesis</source>
          , Dresden University of Technology,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
,
<string-name><given-names>I.</given-names> <surname>Horrocks</surname></string-name>,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Sattler</surname>
          </string-name>
, <source>An Introduction to Description Logic</source>, Cambridge University Press,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>C.</given-names>
            <surname>Del Vescovo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Parsia</surname>
          </string-name>
,
<string-name><given-names>U.</given-names> <surname>Sattler</surname></string-name>,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <article-title>The modular structure of an ontology: Atomic decomposition</article-title>
, in: <string-name><given-names>T.</given-names> <surname>Walsh</surname></string-name> (Ed.),
<source>IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence</source>,
Barcelona, Catalonia, Spain, July 16-22, <year>2011</year>, IJCAI/AAAI,
<year>2011</year>, pp.
<fpage>2232</fpage>
-
<lpage>2237</lpage>
. URL: https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-372.
doi:10.5591/978-1-57735-516-8/IJCAI11-372.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bourgaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bourhis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Peterfreund</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Thomazo</surname>
          </string-name>
          ,
          <article-title>Revisiting semiring provenance for datalog</article-title>
, in: <string-name><given-names>G.</given-names> <surname>Kern-Isberner</surname></string-name>,
<string-name><given-names>G.</given-names> <surname>Lakemeyer</surname></string-name>,
<string-name><given-names>T.</given-names> <surname>Meyer</surname></string-name> (Eds.),
          <source>Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning</source>
, <source>KR 2022</source>, <year>2022</year>.
URL: https://proceedings.kr.org/2022/10/.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bourgaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ozaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
,
<string-name><given-names>L.</given-names> <surname>Predoiu</surname></string-name>,
          <article-title>Provenance for the description logic ELHr</article-title>
, in: <string-name><given-names>C.</given-names> <surname>Bessiere</surname></string-name> (Ed.),
<source>Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020</source>,
ijcai.org, <year>2020</year>, pp.
<fpage>1862</fpage>
-
<lpage>1869</lpage>
. doi:10.24963/ijcai.2020/258.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Tanon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bourgaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          ,
          <article-title>Learning how to correct a knowledge base from the edit history</article-title>
          , in: L. Liu,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mantrach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Silvestri</surname>
          </string-name>
          ,
<string-name>
  <given-names>J. J.</given-names>
  <surname>McAuley</surname>
</string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Baeza-Yates</surname>
          </string-name>
, <string-name><given-names>L.</given-names> <surname>Zia</surname></string-name> (Eds.),
<source>Proceedings of The World Wide Web Conference, WWW 2019</source>, ACM,
<year>2019</year>
          , pp.
          <fpage>1465</fpage>
          -
          <lpage>1475</lpage>
. doi:10.1145/3308558.3313584.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>