<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ontology Repair Through Partial Meet Contraction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Raphael Cóbe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Renata Wassermann</string-name>
          <email>renatag@ime.usp.br</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>University of São Paulo, São Paulo</institution>
          ,
          <country country="BR">Brazil</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The process of building an ontology, be it from scratch or through reuse and combination of other ontologies, is known to be susceptible to modeling errors. Ontology debugging and repair techniques have attracted attention in the last decade due to the popularization of the use of ontologies written in OWL. Belief Change deals with the problem of removing or adding new information to a knowledge base in a consistent way. In this paper, we look at the belief change operation known as partial meet contraction as a construction for ontology repair. We propose heuristics to improve the performance of such operation and compare them to an existing implementation and approaches based on finding minimal justifications or explanations by means of experiments with automatically generated ontologies and real world ontologies from the BioPortal.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>The ontology building task is a non-trivial, error-prone task
which, mostly, has to be carried out by experts in the domain
being modeled. Most ontology editors offer a connection to
a reasoner in order to test for conflicts. The nature of such
conflicts can be a logical one, where the resulting ontology is
considered inconsistent or incoherent (i.e. the interpretation
of some concept is necessarily empty), or even an unwanted
entailment that should not be valid.</p>
      <p>The problem we address in this paper is that of going one
step further and actually providing the ontology developer with
solutions for the unwanted behavior, guaranteeing that the
resulting ontology is safe, i.e., free of conflicts.</p>
      <p>In this paper, we propose the use of belief base contraction
to debug and repair ontologies. Belief change [Alchourron et
al., 1985; Hansson, 1999] is the area of knowledge
representation that deals with adding or removing information from a
knowledge base. These changes are often non-trivial, as one
wants to preserve logical consistency. There are typically two
constructions used in the belief change literature: one based
on finding minimal conflict sets and deleting at least one
formula from each, and one based on finding maximal
conflict-free sets. The first construction is called kernel contraction
and the second one is known as partial meet contraction.</p>
      <p>There have been several proposals to apply belief change
operators to description logics [Flouris, 2006; Ribeiro, 2013]
as well as implementations for ontology repair [Haase et al.,
2005; Kalyanpur, 2006; Schlobach et al., 2007; Qi et al.,
2008; Horridge, 2011]. Virtually all implemented solutions
are based on the idea of kernel construction, i.e., finding
minimal conflicting sets. However, in order to solve a conflict,
one needs to compute all kernels and then remove at least
one element of each, while in the partial meet approach, if
there are not enough resources available, computing a single
maximal conflict-free set is enough. The selection of a
single maximal subset is known as maxichoice and although it
presents some incompatibilities with the AGM theory, it is
not a problem to use such an approach in the context of belief
bases.</p>
      <p>We focus specifically on partial meet contraction, in which
we build maximally conflict-free sub-ontologies in a direct
manner. We propose optimizations for the operators
described in the literature and test their effectiveness on real
world data gathered from the BioPortal repository
(http://bioportal.bioontology.org/).</p>
      <p>The paper proceeds as follows: in the next Section, we
present the operations for belief contraction and related work
on applying belief change to ontologies. In Section 3, we
present the algorithms and heuristics for the construction of
the Remainder Set. In Section 4, we describe the experiments
comparing our approach to the existing ones and in Section 5
we present our conclusions and discuss future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Belief Base Contraction</title>
      <p>The operation of removing information from a knowledge
base is called contraction, and there are two main ways of
doing so: Kernel contraction and Partial Meet contraction.
We start by introducing the operations as they were originally
described in the literature. A belief base is a set of formulas
in a given logic.</p>
      <p>The first operation works by constructing a set of
minimal conflict-preserving sub-bases, called the Kernel Set, and then
removing the least preferred (according to some preference
order) sentences of each of the sub-bases. The operation of
ranking axioms and selecting the least preferred one is called
Incision Function. These concepts are formally described in
Definitions 1 and 2, respectively. Both concepts are then used
to formulate the Kernel contraction operation presented in
Definition 3.</p>
      <sec id="sec-2-1">
        <p>Definition 1 (Kernel Set) [Hansson, 1994] Let B be a belief
base and α a sentence. A set B′ is an element of the kernel set
B ⊥⊥ α iff B′ is a minimal subset of B such that B′ ⊨ α.</p>
        <p>Definition 2 (Incision Function) [Hansson, 1994] Let B be a
belief base. An incision function σ for B is a function such that,
for every sentence α: σ(B ⊥⊥ α) ⊆ ∪(B ⊥⊥ α), and if ∅ ≠ B′ ∈ B ⊥⊥ α,
then B′ ∩ σ(B ⊥⊥ α) ≠ ∅.</p>
        <p>Definition 3 (Kernel Contraction) [Hansson, 1994] Let B be
a belief base. For any sentence α and incision function σ, the
kernel contraction of B by α is given by: B −σ α = B \ σ(B ⊥⊥ α).</p>
        <p>Partial Meet contraction works by building maximal
conflict-free subsets directly. The set of all possible maximal
conflict-free subsets is called the Remainder Set. The number of
elements in the Remainder Set can be quite large, thus we
need to select which ones best suit our needs. The operation
of selecting the best elements from the remainder is called a
Selection Function. Both concepts are formally described in
Definitions 4 and 5, respectively. Definition 6 shows how these
two concepts are used to build a Partial Meet contraction
operator.</p>
        <p>Definition 4 (Remainder Set) [Alchourron et al., 1985] Let
B be a belief base and α a sentence. A set B′ is an element
of the remainder set B ⊥ α if and only if it is a maximal subset of
B that does not imply α, i.e.:</p>
        <p>B′ ⊆ B</p>
        <p>B′ ⊭ α</p>
        <p>if B′ ⊂ B″ ⊆ B, then B″ ⊨ α</p>
        <p>Definition 5 (Selection Function) [Alchourron et al., 1985]
Let L be a language and B a belief base of this language.
For any sentence α ∈ L, a selection function for B is a function
γ such that:</p>
        <p>γ(B ⊥ α) ⊆ B ⊥ α</p>
        <p>if B ⊥ α ≠ ∅, then γ(B ⊥ α) ≠ ∅</p>
        <p>if B ⊥ α = ∅, then γ(B ⊥ α) = {B}</p>
        <p>Definition 6 (Partial Meet Contraction) [Alchourron et al.,
1985] Let B be a belief base, α a sentence and γ a selection
function. The partial meet contraction of B by α is given by:
B −γ α = ∩γ(B ⊥ α)</p>
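<p>To make Definitions 1-6 concrete, the sketch below brute-forces a Kernel Set and a Remainder Set over a toy belief base. All of it is illustrative: axioms are encoded as pairs (sub, sup) read as subsumptions, a reachability check stands in for a reasoner, and the helper names (entails, kernel_set, remainder_set) are ours rather than taken from any cited implementation.</p>

```python
from itertools import combinations

def entails(axioms, query):
    """Toy reasoner: axioms are pairs (sub, sup) read as 'sub is subsumed
    by sup'; the query is entailed iff sup is reachable from sub."""
    sub, sup = query
    reach, frontier = {sub}, [sub]
    while frontier:
        x = frontier.pop()
        for l, r in axioms:
            if l == x and r not in reach:
                reach.add(r)
                frontier.append(r)
    return sup in reach

def proper_subset(a, b):
    return a != b and a.issubset(b)

def kernel_set(base, query):
    """All minimal subsets of base entailing the query (Definition 1)."""
    found = []
    for k in range(len(base) + 1):
        for s in map(set, combinations(base, k)):
            if entails(s, query) and not any(proper_subset(m, s) for m in found):
                found.append(s)
    return found

def remainder_set(base, query):
    """All maximal subsets of base not entailing the query (Definition 4)."""
    found = []
    for k in range(len(base), -1, -1):
        for s in map(set, combinations(base, k)):
            if not entails(s, query) and not any(proper_subset(s, m) for m in found):
                found.append(s)
    return found

O = [("A", "B"), ("B", "D"), ("A", "E"), ("E", "F")]
phi = ("A", "D")
K = kernel_set(O, phi)     # single kernel: the two axioms forming the chain A-B-D
R = remainder_set(O, phi)  # two remainders: O minus one kernel axiom each
```

<p>For this base the Kernel Set has one element and the Remainder Set two; intersecting all remainders (the full meet) leaves exactly the axioms not involved in the conflict.</p>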
        <p>Partial meet contraction is the most well-known
construction for belief contraction. When Hansson introduced
kernel base contraction in [Hansson, 1994], it was seen as a
solution much closer to work in artificial intelligence, such
as Truth Maintenance Systems [Doyle, 1979] and Model
Based Diagnosis [Reiter, 1987; Wassermann, 2000] and it
seemed much more amenable to implementation than
partial meet contraction. Maybe for this reason, most tools
for ontology debugging use something similar to kernels.
Schlobach and Cornet [Schlobach and Cornet, 2003] call the
minimal inconsistent subsets of an ontology MUPS
(Minimal Unsatisfiability-Preserving Sub-terminologies) and the
minimal incoherent subsets of an ontology MIPS (Minimal
Incoherence-Preserving Sub-terminologies). They present an
algorithm for computing all the minimal subsets of a given
knowledge base that have a given consequence, which are
called MinAs (Minimal Axiom set) in [Baader et al., 2007]
and served as the basis for Horridge’s Justifications
[Horridge, 2011].</p>
        <p>Operators for building kernel elements for ontologies are
widely described in the literature, for instance in [Kalyanpur,
2006; Suntisrivaraporn et al., 2008; Kalyanpur et al., 2007;
Ji et al., 2009; Horridge, 2011]. Several optimizations
have also been proposed, like the Sliding Window technique
[Kalyanpur, 2006], the Divide and Conquer technique
[Junker, 2001], and Syntactic Relevance [Ji et al., 2009;
Huang et al., 2005]. They all aim to reduce the number of
calls to the reasoner as well as the overall
execution time. As we can see, kernel building
is a highly studied problem in the literature.</p>
        <p>On the other hand, Partial Meet contraction has not been
given the same amount of attention. It has been mentioned in
[Wassermann, 2000] and [Nyssen, 2009] that the Remainder
Set can be obtained from the Kernel Set using Reiter’s
hitting sets algorithm [Reiter, 1987]. However, it is not easy to
find works that implement a Partial Meet contraction directly.
In [Resina et al., 2014], the authors describe a procedure to
build the Remainder Set, which was used as a baseline for the
comparison of the heuristics we proposed.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Construction of the Remainder Set</title>
      <p>The approach used to build a Remainder Set element is
similar to the one used to build a Kernel Set element. For the
Kernel, most algorithms are described in the form of an
expand-shrink algorithm. The idea is to start from the empty set and
add formulas to it until the conflict appears - the expand phase
- and then, as the set may not be minimal, remove any
formulas whose removal does not solve the conflict - the shrink
phase.</p>
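<p>Under the same toy encoding used throughout this sketch series (axioms as subsumption pairs, reachability in place of a description logic reasoner), the expand-shrink idea can be written out as follows; expand_shrink_kernel is a hypothetical helper of ours, not the implementation of any cited system, and it assumes the full ontology does entail the query.</p>

```python
def entails(axioms, query):
    """Toy reasoner: axioms are pairs (sub, sup); the subsumption query
    is entailed iff its right side is reachable from its left side."""
    sub, sup = query
    reach, frontier = {sub}, [sub]
    while frontier:
        x = frontier.pop()
        for l, r in axioms:
            if l == x and r not in reach:
                reach.add(r)
                frontier.append(r)
    return sup in reach

def expand_shrink_kernel(ontology, query):
    """Expand: add axioms until the query follows; shrink: drop every
    axiom not needed for the entailment (the result is one kernel)."""
    support = set()
    for ax in ontology:            # expand phase
        support.add(ax)
        if entails(support, query):
            break
    for ax in list(support):       # shrink phase
        if entails(support - {ax}, query):
            support.discard(ax)    # still entailed without ax: redundant
    return support

O = [("A", "B"), ("A", "E"), ("E", "F"), ("B", "D")]
kernel = expand_shrink_kernel(O, ("A", "D"))
```

<p>Whatever the iteration order, the shrink phase converges to a minimal entailing subset, here the unique kernel for the entailment.</p>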
      <p>In the case of the Remainder Set elements we use a
shrink-expand algorithm [Ribeiro, 2013], that first removes axioms
until the conflict no longer exists, then tries to add back the
removed axioms in order to guarantee maximality. The basic
algorithm for building a Remainder can be seen in Listing
1, where O is the ontology for which we want to find the
Remainder Set and ' is the formula to be contracted, i.e., the
algorithm returns an element of O?'.</p>
      <p>Listing 1: Black-Box Algorithm for building a single element
of the remainder</p>
      <p>Black-box-remainder(O, φ):
  # Shrink first
  removed_elements ← O
  remainder_element ← ∅
  # Now Expand
  for each β ∈ removed_elements do
    if remainder_element ∪ {β} ⊭ φ then
      remainder_element ← remainder_element ∪ {β}
  return remainder_element</p>
      <p>The algorithm presented in Listing 1 was proposed by
Resina et al. in [Resina et al., 2014]. The authors bypass
the shrink phase by removing all the axioms from O at once
(trivial shrinking) and pass to the expand phase, which is done
iteratively. A small variation of this algorithm would use an
iterative strategy for both the expansion and the shrink phases.
In the rest of this Section, we will describe a set of heuristics
that we developed with the goal of optimizing the execution
of the classical algorithm. The heuristics here proposed are
inspired by the ones used to optimize the classical black-box
algorithm for building kernel elements and are grouped
according to the phase in which they are used.</p>
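<p>Listing 1's strategy can be sketched as runnable code, again with a toy reachability check standing in for a DL reasoner (the encoding and names are ours):</p>

```python
def entails(axioms, query):
    """Toy reasoner: axioms are pairs (sub, sup); the subsumption query
    is entailed iff its right side is reachable from its left side."""
    sub, sup = query
    reach, frontier = {sub}, [sub]
    while frontier:
        x = frontier.pop()
        for l, r in axioms:
            if l == x and r not in reach:
                reach.add(r)
                frontier.append(r)
    return sup in reach

def black_box_remainder(ontology, query):
    """Listing 1 as runnable code: trivial shrink (start from the empty
    set), then re-add each axiom whenever the unwanted entailment does
    not reappear; the result is one element of the remainder set."""
    remainder = set()
    for ax in ontology:
        if not entails(remainder | {ax}, query):
            remainder.add(ax)
    return remainder

O = [("A", "B"), ("B", "D"), ("A", "E"), ("E", "F")]
phi = ("A", "D")
element = black_box_remainder(O, phi)   # a maximal subset not entailing phi
```

<p>Monotonicity of entailment guarantees maximality: every axiom left out was rejected against a subset of the final result, so re-adding it to the result would restore the entailment.</p>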
      <sec id="sec-3-1">
        <title>3.1 Optimizations for the Shrink Phase</title>
        <p>The first heuristics we propose follows the same principle of
the Sliding Window proposed by Kalyanpur in [Kalyanpur et
al., 2005]. The idea is that we can optimize the shrink phase if,
at each iteration, we remove more than one axiom at a time.
If the removed axiom set fixes the ontology conflict, then we
can move on to the expand phase and try to re-add the
removed axioms one by one. The algorithm for such heuristics
can be seen in Listing 2.</p>
        <p>Listing 2: Sliding Window Algorithm for Remainder</p>
        <p>Sliding_Window(O, φ, window_size):
  remainder_set ← ∅
  window_start ← 1
  window_end ← window_size
  Olist ← O
  while window_start &lt;= |Olist| do
    Osublist ← Olist.subList(window_start, window_end)
    remainder_set ← remainder_set ∪ Osublist
    if remainder_set ⊨ φ then
      remainder_set ← remainder_set \ Osublist
      window_start ← window_start + 1
    else
      window_start ← window_start + window_size
    window_end ← window_start + window_size
    if window_end &gt; |Olist| then
      window_end ← |Olist|
  return remainder_set</p>
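<p>A simplified runnable sketch of the sliding-window shrink, under the same toy-reasoner assumptions; unlike Listing 2, this version always takes the window from the front of the remaining axiom list, which keeps the code short while preserving the point of deleting window_size axioms per reasoner call.</p>

```python
def entails(axioms, query):
    """Toy reasoner: axioms are pairs (sub, sup); the subsumption query
    is entailed iff its right side is reachable from its left side."""
    sub, sup = query
    reach, frontier = {sub}, [sub]
    while frontier:
        x = frontier.pop()
        for l, r in axioms:
            if l == x and r not in reach:
                reach.add(r)
                frontier.append(r)
    return sup in reach

def sliding_window_remainder(ontology, query, window_size=2):
    """Shrink by deleting window_size axioms per reasoner call until the
    unwanted entailment disappears, then expand by re-adding the deleted
    axioms one at a time while the result stays conflict-free."""
    remaining = list(ontology)
    removed = []
    while entails(set(remaining), query):      # shrink, one window at a time
        removed += remaining[:window_size]
        remaining = remaining[window_size:]
    result = set(remaining)
    for ax in removed:                         # expand phase
        if not entails(result | {ax}, query):
            result.add(ax)
    return result

O = [("A", "B"), ("B", "D"), ("A", "E"), ("E", "F")]
element = sliding_window_remainder(O, ("A", "D"))
```

<p>The expand pass restores every axiom whose removal was not actually needed, so the sketch still returns a maximal conflict-free subset.</p>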
        <p>Example 1 shows how the proposed heuristics works.
Notice that this heuristics, as well as most of the algorithms in
the literature, assumes that the formulas are presented as a list,
and therefore ordered. In this work, as in [Horridge, 2011],
we do not presume any particular ordering, although we
know that it affects the performance of the algorithms.</p>
        <p>Example 1 Consider the ontology O = {A ⊑ B, B ⊑ D, A ⊑ E, E ⊑ F},
which entails A ⊑ D, and feed it to the algorithm in
Listing 2.</p>
        <p>The algorithm then positions the window over the first
axioms; we would get:</p>
        <p>O = {A ⊑ B, B ⊑ D, A ⊑ E, E ⊑ F}</p>
        <p>The sub-ontology resulting from removing the axioms
inside the sliding window can still entail A ⊑ D, thus we have
to slide the window to its next position, resulting in:</p>
        <p>O = {A ⊑ B, B ⊑ D, A ⊑ E, E ⊑ F}</p>
        <p>Now, after removing the axioms inside the window, we
remove the unwanted entailment A ⊑ D. The algorithm returns
the subset obtained by removing the axioms within the sliding window.</p>
        <p>The next heuristics we propose is to start the shrinking by first
removing the axioms that share concepts in their signatures
with the axiom causing the conflict. Listing 3 shows the
algorithm that implements such heuristics. The algorithm relies
on the function Signature, which returns the set of all Classes
used in the axiom.</p>
        <p>Listing 3: Syntactic Connectedness based algorithm</p>
        <p>Syntactic_Relevance_Black-box(O, φ):
  removed_elements ← ∅
  # Shrink
  φsig ← Signature(φ)
  for each β ∈ O do
    if φsig ∩ Signature(β) ≠ ∅ then
      removed_elements ← removed_elements ∪ {β}
  if O \ removed_elements ⊨ φ then
    # Perform Trivial Shrink
    removed_elements ← O
  # Expand ...</p>
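<p>Listing 3's shrink combined with an iterative expand can be sketched as follows, again under the illustrative pair encoding, where an axiom's signature is simply the set of concept names it mentions:</p>

```python
def entails(axioms, query):
    """Toy reasoner: axioms are pairs (sub, sup); the subsumption query
    is entailed iff its right side is reachable from its left side."""
    sub, sup = query
    reach, frontier = {sub}, [sub]
    while frontier:
        x = frontier.pop()
        for l, r in axioms:
            if l == x and r not in reach:
                reach.add(r)
                frontier.append(r)
    return sup in reach

def signature(axiom):
    """Concept names occurring in an axiom; here an axiom is a pair."""
    return set(axiom)

def syntactic_relevance_remainder(ontology, query):
    """Listing 3's shrink plus an iterative expand: first remove only the
    axioms sharing symbols with the unwanted entailment; if that does not
    remove the conflict, fall back to trivial shrinking."""
    qsig = signature(query)
    removed = [ax for ax in ontology if qsig.intersection(signature(ax))]
    kept = set(ontology) - set(removed)
    if entails(kept, query):       # shared-symbol removal was not enough
        removed = list(ontology)   # trivial shrink instead
        kept = set()
    result = set(kept)
    for ax in removed:             # expand phase
        if not entails(result | {ax}, query):
            result.add(ax)
    return result

O = [("A", "B"), ("B", "D"), ("A", "E"), ("E", "F")]
element = syntactic_relevance_remainder(O, ("A", "D"))
```

<p>Axioms disjoint from the query's signature are kept from the start, so only the syntactically connected ones pass through the expand loop.</p>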
      </sec>
      <sec id="sec-3-2">
        <title>Optimizations for the Expand Phase</title>
        <p>The heuristics that we propose for the expansion phase is
based on an idea from Junker in [Junker, 2001]. The author
proposes a divide and conquer based strategy to build
explanations (kernel elements) for a given entailment. This
is the same heuristics used by the Protégé OWLExplanation
plugin (https://github.com/matthewhorridge/owlexplanation).</p>
        <p>Our version uses the same idea, with a slight modification
in the way we divide the input. In [Junker, 2001], the author
divides the input in quarters and tries to combine such
quarters hoping to obtain a conflicting subset, then the routine is
executed recursively. In our case, we try to build the largest
conflict-free subset, thus we only divide the input in halves
and check whether either half is conflict-free, then we apply the
routine recursively on the remaining elements.</p>
        <p>The algorithm for this heuristics can be seen in Listing 4.</p>
        <p>Listing 4: Divide and Conquer based algorithm</p>
        <p>Divide_and_Conquer_Remainder(Olist, φ, Osafe):
  if |Olist| = 1 then
    if Olist ∪ Osafe ⊭ φ then
      return Olist ∪ Osafe
    else
      return Osafe
  middle ← 1 + |Olist|/2
  Osublist ← Olist.subList(1, middle)
  if Osublist ∪ Osafe ⊭ φ then
    Osafe ← Osafe ∪ Osublist
    return Divide_and_Conquer_Remainder(Olist \ Osublist, φ, Osafe)
  else
    Osublist ← Olist.subList(middle, |Olist|)
    if Osublist ∪ Osafe ⊭ φ then
      Osafe ← Osafe ∪ Osublist
      return Divide_and_Conquer_Remainder(Olist \ Osublist, φ, Osafe)
    else
      Osublist ← Olist.subList(1, middle)
      Osafe ← Osafe ∪ Divide_and_Conquer_Remainder(Osublist, φ, Osafe)
      Osublist ← Olist.subList(middle, |Olist|)
      return Divide_and_Conquer_Remainder(Osublist, φ, Osafe)</p>
        <p>Example 2 illustrates the execution of the algorithm.</p>
        <p>Example 2 Consider the ontology O = {A ⊑ B, B ⊑ D, A ⊑ F, F ⊑ D}
and suppose that the entailment φ = A ⊑ D is conflicting with
the purposes for which your ontology is being built.</p>
        <p>The first iteration of the algorithm would divide O into
O1 = {A ⊑ B, B ⊑ D} and O2 = {A ⊑ F, F ⊑ D}; then
we need to check whether any of the sub-ontologies is conflict-free.
It is the case for O2, thus a recursive call is done using the
rest as input, i.e., O1.</p>
        <p>The input O1 is then divided, giving O11 = {A ⊑ B}, from
which we cannot infer A ⊑ D. This is the base case of
the recursion (the input cannot be divided any further), so the
algorithm tries to add it to the safe knowledge base O2, and
the result is still safe, so a recursive call is done on the other
half, i.e., O12 = {B ⊑ D}, as input. Again we reach the
base case of our recursion, only this time we cannot add it,
because such an action would make the conflicting entailment
valid again.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experiments</title>
      <p>We developed a series of software experiments to check the
performance of the algorithms and heuristics that we
proposed. These components were developed using the Java
programming language and are available as free/open-source
software at (https://github.com/raphaelmcobe/
ontology-debug-and-repair). We used two datasets
for our experiments. The first was automatically generated
and the second used the data available at the BioPortal.</p>
      <p>The idea behind these experiments was to verify if there
was any case - even an artificially generated one - in which
building the Remainder Set could be easier than building the
Kernel Set. Thus, we were not concerned with the quality of
the remainder element built since it has been shown in [Booth
et al., 2011] that it is always possible to derive the remainder
set from the kernel set.</p>
      <p>If we managed to find such an input, we would like to verify
whether such cases could also be found in real world ontologies. This
would serve both to vindicate the partial meet construction
in the Artificial Intelligence community and as evidence
for the feasibility of efficient ontology repair tools.
The experiment aimed to prove three hypotheses:
H1: the heuristics proposed have a positive impact on the
performance of the Remainder Set finding algorithm.
We prove this hypothesis by measuring the overall
impact on the execution time as well as the number of
reasoner calls.</p>
      <p>H2: there are cases in which building a single element of the
Remainder Set is at least as efficient as building a single
element of the Kernel Set. This hypothesis is proved if
we are able to find cases in which building a remainder
element is faster or makes fewer calls to the reasoner than
building a kernel element.</p>
      <p>H3: there are cases in which it is easier to build a remainder
element than to build the whole Kernel Set. With this hypothesis
we verify that, even if we are not able to
build a remainder element as fast as a kernel element,
it may be the case that building the whole Kernel Set
takes longer or makes more calls to the reasoner than
building a single remainder element. This is still a good
result because building the Kernel Set is just the first step
towards fixing the conflicts. The user will still have to
use an incision function to order and remove the least
preferred axioms of each kernel element. In this case
we were not concerned with the quality of the remainder
element built.</p>
      <p>For our experiment we chose to collect two metrics:
reasoner calls and overall execution time. We tested the
following combinations of heuristics for remainder element
building:</p>
      <p>R1: Syntactic Connectedness based Shrinking and Iterative
Expansion;</p>
      <p>R2: Syntactic Connectedness based Shrinking and Divide
and Conquer based Expansion;</p>
      <p>R3: Trivial Shrinking (removal of all ontology axioms) and
Divide and Conquer based Expansion;</p>
      <p>R4: Trivial Shrinking and Iterative Expansion;</p>
      <p>R5: Sliding Window Shrinking and Iterative Expansion;</p>
      <p>R6: Iterative Shrinking and Iterative Expansion.</p>
      <p>For the kernel element building we used the combination
of Syntactic Connectedness based Expansion and Divide and
Conquer based Shrinking. This combination is the same used
by Horridge in [Horridge, 2011] and in the OWLExplanation
plugin, which is considered a state-of-the-art implementation for
building justifications (Kernel Sets). The author argued that
such heuristics presented better performance. For other
optimizations, refer to [Horridge, 2011]. The plugin was used in
the experiments described in Section 4.3, where we build the
whole Kernel Set.</p>
      <p>The algorithms presented in this paper were implemented
in the Java language (Oracle JVM v. 1.7.0_51) and the
tests were executed on a Linux machine running Ubuntu
13.10 (kernel 3.11.0-19-generic) with an Intel 12-core CPU
(XEON X5660) and 32 GB of RAM. The Java VM was started
with the parameters -Xms2048m and -Xmx16384m. Each
test had a timeout, arbitrarily set to 45 seconds for the
tests with generated data and 330 seconds for each
entailment tested in the BioPortal ontologies. The version of the
OWLExplanation plugin used was 1.0.3, checked out from
its GitHub repository on October 14th, 2014.
The generated data used to test our hypotheses follows
Definition 7.</p>
      <p>Definition 7 [Resina et al., 2014] A Large Kernel and Small
Remainder Ontology is built using the template:</p>
      <p>O = { (B ⊔ B′)(a),
B ⊔ B′ ⊑ B1, B ⊔ B′ ⊑ B1′,
B1 ⊔ B1′ ⊑ B2, B1 ⊔ B1′ ⊑ B2′,
...,
Bn−1 ⊔ Bn−1′ ⊑ Bn, Bn−1 ⊔ Bn−1′ ⊑ Bn′ }</p>
      <p>It is interesting to notice that if we build the Kernel Set K
and the Remainder Set R for the entailment (Bn ⊔ Bn′)(a)
in O, we notice that |K| = 2ⁿ and |R| = n + 1. Example 3
shows how Definition 7 can be used to generate an ontology.</p>
      <p>Example 3 If we build the ontology O using the template
described in Definition 7 and calculate the kernel set K and the
remainder set R, considering n = 2 and the entailment
O ⊨ (B2 ⊔ B2′)(a), we would get:</p>
      <p>K = { {(B ⊔ B′)(a), B ⊔ B′ ⊑ B1, B1 ⊔ B1′ ⊑ B2},
{(B ⊔ B′)(a), B ⊔ B′ ⊑ B1′, B1 ⊔ B1′ ⊑ B2},
{(B ⊔ B′)(a), B ⊔ B′ ⊑ B1, B1 ⊔ B1′ ⊑ B2′},
{(B ⊔ B′)(a), B ⊔ B′ ⊑ B1′, B1 ⊔ B1′ ⊑ B2′} }</p>
      <p>R = { O \ {(B ⊔ B′)(a)},
O \ {B ⊔ B′ ⊑ B1, B ⊔ B′ ⊑ B1′},
O \ {B1 ⊔ B1′ ⊑ B2, B1 ⊔ B1′ ⊑ B2′} }</p>
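<p>Definition 7's template can be checked mechanically. The sketch below encodes the template abstractly (the assertion plus two tagged inclusions per level; the encoding is ours, not the paper's) and brute-forces both sets, confirming |K| = 2ⁿ and |R| = n + 1.</p>

```python
from itertools import combinations

def template(n):
    """Definition 7's template: the assertion (B ⊔ B′)(a) plus, for each
    level i, the two inclusions Bi-1 ⊔ B′i-1 ⊑ Bi and Bi-1 ⊔ B′i-1 ⊑ B′i,
    encoded as tagged tuples."""
    return [("assert",)] + [("incl", i, v) for i in range(1, n + 1) for v in (0, 1)]

def entails(axioms, n):
    """(Bn ⊔ Bn′)(a) follows iff the assertion is present and every level
    keeps at least one of its two inclusions."""
    axioms = set(axioms)
    return ("assert",) in axioms and all(
        ("incl", i, 0) in axioms or ("incl", i, 1) in axioms
        for i in range(1, n + 1))

def proper_subset(a, b):
    return a != b and a.issubset(b)

def kernel_set(base, n):
    """All minimal entailing subsets: the assertion plus one inclusion
    per level, giving 2**n kernels."""
    found = []
    for k in range(len(base) + 1):
        for s in map(set, combinations(base, k)):
            if entails(s, n) and not any(proper_subset(m, s) for m in found):
                found.append(s)
    return found

def remainder_set(base, n):
    """All maximal non-entailing subsets: drop the assertion, or drop
    both inclusions of one level, giving n + 1 remainders."""
    found = []
    for k in range(len(base), -1, -1):
        for s in map(set, combinations(base, k)):
            if not entails(s, n) and not any(proper_subset(s, m) for m in found):
                found.append(s)
    return found

n = 2
O = template(n)
K, R = kernel_set(O, n), remainder_set(O, n)   # |K| = 2**n, |R| = n + 1
```

<p>The brute force is exponential in the ontology size and serves only to validate the counts claimed above for small n.</p>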
      <p>Figures 1 and 2 summarize the comparison between
operators for building a kernel and a remainder element. We chose
the best results among the heuristics combinations for
building remainder elements. In the case of the remainder building
algorithm we selected the two best results: the one using
Syntactic Connectedness shrinking with iterative
expansion, and the one using Trivial shrinking with Divide and
Conquer expansion. We also plotted the results of the baseline
algorithm for building a remainder element, proposed by Resina
et al. [Resina et al., 2014], which uses the trivial shrink and
iterative expansion combination.</p>
      <p>We can see that the operators for building a remainder
element had a better performance for both algorithms in the case
of reasoner calls. The same could be observed during the
time analysis. In this case, we observed a better performance
by the remainder building algorithm that used syntactic
connectedness, followed by the kernel building algorithm and the
worst result was the remainder building algorithm that used
Divide and Conquer.</p>
      <p>We can see from the Figures that we were able to confirm all
of our hypotheses: we were able to introduce heuristics that
resulted in better performance than what is described in the
literature (H1). Our results also showed that, for this kind
of input, building a remainder element is easier - in terms
of reasoner calls and overall execution time - than building a
kernel element (H2 and H3).</p>
      <p>We were able to find a template for constructing an
ontology for which building a remainder element is easier than
building a kernel element. We went a step further and tried to
find real world ontologies in which that is also the case.</p>
      <p>During the development of our study we also ran
experiments using ontologies with small Kernel Sets and large
Remainder Sets. Such a dataset only confirmed the intuition
that building a kernel element performs better than
building a remainder element.</p>
      <p>We have conducted an experiment using the data available
at the BioPortal repository, maintained by the National
Center for Biomedical Ontology (NCBO). The same dataset was
used by Horridge in [Horridge, 2011]. The expressivity of the
ontologies collected varies from EL to SHROIQ.</p>
      <p>During the data selection procedure, Horridge kept only
ontologies that were described in SHROIQ or a less
expressive language. He then selected ontologies written
in OWL and OBO, since standard tools could be used to
manipulate them. After selecting the files, Horridge
enumerated all non-trivial entailments, i.e., axioms entailed by
the reasoner that were not explicitly stated. The resulting
dataset had 72 ontologies.</p>
      <p>We selected 51 of the 72 ontologies to test our algorithms.
We chose the ontologies of 1 MB or less, since our
implementations were not yet optimized and do not scale. The total
number of tested entailments was 3812.</p>
      <p>We gathered data regarding the overall execution time and
reasoner calls, and compared only the execution of the
algorithm for finding a remainder element against that of finding the
whole Kernel Set, provided by the OWLExplanation plugin.</p>
      <p>During this experiment we used the same heuristics
combinations from the experiment with generated data for the
remainder element building algorithm. For the comparison, we
tested the state-of-the-art kernel finding software, which is
distributed with the Protégé editor (http://protege.stanford.edu/).</p>
      <p>Each ontology had its own set of accompanying
entailments to be tested and in Table 1 we show the percentage
in which the algorithm behaved better, i.e., the percentage of
entailments for which the operator had the best performance.</p>
      <p>For the reasoner calls test we observed that,
within the defined timeout, 19 ontologies had better
performance building a single remainder element than
building the whole Kernel Set. That represents approximately
40% of the ontologies. This result helped us confirm our third
hypothesis (H3) with real world data, at least when comparing the
number of reasoner calls. Also, in all cases, the heuristics that
we proposed had better performance than the baseline
algorithm for building a remainder element by Resina et al. [Resina
et al., 2014], with the exception of the Sliding Window heuristic,
which had worse performance than the algorithm from Resina
et al. [Resina et al., 2014]. We believe that this is the case
because we would have to better calibrate the sliding window size
before running the experiments. We used 10 as the window
size. This size was the same reported by Kalyanpur
[Kalyanpur, 2006] as being the best for the kernel element building
algorithm based on sliding windows. This result goes in the
direction of confirming our first hypothesis (H1), also with real
world data.</p>
      <p>Table 1 summarizes the data collected at our experiment
regarding the number of reasoner calls.</p>
      <p>The results comparing execution time were not as good as
those for the number of reasoner calls. The numbers for this
experiment are presented in Table 1.</p>
    </sec>
    <sec id="sec-5">
      <title>Final Remarks</title>
      <p>In this paper we presented new algorithms for building
remainder elements. We conducted experiments in order to
prove that in some (real) cases it is easier to build a remainder
element than the whole kernel.</p>
    </sec>
    <sec id="sec-5-1">
        <sec id="sec-5-1-1">
          <title>Table 1</title>
          <p>Reasoner calls (RC) and totals for the operators R1-R6 and the OWLExplanation plugin.</p>
        </sec>
        <sec id="sec-5-1-2">
          <p>We believe that building the
whole kernel can be used as a fine-grained debugging strategy
and, although it is in general easier to build the Kernel Set,
the user may still have to define an incision
function in order to fix the ontology. Such a function might
cause an overhead. We were not especially concerned with the
remainder element quality. Still, the Syntactic
Connectedness heuristics can build remainder elements discarding the
axioms syntactically connected to the unwanted entailment
first, which can be a desired property in ontology repair.</p>
          <p>We tested our heuristics using real and automatically
generated data. We found real world cases showing that
it is not always easier to build the Kernel Set
than a remainder element. Still, further investigation
is needed in order to check how often these cases appear in
larger datasets.</p>
          <p>During our experiment we also observed a few interesting
cases (6.4% of the entailments) where the kernel algorithm
could not terminate within the allocated time: ontologies
semanticscience-integrated-ontology (3 entailments),
imgt-ontology (25 entailments), biotop (4 entailments), amino-acid
(3 entailments), and adverse-event-reporting-ontology (1
entailment).</p>
          <p>In the imgt-ontology, the Syntactic Connectedness
heuristics did not present the best performance results. In
this case, trivial shrinking proved to be a better option for
3 of the entailments checked.</p>
          <p>In the future, we intend to test our algorithms with a larger
dataset of real world ontologies and implement a Protégé
plugin for debugging based on remainder elements. We also
plan to check the algorithms' behaviour after introducing the
idea of ontology modules. In that way, we could isolate
sub-ontologies, preferring one subset of axioms over another, thus
building remainder elements more suitable to the ontology
designer's needs.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Alchourron et al.,
          <year>1985</year>
          ]
          <string-name>
            <given-names>C.E.</given-names>
            <surname>Alchourron</surname>
          </string-name>
          , P. Gärdenfors, and
          <string-name>
            <given-names>D.</given-names>
            <surname>Makinson</surname>
          </string-name>
          .
          <article-title>On the Logic of Theory Change: Partial Meet Contraction and Revision Functions</article-title>
          .
          <source>The Journal of Symbolic Logic</source>
          ,
          <volume>50</volume>
          (
          <issue>2</issue>
          ):
          <fpage>510</fpage>
          -
          <lpage>530</lpage>
          ,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Baader et al.,
          <year>2007</year>
          ]
          <string-name>
            <given-names>Franz</given-names>
            <surname>Baader</surname>
          </string-name>
          , Rafael Peñaloza, and Boontawee Suntisrivaraporn.
          <article-title>Pinpointing in the description logic EL</article-title>
          .
          <source>In Proceedings of the International Workshop on Description Logics (DL)</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Booth et al.,
          <year>2011</year>
          ] Richard Booth, Thomas Meyer, Ivan Varzinczak, and
          <string-name>
            <given-names>Renata</given-names>
            <surname>Wassermann</surname>
          </string-name>
          .
          <article-title>On the link between partial meet, kernel, and infra contraction and its application to horn logic</article-title>
          .
          <source>J. Artif. Int. Res.</source>
          ,
          <volume>42</volume>
          (
          <issue>1</issue>
          ):
          <fpage>31</fpage>
          -
          <lpage>53</lpage>
          ,
          <year>September 2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Doyle, 1979]
          <string-name>
            <given-names>Jon</given-names>
            <surname>Doyle</surname>
          </string-name>
          .
          <article-title>A truth maintenance system</article-title>
          .
          <source>Artificial Intelligence Journal</source>
          ,
          <volume>12</volume>
          (
          <issue>3</issue>
          ):
          <fpage>231</fpage>
          -
          <lpage>272</lpage>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [Flouris, 2006]
          <string-name>
            <given-names>Giorgos</given-names>
            <surname>Flouris</surname>
          </string-name>
          .
          <article-title>On Belief Change and Ontology Evolution</article-title>
          .
          <source>PhD thesis</source>
          , University of Crete,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [Haase et al.,
          <year>2005</year>
          ]
          <string-name>
            <given-names>P.</given-names>
            <surname>Haase</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>van Harmelen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Zh.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sure</surname>
          </string-name>
          .
          <article-title>A framework for handling inconsistency in changing ontologies</article-title>
          .
          <source>In Proceedings of the Fourth International Semantic Web Conference</source>
          , volume
          <volume>3729</volume>
          of LNCS
          , pages
          <fpage>353</fpage>
          -
          <lpage>367</lpage>
          . Springer,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Hansson, 1994]
          <string-name>
            <given-names>Sven Ove</given-names>
            <surname>Hansson</surname>
          </string-name>
          .
          <article-title>Kernel contraction</article-title>
          .
          <source>The Journal of Symbolic Logic</source>
          ,
          <volume>59</volume>
          (
          <issue>03</issue>
          ):
          <fpage>845</fpage>
          -
          <lpage>859</lpage>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Hansson, 1999]
          <string-name>
            <given-names>Sven Ove</given-names>
            <surname>Hansson</surname>
          </string-name>
          .
          <source>A Textbook of Belief Dynamics</source>
          . Kluwer Academic Press,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Horridge, 2011]
          <string-name>
            <given-names>Matthew</given-names>
            <surname>Horridge</surname>
          </string-name>
          .
          <article-title>Justification based explanation in ontologies</article-title>
          .
          <source>PhD thesis</source>
          , the University of Manchester,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [Huang et al.,
          <year>2005</year>
          ]
          <string-name>
            <given-names>Zhisheng</given-names>
            <surname>Huang</surname>
          </string-name>
          , Frank van Harmelen, and Annette ten Teije.
          <article-title>Reasoning with inconsistent ontologies</article-title>
          .
          <source>In IJCAI-05, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence</source>
          , Edinburgh, Scotland, UK, July 30-August 5, 2005, pages
          <fpage>454</fpage>
          -
          <lpage>459</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Ji et al.,
          <year>2009</year>
          ]
          <string-name>
            <given-names>Qiu</given-names>
            <surname>Ji</surname>
          </string-name>
          , Guilin Qi, and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Haase</surname>
          </string-name>
          .
          <article-title>A relevance-directed algorithm for finding justifications of dl entailments</article-title>
          .
          In Asunción Gómez-Pérez,
          <string-name>
            <given-names>Yong</given-names>
            <surname>Yu</surname>
          </string-name>
          , and Ying Ding, editors,
          <source>The Semantic Web</source>
          , volume
          <volume>5926</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>306</fpage>
          -
          <lpage>320</lpage>
          . Springer Berlin Heidelberg,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Junker, 2001]
          <string-name>
            <given-names>Ulrich</given-names>
            <surname>Junker</surname>
          </string-name>
          .
          <article-title>QUICKXPLAIN: Conflict detection for arbitrary constraint propagation algorithms</article-title>
          .
          <source>In Proceedings of the IJCAI Workshop on Modelling and Solving Problems with Constraints (IJCAI'01)</source>
          . Morgan Kaufmann,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Kalyanpur et al.,
          <year>2005</year>
          ]
          <string-name>
            <given-names>Aditya</given-names>
            <surname>Kalyanpur</surname>
          </string-name>
          , Bijan Parsia, and
          <string-name>
            <given-names>James</given-names>
            <surname>Hendler</surname>
          </string-name>
          .
          <article-title>A tool for working with web ontologies</article-title>
          .
          <source>International Journal on Semantic Web and Information Systems (IJSWIS)</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <fpage>36</fpage>
          -
          <lpage>49</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [Kalyanpur et al.,
          <year>2007</year>
          ]
          <string-name>
            <given-names>Aditya</given-names>
            <surname>Kalyanpur</surname>
          </string-name>
          , Bijan Parsia, Matthew Horridge, and
          <string-name>
            <given-names>Evren</given-names>
            <surname>Sirin</surname>
          </string-name>
          .
          <article-title>Finding all justifications of OWL DL entailments</article-title>
          .
          <source>In The Semantic Web</source>
          , volume
          <volume>4825</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>267</fpage>
          -
          <lpage>280</lpage>
          . Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Kalyanpur, 2006]
          <string-name>
            <given-names>Aditya Anand</given-names>
            <surname>Kalyanpur</surname>
          </string-name>
          .
          <article-title>Debugging and repair of OWL ontologies</article-title>
          .
          <source>PhD thesis</source>
          , the University of Maryland,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Nyssen, 2009]
          <string-name>
            <given-names>Rafael Peñaloza</given-names>
            <surname>Nyssen</surname>
          </string-name>
          .
          <article-title>Axiom pinpointing in description logics and beyond</article-title>
          .
          <source>PhD thesis</source>
          , Dresden University of Technology,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Qi et al.,
          <year>2008</year>
          ]
          <string-name>
            <given-names>Guilin</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Haase</surname>
          </string-name>
          , Zhisheng Huang, Qiu Ji, Jeff Z. Pan, and
          <string-name>
            <given-names>Johanna</given-names>
            <surname>Völker</surname>
          </string-name>
          .
          <article-title>A kernel revision operator for terminologies algorithms and evaluation</article-title>
          .
          <source>In The Semantic Web - ISWC 2008</source>
          , volume
          <volume>5318</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>419</fpage>
          -
          <lpage>434</lpage>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [Reiter, 1987]
          <string-name>
            <given-names>Raymond</given-names>
            <surname>Reiter</surname>
          </string-name>
          .
          <article-title>A theory of diagnosis from first principles</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>32</volume>
          :
          <fpage>57</fpage>
          -
          <lpage>95</lpage>
          ,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [Resina et al.,
          <year>2014</year>
          ]
          <string-name>
            <given-names>Fillipe</given-names>
            <surname>Resina</surname>
          </string-name>
          , Márcio Moretto Ribeiro, and
          <string-name>
            <given-names>Renata</given-names>
            <surname>Wassermann</surname>
          </string-name>
          .
          <article-title>Algorithms for multiple contraction and an application to OWL ontologies</article-title>
          .
          <source>In Proceedings of the Brazilian Conference on Intelligent Systems (BRACIS)</source>
          . IEEE,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [Ribeiro, 2013] Márcio Moretto Ribeiro.
          <article-title>Belief revision in non-classical logics</article-title>
          . Springer,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [Schlobach and Cornet, 2003]
          <string-name>
            <given-names>Stefan</given-names>
            <surname>Schlobach</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ronald</given-names>
            <surname>Cornet</surname>
          </string-name>
          .
          <article-title>Non-standard reasoning services for the debugging of description logic terminologies</article-title>
          .
          <source>In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI)</source>
          , pages
          <fpage>355</fpage>
          -
          <lpage>362</lpage>
          . Morgan Kaufmann,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Schlobach et al.,
          <year>2007</year>
          ]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schlobach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cornet</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>van Harmelen</surname>
          </string-name>
          .
          <article-title>Debugging incoherent terminologies</article-title>
          .
          <source>Journal of Automated Reasoning</source>
          ,
          <volume>39</volume>
          :
          <fpage>317</fpage>
          -
          <lpage>349</lpage>
          ,
          <year>2007</year>
          .
          doi:10.1007/s10817-007-9076-z.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [Suntisrivaraporn et al.,
          <year>2008</year>
          ]
          <string-name>
            <given-names>Boontawee</given-names>
            <surname>Suntisrivaraporn</surname>
          </string-name>
          , Guilin Qi, Qiu Ji, and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Haase</surname>
          </string-name>
          .
          <article-title>A modularization-based approach to finding all justifications for OWL DL entailments</article-title>
          . In John Domingue and Chutiporn Anutariya, editors,
          <source>The Semantic Web</source>
          , volume
          <volume>5367</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . Springer Berlin Heidelberg,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [Wassermann, 2000]
          <string-name>
            <given-names>Renata</given-names>
            <surname>Wassermann</surname>
          </string-name>
          .
          <article-title>An algorithm for belief revision</article-title>
          .
          <source>In Proceedings of the Seventh International Conference on Principles of Knowledge Representation and Reasoning</source>
          (KR). Morgan Kaufmann,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>