           Making Repairs in Description Logics
            More Gentle (Extended Abstract)⋆

                        Franz Baader¹, Francesco Kriegel¹,
                   Adrian Nuradiansyah¹, and Rafael Peñaloza²
                     ¹ Technische Universität Dresden, Germany
      {franz.baader,francesco.kriegel,adrian.nuradiansyah}@tu-dresden.de
                        ² Free University of Bolzano, Italy
                           rafael.penaloza@unibz.it

⋆ This is an extended abstract of a paper published in the proceedings of
  KR 2018. This work was partially supported by DFG within the Research
  Training Group 1907 (RoSI).

As the size of DL-based ontologies grows, tools that support improving the qual-
ity of such ontologies become more important. DL reasoners can be used to detect
inconsistencies and to infer other implicit consequences, such as subsumption and
instance relationships. However, for the developer of a DL-based ontology, it is
often quite hard to understand why a consequence computed by the reasoner
actually follows from the knowledge base, and how to repair the ontology in case
this consequence is not intended. The classical approach for repairing an ontology
first computes all justifications, i.e., minimal subsets of the ontology that have
the unintended consequence, and then removes one axiom from each justifica-
tion. However, removing complete axioms may also eliminate consequences that
are still wanted. For example, consider an ontology that contains the following
two general concept inclusion axioms (GCIs):

                 Professor ⊑ ∃employedBy.University ⊓ ∃enrolledIn.University,
    ∃enrolledIn.University ⊑ Student.

These two axioms are a justification for the incorrect consequence that professors
are students. While the first axiom is the culprit, removing it completely would
also remove the correct consequence that professors are employed by a univer-
sity. Thus, it would be more appropriate to replace the first axiom by the weaker
axiom Professor ⊑ ∃employedBy.University. This is the basic idea underlying
our gentle repair approach, in which we weaken one axiom from each justifica-
tion such that the modified justifications no longer have the consequence (a toy
rendering of this example is sketched after the list of results below). More
precisely, the full paper contains the following results:

1. From a semantic point of view, a repair of an ontology O is a new ontology
   O′ that is implied by O, but does not have the unintended consequence α.
   Such a repair is optimal if there is no repair O′′ that strictly implies O′.
   We show that optimal repairs need not always exist in the case where O is
   divided into a strict part (that must not be changed) and a refutable part
   (that can be changed).
2. Next, we introduce our gentle repair framework that basically follows the
   classical repair approach, but instead of removing one axiom βi from each
   justification Ji, it replaces βi by a weaker axiom γi such that (Ji \ {βi}) ∪ {γi}
   no longer has the unintended consequence. Here we call an axiom γ weaker
   than an axiom β if β logically implies γ. Our first important result is that,
   in general, the gentle repair approach needs to be iterated, i.e., applying
   it once does not necessarily remove the consequence. Our second result is
   that at most exponentially many iterations (in the number of axioms in the
   refutable part of O) are always sufficient to reach a repair. (The introductory
   sketch after this list renders this loop in a toy setting.)
3. The question then is how to come up with appropriate weaker axioms γi
   within the gentle repair framework. To make the repair as gentle as possi-
   ble, these weakenings should be maximally strong. Instead of allowing for
   arbitrary ways of weakening axioms, we introduce the notion of a weakening
   relation, which restricts the way in which axioms can be weakened. We show
   that, under certain restrictions on this relation (well-founded, one-step gen-
   erated, effectively finitely branching), maximally strong weakenings w.r.t. it
   can effectively be computed.
4. The paper then investigates weakening relations on GCIs for the DL EL.
   In general, such relations need not be well-behaved. For example, we show
   that the largest such relation ⇝ (which is induced by logical consequence
   without additional restrictions) is not one-step generated, i.e., the transitive
   closure of the one-step relation ⇝1 induced by ⇝ is strictly contained in ⇝,
   where the induced one-step relation is defined as

            ⇝1 := {(β, γ) ∈ ⇝ | there is no δ such that β ⇝ δ ⇝ γ}.

   (A sketch after this list illustrates this definition on a finite relation.)
5. To obtain weakening relations that have our desired properties (see item 3),
   we then concentrate on weakening relations on EL GCIs that are obtained
   by generalizing the right-hand sides of the GCIs. In the semantic weakening
   relation ⇝sub, “generalizing” basically means that the right-hand side of the
   weaker axiom strictly subsumes the right-hand side of the stronger one.
   We can show that the relation ⇝sub is well-founded, one-step generated,
   and effectively finitely branching. Thus, maximally strong weakenings w.r.t.
   ⇝sub can effectively be computed. However, the algorithm obtained from the
   general result mentioned in item 3 yields a non-elementary upper bound for
   ⇝sub, and the strongest lower bound we are currently able to show is only
   exponential.
6. To obtain a weakening relation that has better algorithmic properties, we
   introduce the syntactic weakening relation ⇝syn, where a more general right-
   hand side is obtained by removing occurrences of subconcepts from the orig-
   inal one (a sketch after this list illustrates one reading of this operation).
   This relation is also one-step generated, but the other two properties are
   strengthened to linear branching and to linearly bounded length of
   ⇝syn-chains. For this relation, a single maximally strong weakening can be
   computed in polynomial time, and all of them can be computed in exponen-
   tial time. In addition, there may be exponentially many maximally strong
   weakenings, which shows that the exponential upper bound is optimal.
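
To make the gentle repair framework (introduction and item 2) concrete, the
following self-contained Python sketch (ours, not from the paper) renders the
Professor example in a toy propositional Horn setting: each GCI becomes a rule
"premises imply conclusions" over atoms, entailment is forward chaining, and
weakening a rule means shrinking its set of conclusions, mimicking the general-
ization of right-hand sides from items 5 and 6. All names and the culprit-choice
heuristic are our own illustrative assumptions; the paper itself works in the DL EL.

    from itertools import combinations

    def entails(axioms, seed, goal):
        """Forward chaining: does atom `goal` follow from atom `seed`?
        An axiom is a pair (premises, conclusions) of frozensets of atoms."""
        derived = {seed}
        changed = True
        while changed:
            changed = False
            for prem, concl in axioms:
                if prem <= derived and not concl <= derived:
                    derived |= concl
                    changed = True
        return goal in derived

    def justifications(axioms, seed, goal):
        """All minimal subsets of `axioms` having the consequence (brute force)."""
        axs, found = list(axioms), []
        for k in range(1, len(axs) + 1):
            for combo in combinations(axs, k):
                s = set(combo)
                # enumerating by increasing size keeps only minimal hits
                if not any(j <= s for j in found) and entails(s, seed, goal):
                    found.append(s)
        return found

    def weakenings(axiom):
        """Candidate weakenings: keep the premises, drop some conclusions.
        Strongest candidates (largest remaining right-hand side) come first."""
        prem, concl = axiom
        for k in range(len(concl) - 1, -1, -1):
            for sub in combinations(sorted(concl), k):
                yield (prem, frozenset(sub))

    def gentle_repair(axioms, seed, goal):
        """Iterated gentle repair: while the unwanted consequence holds, weaken
        one axiom beta of one justification J so that replacing beta by the
        weaker gamma inside J removes the consequence."""
        axioms = set(axioms)
        while entails(axioms, seed, goal):
            just = justifications(axioms, seed, goal)[0]
            beta = max(just, key=lambda ax: len(ax[1]))  # arbitrary culprit choice
            rest = just - {beta}
            for gamma in weakenings(beta):
                if not entails(rest | {gamma}, seed, goal):
                    axioms.remove(beta)
                    if gamma[1]:  # an empty right-hand side is vacuous; drop it
                        axioms.add(gamma)
                    break
        return axioms

    # Toy rendering of the two GCIs from the introduction:
    GCI1 = (frozenset({"Professor"}),
            frozenset({"EmployedByUniv", "EnrolledInUniv"}))
    GCI2 = (frozenset({"EnrolledInUniv"}), frozenset({"Student"}))
    print(gentle_repair({GCI1, GCI2}, "Professor", "Student"))
    # Keeps EnrolledInUniv -> Student and weakens GCI1 to Professor -> EmployedByUniv,
    # rather than deleting GCI1 entirely as the classical approach would.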
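
The induced one-step relation in item 4 can be read operationally. This short
Python sketch (again ours, purely illustrative) computes it for a finite, irreflexive
relation given as a set of pairs, together with the transitive closure used in the
definition of "one-step generated". The paper's counterexample showing that ⇝
on EL GCIs is not one-step generated necessarily lives in an infinite set of axioms,
so a finite demo can only illustrate the definitions.

    def one_step(R):
        """The pairs (b, g) of R without a strict intermediary b R d R g."""
        elems = {x for pair in R for x in pair}
        return {(b, g) for (b, g) in R
                if not any((b, d) in R and (d, g) in R for d in elems)}

    def transitive_closure(R):
        closure = set(R)
        while True:
            step = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2}
            if step <= closure:
                return closure
            closure |= step

    # A chain with a shortcut: the shortcut (A, C) has the intermediary B, so it
    # is dropped from the one-step part but recovered by transitive closure.
    R = {("A", "B"), ("B", "C"), ("A", "C")}
    assert one_step(R) == {("A", "B"), ("B", "C")}
    assert transitive_closure(one_step(R)) == R  # this finite R is one-step generated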
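
Finally, a sketch of the syntactic weakening behind ⇝syn in item 6, under our own
reading of "removing occurrences of subconcepts": EL concepts are nested tuples,
and one weakening step replaces a single occurrence of a subconcept by ⊤, which
amounts to deleting it from the surrounding conjunction. The paper's precise
definition may differ in details; this Python sketch only conveys the flavor.

    TOP = "TOP"

    def normalize(c):
        """Drop TOP from conjunctions and flatten trivial ones."""
        if isinstance(c, tuple) and c[0] == "and":
            args = [a for a in c[1:] if a != TOP]
            if not args:
                return TOP
            if len(args) == 1:
                return args[0]
            return ("and", *args)
        return c

    def weaken_once(c):
        """All concepts obtained by replacing one occurrence of a subconcept by TOP."""
        results = [TOP]  # removing the whole concept itself
        if isinstance(c, tuple) and c[0] == "and":
            args = list(c[1:])
            for i, arg in enumerate(args):
                for w in weaken_once(arg):
                    results.append(normalize(("and", *args[:i], w, *args[i + 1:])))
        elif isinstance(c, tuple) and c[0] == "exists":
            for w in weaken_once(c[2]):
                results.append(("exists", c[1], w))
        return results

    # Right-hand side of the first GCI from the introduction:
    RHS = ("and", ("exists", "employedBy", "University"),
                  ("exists", "enrolledIn", "University"))
    for w in sorted(set(map(str, weaken_once(RHS)))):
        print(w)
    # Among the results: TOP, each existential restriction on its own, and the
    # conjunction with one filler "University" generalized to TOP.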