A Forgetting-based Approach for Reasoning with Inconsistent Distributed Ontologies. CEUR-WS Vol-348, https://ceur-ws.org/Vol-348/worm08_contribution_12.pdf
     A Forgetting-based Approach for Reasoning with
           Inconsistent Distributed Ontologies

                   Guilin Qi, Yimin Wang, Peter Haase, Pascal Hitzler

             Institute AIFB, Universität Karlsruhe, D-76128 Karlsruhe, Germany
                   {gqi,ywa,pha,phi}@aifb.uni-karlsruhe.de



       Abstract. In the context of multiple distributed ontologies, we are often con-
       fronted with the problem of dealing with inconsistency. In this paper, we propose
       an approach for reasoning with inconsistent distributed ontologies based on con-
       cept forgetting. We first define concept forgetting in description logics. We then
       adapt the notions of recoveries and preferred recoveries in propositional logic
       to description logics. Two consequence relations are then defined based on the
       preferred recoveries.


1   Introduction

    Ontologies play a central role for the success of the Semantic Web as they provide
shared vocabularies for different domains, such as medicine. Web ontologies in prac-
tical applications are usually distributed and dynamic. One of the major challenges in
managing these distributed and dynamic ontologies is to handle potential inconsisten-
cies introduced by integrating multiple distributed ontologies.
    For inconsistency handling in single, centralized ontologies, several approaches are
known, see e.g. [10]. There are two main ways to deal with inconsistent ontologies [10].
One way is trying to live with the inconsistency and to apply a non-standard reasoning
method to obtain meaningful answers [11, 14]. For example, a general framework for
reasoning with inconsistent ontologies based on concept relevance was proposed in
[11]. The second way to deal with logical contradictions is to resolve logical model-
ing errors whenever a logical problem is encountered. For example, several methods
have been proposed to debug erroneous terminologies and have them repaired when
inconsistencies are detected [17, 16].
    At the same time, formalisms for modular ontologies – such as Distributed De-
scription Logics (DDL) [4] and E-Connections [15] – provide an opportunity to handle
multiple and distributed ontologies. Our approach is based on DDLs, where a special
interpretation, called a hole, is proposed to interpret locally inconsistent ontologies [4,
19]. However, this approach is not fine-grained enough to deal with inconsistency that
arises from several connected local sources.
    In [12], an approach based on variable forgetting is proposed to resolve inconsis-
tency in a propositional belief base. The basic idea of this approach is to restore con-
sistency of a belief base by forgetting propositional variables in some formulas of the
belief base. It has been shown that this approach provides a general and flexible frame-
work that encompasses several previous approaches for handling inconsistency as spe-
cific cases, such as coherence-based approaches for reasoning from preferred consistent
subsets and approaches for merging multiple sources of information.
     In this paper we define an approach for handling inconsistencies in distributed on-
tologies based on concept forgetting. We first define concept forgetting in description
logics. We then adapt the notion of recoveries and preferred recoveries in [12] to de-
scription logics. Based on the preferred recoveries, two forgetting consequence relations
are defined to draw nontrivial conclusions from inconsistent distributed ontologies. Our
approach can generally be applied to any formalism for distributed ontologies. How-
ever, in this paper we present our approach in terms of DDL for two reasons. First,
DDL has proven to be a suitable framework for representing distributed ontologies [2].
Second, there is some work which discusses inconsistency handling in DDL [19, 6],
whilst relatively little work on dealing with inconsistency has been given for alterna-
tive approaches. By comparing our approach with existing approaches, we can show
that our approach is indeed sufficiently powerful to handle inconsistency in distributed
ontologies.
     The rest of this paper is organized as follows. We first review DDL and variable
forgetting in Section 2 and discuss related work in Section 5. We introduce the notion
of forgetting in the description logic ALC in Section 3, and adapt the notions of
recoveries and preferred recoveries in propositional logic to DDL in Section 4. We
finally conclude the paper and outline future work in Section 6.


2      Preliminaries

2.1     Distributed Description Logics

    In this paper, we presume that the reader is familiar with Description Logics (DLs)
and we refer to Chapter 2 of the DL handbook [1]. We consider the well-known DL
ALC [18]. Let NC and NR be pairwise disjoint and countably infinite sets of concept
names and role names respectively. We use the letters A and B for concept names,
the letter R for role names. > and ⊥ denote the top concept and the bottom concept
respectively. The set of ALC concepts is the smallest set such that: (1) every concept
name is a concept; (2) if C and D are concepts, R is a role name, then the following
expressions are also concepts: ¬C (full negation), CuD (concept conjunction), CtD
(concept disjunction), ∀R.C (value restriction on role names) and ∃R.C (existential
restriction on role names). A DL knowledge base (or ontology)1 KB consists of a TBox
T and an ABox A. A TBox is a finite set of terminology axioms which are of the form
CvD, where C and D are concepts. An ABox is a finite set of assertions which are of
the form C(a) or R(a, b), where C is a concept, R is a role, and a, b are individuals.
    Distributed Description Logics (DDL) [4] adopt a linking mechanism. It is a well-
developed logical language for representing both ontologies and context [5]. In DDL,
a distributed knowledge base (DKB) D = ⟨{KB i }i∈I , B⟩ consists of a set of local
knowledge bases {KB i }i∈I in ALC and bridge rules B = {Bij }i≠j∈I that repre-
sent the correspondences between their terminologies. The semantic mappings between
modules KB i and KB j are represented in terms of axioms of the following form:
 1
     In this paper, we consider a DL knowledge base as an ontology and use the two terms interchangeably.
 – INTO: i : C −v→ j : D
 – ONTO: i : C −w→ j : D
The semantics of a DKB is given by a distributed interpretation J = ({Ii }i∈I , {rij }i≠j∈I )
that consists of standard DL interpretations Ii of KB i and binary relations rij ⊆ ∆Ii ×
∆Ij . We use rij (X) to denote ∪x∈X {y ∈ ∆Ij : (x, y) ∈ rij } for any X ⊆ ∆Ii . A
distributed interpretation J d-satisfies (denoted |=d ) the elements of the DKB D =
⟨{KB i }i∈I , B⟩ if
 – Ii |= A v B for all A v B in KB i ,
 – J |=d i : C −v→ j : D, if rij (C Ii ) ⊆ DIj (INTO),
 – J |=d i : C −w→ j : D, if rij (C Ii ) ⊇ DIj (ONTO),
 – J |=d D, if for every i, j∈I, J |=d KB i and J |=d Bij ,
where Ii and Ij are local interpretations of KB i and KB j , respectively, C, D are con-
cepts, and rij (called the domain relation) is a relation that represents an interpretation
of Bij . Finally, D |=d i : C v D if for every J, J |=d D implies J |=d i : C v D. More
details of the semantics of DDL can be found in [4].
    A DKB is inconsistent if there is no distributed interpretation J which d-satisfies all
KB i , where i ∈ I. Given a DKB D = ⟨{KB i }i∈I , B⟩, a DKB D′ = ⟨{KB ′i }i∈I , B′ ⟩ is a
sub-base of D, denoted D′ vD, iff KB ′i ⊆KB i for all i∈I and B′ij ⊆Bij for all i≠j∈I.
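To make the INTO and ONTO conditions concrete, the following sketch (an encoding of our own, not from the paper) checks them over finite concept extensions, with the domain relation rij given as a set of pairs:

```python
def r_image(r, xs):
    """rij(X): the union of the images under the domain relation r of all x in X."""
    return {y for (x, y) in r if x in xs}

def satisfies_into(r, c_ext, d_ext):
    """J d-satisfies the INTO rule i:C -v-> j:D iff rij(C^Ii) is a subset of D^Ij."""
    return r_image(r, c_ext) <= d_ext

def satisfies_onto(r, c_ext, d_ext):
    """J d-satisfies the ONTO rule i:C -w-> j:D iff rij(C^Ii) is a superset of D^Ij."""
    return r_image(r, c_ext) >= d_ext

# Toy domain relation from module i's domain {1, 2} into module j's domain
r12 = {(1, 'a'), (2, 'b')}
print(satisfies_into(r12, {1, 2}, {'a', 'b', 'c'}))  # True
print(satisfies_onto(r12, {1, 2}, {'a', 'b', 'c'}))  # False: 'c' has no preimage
```

In a real DDL reasoner these extensions would come from the local models; here they are hand-supplied sets.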


2.2   Variable forgetting

    Let L be a propositional language constructed from a finite alphabet P of proposi-
tional symbols, the Boolean constants > (true) and ⊥ (false), using the usual operators
¬ (not), ∨ (or) and ∧ (and). The classical consequence relation is denoted by `. Given a
propositional formula φ, V ar(φ) denotes the set of propositional variables occurring in
φ. φx←0 (resp. φx←1 ) denotes the formula obtained by replacing in φ every occurrence
of variable x by ⊥ (resp. >).
    Variable forgetting, also known as projection, was originally defined in [13]
and applied to resolve inconsistency in [12].

Definition 1. Let φ be a formula over P and V ⊆P be a set of propositional symbols.
The forgetting of V in φ, denoted as Forget(V, φ), is a formula over P , defined induc-
tively as follows:
 – Forget(∅, φ)≡φ;
 – Forget({x}, φ)≡φx←0 ∨ φx←1 ;
 – Forget({x}∪V, φ)≡Forget(V, Forget({x}, φ)).

Forget(V, φ) is the logically strongest consequence of φ which is independent of V .
By forgetting a set of variables in a formula φ we get a formula which is inferen-
tially weaker than φ, i.e. φ ` Forget(V, φ). For example, Forget({a}, ¬a∧b)≡b and
Forget({a}, a∨b)≡>.
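Semantically, a valuation of the remaining variables satisfies Forget(V, φ) iff some assignment to V extends it to a model of φ, so the definition can be prototyped by model projection. A minimal sketch, with formulas encoded as Python predicates over assignments (an encoding of our own):

```python
from itertools import product

def forget_models(formula, all_vars, forget_vars):
    """Models of Forget(forget_vars, formula), projected onto the kept variables."""
    keep = [v for v in all_vars if v not in forget_vars]
    result = set()
    for bits in product([False, True], repeat=len(all_vars)):
        w = dict(zip(all_vars, bits))
        if formula(w):                         # w is a model of phi
            result.add(tuple(w[v] for v in keep))
    return keep, result

# Forget({a}, ~a & b) is equivalent to b: its only model over {b} sets b to True
print(forget_models(lambda w: (not w['a']) and w['b'], ['a', 'b'], {'a'}))
# Forget({a}, a | b) is a tautology: every valuation of {b} is a model
print(forget_models(lambda w: w['a'] or w['b'], ['a', 'b'], {'a'}))
```

This reproduces the two examples above: the first call yields exactly the model of b, the second yields all valuations of b.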
3   Forgetting in Description Logics
    In this section, we define forgetting in a TBox of a DL knowledge base. The notion
of forgetting can be applied to any description logic. Here, a concept name plays the
same role as a variable in propositional logic. In particular, we define the forgetting of
a set of concept names in a concept. We use Con(C) to denote the concept names in a
concept C. CA←> (resp. CA←⊥ ) denotes the concept obtained by replacing in C every
occurrence of concept name A by > (resp. ⊥).
    A straightforward way to forget a concept name A in a concept C can be given
as follows: Forget({A}, C)=CA←> tCA←⊥ . However, this definition suffers from a
problem: by forgetting a concept name A in a concept C in a TBox T , we do
not necessarily obtain a concept which is logically weaker than C, i.e. T |= C v
Forget({A}, C) may fail, although this is a very important property of forgetting.
problem is that the DL concept constructors include value restrictions on role names
and existential restriction on role names. Let us look at an example. Suppose C =
∀R.(Au∀R0 .¬A) and we want to forget A. Since CA←> = ∀R.(>u∀R0 .⊥) = ∀R.(∀R0 .⊥)
and CA←⊥ = ∀R.(⊥u∀R0 .>) = ∀R.⊥, we have Forget({A}, C) = (∀R.(∀R0 .⊥)) t
∀R.⊥. In this case, we do not necessarily have T |= C v Forget({A}, C).
    We consider the following definition of forgetting.

Definition 2. Let C be a (complex) concept and A a concept name. Suppose C is in
negation normal form, i.e. negation appears only in front of concept names. The
forgetting of A in C, denoted Forget({A}, C), is the concept obtained by replacing every
occurrence of A and ¬A by >.

In Definition 2, it is very important that the concept C is transformed into its negation
normal form before we apply the forgetting operator to it. Otherwise, we cannot ensure
that CvForget({A}, C). For example, suppose C = ¬(A1 uB1 ) and we want to forget
A1 . If we replace A1 by > without transforming C into its negation normal form, then
we obtain C ′ = ¬B1 , which is subsumed by C, so C v C ′ does not hold in general.
    We give a running example to illustrate the notion of forgetting.
Example 1. Let us consider a TBox
T = {ProfessortPostDoctortResearchAssistantvEmployee, PhDStudentv
ResearchAssistant, PhDSupervisor ≡ Professoru∃isSupervisorof.PhDStudent}. Then,
   Forget({Professor}, ProfessortPostDoctortResearchAssistant) = > and
   Forget({Professor}, Professoru∃isSupervisorof.PhDStudent) = ∃isSupervisorof.
PhDStudent.
   The following proposition tells us that the forgetting of A in C indeed results in a
concept which is a super concept of C.
Proposition 1. Let C be a concept in a TBox T and A be a concept name. Then T |=
CvForget({A}, C).
    We now define the forgetting of a set of concept names in a concept.
Definition 3. Let C be a concept, and let SN be a finite set of concept names. The
forgetting of SN in C, denoted Forget(SN , C), is defined inductively as follows:
 – Forget(∅, C)=C;
 – Forget({A}∪SN , C) = Forget(SN , Forget({A}, C)).

Example 2. (Example 1 Continued) Let C = Professoru∃isSupervisorof.PhDStudent.
Suppose we want to forget Professor and PhDStudent in C, i.e., SN = {Professor,
PhDStudent}, then we have Forget(SN , C) = ∃isSupervisorof.>.
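Definitions 2 and 3 can be prototyped directly, with ALC concepts encoded as nested tuples (an encoding of our own): first normalize to negation normal form, then replace A and ¬A by ⊤. For Example 2 the code returns the unsimplified ⊤ ⊓ ∃isSupervisorof.⊤, which is equivalent to ∃isSupervisorof.⊤:

```python
# Concepts: ('atom', A), ('not', C), ('and', C, D), ('or', C, D),
# ('all', R, C), ('some', R, C), and the constants 'TOP', 'BOT'.

def nnf(c):
    """Negation normal form: push negation inward onto concept names."""
    if c in ('TOP', 'BOT') or c[0] == 'atom':
        return c
    if c[0] == 'not':
        d = c[1]
        if d == 'TOP': return 'BOT'
        if d == 'BOT': return 'TOP'
        if d[0] == 'atom': return c
        if d[0] == 'not': return nnf(d[1])
        if d[0] == 'and': return ('or', nnf(('not', d[1])), nnf(('not', d[2])))
        if d[0] == 'or': return ('and', nnf(('not', d[1])), nnf(('not', d[2])))
        if d[0] == 'all': return ('some', d[1], nnf(('not', d[2])))
        return ('all', d[1], nnf(('not', d[2])))          # d[0] == 'some'
    if c[0] in ('and', 'or'):
        return (c[0], nnf(c[1]), nnf(c[2]))
    return (c[0], c[1], nnf(c[2]))                        # 'all' / 'some'

def forget(names, c):
    """Definitions 2/3: in NNF, replace A and ('not', ('atom', A)) by TOP."""
    c = nnf(c)
    def go(d):
        if d in ('TOP', 'BOT'):
            return d
        if d[0] == 'atom':
            return 'TOP' if d[1] in names else d
        if d[0] == 'not':                                 # argument is an atom in NNF
            return 'TOP' if d[1][1] in names else d
        if d[0] in ('and', 'or'):
            return (d[0], go(d[1]), go(d[2]))
        return (d[0], d[1], go(d[2]))                     # 'all' / 'some'
    return go(c)

# Example 2: Forget({Professor, PhDStudent}, Professor ⊓ ∃isSupervisorof.PhDStudent)
C = ('and', ('atom', 'Professor'),
     ('some', 'isSupervisorof', ('atom', 'PhDStudent')))
print(forget({'Professor', 'PhDStudent'}, C))
```

Applied to the counterexample C = ¬(A1 ⊓ B1), the NNF step first yields ¬A1 ⊔ ¬B1, so forgetting A1 produces ⊤ ⊔ ¬B1 rather than the stronger ¬B1.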

    Unlike the forgetting notion in propositional theories, we do not have a correspond-
ing model-theoretic semantics for concept forgetting. However, we have the following
properties, which precisely characterize our notion of concept forgetting.

Proposition 2. Let C, D and E be concepts, and let A and A0 be two concept names.
We then have

 1. If C = A or ¬A, then Forget({A}, C) = >;
 2. If C = DuE (or DtE), where A∈Con(D) and A6∈Con(E), then Forget({A},
    C) = Forget({A}, D)uE (or Forget({A}, C) = Forget({A}, D)tE).
 3. If C = AtE, then Forget({A}, C) = >.

The proof of Proposition 2 follows directly from the definition of forgetting. Item 1 says
that if C is a concept name or the negation of a concept name, then by forgetting it we
get the top concept. Item 2 shows that forgetting a concept name A in a concept which
is a conjunction (or disjunction) does not affect the conjunct (or disjunct) which
is irrelevant to the concept name. Item 3 says that if C is a disjunction of the form AtE,
then by forgetting A we get the top concept.
     We now consider the weakening of a TBox T by forgetting a concept name A.
A first thought would be that we simply forget A in every concept appearing in T .
However, this approach may cause some problems. Suppose a terminology axiom CvD
is in T such that C = AuB, and A6∈Con(D) and A6∈Con(B). By forgetting A in
C we get a new axiom BvD. It is easy to check that BvD infers CvD whilst the
converse does not always hold. Therefore, CvD is not weakened; on the contrary,
it is strengthened. To solve this problem, we can equivalently transform every axiom of
the form CvD into >v¬CtD. According to Proposition 1, we have the following
corollary.

Corollary 1. Let φ be a terminology axiom of the form > v C and A a concept name.
Suppose A∈Con(C), then >vC |= >vForget({A}, C), but not vice versa.

Corollary 1 tells us that an axiom is weakened by forgetting a concept name in the
concept on its right-hand side.
    We give the notion of forgetting in a TBox.

Definition 4. Let T be a TBox in which every terminology axiom φ has been trans-
formed into the form > v C, and let SN be a finite set of concept names. The forgetting
of SN in φ is defined as Forget(SN , φ) = >vForget(SN , C), and the forgetting of SN
in T is defined as Forget(SN , T ) = {Forget(SN , φ) : φ∈T , Sig(φ)∩SN ≠ ∅} ∪ {φ∈T :
Sig(φ)∩SN = ∅}, where Sig(φ) denotes the signature of φ.
By Corollary 1, it is clear that every axiom in Forget(SN , T ) can be inferred from T ,
for any SN .

Example 3. (Example 1 Continued) Suppose we want to forget Professor in T . We need
some pre-processing. First, ProfessortPostDoctortResearchAssistantv Employee is
transformed into > v (¬Professor u ¬PostDoctor u ¬ResearchAssistant) t Employee.
Second, we split PhDSupervisor ≡ Professoru∃isSupervisorof.PhDStudent into
PhDSupervisor v Professoru∃isSupervisorof.PhDStudent and
    Professoru∃isSupervisorof.PhDStudent v PhDSupervisor, then transform them
into > v ¬PhDSupervisor t (Professor u ∃isSupervisorof.PhDStudent) and > v
¬Professor t ¬∃isSupervisorof.PhDStudent t PhDSupervisor. Therefore, we have

Forget({Professor}, T ) = {> v (¬PostDoctor u ¬ResearchAssistant) t Employee,
                              > v ¬PhDSupervisor t ∃isSupervisorof.PhDStudent,
                              PhDStudentvResearchAssistant}.
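At the TBox level, the computation of Example 3 amounts to forgetting in the right-hand side of each axiom ⊤ ⊑ C and simplifying ⊤ away. A minimal sketch, assuming the right-hand sides are already in NNF; the n-ary list encoding and the on-the-fly ⊤-simplification are our own, while Definition 4 itself only replaces A and ¬A by ⊤:

```python
def forget_nnf(names, c):
    """Forgetting in an NNF concept, with TOP simplified away on the fly.
    Encoding: a concept name or 'TOP'/'BOT' is a string; ('not', A) negates a
    name; ('and', ...) / ('or', ...) are n-ary; ('all', R, C) / ('some', R, C)."""
    if isinstance(c, str):
        return 'TOP' if c in names else c
    op = c[0]
    if op == 'not':
        return 'TOP' if c[1] in names else c
    if op in ('all', 'some'):
        return (op, c[1], forget_nnf(names, c[2]))
    args = [forget_nnf(names, a) for a in c[1:]]
    if op == 'and':
        args = [a for a in args if a != 'TOP']            # TOP ⊓ X ≡ X
        if not args:
            return 'TOP'
        return args[0] if len(args) == 1 else ('and', *args)
    if 'TOP' in args:                                     # TOP ⊔ X ≡ TOP
        return 'TOP'
    return ('or', *args)

# First axiom of Example 3, already rewritten as TOP ⊑ C:
C = ('or', ('and', ('not', 'Professor'), ('not', 'PostDoctor'),
            ('not', 'ResearchAssistant')), 'Employee')
print(forget_nnf({'Professor'}, C))
# -> ('or', ('and', ('not', 'PostDoctor'), ('not', 'ResearchAssistant')), 'Employee')
```

Axioms whose right-hand side collapses to ⊤, such as ⊤ ⊑ ¬Professor ⊔ E when Professor is forgotten, become trivial and can be dropped, matching the example.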


4     Recoveries for Distributed DL Knowledge Bases

    In this section, we apply the forgetting-based approach for resolving inconsistency
in a propositional knowledge base [12] to deal with inconsistency in a DKB by forget-
ting concepts in the TBoxes of local ontologies. We do not consider inconsistency that
is caused only by the ABoxes, as this kind of inconsistency may be better addressed by
performing some kind of weakening on the ABoxes of local ontologies. Dealing with
this problem is also important, but it is out of the scope of this paper.


4.1     Recoveries

     In [12], the two key notions of the approach are the forgetting context and the
recovery vector. The former is a collection of sets of variables that may be forgotten in
each formula of the knowledge base, and the latter is a vector of subsets of the set of all
the variables appearing in the knowledge base which satisfies the constraints in the
forgetting context. Forgetting the variables in the recovery vector results in a consistent
knowledge base. We adapt the definitions of forgetting context and recovery vector to
DDL.
     In the following, we assume that I = {1, ..., n} for some n∈N. The forgetting
context in DDL is defined as follows.

Definition 5. Given a DKB D = ⟨{KB i }i∈I , B⟩, where KB i = ⟨Ri , Ti , Ai ⟩, a forget-
ting context CD for it is a triple ⟨F, G, H⟩ where:

 – F = ⟨Fi ⟩i∈I such that Fk ⊆NC with k∈I, and for each i∈I, Fi is the set of concept
   names that can be forgotten in Ti ;
 – G = ⟨Gi ⟩i∈I such that for each i, Gi = {(Ai , A′i ) : Ai , A′i ∈CN (Ti )}, where
   CN (Ti ) is the set of all concept names in Ti and a pair (Ai , A′i ) in Gi means that
   A′i is meaningful only if Ai exists;
 – H⊆NC ×{1, ..., n} × NC ×{1, ..., n} is a set of quadruples (A, i, B, j) such that
   A∈Fi , B∈Fj and i≠j, which means that if A is forgotten in Ti , then B must be
   forgotten as well in Tj .

Fi imposes the constraints on the concept names which can be forgotten in Ti . That
is, if a concept name A∉Fi , then forgetting A in Ti is forbidden. Gi contains the pairs
of concept names in the local TBox Ti such that the existence of the latter is meaningful
or significant only in the presence of the former. In other words, forgetting the former
imposes forgetting the latter. H imposes constraints across TBoxes: it enforces that if A
is forgotten in Ti then B must also be forgotten in Tj . The forgetting context is depen-
dent on the application at hand. A simple example of a forgetting context is the standard
forgetting context, which is given by Fi = NC for every i = 1, ..., n, Gi = ∅ for every
i = 1, ..., n, and H = ∅. Note that our approach does not currently include weakening
of bridge rules. Such a weakening may be preferable in some situations (for example,
to improve mappings created by different matching systems [6]), but its treatment is out
of the scope of this paper.
     We next define a recovery vector in DDL. It is a vector of subsets of concept names
of NC , which satisfies the constraints in the forgetting context.
Definition 6. Given a DKB D = ⟨{KB i }i∈I , B⟩, where KB i = ⟨Ri , Ti , Ai ⟩, and a
forgetting context CD , a recovery vector (or recovery for short) V⃗ for D given CD is a
vector V⃗ = ⟨V1 , ..., Vn ⟩ of subsets of NC s.t.:

 – for every i∈I, Vi ⊆Con(Ti ),
 – for every i∈I, for any (A, A′ )∈Gi , if A∈Vi then A′ ∈Vi ,
 – for every (A, i, B, j)∈H, A∈Vi implies B∈Vj ,
 – D|V⃗ = ⟨{⟨Forget(Vi , Ti ), Ai ⟩}i∈I , B⟩ is consistent.

D|V⃗ is called the cure of V⃗ for D given CD . We denote the set of all recoveries for D
given CD as RCD (D).
According to the definition of recovery, the cure of V⃗ for D is always consistent if
the recovery V⃗ exists. When RCD (D) = ∅, we say that D is not recoverable w.r.t.
CD . This may happen when some concepts, which cannot be forgotten according to the
forgetting context, must be forgotten to restore consistency.
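The search for recoveries can be sketched as a brute-force enumeration over the allowed forgetting sets, filtering by the Gi and H constraints and by a consistency check on the cure. The sketch below is our own illustration; `is_consistent` stands in for a DL reasoner applied to the cure and is hypothetical:

```python
from itertools import combinations, product

def powerset(s):
    xs = list(s)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def recoveries(F, G, H, is_consistent):
    """Recovery vectors (Definition 6). F[i]: names forgettable in T_i;
    G[i]: pairs (A, A2) forcing A2 into V_i whenever A is in V_i; H: quadruples
    (A, i, B, j); is_consistent(vec): hypothetical oracle deciding whether the
    cure D|vec is consistent (in practice, a DL reasoner on the weakened TBoxes)."""
    n = len(F)
    for vec in product(*(powerset(F[i]) for i in range(n))):
        if any(A in vec[i] and A2 not in vec[i]
               for i in range(n) for (A, A2) in G[i]):
            continue                                  # violates some G_i
        if any(A in vec[i] and B not in vec[j] for (A, i, B, j) in H):
            continue                                  # violates H
        if is_consistent(vec):
            yield vec

# Toy instance: the cure is consistent iff 'A' is forgotten in T_1 or 'B' in T_2,
# and H forces forgetting 'B' in T_2 whenever 'A' is forgotten in T_1.
F = [{'A'}, {'B'}]
G = [[], []]
H = {('A', 0, 'B', 1)}
oracle = lambda vec: 'A' in vec[0] or 'B' in vec[1]   # stand-in reasoner stub
print(list(recoveries(F, G, H, oracle)))
```

The enumeration is exponential in the number of forgettable names, so this is only a specification-level sketch, not a practical procedure.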
    We give an example to illustrate the definitions in this section.

Example 4. Suppose we have a DKB D = ⟨{KB 1 , KB 2 }, {B12 , B21 }⟩, where
(1) KB 1 = ⟨R1 , T1 , A1 ⟩:
 – R1 = ∅;
 – T1 = {ProfessortPostDoctortResearchAssistantvEmployee,
   PhDStudentvResearchAssistant,
   PhDSupervisor ≡ Professoru∃isSupervisorof.PhDStudent};
 – A1 = {PhDStudent(M ary), PhDSupervisor(T om)};
(2) KB 2 = ⟨R2 , T2 , A2 ⟩:
 – R2 = ∅;
 – T2 ={ProfessortLecturervAcademicStaff, StudentuAcademicStaffv⊥,
   PhDSupervisor ≡ AcademicStaffu∃isSupervisorof.PhDStudent,
   PhDStudentvStudent};
 – A2 = {PhDStudent(John)};

(3) B21 = {2 : AcademicStaff −w→ 1 : Employee, 2 : PhDStudent −w→ 1 : PhDStudent,
2 : ¬AcademicStaff −v→ 1 : ¬Employee, 2 : ¬PhDStudent −v→ 1 : ¬PhDStudent}
and B12 = ∅. KB 1 and KB 2 are two university ontologies which are connected by
bridge rules.
    Let D′ = ⟨{T1 , T2 }, {B12 , B21 }⟩. Since D′ |=d PhDStudentv⊥ and PhDStudent
(M ary)∈A1 , D is inconsistent. We have the following forgetting context CD for D:

 – F = ⟨Fi ⟩i=1,2 , where F1 = F2 =
   {AcademicStaff, Employee, Student, Professor, Lecturer,
   ResearchAssistant, PhDStudent, PhDSupervisor}.
 – G = ⟨Gi ⟩i=1,2 , where
   G1 ={(PhDStudent, PhDSupervisor)}; G2 ={(PhDStudent, PhDSupervisor)}.
 – H = {(PhDStudent, 1, PhDStudent, 2)}.

For each local ontology, all the concept names in it can be forgotten. G1 expresses
that the concept PhDSupervisor is not meaningful if the concept PhDStudent is
forgotten; G2 can be explained similarly. H ensures that if PhDStudent is forgotten
in KB 1 , then PhDStudent must also be forgotten in KB 2 .
    Given the forgetting context CD , the following are two recoveries for D:

 – V⃗1 = ⟨V1 , V2 ⟩, where V1 = {ResearchAssistant} and V2 = ∅:
   Forget(V1 , T1 ) = {ProfessortPostDoctorvEmployee,
   PhDSupervisor ≡ Professoru∃isSupervisorof.PhDStudent}, Forget(V2 , T2 ) = T2 .
   It is easy to check that the DKB D1 = ⟨{⟨Forget(Vi , Ti ), Ai ⟩}i=1,2 , B⟩ is consis-
   tent.
 – V⃗2 = ⟨V1 , V2 ⟩, where V1 = ∅ and V2 = {PhDStudent, PhDSupervisor}:
   Forget(V1 , T1 ) = T1 ,
   Forget(V2 , T2 ) = {ProfessortLecturervAcademicStaff, StudentuAcademicStaffv⊥}.
   The DKB D2 = ⟨{⟨Forget(Vi , Ti ), Ai ⟩}i=1,2 , B⟩ is consistent.

However, the following vector is not a recovery for D given CD : V⃗ = ⟨V1 , V2 ⟩, where
V1 = {PhDStudent} and V2 = ∅, because it does not satisfy the constraints of H in
CD .
4.2   Preferred Recoveries
     Given a forgetting context, there may exist several different recoveries. In many
cases, we are only interested in some of these recoveries, called preferred recoveries,
which are defined by some preference relation on recoveries. For example, among all
the recoveries, we may only be interested in those that forget a minimal number of
concepts. The notion of preferred recovery was originally defined in [12] in propositional
logic. We adapt it to DDLs here. We need to define a preference relation on the set of
all recoveries for a DKB D given a forgetting context CD for it. For any two recoveries
V⃗ and V⃗ ′ of length n, V⃗ ⊆c V⃗ ′ means that Vi ⊆Vi′ for each i∈{1, ..., n}.
Definition 7. Given a DKB D and a forgetting context CD for it, a preference relation ⪯
on RCD (D) is a reflexive and transitive relation which satisfies the following property:

                  ∀ V⃗ , V⃗ ′ ∈RCD (D), if V⃗ ⊆c V⃗ ′ , then V⃗ ⪯ V⃗ ′ .

As usual, we write V⃗ ≺ V⃗ ′ for V⃗ ⪯ V⃗ ′ and V⃗ ′ ⋠ V⃗ , and V⃗ ∼ V⃗ ′ for V⃗ ⪯ V⃗ ′ and
V⃗ ′ ⪯ V⃗ .

In Definition 7, a recovery V⃗ is at least as preferred as another one V⃗ ′ (V⃗ ⪯ V⃗ ′ ) if each
Vi in V⃗ is a subset of Vi′ in V⃗ ′ .
    There are many ways to define a preference relation. For instance, we can prefer
recoveries that lead to forgetting minimal sets of concepts w.r.t. set containment. We
can also ask experts or users for such a preference relation. We adapt two specific
preference relations from [12] as follows.

 – binaricity preference relation ⪯bin : ∀ V⃗ , V⃗ ′ ∈RCD (D), if for every i = 1, ..., n,
   (Vi ≠∅ ⇔ Vi′ ≠∅), then V⃗ ∼bin V⃗ ′ .
 – ranking function based preference relation ⪯µ : suppose µ is a ranking function
   from RCD (D) to N such that µ(⟨∅, ..., ∅⟩) = 0, and ∀ V⃗ , V⃗ ′ ∈RCD (D), if V⃗ ⊆c V⃗ ′ ,
   then µ(V⃗ )≤µ(V⃗ ′ ). The preference relation ⪯µ induced by µ is the total pre-
   ordering defined as follows. For all V⃗ , V⃗ ′ ∈RCD (D), V⃗ ⪯µ V⃗ ′ if and only if
   µ(V⃗ )≤µ(V⃗ ′ ).

The ranking function based preference relation is defined by a ranking function from
RCD (D) to N. A particular ranking function can be defined by µcard (V⃗ ) = |∪V⃗ |,
where ∪V⃗ = V1 ∪...∪Vn . We denote the preference relation based on µcard as ⪯card .
A recovery V⃗ is preferred to another one V⃗ ′ w.r.t. the preference relation ⪯card if and
only if the cardinality of the union of the sets Vi (i = 1, ..., n) is smaller than that of the
union of the sets Vi′ (i = 1, ..., n).
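The cardinality ranking µcard and the selection of most preferred recoveries under the induced total preorder can be sketched as follows (recovery vectors encoded as tuples of sets; the encoding is our own illustration):

```python
def mu_card(vec):
    """mu_card(V) = |V_1 ∪ ... ∪ V_n|, the number of distinct forgotten names."""
    return len(set().union(*vec))

def most_preferred(recoveries, mu=mu_card):
    """Minimal recoveries w.r.t. the total preorder induced by the ranking mu."""
    recs = list(recoveries)
    best = min(mu(v) for v in recs)
    return [v for v in recs if mu(v) == best]

# The two recoveries of Example 4: the first forgets one name, the second two
v1 = ({'ResearchAssistant'}, set())
v2 = (set(), {'PhDStudent', 'PhDSupervisor'})
print(most_preferred([v1, v2]))   # keeps only v1: mu_card is 1 versus 2
```

Any monotone ranking function can be plugged in for `mu`; µcard is just one concrete choice.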
    By Corollary 1, when a concept name is forgotten in a DL knowledge base, the
resulting DL knowledge base is weaker than the original one w.r.t. inference power.
Given V⃗ , V⃗ ′ ∈RCD (D), if V⃗ ⊆c V⃗ ′ then D|V⃗ |= D|V⃗ ′ . Therefore, among all the
recoveries from RCD (D), those which are minimal w.r.t. ⊆c lead to cures preserving
as much information as possible given the forgetting context CD .
    Next, we define two consequence relations. We denote by V⃗⪯_{D,CD} the set of all
⪯-minimal recoveries in RCD (D).

Definition 8. Given a DKB D and a forgetting context CD for it, let ⪯ be a preference
relation on RCD (D). Let φ be a DL axiom. Then φ is said to be skeptically inferred
from D w.r.t. ⪯, denoted by D |=⪯,Ske_{CD} φ, if and only if D|V⃗ |= φ for all
V⃗ ∈ V⃗⪯_{D,CD} . φ is called a skeptical consequence of D.

   A DL axiom is a skeptical consequence of a DKB D if and only if it can be inferred
from the cures of all the most preferred recoveries.
Example 5. (Example 4 Continued) Suppose ⪯=⪯card . Then V⃗1 is a preferred recov-
ery because |∪V⃗1 | = 1 and no other recovery can have smaller cardinality.
The other preferred recoveries are given as follows:

 – V⃗3 = ⟨V1 , V2 ⟩, where V1 = {Employee} and V2 = ∅. We have
   Forget(V1 , T1 ) = {PhDStudentvResearchAssistant,
   PhDSupervisor≡Professoru∃isSupervisorof.PhDStudent} and Forget(V2 , T2 ) = T2 .
 – V⃗4 = ⟨V1 , V2 ⟩, where V1 =∅ and V2 ={AcademicStaff}. We have Forget(V1 , T1 ) =
   T1 and Forget(V2 , T2 )={PhDStudentvStudent,
   PhDSupervisorv∃isSupervisorof.PhDStudent}.
 – V⃗5 = ⟨V1 , V2 ⟩, where V1 = ∅ and V2 = {Student}. We have Forget(V1 , T1 ) = T1
   and Forget(V2 , T2 ) = {ProfessortLecturervAcademicStaff,
   PhDSupervisor ≡ AcademicStaffu∃isSupervisorof.PhDStudent}.

Since PhDSupervisor≡Professoru∃isSupervisorof.PhDStudent is in Forget(V1 , T1 ) for
each of the preferred recoveries V⃗1 , V⃗3 , V⃗4 , V⃗5 , we have that D|V⃗ |= Professor(T om)
for all V⃗ ∈ V⃗⪯_{D,CD} . That is, we can conclude that T om is a professor using skeptical
inference.

     Suppose D = ⟨{KB i }i∈I , B⟩ is a DKB. A maximal consistent sub-base of D w.r.t.
terminology is a sub-base D′ = ⟨{KB ′i }i∈I , B⟩ (with KB ′i = ⟨R′i , Ti′ , A′i ⟩) of D which
satisfies the following conditions: (1) D′ is consistent; (2) for each i∈I, Ti′ ∈ {Ti , ∅},
R′i = Ri , and A′i = Ai ; (3) there does not exist a sub-base D′′ of D which satisfies (1)
and (2) such that {Ti′ : Ti′ ≠ ∅} ⊂ {Ti′′ : Ti′′ ≠ ∅}.

Proposition 3. Let D be a DKB and MaxCons(D) the set of all maximal consistent
sub-bases of D. Suppose CD is the standard forgetting context and the preference rela-
tion ⪯ is a binaricity preference relation. Then for every DL axiom φ, D |=⪯,Ske_{CD} φ
if and only if Di |= φ for all Di ∈MaxCons(D).

Proposition 3 tells us that the skeptical consequence relation based on the binaricity
preference relation is the same as the consequence relation based on maximal consistent
sub-bases w.r.t. terminology.
    In the case that many preferred recoveries exist (see Example 4), the skeptical con-
sequence relation is rather weak (i.e., only a few DL axioms can be inferred). Therefore,
we propose another consequence relation.
Definition 9. Given a DKB D and a forgetting context CD for it, let ⪯ be a preference
relation on RCD (D). Let φ be a DL axiom and ¬φ its negation2 . Then φ is said to be
argumentatively inferred from D w.r.t. ⪯, denoted by D |=⪯,Arg_{CD} φ, if and only if
there exists a V⃗ ∈ V⃗⪯_{D,CD} such that D|V⃗ |= φ and there does not exist a V⃗ ′ ∈
V⃗⪯_{D,CD} such that D|V⃗ ′ |= ¬φ. φ is called an argumentative consequence of D.

A DL axiom is an argumentative consequence if and only if it can be inferred from a most preferred recovery and its negation cannot be inferred from any most preferred recovery. Therefore, the argumentative consequence relation does not lead to contradictory conclusions. Note that our argumentative consequence relation differs from the consequence relation defined in the logic-based framework for argumentation [3]: a most preferred recovery can be viewed as maximal recovered consistent information in the DKB, whilst an argument in an argumentation theory is a minimal consistent subset that can infer a conclusion. Unlike an argument in argumentation theory, each axiom in a most preferred recovery is self-justified. A weakness of the argumentative consequence relation is that it may be too adventurous, because an argumentative consequence need not be inferred by every most preferred recovery (only its negation is guaranteed not to be inferred by any). The relationship between this consequence relation and the skeptical consequence relation is summarized by the following proposition.

Proposition 4. Given a DKB D and a forgetting context CD for it, suppose φ is a DL axiom. If D |=^{CD,Ske}_≼ φ, then D |=^{CD,Arg}_≼ φ.

That is, every skeptical consequence of a DKB is an argumentative consequence.
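The two consequence relations can be phrased operationally: skeptical inference requires entailment from every most preferred recovery, while argumentative inference requires entailment from some recovery and non-entailment of the negation from all of them. A minimal sketch in Python, where recoveries are modeled as plain sets of literals and entailment as membership (a simplifying assumption standing in for DL entailment):

```python
def skeptical(recoveries, phi):
    """phi follows skeptically iff every most preferred recovery entails it."""
    return all(phi in r for r in recoveries)

def argumentative(recoveries, phi, neg_phi):
    """phi follows argumentatively iff some recovery entails it and
    no recovery entails its negation."""
    return (any(phi in r for r in recoveries)
            and not any(neg_phi in r for r in recoveries))

# Toy data: "p" holds in all recoveries, "q" only in some, "not q" in none.
recs = [{"p", "q"}, {"p"}, {"p", "q"}]
print(skeptical(recs, "p"))               # → True
print(skeptical(recs, "q"))               # → False (fails in the second recovery)
print(argumentative(recs, "q", "not q"))  # → True (q somewhere, ¬q nowhere)
```

The toy run also illustrates Proposition 4: "p" is a skeptical consequence and therefore an argumentative one, whereas "q" is only argumentative.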
Example 6. (Example 5 continued) It is easy to check that, for i = 3, 4, 5, D|V⃗i |= ResearchAssistant(Mary) because PhDStudent(Mary) ∈ A1 and PhDStudent ⊑ ResearchAssistant ∈ Forget(Vi, T1). Since ResearchAssistant ∉ Con(Forget(V1, T1)) and ResearchAssistant ∉ Con(T2), we must have that D|V⃗1 ⊭ ¬ResearchAssistant(Mary). Therefore, D |=^{CD,Arg}_≼ ResearchAssistant(Mary).


5      Related Work
    To handle inconsistency in DDL, a special interpretation, called a hole, has been proposed to interpret locally inconsistent ontologies [4, 19]. In DDL, a hole for an ontology is an interpretation Iε = ⟨∅, ·^{Iε}⟩, i.e., an interpretation whose domain is empty. With holes, even if a local ontology is inconsistent, there still exists a global model for the distributed knowledge base. However, applying a hole to a local ontology is equivalent to removing this ontology and all related bridge rules from the distributed knowledge base (see the discussions in [4, 19]). This all-or-nothing semantics is clearly not advisable. Furthermore, when the inconsistency is caused by several local ontologies connected via mappings, it is not clear how to apply the hole interpretation to deal with it. Although it is possible to translate an inconsistent DKB into a DL knowledge base and then apply the paraconsistent semantics given in [14] or [11] to infer non-trivial conclusions, this approach does not differentiate bridge rules from axioms in local ontologies and is thus different from our approach.

 ² The notion of axiom negation is defined in [9].
    In [6], the authors deal with the problem of inconsistency in DDL by removing some bridge rules which are responsible for the inconsistency. The idea of their approach is similar to that of the approaches for debugging and repairing terminologies in a centralized ontology. Our work takes a different view on inconsistency handling in DDL: we assume that the bridge rules are reliable and weaken the axioms in local ontologies to restore consistency.
    In [7], the notion of forgetting is given for a class of OWL/RDF(S) ontologies. Their definition of forgetting is semantically meaningful, but it imposes some restrictions on the axioms in the ontologies. Moreover, the computational complexity of the algorithm for computing the forgetting literals may be exponential in the worst case [8]. In contrast, our definition of forgetting does not place any restrictions on the axioms in an ontology, and it can be computed in polynomial time. Although we do not have a corresponding model-theoretic definition for our notion of forgetting, we justify our definition by analyzing its logical properties. Also, the focus of this paper is to apply the notion of forgetting to deal with inconsistency in DDL knowledge bases.

6   Conclusions and Future Work
    In this paper, a forgetting-based approach for reasoning with inconsistent distributed ontologies was proposed. We first defined the notion of concept forgetting in the DL ALC and gave some of its properties. The definitions of recoveries and preferred recoveries in the propositional logic setting [12] were then adapted to DDL. Based on the preferred recoveries, two consequence relations on inconsistent DKBs were defined: the skeptical consequence relation and the argumentative consequence relation. We showed that every skeptical consequence of a DKB is an argumentative consequence. Our definition of concept forgetting does not have a corresponding model-theoretic semantics. One of the reasons that prevent us from defining concept forgetting by simply extending variable forgetting is that we allow value restrictions and existential restrictions on role names. We will consider a less expressive DL that allows neither value restrictions nor existential restrictions on role names and see if we can define a semantically meaningful notion of concept forgetting. We will also discuss complexity issues of our approach in future work.

Acknowledgments
   This work is partially supported by the EU under the IST project NeOn (IST-2006-027595, http://www.neon-project.org/).

References
 1. Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter Patel-
    Schneider. The Description Logic Handbook: Theory, Implementation and Applications. Cam-
    bridge University Press, 2003.
 2. Jie Bao, Doina Caragea, and Vasant Honavar. On the semantics of linking and importing in
    modular ontologies. In Proc. of ISWC’06, pages 72–86, 2006.
 3. Philippe Besnard and Anthony Hunter. A logic-based theory of deductive arguments. Artif.
    Intell., 128(1-2):203–235, 2001.
 4. Alexander Borgida and Luciano Serafini. Distributed description logics: Assimilating infor-
    mation from peer sources. J. Data Semantics, 1:153–184, 2003.
 5. Paolo Bouquet, Fausto Giunchiglia, Frank van Harmelen, Luciano Serafini, and Heiner
    Stuckenschmidt. Contextualizing ontologies. J. Web Sem., 1(4):325–343, 2004.
 6. Christian Meilicke, Heiner Stuckenschmidt, and Andrei Tamilin. Repairing ontology map-
    pings. In Proc. of AAAI'07, pages 1408–1413, 2007.
 7. Thomas Eiter, Giovambattista Ianni, Roman Schindlauer, Hans Tompits, and Kewen Wang.
    Forgetting in managing rules and ontologies. In Web Intelligence, pages 411–419, 2006.
 8. Thomas Eiter and Kewen Wang. Forgetting and conflict resolving in disjunctive logic pro-
    gramming. In Proc. of AAAI’06, 2006.
 9. Giorgos Flouris, Zhisheng Huang, Jeff Z. Pan, Dimitris Plexousakis, and Holger Wache.
    Inconsistencies, negations and changes in ontologies. In Proc. of AAAI’06, pages 1295–
    1300, 2006.
10. Peter Haase, Frank van Harmelen, Zhisheng Huang, Heiner Stuckenschmidt, and York Sure.
    A framework for handling inconsistency in changing ontologies. In Proc. of ISWC’05, pages
    353–367, 2005.
11. Zhisheng Huang, Frank van Harmelen, and Annette ten Teije. Reasoning with inconsistent
    ontologies. In Proc. of IJCAI'05, pages 254–259, 2005.
12. Jérôme Lang and Pierre Marquis. Resolving inconsistencies by variable forgetting. In Proc.
    of KR’02, pages 239–250, 2002.
13. F. Lin and R. Reiter. Forget it! In AAAI Fall Symposium on Relevance, 1994.
14. Yue Ma, Pascal Hitzler, and Zuoquan Lin. Algorithms for paraconsistent reasoning with OWL.
    In Proc. of ESWC'07, pages 399–413, 2007.
15. Bijan Parsia and Bernardo Cuenca Grau. Generalized link properties for expressive epsilon-
    connections of description logics. In Proc. of AAAI’05, pages 657–662, 2005.
16. Bijan Parsia, Evren Sirin, and Aditya Kalyanpur. Debugging OWL ontologies. In Proc. of
    WWW’05, pages 633–640, 2005.
17. Stefan Schlobach and Ronald Cornet. Non-standard reasoning services for the debugging of
    description logic terminologies. In Proc. of IJCAI’03, pages 355–362. Morgan-Kaufmann,
    2003.
18. Manfred Schmidt-Schauß and Gert Smolka. Attributive concept descriptions with comple-
    ments. Artificial Intelligence, 48(1):1–26, 1991.
19. Luciano Serafini, Alexander Borgida, and Andrei Tamilin. Aspects of distributed and mod-
    ular ontology reasoning. In Proc. of IJCAI’05, pages 570–575, 2005.