Reasoning-Supported Interactive Revision of Knowledge Bases

Nadeschda Nikitina1, Sebastian Rudolph1, and Birte Glimm2

1 Karlsruhe Institute of Technology, Germany, {nikitina,rudolph}@kit.edu
2 The University of Oxford, Department of Computer Science, UK, birte.glimm@cs.ox.ac.uk

Abstract. We propose a method for controlling the quality of (semi-)automatically acquired axioms. We combine the manual inspection of axioms with automatic evaluation decisions and propose decision spaces as a means to efficiently compute which decisions can be automatized and which axiom evaluation order is beneficial.

1 Introduction

Manual knowledge formalization for real-world knowledge-intensive applications is highly time-consuming. Applying (semi-)automatic knowledge acquisition methods such as ontology learning or matching is therefore often considered a reasonable way to reduce the cost of ontology development. Automatically acquired knowledge usually has to be inspected manually, either partially, to estimate the overall quality, or even fully, to maintain high quality standards.

So far, the knowledge representation community has focused on restoring the consistency of ontologies enriched with new axioms, as done in various belief revision and repair approaches. Such approaches, however, are not directly suited when a more restrictive quality control is required. We support an exhaustive manual inspection of newly acquired axioms before the selected ones are added to the ontology and call this process interactive ontology revision. Once a decision (add or not, i.e., accept or decline) has been made, we determine which other axioms can be evaluated automatically by exploiting logical dependencies between axioms.

We illustrate the main challenges with an example in which we have already confirmed that the axioms

Metal ⊑ Chemical_Element (1)
Chemical_Element ⊑ Material (2)

belong to the desired consequences, while the following axioms are still to be evaluated:

Copper ⊑ Material (3)
Copper ⊑ Chemical_Element (4)
Copper ⊑ Metal (5)

If Axiom (3) is declined, we can immediately also decline Axioms (4) and (5), since accepting either of them would implicitly lead to the undesired consequence (3). Similarly, if Axiom (5) is approved, Axioms (3) and (4) are implicit consequences, which can be approved automatically. If we start, however, by declining Axiom (5), no automatic evaluation can be performed. It can be observed that

– a high degree of automation requires a good evaluation order, and
– approval and decline decisions have a different impact.

Which axioms have the highest impact on decline or approval, and which axioms can be evaluated automatically once a decision has been made, can be determined with the help of algorithms for automated reasoning. Even for knowledge representation formalisms of low expressivity, reasoning is an expensive task, and in an interactive setting it is crucial to minimize the number of reasoning tasks while maximizing the number of automated decisions. We reduce the number of reasoning tasks by transferring ideas from ontology classification [8] to our problem. For this, we introduce the notion of decision spaces, which exploit the characteristics of the logical entailment relation between axioms to maximize the amount of information gained by reasoning. The evaluation of our prototypical system shows that a considerable proportion of axioms can be evaluated automatically.
Furthermore, decision spaces significantly reduce the number of required reasoning operations, resulting in a considerable performance gain.

In the next section we formalize the basic notions and ideas; in Section 3, we define decision spaces, show how they can be updated, and explain how they help to determine a beneficial axiom order. Our evaluation is presented in Section 4. Finally, we discuss related approaches in Section 5 before we conclude in Section 6. Further details and proofs can be found in a technical report [4]. This paper is an adaptation of our IJCAI 2011 paper [5].

2 Revision of Knowledge Bases

The approach proposed here is not only applicable to Description Logics, but to any logic where taking all consequences is a closure operation, i.e., one that is extensive ({ϕ} ⊨ ϕ), monotone (Φ ⊨ ϕ implies Φ ∪ Ψ ⊨ ϕ), and idempotent (Φ ⊨ ϕ and Φ ∪ {ϕ} ⊨ ψ imply Φ ⊨ ψ). Moreover, we presume the existence of a decision procedure for logical entailment. The revision of a knowledge base K aims at a separation of its axioms (i.e., logical statements) into two disjoint sets: the set of wanted consequences K⊨ and the set of unwanted consequences K⊭. This motivates the following definitions.

Definition 1 (Revision State). A revision state is defined as a tuple (K, K⊨, K⊭) of knowledge bases with K⊨ ⊆ K, K⊭ ⊆ K, and K⊨ ∩ K⊭ = ∅. Given two revision states (K, K₁⊨, K₁⊭) and (K, K₂⊨, K₂⊭), we call (K, K₂⊨, K₂⊭) a refinement of (K, K₁⊨, K₁⊭) if K₁⊨ ⊆ K₂⊨ and K₁⊭ ⊆ K₂⊭. A revision state is complete if K = K⊨ ∪ K⊭, and incomplete otherwise.

An incomplete revision state (K, K⊨, K⊭) can be refined by evaluating a further axiom α ∈ K \ (K⊨ ∪ K⊭), obtaining (K, K⊨ ∪ {α}, K⊭) or (K, K⊨, K⊭ ∪ {α}). We call the resulting revision state an elementary refinement of (K, K⊨, K⊭).

Since we expect that the deductive closure of the wanted consequences in K⊨ must not contain unwanted consequences, we introduce the notion of consistency for revision states. If we want to maintain consistency, a single evaluation decision can predetermine the decision for several yet unevaluated axioms. These implicit consequences of a refinement are captured in the revision closure.

Definition 2 (Revision State Consistency and Closure). A (complete or incomplete) revision state (K, K⊨, K⊭) is consistent if there is no α ∈ K⊭ such that K⊨ ⊨ α. The revision closure clos(K, K⊨, K⊭) of (K, K⊨, K⊭) is (K, Kc⊨, Kc⊭) with Kc⊨ := {α ∈ K | K⊨ ⊨ α} and Kc⊭ := {α ∈ K | K⊨ ∪ {α} ⊨ β for some β ∈ K⊭}.

We can show the following useful properties of the closure of consistent revision states:

Lemma 1. For (K, K⊨, K⊭) a consistent revision state,
1. clos(K, K⊨, K⊭) is consistent,
2. every elementary refinement of clos(K, K⊨, K⊭) is consistent,
3. every consistent complete refinement of (K, K⊨, K⊭) is a refinement of clos(K, K⊨, K⊭).

Algorithm 1 employs the above properties to implement a general methodology for interactive knowledge base revision.

Algorithm 1 Interactive Knowledge Base Revision
Input: (K, K₀⊨, K₀⊭) a consistent revision state
Output: (K, K⊨, K⊭) a complete and consistent revision state
1: (K, K⊨, K⊭) ← clos(K, K₀⊨, K₀⊭)
2: while K⊨ ∪ K⊭ ≠ K do
3:   choose α ∈ K \ (K⊨ ∪ K⊭)
4:   if expert confirms α then
5:     (K, K⊨, K⊭) ← clos(K, K⊨ ∪ {α}, K⊭)
6:   else
7:     (K, K⊨, K⊭) ← clos(K, K⊨, K⊭ ∪ {α})
8:   end if
9: end while
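To make the revision workflow concrete, the following sketch implements Algorithm 1 and the revision closure of Definition 2 in Python against a black-box entailment oracle. The functions entails(axioms, axiom) and expert_confirms(axiom) are assumptions standing in for an actual reasoner and a human reviewer; the sketch recomputes the closure naively after every decision and ignores the decision-space optimizations introduced in Section 3.

def closure(entails, K, wanted, unwanted):
    # Revision closure (Definition 2): axioms entailed by the wanted set are
    # approved; axioms that would entail an unwanted axiom are declined.
    closed_wanted = {a for a in K if entails(wanted, a)}
    closed_unwanted = {a for a in K
                       if any(entails(wanted | {a}, b) for b in unwanted)}
    return closed_wanted, closed_unwanted

def interactive_revision(entails, expert_confirms, K, wanted0, unwanted0):
    # Algorithm 1: evaluate axioms until the revision state is complete.
    wanted, unwanted = closure(entails, K, set(wanted0), set(unwanted0))
    while wanted | unwanted != K:
        alpha = next(iter(K - (wanted | unwanted)))  # choice strategy: see Sec. 3
        if expert_confirms(alpha):
            wanted, unwanted = closure(entails, K, wanted | {alpha}, unwanted)
        else:
            wanted, unwanted = closure(entails, K, wanted, unwanted | {alpha})
    return wanted, unwanted

With the introductory example, an expert_confirms callback that declines Copper ⊑ Material would cause the subsequent closure step to decline the two remaining Copper axioms automatically.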
Instead of starting with empty sets for K₀⊨ and K₀⊭, we can initialize the latter sets with approved and declined axioms from a previous revision, or add axioms of the knowledge base that is being developed to K₀⊨. We can further initialize K₀⊭ with axioms that express inconsistency and unsatisfiability of predicates (i.e., of classes or relations) in K, which we assume to be unwanted consequences.

In line 3, the axiom to be evaluated next is chosen. As motivated in the introduction, a random choice can have a detrimental effect on the number of manual decisions. Ideally, we want to rank the axioms and choose one that allows for a high number of consequential automatic decisions. For this purpose, we introduce the following notion of axiom impact.

Definition 3 (Impact). Let (K, K⊨, K⊭) be a consistent revision state with α ∈ K and let ?(K, K⊨, K⊭) := |K \ (K⊨ ∪ K⊭)|. For an axiom α,
– the approval impact is impact⁺(α) = ?(K, K⊨, K⊭) − ?(clos(K, K⊨ ∪ {α}, K⊭)),
– the decline impact is impact⁻(α) = ?(K, K⊨, K⊭) − ?(clos(K, K⊨, K⊭ ∪ {α})),
– the guaranteed impact is guaranteed(α) = min(impact⁺(α), impact⁻(α)).

The approval (decline) impact of an axiom α is determined by the number of automatically evaluated axioms in case α is approved (declined), while the guaranteed impact is the minimum of the two impact functions. In the example from Section 1, Axioms (3), (4), and (5) have an approval impact of 0, 1, and 2, a decline impact of 2, 1, and 0, and a guaranteed impact of 0, 1, and 0, respectively. We show in the evaluation that the ratio of accepted axioms to all axioms that are to be evaluated can be used to determine which impact function is best.

Since computing such an impact as well as computing the closure after each evaluation (lines 1, 5, and 7) can be very expensive, we next introduce decision spaces, auxiliary data structures which significantly reduce the cost of computing the closure upon elementary refinements and provide an elegant way of determining high-impact axioms.

3 Decision Spaces

Intuitively, the purpose of decision spaces is to keep track of the dependencies between the axioms in such a way that we can read off the consequences of revision state refinements upon an approval or a decline of an axiom, thereby reducing the required reasoning operations. Furthermore, we will show how we can update these structures after a refinement step, avoiding many costly recomputations.

Definition 4 (Decision Space). Given a revision state (K, K⊨, K⊭) with K⊭ ≠ ∅, the according decision space D(K,K⊨,K⊭) = (K?, E, C) contains the set

K? := K \ ({α | K⊨ ⊨ α} ∪ {α | K⊨ ∪ {α} ⊨ β for some β ∈ K⊭})

of unevaluated axioms together with two binary relations, E (entails) and C (conflicts), on K?:

αEβ iff K⊨ ∪ {α} ⊨ β
αCβ iff K⊨ ∪ {α, β} ⊨ γ for some γ ∈ K⊭

The requirement that K⊭ ≠ ∅ is without loss of generality since we can always add an axiom that expresses a contradiction (an inconsistency), which is clearly undesired. As a direct consequence of this definition, we have D(K,K⊨,K⊭) = Dclos(K,K⊨,K⊭). Also the following properties are immediate from the above definition:

Lemma 2. Given D(K,K⊨,K⊭) = (K?, E, C) for a revision state (K, K⊨, K⊭) with K⊭ ≠ ∅,
P1 (K?, E) is a quasi-order (i.e., E is reflexive and transitive),
P2 C is symmetric,
P3 αEβ and βCγ imply αCγ for all α, β, γ ∈ K?, and
P4 if αEβ then αCβ does not hold.
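Read naively, Definition 4 can be realized by brute force with one entailment test per pair of unevaluated axioms. The following sketch (reusing the hypothetical entails oracle from above) only fixes the intended semantics of K?, E, and C; the remainder of this section is precisely about avoiding most of these checks.

def decision_space(entails, K, wanted, unwanted):
    # Definition 4: unevaluated axioms plus the entails/conflicts relations.
    K_open = {a for a in K
              if not entails(wanted, a)
              and not any(entails(wanted | {a}, b) for b in unwanted)}
    E = {(a, b) for a in K_open for b in K_open
         if entails(wanted | {a}, b)}
    C = {(a, b) for a in K_open for b in K_open
         if any(entails(wanted | {a, b}, g) for g in unwanted)}
    return K_open, E, C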
On the other hand, the properties established in Lemma 2 are characteristic:

Lemma 3. Let V be a finite set and let E, C ⊆ V × V be relations for which (V, E) is a quasi-order, C = C⁻, E ◦ C ⊆ C, and E ∩ C = ∅. Then there is a decision space D(K,K⊨,K⊭) isomorphic to (V, E, C).

(As usual, we let R⁻ = {(y, x) | (x, y) ∈ R} as well as R ◦ S = {(x, z) | (x, y) ∈ R and (y, z) ∈ S for some y}.)

The following lemma shows how decision spaces can be used for calculating closures of updated revision states and impacts of axioms. As usual for (quasi-)orders, we define ↑α = {β | αEβ} and ↓α = {β | βEα}. Moreover, we let oα = {β | αCβ}.

Lemma 4. Given D(K,K⊨,K⊭) = (K?, E, C) for a revision state (K, K⊨, K⊭) such that (K, K⊨, K⊭) = clos(K, K⊨, K⊭) with K⊭ ≠ ∅ and α ∈ K?, then
1. clos(K, K⊨ ∪ {α}, K⊭) = (K, K⊨ ∪ ↑α, K⊭ ∪ oα),
2. clos(K, K⊨, K⊭ ∪ {α}) = (K, K⊨, K⊭ ∪ ↓α),
3. impact⁺(α) = |↑α| + |oα|, and
4. impact⁻(α) = |↓α|.

Hence, the computation of the revision closure (lines 5 and 7) and of axiom impacts does not require any entailment checks if the according decision space is available. For the computation of decision spaces, we exploit the structural properties established in Lemmas 2 and 3 in order to reduce the number of required entailment checks in cases where the relations E and C are partially known. For this purpose, we define the rules R0 to R9, which describe the connections between the relations E and C and their complements Ē and C̄. The rules can serve as production rules to derive new instances of these relations, thereby minimizing calls to costly reasoning procedures.

R0: → E(x, x)  (reflexivity of E)
R1: E(x, y) ∧ E(y, z) → E(x, z)  (transitivity of E)
R2: E(x, y) ∧ C(y, z) → C(x, z)  (P3)
R3: C(x, y) → C(y, x)  (symmetry of C)
R4: E(x, y) → C̄(x, y)  (disjointness of E and C)
R5: C̄(x, y) → C̄(y, x)  (symmetry of C̄)
R6: E(x, y) ∧ C̄(x, z) → C̄(y, z)  (P3)
R7: C̄(x, y) → Ē(x, y)  (disjointness of E and C)
R8: C̄(x, y) ∧ C(y, z) → Ē(x, z)  (P3)
R9: E(x, y) ∧ Ē(x, z) → Ē(y, z)  (transitivity of E)

An analysis of the dependencies between the rules R0 to R9 reveals an acyclic structure (indicated by the order of the rules). Therefore E, C, C̄, and Ē can be saturated one after another. Moreover, the exhaustive application of the rules R0 to R9 can be condensed into the following operations:

E ← E*
C ← E ◦ (C ∪ C⁻) ◦ E⁻
C̄ ← E⁻ ◦ (C̄ ∪ Id ∪ C̄⁻) ◦ E
Ē ← E⁻ ◦ (C̄ ◦ C ∪ Ē) ◦ E⁻

The correctness of the first operation (where (·)* denotes the reflexive-transitive closure) is a direct consequence of R0 and R1. For the second operation, we exploit the relationships

E ◦ C ◦ E⁻ ⊆ C ◦ E⁻ (R2) ⊆ C⁻ ◦ E⁻ (R3) ⊆ C⁻ (R2) ⊆ C (R3)
E ◦ C⁻ ◦ E⁻ ⊆ E ◦ C⁻ (R2) ⊆ E ◦ C (R3) ⊆ C (R2)

which can be composed into E ◦ C ◦ E⁻ ∪ E ◦ C⁻ ◦ E⁻ = E ◦ (C ∪ C⁻) ◦ E⁻ ⊆ C. Conversely, iterated backward chaining for C w.r.t. R2 and R3 yields E ◦ (C ∪ C⁻) ◦ E⁻ as a fixpoint, under the assumption E = E*. The correctness of the last two operations can be shown accordingly.
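The condensed operations are plain manipulations of finite binary relations. The following Python helpers sketch them for relations represented as sets of pairs; the function names and the single-pass saturate routine are ours and merely mirror the four assignments above, under the assumption that all relations range over the currently unevaluated axioms.

def inverse(R):
    return {(y, x) for (x, y) in R}

def compose(R, S):
    # R ◦ S = {(x, z) | (x, y) in R and (y, z) in S for some y}
    by_first = {}
    for y, z in S:
        by_first.setdefault(y, set()).add(z)
    return {(x, z) for (x, y) in R for z in by_first.get(y, ())}

def reflexive_transitive_closure(R, domain):
    closure = set(R) | {(x, x) for x in domain}
    while True:
        new = compose(closure, closure) - closure
        if not new:
            return closure
        closure |= new

def saturate(domain, E, nE, C, nC):
    # One pass of the condensed operations: E, then C, then their complements.
    Id = {(x, x) for x in domain}
    E = reflexive_transitive_closure(E, domain)
    C = compose(compose(E, C | inverse(C)), inverse(E))
    nC = compose(compose(inverse(E), nC | Id | inverse(nC)), E)
    nE = compose(compose(inverse(E), compose(nC, C) | nE), inverse(E))
    return E, nE, C, nC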
Algorithm 2 Decision Space Completion
Input: (K, K⊨, K⊭) a consistent revision state; E, Ē, C, C̄ subsets of the entailment and conflict relations and their complements
Output: (K?, E, C) the corresponding decision space
1: E ← E*
2: C ← E ◦ C ◦ E⁻
3: C ← C ∪ C⁻
4: C̄ ← E⁻ ◦ (C̄ ∪ IdK?) ◦ E
5: C̄ ← C̄ ∪ C̄⁻
6: Ē ← (C̄ ◦ C) ∪ Ē
7: Ē ← E⁻ ◦ Ē ◦ E⁻
8: while E ∪ Ē ≠ K? × K? do
9:   pick one (α, β) ∈ K? × K? \ (E ∪ Ē)
10:  if K⊨ ∪ {α} ⊨ β then
11:    E′ ← transupdatediff(E, (α, β))
12:    E ← E ∪ E′
13:    C′ ← (E′ ◦ C) \ C
14:    C′ ← C′ ∪ (C′ ◦ E′⁻) \ C
15:    C ← C ∪ C′
16:    C̄′ ← (E′⁻ ◦ C̄) \ C̄
17:    C̄′ ← C̄′ ∪ (C̄′ ◦ E′) \ C̄
18:    C̄ ← C̄ ∪ C̄′
19:    Ē′ ← ((C̄′ ◦ C) ∪ (C̄ ◦ C′)) \ Ē
20:    Ē ← Ē ∪ Ē′
21:    Ē′ ← ((E′⁻ ◦ Ē) ∪ (E⁻ ◦ Ē′)) \ Ē
22:    Ē ← Ē ∪ Ē′ ∪ (Ē′ ◦ E⁻) ∪ (Ē ◦ E′⁻)
23:  else
24:    Ē ← Ē ∪ (E⁻ ◦ {(α, β)} ◦ E⁻)
25:  end if
26: end while
27: while C ∪ C̄ ≠ K? × K? do
28:  pick one (α, β) ∈ K? × K? \ (C ∪ C̄)
29:  if K⊨ ∪ {α, β} ⊨ γ for some γ ∈ K⊭ then
30:    C′ ← E ◦ {(α, β), (β, α)} ◦ E⁻
31:    C ← C ∪ C′
32:    Ē ← Ē ∪ (E⁻ ◦ C̄ ◦ C′ ◦ E⁻)
33:  else
34:    C̄′ ← (E⁻ ◦ {(α, β), (β, α)} ◦ E) \ C̄
35:    C̄ ← C̄ ∪ C̄′
36:    Ē ← Ē ∪ (E⁻ ◦ C̄′ ◦ C ◦ E⁻)
37:  end if
38: end while

Algorithm 3 Decision Space Update on Declining α
Input: D(K,K⊨,K⊭) a decision space, α ∈ K? an axiom
Output: D(K,K⊨,K⊭∪{α}) the updated decision space
1: K? ← K? \ ↓α
2: E ← E ∩ (K? × K?)
3: Ē ← Ē ∩ (K? × K?)
4: C ← C ∩ (K? × K?)
5: C̄ ← E⁻ ◦ E
6: while C ∪ C̄ ≠ K? × K? do
7:   pick one (β, γ) ∈ K? × K? \ (C ∪ C̄)
8:   if K⊨ ∪ {β, γ} ⊨ α then
9:     C ← C ∪ (E ◦ {(β, γ), (γ, β)} ◦ E⁻)
10:  else
11:    C̄ ← C̄ ∪ (E⁻ ◦ {(β, γ), (γ, β)} ◦ E)
12:  end if
13: end while

Algorithm 4 Decision Space Update on Approving α
Input: D(K,K⊨,K⊭) a decision space, α ∈ K? an axiom
Output: D(K,K⊨∪{α},K⊭) the updated decision space
1: K? ← K? \ (↑α ∪ oα)
2: E ← E ∩ (K? × K?)
3: C ← C ∩ (K? × K?)
4: C̄ ← E⁻ ◦ E
5: Ē ← E⁻ ◦ C̄ ◦ C ◦ E⁻
6: execute lines 8–38 from Alg. 2

Algorithm 2 realizes the cost-saving identification of the complete entailment and conflict relations of a decision space. Maintaining sets of known entailments (E), non-entailments (Ē), conflicts (C), and non-conflicts (C̄), the algorithm always closes these sets under the above operations before it cautiously executes expensive deduction checks to clarify missing cases. First, the initially known (non-)entailments and (non-)conflicts are closed in the aforementioned way (lines 1–7). There and in the subsequent lines, we split computations into several ones where appropriate in order to minimize the size of the sets subject to the join operation (◦). Lines 8–26 describe the successive clarification of the entailment relation (for cases where neither entailment nor non-entailment is known yet) via deduction checks. After each such clarification step, the sets E, Ē, C, and C̄ are closed. Thereby, we exploit known properties of intermediate results, such as already being transitive or symmetric, to avoid redoing the according closure operations unnecessarily (transupdatediff computes, for a relation R and a pair of elements (α, β), the difference between the reflexive-transitive closure of R extended with (α, β) and R*, i.e., (R ∪ {(α, β)})* \ R*). Likewise, we also avoid redundant computations and reduce the size of the input sets for the join operations by explicitly bookkeeping sets E′, C′, C̄′, and Ē′ containing only the instances newly added in the current step. Lines 27–38 proceed in the analogous way for the stepwise clarification of the conflicts relation.
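The control flow of Algorithm 2 — close the four sets first, then spend one entailment check per genuinely unknown pair and propagate its consequences — can be sketched as follows. The sketch reuses the saturate helper and the hypothetical entails oracle from above; for brevity it re-closes all four relations after every check instead of performing the incremental bookkeeping of lines 11–24, so it only illustrates the saving in reasoner calls, not the saving in closure work.

def complete_decision_space(entails, wanted, unwanted, K_open, E, nE, C, nC):
    # Clarify the entailment relation: one reasoner call per unknown pair,
    # then propagate the answer by re-saturating E, nE, C, nC.
    pairs = {(a, b) for a in K_open for b in K_open}
    E, nE, C, nC = saturate(K_open, E, nE, C, nC)
    for a, b in pairs:
        if (a, b) in E or (a, b) in nE:
            continue
        if entails(wanted | {a}, b):
            E.add((a, b))
        else:
            nE.add((a, b))
        E, nE, C, nC = saturate(K_open, E, nE, C, nC)
    # Clarify the conflict relation analogously.
    for a, b in pairs:
        if (a, b) in C or (a, b) in nC:
            continue
        if any(entails(wanted | {a, b}, g) for g in unwanted):
            C.add((a, b))
            C.add((b, a))
        else:
            nC.add((a, b))
            nC.add((b, a))
        E, nE, C, nC = saturate(K_open, E, nE, C, nC)
    return E, C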
3.1 Updating Decision Spaces

We proceed by formally describing the change of the decision space as a consequence of approving or declining one axiom, with the objective of again minimizing the required number of entailment checks. We first consider the case that an expert approves an axiom α ∈ K?, and hence α is added to the set K⊨ of wanted consequences.

Lemma 5. Let D(K,K⊨,K⊭) = (K?, E, C), α ∈ K?, and D(K,K⊨∪{α},K⊭) = (K?new, E′, C′). Then
– K?new = K? \ (↑α ∪ oα),
– βEγ implies βE′γ for β, γ ∈ K?new, and
– βCγ implies βC′γ for β, γ ∈ K?new.

Essentially, the lemma states that all axioms entailed by α (as witnessed by E) as well as all axioms conflicting with α (indicated by C) will be removed from the decision space if α is approved. Moreover, due to monotonicity, all positive information about entailments and conflicts remains valid. Algorithm 4 takes advantage of these correspondences when fully determining the updated decision space.

The next lemma considers the changes to be made to the decision space upon the denial of an axiom α, i.e., when it is characterized as an unwanted consequence.

Lemma 6. Let D(K,K⊨,K⊭) = (K?, E, C), α ∈ K?, and D(K,K⊨,K⊭∪{α}) = (K?new, E′, C′). Then
– K?new = K? \ ↓α,
– βEγ exactly if βE′γ for β, γ ∈ K?new, and
– βCγ implies βC′γ for β, γ ∈ K?new.

The lemma shows that the updated decision space can be obtained by removing all axioms that entail α. Furthermore, entailments between the remaining axioms remain unaltered, whereas the set of conflicts may increase. Algorithm 3 implements the respective decision space update, additionally exploiting that new conflicts can only arise from derivability of the newly declined axiom α. Algorithms 4 and 3 have to be called in Alg. 1 after the accept (line 5) or decline revision step (line 7), respectively.

For n the number of involved axioms, Algorithms 2, 4, and 3 run in time bounded by O(n⁵) and space bounded by O(n²) if we treat entailment checking as a constant-time operation. Without the latter assumption, the complexity of reasoning usually dominates. For example, if the axioms use all features of OWL 2 DL, entailment checking is N2ExpTime-complete, which then also applies to our algorithm.

4 Evaluation

For a first evaluation of the developed methodology, we chose a scenario motivated by ontology-supported literature search. The hand-crafted NanOn ontology models the scientific domain of nano technology, including substances, structures, and procedures used in that domain. The ontology, denoted here with O, is specified in the Web Ontology Language OWL DL [6] and comprises 2,289 logical axioms. The project associated with NanOn aims at developing techniques to automatically analyze scientific documents for the occurrence of NanOn concepts. When such concepts are found, the document is automatically annotated with NanOn concepts to facilitate topic-specific information retrieval on a fine-grained level. Since total accuracy of the automatically added annotations (which can be seen as logical axioms expressing factual knowledge) cannot be guaranteed, they need to be inspected by human experts, which provides a natural application scenario for our approach.

For our evaluation, we employed tools for automated textual analysis to produce a set of document annotations, the validity of which was then manually evaluated. This provided us with sets of valid and invalid annotation facts (denoted by A+ and A−, respectively). To investigate how the a priori quality of each axiom set influences the results, we created six distinct annotation sets S1 to S6 using different annotation methods. The different methods result in different validity ratios |A+|/(|A+| + |A−|) of the datasets, where |S| denotes the cardinality of a set S.

Table 1. Characteristics of the evaluated datasets

dataset   size   validity ratio
S1        54     94%
S2        60     100%
S3        40     45%
S4        35     48%
S5        26     26%
S6        72     12%
The size of each set, followed by the corresponding validity ratio in percent, is shown in Table 1. We then applied our methodology starting from the revision state (O ∪ O− ∪ A+ ∪ A−, O, O−), with O containing the axioms of the NanOn ontology and O− containing axioms expressing inconsistency and concept unsatisfiability. We obtained a complete revision state (O ∪ O− ∪ A+ ∪ A−, O ∪ A+, O− ∪ A−), where on-the-fly expert decisions about approval or decline were simulated according to the membership in A+ or A−. For computing the entailments, we used the OWL reasoner HermiT (http://www.hermit-reasoner.com).

Table 2. Revision results for different axiom choosing strategies

      impact+                guaranteed             impact−                upper bound            rand
S1    69%   4,677   36,773   48%  11,860   51,677    9%  17,828   46,461   74%   4,110   11,399   45%
S2    83%   2,584   18,702   65%   8,190   55,273   12%  20,739   67,625   83%   2,645   27,850   60%
S3    20%   3,137   26,759   43%   3,914   27,629   28%   9,947   46,461   48%   3,509   13,202   31%
S4    29%   2,198   15,601   43%   3,137   18,367   31%   7,309   10,217   51%   2,177    7,002   31%
S5     8%   1,778   11,443   39%   1,290    6,647   54%     954    1,438   54%     801    1,989   41%
S6    13%   9,352  212,041   54%   8,166   99,586   76%   6,797   16,922   76%   5,219   19,861   57%

For each set, Table 2 shows the effects of the different choice functions impact⁺, guaranteed, and impact⁻ by measuring the reduction of expert decisions compared to evaluating the whole set manually (1st column for each axiom set and choice function), followed by the number of necessary reasoner calls when using decision spaces (2nd column for each axiom set and choice function) and the corresponding number of reasoner calls without the use of decision spaces (3rd column for each axiom set and choice function). As a baseline, we also include the reduction of expert decisions when choosing axioms randomly (last column). The upper bound for the manual effort reduction was obtained by applying the "impact oracle" function

KnownImpact(α) = impact⁺(α) if α ∈ A+, and impact⁻(α) if α ∈ A−.

The results of the evaluation show that:

– Decision spaces save on average 75% of reasoner calls, which leads to a considerable overall performance gain given that, according to our profiling measurements, on average 88% of the computation time in our experiments is spent within the methods of the reasoner. The experiments with the same datasets took on average 8 times longer without the application of decision spaces.
– Compared to an all-manual revision, a significant effort reduction of on average 44% is already achieved when axioms are chosen randomly for each expert decision, by automatically approving and declining axioms based on the computed revision closure. However, there is still room for improvement, since the "impact oracle" manages to reduce the manual effort of revision on average by 64%.
– If the ratio of approved axioms is rather high or rather low, impact⁺ or impact⁻, respectively, performs best.
– If the ratios of approved and declined axioms are more or less equal, the guaranteed impact is the best choice.

From these observations we conclude that the appropriate axiom choosing strategy has to be selected based on the expected ratio of valid axioms. We see that applying the most suitable axiom choosing strategy for each validity ratio yields on average an effort reduction of 61%, which is 15% higher than the performance of random choice and only 3% less than the effort reduction achieved by the "impact oracle".
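The choice in line 3 of Algorithm 1 can thus be driven directly by the decision space. The sketch below reads the impacts off E and C as in Lemma 4 and switches between impact⁺, impact⁻, and the guaranteed impact depending on an expected validity ratio; the concrete thresholds are illustrative assumptions, not values taken from our experiments.

def impacts(alpha, K_open, E, C):
    # Lemma 4: impacts can be read off the decision space without reasoning.
    up = {b for b in K_open if (alpha, b) in E}    # axioms entailed by alpha
    down = {b for b in K_open if (b, alpha) in E}  # axioms entailing alpha
    conf = {b for b in K_open if (alpha, b) in C}  # axioms conflicting with alpha
    return len(up) + len(conf), len(down)          # impact+, impact-

def choose_next(K_open, E, C, expected_validity_ratio):
    # Pick the strategy according to the expected validity ratio.
    def score(alpha):
        plus, minus = impacts(alpha, K_open, E, C)
        if expected_validity_ratio > 0.7:
            return plus
        if expected_validity_ratio < 0.3:
            return minus
        return min(plus, minus)                    # guaranteed impact
    return max(K_open, key=score)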
5 Related Work

In our previous work [3], we proposed an approach for determining a beneficial order of axiom evaluation under the assumption of a high validity ratio within the axiom set under investigation. That approach aims at reducing the manual effort of revision by eliminating the redundancy within the corresponding axiom set, which is the major factor leading to automatic axiom evaluation under the assumption of a high validity ratio. Prior to the interactive revision, a minimal non-redundant subset of the axioms under investigation is identified and then reviewed by the expert, thereby not requiring the expensive computation of axiom impacts after each expert decision.

In addition to our own work, we are aware of two approaches for supporting the revision of ontological data based on logical appropriateness: Meilicke et al. [2] and Jiménez-Ruiz et al. [1] propose two approaches, both of which are applied in the context of mapping revision. In these approaches, dependencies between evaluation decisions are determined based on a set of logical criteria, each of which is a subset of the criteria that can be derived from the notion of revision state consistency introduced in Section 2. Similarly to our approach, Meilicke et al. aim at reducing the manual effort of mapping revision by relying on a heuristic notion of impact. The approach is, however, difficult to generalize to the revision of ontologies, since the notion of impact is based on the hypothetically possible number of mapping axioms for two ontologies O1 and O2 and further relies on the assumption that the set of possible mapping axioms is mostly disjoint from the axioms in O1 ∪ O2. This assumption is justified in the case of mapping revision, since axioms in O1 (O2) usually refer only to entities from O1 (O2), while mapping axioms link entities from O1 and O2. For interactive ontology revision in general, however, the axioms that are to be revised are typically not disjoint from the already evaluated axioms.

The focus of ContentMap [1] lies on the visualization of consequences and on user guidance in case of difficult evaluation decisions, while the minimization of the manual and computational effort required for the revision is out of scope. ContentMap selectively materializes and visualizes the logical consequences caused by the axioms under investigation and supports the revision of those consequences. ContentMap requires an exponential number of reasoning operations in the size of the ontology under revision, since dependencies between the consequences are determined by comparing their justifications (sets of axioms causing the entailment, also known as MinAs). Our approach, in contrast, requires at most a polynomial number of entailment checks.

Another strand of work, starting from [7], is related to the overall motivation of enriching knowledge bases with additional expert-curated knowledge in a way that minimizes the workload of the human expert: based on the attribute exploration algorithm from formal concept analysis (FCA), several works have proposed structured interactive enumeration strategies for inclusion dependencies or axioms of certain fragments of description logics, which then are to be evaluated by the expert. While similar in terms of workflow, the major difference of these approaches to ours is that the axioms are not pre-specified but created on the fly and, therefore, the exploration may require (in the worst case exponentially) many human decisions.
6 Conclusions and Future Work

In this paper, we proposed a methodology for supporting interactive ontology revision based on logical criteria. We stated consistency criteria for revision states and introduced the notion of revision closure, based on which the revision of ontologies can be partially automatized. Even though a significant effort reduction can already be achieved when axioms are chosen randomly for each expert decision, choosing an appropriate order usually yields a higher effort reduction. We introduced the notion of axiom impact, which is used to determine a beneficial order of evaluation. Depending on the expected ratio of approved axioms, impact⁺, impact⁻, or the guaranteed impact can be employed in order to achieve a higher effort reduction. In fact, in three out of six cases during the evaluation, the maximum possible effort reduction was achieved when employing the most suitable axiom choosing strategy. Moreover, we provided an efficient and elegant way of determining the revision closure and axiom impacts by computing and updating structures called decision spaces, which saved 75% of reasoner calls during our evaluation.

In our future work, we will investigate how the axiom choosing strategy can be adjusted according to the current ratio of approved axioms. Another open question is how the axioms under investigation can be efficiently partitioned into sets that can be reviewed independently.

Acknowledgments. This work is supported by the German Federal Ministry of Education and Research (BMBF) under the SAW-project NanOn, by the German Research Foundation (DFG) under the project ExpresST, and by the EPSRC project HermiT: Reasoning with Large Ontologies.

References

1. Jiménez-Ruiz, E., Cuenca Grau, B., Horrocks, I., Llavori, R.B.: Ontology integration using mappings: Towards getting the right logical consequences. In: Proc. ESWC 2009. pp. 173–187 (2009)
2. Meilicke, C., Stuckenschmidt, H., Tamilin, A.: Supporting manual mapping revision using logical reasoning. In: Proc. AAAI 2008. pp. 1213–1218 (2008)
3. Nikitina, N.: Semi-automatic revision of formalized knowledge. In: Proc. ECAI 2010. pp. 1097–1098 (2010)
4. Nikitina, N., Rudolph, S., Glimm, B.: Reasoning-supported interactive revision of knowledge bases. Technical Report 3013, Institut AIFB, KIT, Karlsruhe (April 2011)
5. Nikitina, N., Rudolph, S., Glimm, B.: Reasoning-supported interactive revision of knowledge bases. In: Proc. IJCAI 2011 (2011), to appear
6. W3C OWL Working Group: OWL 2 Web Ontology Language: Document Overview. W3C Recommendation (27 October 2009), available at http://www.w3.org/TR/owl2-overview/
7. Rudolph, S.: Exploring relational structures via FLE. In: Proc. ICCS 2004. pp. 196–212. Springer (2004)
8. Shearer, R., Horrocks, I.: Exploiting partial information in taxonomy construction. In: Proc. ISWC 2009. pp. 569–584 (2009)