Merge, Explain, Iterate

Martin Homola, Júlia Pukancová, Júlia Gablíková, and Katarína Fabianová

Comenius University in Bratislava, Mlynská dolina, 84248 Bratislava, Slovakia
{homola|pukancova}@fmph.uniba.sk, {gablikova.julia|kfabianova11}@gmail.com

Abstract. ABox abduction is the reasoning problem of finding explanations for a DL knowledge base (KB) and an observation given as an ABox assertion which does not follow from the KB. The explanations themselves are sets of ABox assertions such that the observation follows from the KB once an explanation set is included in it. One widely employed method to compute the explanations is the Minimal Hitting Set algorithm (MHS). MHS is complete, but it is also NP-complete and widely recognized as inefficient. There are also approximative methods such as MergeXplain (MXP) which are fast, but incomplete. This work describes a hybrid method called MHS-MXP which adopts the divide-and-conquer heuristics of MXP and applies it to MHS with the aim of making it more efficient at least on a favourable class of inputs of the ABox abduction problem. A first experimental implementation is also available.

Keywords: Description logics · Abduction · MergeXplain · Minimal hitting set · Optimization

1 Introduction

Abduction was introduced by Peirce [7]. It is the reasoning problem of finding explanations of an observation that is not entailed based on our current knowledge. In the context of DL, an ABox abduction problem [2] assumes a DL KB K and an extensional observation O (in the form of an ABox assertion). Explanations (also extensional) are sets of ABox assertions E such that K together with E entails O.

If one wishes to find all explanations of an ABox abduction problem, Reiter's MHS algorithm [10] is the classic method. MHS is complete. In a nutshell, it searches through the space of all possible explanations, starting from the smallest (in terms of cardinality) and continuing towards the largest.
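This smallest-to-largest search order can be made concrete with a small Python sketch that enumerates minimal hitting sets of a family of sets by increasing cardinality (anticipating the hitting-set view of abduction detailed in Section 3.1). The function name and the brute-force encoding are ours, purely for illustration; they are not part of the authors' implementation.

```python
from itertools import combinations

def minimal_hitting_sets(family):
    """Enumerate all minimal hitting sets of a family of sets,
    smallest cardinality first -- the order in which MHS searches."""
    universe = sorted(set().union(*family))
    found = []
    for size in range(1, len(universe) + 1):
        for candidate in combinations(universe, size):
            cand = set(candidate)
            # skip supersets of an already found (hence smaller) hitting set
            if any(hit <= cand for hit in found):
                continue
            if all(cand & s for s in family):
                found.append(cand)
    return found

# the family {{a, b}, {a, c}} is hit minimally by {a} and by {b, c}
print(minimal_hitting_sets([{"a", "b"}, {"a", "c"}]))
```

Note that exactly because candidates are visited in order of increasing size, a found set can never have a proper subset discovered later, which is what makes the subset check above sufficient for minimality.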
The MHS algorithm (NP-complete) works on top of a DL solver (whose complexity depends on the particular DL). This approach is useful particularly in cases where there are smaller explanations. However, in cases where there are (even a small number of) explanations of larger size, this search strategy may be inefficient.

Alternative strategies were explored. Junker's QuickXplain (QXP) [5], and more recently its extension MergeXplain (MXP) [11] proposed by Shchekotykhin et al., employ a divide-and-conquer strategy which allows finding one (QXP) or multiple explanations (MXP) very efficiently. On the other hand, these methods are approximative, i.e., they are not complete.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Table 1. Syntax and Semantics of ALCHO

Concept                  Syntax    Semantics
atomic concept           A         A^I
complement               ¬C        Δ^I \ C^I
intersection             C ⊓ D     C^I ∩ D^I
existential restriction  ∃R.C      {x | ∃y. (x, y) ∈ R^I ∧ y ∈ C^I}
nominal                  {a}       {a^I}

Axiom                    Syntax    Semantics
concept inclusion (GCI)  C ⊑ D     C^I ⊆ D^I
role inclusion (RIA)     R ⊑ S     R^I ⊆ S^I
concept assertion        C(a)      a^I ∈ C^I
role assertion           R(a, b)   (a^I, b^I) ∈ R^I
negated role assertion   ¬R(a, b)  (a^I, b^I) ∉ R^I

We base this work on the key observation that when MXP is run repeatedly, with slightly modified inputs, it divides the search space differently and may possibly return a different set of explanations than in the previous runs. We propose a combined algorithm, dubbed MHS-MXP, that iterates runs of MXP and uses MHS to steer the search space exploration in such a way that completeness is retained. We also provide a preliminary experimental implementation.

Compared to pure MXP, this approach has the advantage that it is no longer approximative – it is complete. Such a hybrid algorithm may likely have an advantage when compared with MHS, at least on a certain class of inputs.
Precise characterization of this class and further optimization of the algorithm is the subject of our ongoing work.

2 Preliminaries

For simplicity we will introduce ALCHO [1]. However, any other DL may be used due to the black-box approach.

A vocabulary consists of countably infinite mutually disjoint sets of individuals NI = {a, b, ...}, roles NR = {P, Q, ...}, and atomic concepts NC = {A, B, ...}. Concepts are recursively built using the constructors ¬, ⊓, ∃, as shown in Table 1. A KB K = T ∪ A consists of a finite set of GCI and RIA axioms (in the TBox T), and a finite set of assertions (in the ABox A), as given in Table 1. We also define ¬σ := ¬C(a) if σ = C(a); ¬σ := C(a) if σ = ¬C(a); ¬σ := ¬R(a, b) if σ = R(a, b); ¬σ := R(a, b) if σ = ¬R(a, b).

An interpretation is a pair I = (Δ^I, ·^I), where Δ^I ≠ ∅ is a domain, and the interpretation function ·^I maps each individual a ∈ NI to a^I ∈ Δ^I, each atomic concept A ∈ NC to A^I ⊆ Δ^I, each role R ∈ NR to R^I ⊆ Δ^I × Δ^I, and each complex concept according to the right-hand side of Table 1.

An interpretation I satisfies an axiom σ (denoted I ⊨ σ) if the respective constraint in Table 1 is satisfied. It is a model of a KB K (denoted I ⊨ K) if I ⊨ σ for all σ ∈ K. A KB K is consistent if I ⊨ K for some interpretation I. K entails an axiom σ (denoted K ⊨ σ) if I ⊨ σ for each I ⊨ K.

In ABox abduction [2], we are given a KB K and an observation O consisting of an ABox assertion. The task is to find an explanation E, again consisting of ABox assertions, such that K ∪ E ⊨ O. However, the set of all ABox expressions may be too broad to draw the explanations from (after all, it is infinite). Hence we typically consider some (finite and reasonably small) set of abducibles Abd, and require that E ⊆ Abd.

Definition 1 (ABox Abduction Problem). Let Abd be a finite set of ABox assertions. An ABox abduction problem is a pair P = (K, O) such that K is a knowledge base in DL and O is an ABox assertion.
An explanation of P (on Abd) is any finite set of ABox assertions E ⊆ Abd such that K ∪ E ⊨ O.

In this work we limit the explanations to atomic and negated atomic concept and role assertions; hence Abd ⊆ {A(a), ¬A(a) | A ∈ NC, a ∈ NI} ∪ {R(a, b), ¬R(a, b) | R ∈ NR, a, b ∈ NI}. Note that we do not limit the observations in any way, apart from allowing only one (possibly complex) ABox assertion.

While Definition 1 establishes the basic reasoning frame of abduction, some explanations may be unintuitive. According to Elsenbroich et al. [2] it is reasonable to require each explanation E of P = (K, O) to be: (a) consistent (K ∪ E is consistent); (b) relevant (E ⊭ O); and (c) explanatory (K ⊭ O). Explanations which satisfy these three conditions will be called desired. In addition, in order to avoid excess hypothesizing, minimality is required.

Definition 2 (Minimality). Assume an ABox abduction problem P = (K, O). Given two explanations E and E' of P, we say that E is (syntactically) smaller than E' if E ⊆ E'.¹ We further say that an explanation E of P is (syntactically) minimal if there is no other explanation E' of P that is smaller than E.

3 Computing Explanations

We now first review the complete MHS algorithm and then the faster but approximative MXP algorithm. The hybrid approach that tries to combine "the best of both worlds" is then introduced in Section 4.

¹ Note that before we compare two explanations E and E' of P syntactically, we typically normalize the assertions w.r.t. (outermost) concept conjunction: as C ⊓ D(a) is equivalent to the pair of assertions C(a) and D(a), we replace the former form by the latter while possible.

Algorithm 1 MHS(K, O, Abd)
Require: knowledge base K, observation O, set of abducibles Abd
Ensure: set SE of all explanations of P = (K, O) of the class Abd
1: M ← a model M of K ∪ {¬O}
2: if M = null then
3:   return "nothing to explain"
4: end if
5: T ← (V = {r}, E = ∅, L = {r ↦ Abd(M)})    ▷ Initialize HS-tree
6: for each σ ∈ L(r) create new σ-successor n_σ of r
7: SE ← {}
8: while there is a next node n in T w.r.t. BFS do
9:   if n can be pruned then
10:    prune n
11:  else if there is a model M of K ∪ {¬O} ∪ H(n) then
12:    label n by L(n) ← Abd(M)
13:  else
14:    SE ← SE ∪ {H(n)}
15:  end if
16:  for each σ ∈ L(n) create new σ-successor n_σ of n
17: end while
18: return SE

3.1 Minimal Hitting Set

Thanks to Reiter [10] we know that computing all minimal explanations of (K, O) reduces to finding all minimal hitting sets of the set of models of K ∪ {¬O} (modulo negation). To find some explanation, we may simply collect in E assertions that invalidate each such model. Then K ∪ E ∪ {¬O} is inconsistent and hence K ∪ E ⊨ O. The inverse reduction is also easily found.² Hence, disregarding the time needed to find the models of K ∪ {¬O}, finding all explanations is NP-complete [6].

However, we want to draw explanations only from the set of abducibles Abd, as given in Definition 1. It is easily observed that to find such explanations it suffices to exclude non-abducibles from the search. In addition, if some model contains no abducibles, then there are no explanations:

Observation 1. The minimal explanations of (K, O) on Abd directly correspond to the minimal hitting sets of {Abd(M) | M ⊨ K ∪ {¬O}} where Abd(M) = {φ ∈ Abd | M ⊭ φ}.

Observation 2. If Abd(M) = ∅ for some M ⊨ K ∪ {¬O}, then (K, O) has no explanations on Abd.

² For S = {S1, ..., Sn} let K = {A1 ⊔ ··· ⊔ Ak ⊑ Si | Si ∈ S, Si = {A1, ..., Ak}} ∪ {S1 ⊓ ··· ⊓ Sn ⊑ S}. Then the minimal explanations of (K, S(a)) exactly correspond to the minimal hitting sets of S.

The MHS algorithm, also proposed by Reiter [10], works by constructing a HS-tree. As we are limited by abducibles, in our case a HS-tree T = (V, E, L) is a labelled tree in which each node is labelled by Abd(M) for some model M of K ∪ {¬O} and whose edges are labelled by elements of the parent node's label.
If a node n1 ∈ V has a successor n2 ∈ V such that L(⟨n1, n2⟩) = σ, then n2 is a σ-successor of n1. The HS-tree has the property that the node label L(n) and the union H(n) of the edge labels on the path from the root r to each node n are disjoint. For each node n such a label can be found as Abd(M) of any model of K ∪ {¬O} ∪ H(n), which can be obtained by one call to an external DL reasoner. If no such model M exists, then H(n) corresponds to a hitting set. Note that if M exists but Abd(M) = ∅, then in accord with Observation 2, H(n) is not a hitting set.

In addition, pruning is applied during the HS-tree construction in order to eliminate non-minimal hitting sets. The most obvious case is when a node n already corresponds to a hitting set H(n) and there is another node n' with H(n) ⊆ H(n'). Any such n' can be pruned. Also if H(n) = H(n'), even if not yet a hitting set, one of the nodes is redundant and can be pruned. Once completed, a pruned HS-tree (i.e., one on which all pruning conditions were applied) contains all minimal hitting sets [10]. In addition we also prune nodes which correspond to undesired explanations [8]. The resulting algorithm is given in Algorithm 1. This algorithm is sound and complete [10, 8, 9].

Theorem 3. The MHS algorithm is sound and complete (i.e., it returns the set SE of all minimal desired explanations of K and O on Abd).

The fact that MHS explores the search space using breadth-first search (BFS) makes it possible to limit the search for explanations by a maximum size. The algorithm is still complete w.r.t. any given target size [9].

3.2 MergeXplain

Both QXP [5] and MXP [11] were originally designed to find minimal inconsistent subsets (dubbed conflicts) of an overconstrained knowledge base K = B ∪ C, where B is the consistent background theory and C is the "suspicious" part from which the conflicts are drawn. The algorithm is listed in Algorithm 2.
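Before turning to the listing, the divide-and-conquer core of QXP can be sketched in Python over abstract sets of constraints, with a `consistent` oracle standing in for the external DL reasoner. The function name, the deterministic split, and the toy oracle below are our illustrative choices, not the authors' implementation.

```python
def get_conflict(consistent, background, delta, constraints):
    """One minimal conflict drawn from `constraints` w.r.t. `background`,
    following the recursive decomposition of GetConflict (Algorithm 2).
    `consistent` is an oracle on sets, standing in for the DL reasoner."""
    # if adding delta already made the background inconsistent, nothing
    # further from `constraints` is needed for this part of the conflict
    if delta and not consistent(background):
        return set()
    if len(constraints) == 1:
        return set(constraints)
    # split and recurse: localize the conflict part inside C2, then C1
    items = sorted(constraints)
    c1, c2 = set(items[: len(items) // 2]), set(items[len(items) // 2 :])
    d2 = get_conflict(consistent, background | c1, c1, c2)
    d1 = get_conflict(consistent, background | d2, d2, c1)
    return d1 | d2

# toy oracle: the only minimal conflict hidden in {x, y, z} is {x, y}
consistent = lambda s: not {"x", "y"} <= s
print(get_conflict(consistent, set(), set(), {"x", "y", "z"}))
```

Run on the toy oracle, the sketch isolates the conflict {x, y} while touching the irrelevant element z only inside the consistency checks, which is where the efficiency of the decomposition comes from.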
The essence of QXP is captured in the GetConflict function, which cleverly decomposes C by splitting it into smaller and smaller subsets in such a way that "on the way back" it is always able to reconstruct one minimal conflict, if one exists. The auxiliary function isConsistent(K) encapsulates calls to an external reasoner; it returns true if K is consistent and false otherwise.

However, QXP only finds one conflict. Its extension MXP, captured in the FindConflicts function, finds as many conflicts as can be reconstructed from one way in which C is split. During the reconstruction, MXP relies on GetConflict to recover some of the conflicts that would otherwise be lost due to the splitting, which also ensures the important property that if at least one conflict exists, at least one is also found.

Algorithm 2 MXP(B, C)
Input: background theory B, set of possibly faulty constraints C
Output: a set of minimal conflicts Γ
1: if ¬isConsistent(B) then
2:   return "no explanation"
3: else if isConsistent(B ∪ C) then
4:   return ∅
5: end if
6: ⟨_, Γ⟩ ← FindConflicts(B, C)
7: return Γ
8: function FindConflicts(B, C) returns a tuple ⟨C', Γ⟩
9:   if isConsistent(B ∪ C) then
10:    return ⟨C, ∅⟩
11:  else if |C| = 1 then
12:    return ⟨∅, {C}⟩
13:  end if
14:  Split C into disjoint, non-empty sets C1 and C2
15:  ⟨C1', Γ1⟩ ← FindConflicts(B, C1)
16:  ⟨C2', Γ2⟩ ← FindConflicts(B, C2)
17:  Γ ← Γ1 ∪ Γ2
18:  while ¬isConsistent(C1' ∪ C2' ∪ B) do
19:    X ← GetConflict(B ∪ C2', C2', C1')
20:    γ ← X ∪ GetConflict(B ∪ X, X, C2')
21:    C1' ← C1' \ {σ} where σ ∈ X
22:    Γ ← Γ ∪ {γ}
23:  end while
24:  return ⟨C1' ∪ C2', Γ⟩
25: end function
26: function GetConflict(B, D, C)
27:  if D ≠ ∅ ∧ ¬isConsistent(B) then
28:    return ∅
29:  else if |C| = 1 then
30:    return C
31:  end if
32:  Split C into disjoint, non-empty sets C1 and C2
33:  D2 ← GetConflict(B ∪ C1, C1, C2)
34:  D1 ← GetConflict(B ∪ D2, D2, C1)
35:  return D1 ∪ D2
36: end function

This can be immediately adopted for ABox abduction: in order to find explanations for an abduction problem P = (K, O) on Abd, one needs to call MXP(K ∪ {¬O}, Abd). This observation allows us to adopt the following result from Shchekotykhin et al. [11]:

Theorem 4. Assume an ABox abduction problem P = (K, O) and a set of abducibles Abd. If there is at least one explanation γ ⊆ Abd of P, then calling MXP(K ∪ {¬O}, Abd) returns a nonempty set Γ of explanations of P.

In fact, MXP is thorough in its decomposition of C, which is broken into smaller and smaller subsets until they are consistent with B or until only sets of size 1 remain. This directly implies that all conflicts of size 1 will always be found and returned by a single run of MXP. This observation will prove useful for our hybrid algorithm.

Observation 5. Given an ABox abduction problem P = (K, O), a set of abducibles Abd, and any γ ⊆ Abd s.t. |γ| = 1, if K ∪ γ ⊨ O then γ ∈ MXP(K ∪ {¬O}, Abd).

Thus MXP is sound, and if an explanation exists, it always finds at least one (and it finds all of size one). However, MXP is still approximative, i.e., it is not complete. Some explanations may be lost, especially in the case of abduction problems with multiple partially overlapping explanations.

Example 1. Let K = {A ⊓ B ⊑ D, A ⊓ C ⊑ D} and let O = D(a). Let us ignore negated ABox expressions and start with Abd = {A(a), B(a), C(a)}. There are two minimal explanations of P = (K, O): {A(a), B(a)} and {A(a), C(a)}. Calling MXP(K ∪ {¬O}, Abd), it passes the initial tests and calls FindConflicts(K ∪ {¬O}, Abd). FindConflicts needs to decide how to split C = Abd into C1 and C2. Let us assume the split was C1 = {A(a)} and C2 = {B(a), C(a)}. Since both C1 and C2 are now conflict-free w.r.t. K ∪ {¬O}, the two consecutive recursive calls return ⟨C1', ∅⟩ and ⟨C2', ∅⟩ where C1' = {A(a)} and C2' = {B(a), C(a)}.
In the while loop, GetConflict(K ∪ {¬O} ∪ {B(a), C(a)}, {B(a), C(a)}, {A(a)}) returns X = {A(a)}, while GetConflict(K ∪ {¬O} ∪ {A(a)}, {A(a)}, {B(a), C(a)}) returns {B(a)}, and hence the first conflict γ = {A(a), B(a)} is found and added to Γ. However, consecutively A(a) is removed from C1', leaving it empty, and thus the other conflict is not found and Γ = {{A(a), B(a)}} is returned.

4 Combined MHS-MXP Algorithm

The hybrid algorithm is based on the observation that running MXP multiple times, each time on a slightly modified input, results in a consecutive extension of the overall set of conflicts that are found. We will show that the construction of a HS-tree may serve to guide these iterations in such a way that completeness is achieved.

The combined MHS-MXP algorithm, listed as Algorithm 3, therefore constructs the HS-tree as usual, but in each node n, instead of simply retrieving one model of K ∪ {¬O} ∪ H(n), it calls FindConflicts.

It starts by checking the consistency of K ∪ {¬O}. We use a modified isConsistent function which stores all models in the model cache Mod. This is because it will be important to remember the models, even if they are found inside the calls to FindConflicts.

Algorithm 3 MHS-MXP(K, O, Abd)
Require: knowledge base K, observation O, set of abducibles Abd
Ensure: set SE of all explanations of P = (K, O) of the class Abd
1: Con ← {}    ▷ Set of conflicts
2: Mod ← {}    ▷ Set of cached models
3: if ¬isConsistent(K ∪ {¬O}) then
4:   return "nothing to explain"
5: else if Abd(M) = ∅ where Mod = {M} then
6:   return SE = ∅
7: end if
8: T ← (V = {r}, E = ∅, L = ∅)    ▷ Initialize HS-tree
9: while there is a next node n in T w.r.t. BFS do
10:  if γ ⊈ H(n) for all γ ∈ Con then    ▷ Otherwise n is pruned
11:    ⟨_, Γ⟩ ← FindConflicts(K ∪ {¬O} ∪ H(n), Abd \ H(n))
12:    Con ← Con ∪ {H(n) ∪ γ | γ ∈ Γ}
13:    if Γ ≠ ∅ and |Abd \ ⋃{γ ∈ Γ | |γ| = 1}| > 1 then    ▷ HS-tree is extended under n
14:      L(n) ← Abd(M) \ H(n) for some M ∈ Mod s.t. M ⊨ H(n)
15:      for each σ ∈ L(n) create new σ-successor n_σ of n
16:    end if
17:  end if
18: end while
19: return SE ← {γ ∈ Con | γ is desired}
20: function isConsistent(K)
21:  if there is M ⊨ K then
22:    Mod ← Mod ∪ {M}
23:    return true
24:  else
25:    return false
26:  end if
27: end function

Then the main loop is initiated. For the root node r, FindConflicts is simply called, passing K ∪ {¬O} as the background theory and Abd as the set of constraints (as H(n) = ∅ at this point). The obtained conflicts Γ are stored in Con.

We then verify whether all conflicts were already found or the search needs to continue (line 13). From Theorem 4 we know that if no conflicts were found in Γ, there are no conflicts whatsoever. Also, from Observation 5 we know that all conflicts of size 1 were already found in Γ, which means that further minimal conflicts may only possibly remain if at least two abducibles are not present (as singletons) in Γ. If neither termination condition applies, the HS-tree is extended under r using the model M that was previously found and stored in Mod.

When subsequently any other node n ≠ r is visited by the main loop, the node is immediately pruned if H(n) contains any of the conflicts already stored in Con. If not, we now want to use MXP to explore as much as possible of the part of the space of explanations that extends H(n). Therefore we call FindConflicts, passing K ∪ {¬O} ∪ H(n) as the background theory and Abd \ H(n) as the set of constraints. If we are lucky, we might cut off this branch completely (due to Theorem 4 and Observation 5, line 13). Now, in order to extend the HS-tree under n, we need a model of K ∪ {¬O} ∪ H(n). However, we do not need to run another consistency check here, as by the design of our algorithm such a model is already cached in Mod at this point.

Observation 6.
For each node n of the HS-tree visited by the main loop of MHS-MXP(K, O, Abd), either H(n) ∈ Γ or K ∪ {¬O} ∪ H(n) is consistent and for at least one M ∈ Mod, M ⊨ K ∪ {¬O} ∪ H(n).

This holds because FindConflicts was previously called in the parent node n' of n, and from Observation 5 we know that during that call all possible inconsistent extensions of H(n') of size 1 were added to Γ. Hence if n was not pruned on line 10, H(n) = H(n') ∪ {σ} must be consistent with K ∪ {¬O}. Moreover, since FindConflicts did not prove {σ} to be a conflict for K ∪ {¬O} ∪ H(n'), at some point it must have checked the consistency of K ∪ {¬O} ∪ H(n') together with {σ} or some superset thereof with a positive result, and at that point the respective model was added to Mod.

5 Implementation

We provide a preliminary implementation in Java. It is based on previous implementations of MHS and MXP [3], which are merged into the hybrid algorithm. We rely on the OWL API [4] to call an external DL reasoner as a black box; Pellet, HermiT, and JFact are supported. As the OWL API does not have a dedicated method to extract models from the DL reasoner, we extract models by iterating through the abducibles and checking if they are consistent with the current state of the reasoner.

The current version has several limitations. Only a single atomic ABox observation of the form A(a) is supported. The set of abducibles cannot be specified yet; instead, all possible ABox assertions of the form A(a) are included in the set, for any individual a occurring in the observation O and any atomic concept A occurring in K. Further extension of the implementation and its optimization is ongoing work. The implementation is available online.³

6 Conclusions

MHS-MXP is a new hybrid ABox abduction algorithm for DL. The algorithm builds on the divide-and-conquer strategy employed by the incomplete MXP, and it retains completeness by repetitive runs of MXP while tracking the search space exploration using the HS-tree construction.
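The iteration idea can be replayed on Example 1 with a small Python sketch of FindConflicts and GetConflict over abstract sets; the consistency oracle encodes K ∪ {¬O} of the example, and the deterministic sorted-halves split reproduces the "unlucky" split C1 = {A(a)}, C2 = {B(a), C(a)}. All names and the oracle encoding are ours, for illustration only.

```python
def get_conflict(consistent, background, delta, constraints):
    """GetConflict of Algorithm 2 over abstract sets of constraints."""
    if delta and not consistent(background):
        return set()
    if len(constraints) == 1:
        return set(constraints)
    items = sorted(constraints)
    c1, c2 = set(items[: len(items) // 2]), set(items[len(items) // 2 :])
    d2 = get_conflict(consistent, background | c1, c1, c2)
    d1 = get_conflict(consistent, background | d2, d2, c1)
    return d1 | d2

def find_conflicts(consistent, background, constraints):
    """FindConflicts of Algorithm 2; returns (C', list of conflicts)."""
    if consistent(background | constraints):
        return set(constraints), []
    if len(constraints) == 1:
        return set(), [set(constraints)]
    items = sorted(constraints)
    c1, c2 = set(items[: len(items) // 2]), set(items[len(items) // 2 :])
    c1p, g1 = find_conflicts(consistent, background, c1)
    c2p, g2 = find_conflicts(consistent, background, c2)
    conflicts = g1 + g2
    while not consistent(background | c1p | c2p):
        x = get_conflict(consistent, background | c2p, c2p, c1p)
        conflicts.append(x | get_conflict(consistent, background | x, x, c2p))
        c1p = c1p - {min(x)}  # remove one element of X from C1'
    return c1p | c2p, conflicts

# Example 1: K ∪ {¬D(a)} is inconsistent exactly with {A(a), B(a)}
# and with {A(a), C(a)}, which we encode directly in the oracle
KB = [{"A(a)", "B(a)"}, {"A(a)", "C(a)"}]
consistent = lambda s: not any(c <= s for c in KB)

_, first_run = find_conflicts(consistent, set(), {"A(a)", "B(a)", "C(a)"})
# the unlucky split makes the first run yield only {A(a), B(a)}
_, node_run = find_conflicts(consistent, {"C(a)"}, {"A(a)", "B(a)"})
# re-running under HS-tree node H(n) = {C(a)} recovers the conflict {A(a)},
# i.e. the previously missing explanation H(n) ∪ {A(a)} = {A(a), C(a)}
print(first_run, node_run)
```

Tracing the two calls shows exactly the behaviour exploited by MHS-MXP: a single MXP run loses {A(a), C(a)}, while the re-run with the modified input dictated by the HS-tree node restores it.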
The combined MHS-MXP search strategy may likely have an advantage on a certain class of inputs, especially those featuring (even a smaller number of) explanations of greater size. Exploring such a search space with MHS, which relies on breadth-first search, is rather inefficient. In the future we would like to characterize the inputs for which MHS-MXP is suitable and also to test this characterization in an empirical evaluation. We would also like to further develop our implementation and explore various possible optimization techniques. For instance, more aggressive model caching can be tried: one does not have to call MXP in every node if there is already a suitable model in the cache that can be used to label the node. This, however, invalidates Observation 6, and additional consistency checks will be required. In addition, conflicts that are cached in Con may be used to save consistency checks inside the MXP calls. It would be interesting to implement and empirically test such possible optimization techniques.

³ http://dai.fmph.uniba.sk/~pukancova/mhs-mxp/

Acknowledgments. This work was supported by the Slovak Research and Development Agency under Contract no. APVV-19-0220 (ORBIS), by the Slovak VEGA agency under Contract no. 1/0778/18 (KATO), and by the EU H2020 programme under Contract no. 952215 (TAILOR).

References

1. Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D., Patel-Schneider, P.F. (eds.): The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press (2003)
2. Elsenbroich, C., Kutz, O., Sattler, U.: A case for abductive reasoning over ontologies. In: Proceedings of the OWLED*06 Workshop on OWL: Experiences and Directions, Athens, GA, US. CEUR-WS, vol. 216 (2006)
3. Fabianová, K., Pukancová, J., Homola, M.: Comparing ABox abduction based on minimal hitting set and MergeXplain. In: Proceedings of the 32nd International Workshop on Description Logics, Oslo, Norway, June 18-21, 2019. CEUR-WS, vol. 2373 (2019)
4. Horridge, M., Bechhofer, S.: The OWL API: A Java API for OWL ontologies. Semantic Web 2(1), 11–21 (2011)
5. Junker, U.: QuickXplain: Preferred explanations and relaxations for over-constrained problems. In: Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence, San Jose, California, US. pp. 167–172. AAAI Press (2004)
6. Karp, R.M.: Reducibility among combinatorial problems. In: Proceedings of a Symposium on the Complexity of Computer Computations, March 20–22, 1972, IBM Thomas J. Watson Research Center, Yorktown Heights, New York. pp. 85–103 (1972)
7. Peirce, C.S.: Illustrations of the logic of science VI: Deduction, induction, and hypothesis. Popular Science Monthly 13, 470–482 (1878)
8. Pukancová, J., Homola, M.: Tableau-based ABox abduction for the ALCHO description logic. In: Proceedings of the 30th International Workshop on Description Logics, Montpellier, France. CEUR-WS, vol. 1879 (2017)
9. Pukancová, J., Homola, M.: ABox abduction for description logics: The case of multiple observations. In: Proceedings of the 31st International Workshop on Description Logics, Tempe, Arizona, US. CEUR-WS, vol. 2211 (2018)
10. Reiter, R.: A theory of diagnosis from first principles. Artificial Intelligence 32(1), 57–95 (1987)
11. Shchekotykhin, K.M., Jannach, D., Schmitz, T.: MergeXplain: Fast computation of multiple conflicts for diagnosis. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina. AAAI Press (2015)