Contextual Abduction and its Complexity Issues

Emmanuelle-Anna Dietz Saldanha (1), Steffen Hölldobler (1,2), and Tobias Philipp (1)

(1) International Center for Computational Logic, TU Dresden, Germany
(2) North-Caucasus Federal University, Stavropol, Russian Federation
{dietz,sh}@iccl.tu-dresden.de, tobias.philipp@tu-dresden.de

Abstract. In everyday life, it seems that we prefer some explanations for an observation over others because of our contextual background knowledge. Reiter already tried to specify a mechanism within logic that allows us to avoid explicitly considering all exceptions in order to derive a conclusion w.r.t. the usual case. In a recent paper, a contextual reasoning approach has been presented which takes this contextual background into account and allows us to specify contexts within the logic. This approach is embedded into the Weak Completion Semantics, a logic programming approach that aims at adequately modeling human reasoning tasks. As this approach extends the underlying three-valued Łukasiewicz logic, some formal properties of the Weak Completion Semantics do not hold anymore. In this paper, we investigate the effects of this extension and present some surprising results about the complexity of contextual abduction.

1 Introduction

Consider the following scenario, extended and discussed in [10]: If the brakes are pressed, then the car slows down. If the brakes are not OK, then the car does not slow down. If the car accelerates, then the car does not slow down. If the road is slippery, then the car does not slow down. If the road is icy, then the road is slippery. If the road is downhill, then the car accelerates. If the car has snow chains on the wheels, then the road is not slippery for the car. If the car has snow chains on the wheels and the brakes are pressed, then the car does not accelerate when the road is downhill.

[11] proposed to introduce licenses for inferences when modeling conditionals in human reasoning.
[10] suggested to make these conditionals exception-tolerant in logic programs, by modeling the first conditional in the scenario above as: If the brakes are pressed and nothing abnormal is the case, then the car slows down. Accordingly, we apply this idea to all conditionals in the previous scenario: If the brakes are pressed (press) and nothing abnormal is the case (¬ab1), then the car slows down (slow down). If the brakes are not OK (¬brakes ok), then something abnormal is the case w.r.t. ab1. If the car accelerates (accelerate), then something abnormal is the case w.r.t. ab1. If the road is slippery (slippery), then something abnormal is the case w.r.t. ab1. If the road is icy (icy road) and nothing abnormal is the case (¬ab2), then the road is slippery. If the road is downhill (downhill) and nothing abnormal is the case (¬ab3), then the car accelerates (accelerate). If the car has snow chains (snow chain), then something abnormal is the case w.r.t. ab2. If the car has snow chains (snow chain) and the brakes are pressed (press), then something abnormal is the case w.r.t. ab3.

According to [10], when reasoning in such a scenario, abnormalities should be ignored unless there is some reason to assume them to be true. As already observed and questioned by Reiter [9], the issue is whether it is possible to specify a logic-based mechanism that allows us to avoid explicitly considering all exceptions in order to derive a conclusion w.r.t. the usual case.

In this paper, we aim at modeling this idea within a logic programming approach, the Weak Completion Semantics (WCS) [3], with the help of contextual reasoning [2]. WCS originates from [11], which unfortunately contained some technical mistakes. These were corrected in [4] by using the three-valued Łukasiewicz logic. Since then, WCS has been successfully applied to various human reasoning tasks, summarized in [3].

(Footnote to the title: The authors are mentioned in alphabetical order.)
[1] shows the correspondence between WCS and the Well-Founded Semantics [12], and that the Well-Founded Semantics does not adequately model Byrne's suppression task. As has been shown in [2], modeling the famous Tweety example [9] under the Weak Completion Semantics leads to undesired results, namely that all exception cases have to be stated explicitly as false. [2] proposes to extend the underlying three-valued Łukasiewicz semantics and presents a contextual abductive reasoning approach. The above scenario is similar to Reiter's goal when he discussed the Tweety example, in the sense that it describes exceptions which we do not want to consider explicitly. Consider Pcar, representing the previously described scenario, including abnormalities:

slow down ← press ∧ ¬ab1.    slippery ← icy road ∧ ¬ab2.
ab1 ← slippery.              accelerate ← downhill ∧ ¬ab3.
ab1 ← ¬brakes ok.            ab2 ← snow chain.
ab1 ← accelerate.            ab3 ← snow chain ∧ press.

Suppose that we observe that the brakes are pressed, i.e. O1 = {press}: Under the WCS, we cannot derive from Pcar ∪ O1 that slow down is true, because we do not know whether ab1 is false, which in turn cannot be derived to be false, because we do not know whether the road is slippery, the brakes are OK or the car accelerates. We need to explicitly state that press and brakes ok are true, whereas icy road, downhill and snow chain have to be assumed false, such that we can derive that slow down is true. However, if there is no evidence to assume that ab1, ab2 and ab3 are true, we would like to assume the usual case, i.e. to avoid specifying explicitly that all abnormalities are not true. Let us observe that the car does not slow down, i.e. O2 = {¬slow down}. Given Pcar, we can explain this observation by assuming that the brakes are not pressed, E2 = {press ← ⊥}, that the road is icy, E3 = {icy road ← >}, that the brakes are not OK, E4 = {brakes ok ← ⊥}, or that the road is downhill and the car has no snow chain, E5 = {downhill ← >, snow chain ← ⊥}.
We would like to express that the explanation describing the usual case seems more likely: in this case E2 is the preferred explanation, as usually, when the car does not slow down, the brakes are not pressed. Only if there is some evidence that something abnormal is the case, i.e. if we observe something else that would suggest one of the other explanations, can some other explanation be considered. For instance, if we additionally observe that the road is slippery, we would prefer E3 over the other explanations, and if we additionally observe that the road is downhill, we would prefer E5 over the other explanations.

Let us check whether the closed world assumption (CWA) w.r.t. undefined atoms helps. Consider the program Pcar ∪ {press ← ⊥}: brakes ok is assumed to be false by CWA, which in turn makes ab1 true, and therefore leads us to conclude that slow down is false. However, in the usual case we would like to derive the contrary, namely that slow down is true.

The two examples above show that neither the Weak Completion Semantics nor approaches that apply the closed world assumption w.r.t. undefined atoms can adequately model our intention. The contextual abductive reasoning approach presented in [2] proposes a way of modeling the usual case, i.e. ignoring abnormalities if there is no evidence to assume them to be true, and of expressing a preference among explanations. This approach takes Pereira and Pinto's inspection points [8] in abductive logic programming as its starting point. In this paper we investigate several problems in terms of complexity theory, and contrast these results with properties of abductive reasoning without context.

2 Background

We assume that the reader is familiar with logic and logic programming. The general notation and terminology is based on [6].

2.1 Contextual Logic Programs

Contextual logic programs are logic programs extended by the truth-functional operator ctxt, called context [2].
(Propositional) contextual logic program clauses are expressions of the forms A ← L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) (called rules), A ← > (called facts), and A ← ⊥ (called assumptions), where A is an atom and the Li with 1 ≤ i ≤ m + p are literals. A is called the head, and L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) as well as > and ⊥, standing for true and false, respectively, are called the body of the corresponding clauses. A contextual (logic) program is a set of contextual logic program clauses. atoms(P) denotes the set of all atoms occurring in P. A is defined in P iff P contains a rule or a fact with head A. A is undefined in P iff A is not defined in P. The set of all atoms that are undefined in P is denoted by undef(P). The definition of A in P is defined as def(A, P) = {A ← body | A ← body is a rule or a fact occurring in P}. ¬A is assumed in P iff P contains an assumption with head A and def(A, P) = ∅. An exception clause is a clause of the form abj ← body, 1 ≤ j ≤ m, where m ∈ N. Conceptually, we suggest to use the context connective within contextual programs as follows: the ctxt operator is applied to every literal in the body of an exception clause in P. We will omit the word 'contextual' when we refer to (logic) programs, if not stated otherwise.

(Footnote 1: This notion of falsehood appears to be counterintuitive, but programs will be interpreted under (weak) completion semantics, where the implication sign is replaced by an equivalence sign.)

A level mapping ℓ for a contextual program P is a function which assigns a natural number to each atom. It is extended to literals and expressions of the form ctxt(L) as follows: ℓ(¬A) = ℓ(A) and ℓ(ctxt(L)) = ℓ(L). A contextual program P is acyclic w.r.t. a level mapping ℓ iff for every A ← L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) ∈ P we find that ℓ(A) > ℓ(Li) for all 1 ≤ i ≤ m + p. A contextual program P is acyclic iff it is acyclic w.r.t. some level mapping.
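The acyclicity condition above is purely syntactic and easy to check mechanically. The following is a minimal sketch (not part of the paper's formalism; the clause encoding and function names are our own): clauses are (head, body) pairs, and a body literal is an atom name, ("not", atom), or ("ctxt", literal).

```python
def level(lit, lm):
    """Extend a level mapping lm (a dict on atoms) to literals:
    l(not A) = l(A) and l(ctxt(L)) = l(L)."""
    if isinstance(lit, tuple):       # ("not", x) or ("ctxt", x)
        return level(lit[1], lm)
    return lm[lit]

def is_acyclic(program, lm):
    """Check l(A) > l(L_i) for every clause A <- L_1 & ... in program."""
    return all(lm[head] > level(lit, lm)
               for head, body in program
               for lit in body)

# Example 1 from the text: P = {s <- r, r <- not p & q, q <- False, s <- True}
# (facts and assumptions have empty bodies here)
P = [("s", ["r"]), ("r", [("not", "p"), "q"]), ("q", []), ("s", [])]
print(is_acyclic(P, {"s": 3, "r": 2, "p": 1, "q": 1}))  # True
```

With the level mapping ℓ(s) = 3, ℓ(r) = 2, ℓ(p) = ℓ(q) = 1 from Example 1 the check succeeds; a mapping violating ℓ(s) > ℓ(r) would make it fail.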
Consider the following transformation for a given program P: (1) For each atom A with clauses A ← body1, A ← body2, ..., A ← bodyn ∈ P, where n ≥ 1, replace these clauses by A ← body1 ∨ body2 ∨ ... ∨ bodyn. (2) Replace all occurrences of ← by ↔. The resulting set is the weak completion of P, denoted by wc P [4].

Example 1. Consider P = {s ← r, r ← ¬p ∧ q, q ← ⊥, s ← >}. The first two clauses are rules, the third is an assumption and the fourth is a fact. s and r are defined, whereas p and q are not defined in P, i.e. undef(P) = {p, q}. P is acyclic, as it is acyclic w.r.t. the following level mapping: ℓ(s) = 3, ℓ(r) = 2 and ℓ(p) = ℓ(q) = 1. The weak completion of P is wc P = {s ↔ r ∨ >, r ↔ ¬p ∧ q, q ↔ ⊥}.

2.2 Three-Valued Łukasiewicz Logic Extended by the Context Connective

We consider the three-valued Łukasiewicz logic, whose truth values are >, ⊥ and U, meaning true, false and unknown, respectively. A three-valued interpretation I is a mapping from atoms(P) to the set of truth values {>, ⊥, U}, represented as a pair I = ⟨I^>, I^⊥⟩ of two disjoint sets of atoms, where I^> = {A | I(A) = >} and I^⊥ = {A | I(A) = ⊥}. Atoms which do not occur in I^> ∪ I^⊥ are mapped to U. The truth value of a given formula under I is determined according to the truth tables in Table 1. A three-valued model M of P is a three-valued interpretation such that M(A ← body) = > for each A ← body ∈ P. Let I = ⟨I^>, I^⊥⟩ and J = ⟨J^>, J^⊥⟩ be two interpretations: I ⊆ J iff I^> ⊆ J^> and I^⊥ ⊆ J^⊥. I is a minimal model of P iff for no other model J of P it holds that J ⊆ I. I is the least model of P iff it is the only minimal model of P. Example 2 shows the models of the program in Example 1. Applying the ctxt operator to every literal in the body of an exception clause in P, together with the truth table in Table 1, implements the idea from the introduction that abnormalities should be assumed to be false if there is no evidence to assume otherwise.
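The two-step transformation can be sketched in a few lines of code. This is an illustrative sketch under our own encoding (bodies as strings, one (head, body) pair per clause); it is not the authors' implementation:

```python
def weak_completion(program):
    """Weak completion: group all bodies with the same head into a
    disjunction (step 1), then replace <- by <-> (step 2).
    program: list of (head, body_string) pairs; 'T'/'F' encode True/False.
    Undefined atoms never occur as heads and are left untouched."""
    defs = {}                       # insertion-ordered in Python 3.7+
    for head, body in program:
        defs.setdefault(head, []).append(body)
    return ["{} <-> {}".format(h, " | ".join(bs)) for h, bs in defs.items()]

# Example 1 from the text: P = {s <- r, r <- ~p & q, q <- F, s <- T}
P = [("s", "r"), ("r", "~p & q"), ("q", "F"), ("s", "T")]
for eq in weak_completion(P):
    print(eq)
# s <-> r | T
# r <-> ~p & q
# q <-> F
```

Note that, unlike the classical Clark completion, no equivalence A ↔ ⊥ is added for undefined atoms: the sketch only touches atoms that occur as heads, so p stays unknown.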
Example 2. I1 = ⟨{s}, {q, r}⟩, I2 = ⟨{s, p}, {q, r}⟩ and I3 = ⟨{s, q, r}, {p}⟩ are models of P. I1 is the least model of wc P, and I3 is not a model of wc P.

2.3 The Stenning and van Lambalgen Consequence Operator

We reason w.r.t. the Stenning and van Lambalgen consequence operator ΦP [11, 4]: Let I be an interpretation and P be a program. The application of Φ to I and P, denoted by ΦP(I), is the interpretation J = ⟨J^>, J^⊥⟩ with

J^> = {A | there is A ← body ∈ P such that I(body) = >},
J^⊥ = {A | there is A ← body ∈ P, and for all A ← body ∈ P we find I(body) = ⊥}.

F  ¬F     ∧ | > U ⊥    ∨ | > U ⊥    ← | > U ⊥    ↔ | > U ⊥    L  ctxt(L)
>  ⊥      > | > U ⊥    > | > > >    > | > > >    > | > U ⊥    >  >
U  U      U | U U ⊥    U | > U U    U | U > >    U | U > U    U  ⊥
⊥  >      ⊥ | ⊥ ⊥ ⊥    ⊥ | > U ⊥    ⊥ | ⊥ U >    ⊥ | ⊥ U >    ⊥  ⊥

Table 1. The truth tables for the connectives under the three-valued Łukasiewicz logic and for ctxt(L). L is a literal; >, ⊥ and U denote true, false and unknown, respectively. For the binary connectives, the row gives the value of the left argument and the column the value of the right argument.

The least fixed point of ΦP is denoted by lfp ΦP, if it exists. Acyclic programs admit several nice properties: the ΦP operator is a contraction, has a least fixed point (see Footnote 2) that can be reached by iterating finitely many times starting from any interpretation, and lfp ΦP is a model of P [2]. We define P |=wcs F iff P is acyclic and lfp ΦP |= F. As has been shown in [4], for non-contextual programs the least fixed point of ΦP is identical to the least model of the weak completion of P, which always exists. As Example 3 shows, this does not hold for contextual programs: the weak completion of a contextual program might have more than one minimal model.

Example 3. Let P = {s ← ¬r, r ← ¬p ∧ q, q ← ctxt(¬p)}; then wc P = {s ↔ ¬r, r ↔ ¬p ∧ q, q ↔ ctxt(¬p)}. lfp ΦP = ⟨{s}, {q, r}⟩. ⟨{q, r}, {p, s}⟩ is also a minimal model of wc P. However, a minimal model that differs from the least fixed point of ΦP is not supported, in the sense that if we iterate ΦP starting from this minimal model, we will compute lfp ΦP.
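The definition of ΦP and the ctxt truth table can be made concrete with a small sketch. This is our own illustrative encoding, not the authors' implementation: interpretations are pairs of sets (true atoms, false atoms), atoms outside both are unknown, and ctxt collapses an unknown literal to false as in Table 1.

```python
T, F, U = "true", "false", "unknown"

def lit_value(lit, true_set, false_set):
    """Value of a literal: an atom name, ("not", lit) or ("ctxt", lit)."""
    if isinstance(lit, tuple) and lit[0] == "not":
        v = lit_value(lit[1], true_set, false_set)
        return T if v == F else (F if v == T else U)
    if isinstance(lit, tuple) and lit[0] == "ctxt":
        v = lit_value(lit[1], true_set, false_set)
        return F if v == U else v          # ctxt maps unknown to false
    return T if lit in true_set else (F if lit in false_set else U)

def body_value(body, true_set, false_set):
    """Lukasiewicz conjunction; the empty body (a fact) is true."""
    vals = [lit_value(l, true_set, false_set) for l in body]
    if all(v == T for v in vals): return T
    if any(v == F for v in vals): return F
    return U

def phi(program, true_set, false_set):
    """One application of Phi_P. program: list of (head, body) pairs;
    body is a literal list, [] for a fact, None for an assumption."""
    new_true, new_false = set(), set()
    for head in {h for h, _ in program}:
        vals = [F if body is None else body_value(body, true_set, false_set)
                for h, body in program if h == head]
        if T in vals:
            new_true.add(head)             # some body is true
        elif all(v == F for v in vals):
            new_false.add(head)            # all bodies are false
    return new_true, new_false

def lfp_phi(program, max_iter=1000):
    """Iterate Phi_P from the empty interpretation until stable."""
    t, f = set(), set()
    for _ in range(max_iter):
        nt, nf = phi(program, t, f)
        if (nt, nf) == (t, f):
            return t, f
        t, f = nt, nf
    raise RuntimeError("no fixed point reached")

# Example 3 from the text: P = {s <- not r, r <- not p & q, q <- ctxt(not p)}
P = [("s", [("not", "r")]),
     ("r", [("not", "p"), "q"]),
     ("q", [("ctxt", ("not", "p"))])]
t, f = lfp_phi(P)
# t == {"s"} and f == {"q", "r"}, matching lfp Phi_P = <{s}, {q, r}>
```

Note that the iteration may overwrite earlier values (ΦP is not monotonic for contextual programs, as Section 4 shows), which is why the loop only stops when two consecutive interpretations coincide.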
As lfp ΦP is unique and the only supported minimal model of wc P, we define P |=wcs F if and only if F holds in the least fixed point of ΦP.

(Footnote 2: Note that for acyclic programs, the least fixed point of ΦP is also the unique fixed point of ΦP.)

2.4 Complexity Classes

A decision problem is a problem where the answer is either yes or no. Decision problems correspond naturally to word problems, which deal with the question: does the word w belong to the language L? Here, a word is a finite string over an alphabet Σ, a language is a (possibly infinite) set of words over Σ, and Σ* denotes the set of all words over Σ. Every decision problem can be specified by a language, and every complexity class can be specified by the set of languages it contains. P is the class of decision problems that are solvable in polynomial time, i.e. the set of languages for which membership of a word can be decided in polynomial time. NP is the class of decision problems whose yes answers can be verified in polynomial time. Formally, let R be a binary relation on strings. R is polynomially balanced if (x, y) ∈ R implies |y| ≤ |x|^k for some k ≥ 1. Let L ⊆ Σ* be a language. L ∈ NP iff there is a polynomially decidable and polynomially balanced relation R such that L = {x | (x, y) ∈ R for some y} [7]. That is, NP is the set of languages for which membership of a word can be verified in polynomial time with the help of a polynomially bounded certificate. coNP is the class of complements of NP languages, i.e. coNP = {L | Σ* \ L ∈ NP}. A language L is in the class DP iff there are two languages L1 ∈ NP and L2 ∈ coNP such that L = L1 ∩ L2. PSPACE is the set of languages for which membership of a word can be decided in polynomial space. The four classes are related as P ⊆ NP ⊆ DP ⊆ PSPACE. A language L is polynomial-time reducible to a language L′, denoted L ≤p L′, if there is a polynomial-time computable function f: Σ* → Σ* such that for every x ∈ Σ*, x ∈ L iff f(x) ∈ L′. Reductions are transitive, i.e.
if L1 ≤p L2 and L2 ≤p L3, then L1 ≤p L3 for all languages L1, L2 and L3. Given a complexity class C, a language L is C-hard if L′ ≤p L for all L′ ∈ C. L is C-complete if L is in C and L is C-hard.

3 Abduction in Contextual Logic Programs

A contextual abductive framework is a tuple ⟨P, A, |=wcs⟩, consisting of an acyclic contextual program P, a set of abducibles A ⊆ A_P and the entailment relation |=wcs. The set of abducibles A_P is defined as

{A ← > | A is undefined in P or A is the head of an exception clause in P} ∪
{A ← ⊥ | A is undefined in P and ¬A is not assumed in P (see Footnote 3)}.

Let an observation O be a non-empty set of literals. Abductive reasoning can be characterized as the problem of finding an explanation E ⊆ A such that O can be inferred from P ∪ E by deductive reasoning. Often, explanations are required to be basic and consistent with P. An explanation E is basic if E cannot be explained by other facts or assumptions, i.e. E can only be explained by itself (see Footnote 4). It is easy to see that, given an acyclic logic program P and E ⊆ A, the resulting program P ∪ E is acyclic as well. Further, as the ΦP operator always yields a least fixed point for acyclic programs, P ∪ E is guaranteed to be consistent. We will impose a further restriction on explanations, namely that explanations must not change the context of the observation. Formally, this is defined using the following relation:

Definition 4. The strongly depends on relation w.r.t. P is the smallest transitive relation with the following properties:
1. If A ← L1 ∧ ... ∧ Lm ∧ ctxt(Lm+1) ∧ ... ∧ ctxt(Lm+p) ∈ P, then A strongly depends on Li for all i ∈ {1, ..., m}.
2. If L strongly depends on L′, then ¬L strongly depends on L′.
3. If L strongly depends on L′, then L strongly depends on ¬L′.

Example 5. Given P = {p ← r, p ← ctxt(q)}, p strongly depends on r and ¬r, and ¬p strongly depends on r and ¬r.
p does not strongly depend on q, nor on ctxt(q). We formalize the abductive reasoning process as follows:

Definition 6. Given the contextual abductive framework ⟨P, A, |=wcs⟩, E is a contextual explanation of O given P iff E ⊆ A, P ∪ E |=wcs O, and for all A ← > ∈ E and A ← ⊥ ∈ E there exists an L ∈ O such that L strongly depends on A.

(Footnote 3: That is, it is not the case that A ← ⊥ ∈ P and that this is the only clause in P whose head is A.)
(Footnote 4: If P = {p ← q} and O = {q}, then E = {q ← >} is basic.)

In the following, we abbreviate the contextual abductive framework by referring to the abductive problem AP = ⟨P, A, O⟩. E is an explanation for the abductive problem AP = ⟨P, A, O⟩ iff E is a contextual explanation of O given P. Notice that P ∪ E is consistent, since the resulting program is acyclic and therefore a least fixed point of Φ_{P ∪ E} exists. We demonstrate the formalism in Example 7.

Example 7. Let us consider again Pcar from the introduction and recall that, if we know that 'the brakes are pressed' is true, i.e. press ← >, then under the Weak Completion Semantics we cannot derive from Pcar ∪ {press ← >} that 'slow down' is true, because we do not know whether the road is slippery, the brakes are OK or the car accelerates. Given Pcar, Pcar^ctxt is defined as follows:

slow down ← press ∧ ¬ab1.       slippery ← icy road ∧ ¬ab2.
ab1 ← ctxt(slippery).           accelerate ← downhill ∧ ¬ab3.
ab1 ← ctxt(¬brakes ok).         ab2 ← ctxt(snow chain).
ab1 ← ctxt(accelerate).         ab3 ← ctxt(snow chain) ∧ ctxt(press).

By iterating Φ_{Pcar^ctxt} until the least fixed point is reached, we obtain ⟨∅, {ab1, ab2, ab3}⟩. All abnormality predicates are false, as nothing is known about 'slippery', 'brakes ok', 'accelerate' and 'snow chain'. According to Table 1, 'ctxt(slippery)', 'ctxt(¬brakes ok)', 'ctxt(accelerate)' and 'ctxt(snow chain)' are evaluated to false under ⟨∅, ∅⟩, which in turn makes 'ab1', 'ab2' and 'ab3' false. Assume that we observe O1 = {press}.
A contextual explanation E1 for O1 has to be a subset of the set of abducibles A. A consists of the following facts and assumptions:

press ← >.    brakes ok ← >.    icy road ← >.    downhill ← >.    snow chain ← >.    ab1 ← >.
press ← ⊥.    brakes ok ← ⊥.    icy road ← ⊥.    downhill ← ⊥.    snow chain ← ⊥.    ab2 ← >.
                                                                                     ab3 ← >.

E1 = {press ← >} is the only contextual explanation for O1. The least fixed point of the program together with the corresponding explanation is as follows: lfp Φ_{Pcar^ctxt ∪ E1} = ⟨{slow down, press}, {ab1, ab2, ab3}⟩. Assume that we observe that the car does not slow down, i.e. O2 = {¬slow down}. The only contextual explanation for O2 is E2 = {press ← ⊥}, with lfp Φ_{Pcar^ctxt ∪ E2} = ⟨∅, {slow down, press, ab1, ab2, ab3}⟩, and indeed this model entails '¬slow down'. Note that neither E3 = {icy road ← >} nor E4 = {brakes ok ← ⊥} can be contextual explanations for O2, because the additional condition for contextual explanations, that 'for all A ← > ∈ E and for all A ← ⊥ ∈ E there exists an L ∈ O such that L strongly depends on A', does not hold: '¬slow down' strongly depends on 'press', but it neither strongly depends on 'brakes ok' nor on 'icy road'. Assume that we additionally observe that the road is slippery: O3 = O2 ∪ {slippery}. As 'slippery' strongly depends on 'icy road', E3 is a contextual explanation for O3. lfp Φ_{Pcar^ctxt ∪ E3} = ⟨{icy road, slippery, ab1}, {slow down, ab2, ab3}⟩ entails both '¬slow down' and 'slippery'. E3 is the only contextual explanation for O3.

Example 8. Let us extend the scenario from the introduction as follows: If the engine shaft rotates (rotate E) and nothing abnormal is the case (¬ab4), then the wheels rotate (rotate W). If the clutch is not pressed (¬press clutch), then something abnormal is the case w.r.t. ab4. If the wheels rotate (rotate W) and nothing abnormal is the case (¬ab4), then the engine shaft rotates. If the wheels rotate, then something abnormal is the case w.r.t. ab1.
If the engine shaft rotates, then something abnormal is the case w.r.t. ab1. Pcar^ctxt is extended by the following clauses:

rotate W ← rotate E ∧ ¬ab4.     ab4 ← ctxt(¬press clutch).
rotate E ← rotate W ∧ ¬ab4.     ab1 ← ctxt(rotate W).
ab1 ← ctxt(rotate E).

Even though there is a cycle in this program, by iterating Φ w.r.t. this program the following least fixed point is reached: ⟨∅, {ab1, ab2, ab3, ab4}⟩. Example 8 shows that some programs with cycles can still reach a least fixed point. We conjecture that acyclicity only needs to be required w.r.t. the literals occurring within ctxt.

4 Complexity of Consistency of Contextual Abductive Problems

A contextual abductive problem A = ⟨P, A, O⟩ is consistent if there is an explanation for A. We will now investigate the complexity of deciding consistency. First, we show that computing the least fixed point of ΦP for acyclic contextual programs can be done in polynomial time. From this, we can easily show that consistency is in NP. Hardness follows analogously to [5]. Before showing that ΦP can be computed in polynomial time, observe that several nice properties of ΦP do not hold once we consider contextual programs. For instance, for logic programs that do not contain the context connective ctxt, the least fixed point of ΦP is monotonically increasing if we add facts and assumptions whose head is undefined. As the following example demonstrates, this does not hold for contextual programs.

Example 9. Consider P = {p ← ctxt(r)}, where lfp ΦP = ⟨∅, {p}⟩. However, ⟨∅, {p}⟩ ⊄ ⟨{r, p}, ∅⟩ = lfp Φ_{P ∪ {r ← >}}.

ΦP is non-monotonic even for acyclic programs, as the following example demonstrates:

Example 10. Given P = {p ← ctxt(q)}, I1 = ⟨∅, ∅⟩ ⊆ ⟨∅, {p, q}⟩ = I2, and F = {q ← >}. Then ΦP(I1) = ⟨∅, {p}⟩ ⊆ ⟨{q}, {p}⟩ = Φ_{P ∪ F}(I2). However, lfp ΦP = ⟨∅, {p}⟩ ⊄ ⟨{p, q}, ∅⟩ = lfp Φ_{P ∪ F}.

We can establish a weak form of monotonicity for a logic program P that is acyclic w.r.t.
ℓ: If the atom A is true (false, resp.) after the nth application of ΦP starting from the empty interpretation, and ℓ(A) ≤ n, then A remains true (false, resp.). We define ΦP ↑ 0 = ⟨∅, ∅⟩ and ΦP ↑ (n + 1) = ΦP(ΦP ↑ n) for all n ∈ N.

Lemma 11. Let P be a logic program that is acyclic w.r.t. a level mapping ℓ, and let In = ⟨I^>_n, I^⊥_n⟩ = ΦP ↑ n for all n ∈ N. If n < m, then:

I^>_n ∩ {A | ℓ(A) ≤ n} ⊆ I^>_m and I^⊥_n ∩ {A | ℓ(A) ≤ n} ⊆ I^⊥_m.

Proof. We show the claim by induction on n. The base case is straightforward, as I^>_0 = I^⊥_0 = ∅. For the induction step, assume the claim holds for n:

I^>_n ∩ {A | ℓ(A) ≤ n} ⊆ I^>_m for all m ∈ N with n < m, (1)
I^⊥_n ∩ {A | ℓ(A) ≤ n} ⊆ I^⊥_m for all m ∈ N with n < m. (2)

– To show: I^>_{n+1} ∩ {A | ℓ(A) ≤ n + 1} ⊆ I^>_k for all k ∈ N with n + 1 ≤ k.
1. We show this by contradiction, i.e. assume that (i) A ∈ I^>_{n+1}, (ii) ℓ(A) ≤ n + 1 and (iii) A ∉ I^>_k.
2. By (i), there is A ← body ∈ P with I_n(body) = >.
3. As P is acyclic, ℓ(L) < ℓ(A) ≤ n + 1, hence ℓ(L) ≤ n, for all literals L appearing in body. For all such L the following holds:
(a) if L = B, then B ∈ I^>_n and, by (1) (or trivially if k − 1 = n), B ∈ I^>_{k−1};
(b) if L = ¬B, then B ∈ I^⊥_n and, by (2) (or trivially if k − 1 = n), B ∈ I^⊥_{k−1}.
4. By 3(a) and 3(b), I_{k−1}(body) = >. Accordingly, A ∈ I^>_k, which contradicts (iii).

– To show: I^⊥_{n+1} ∩ {A | ℓ(A) ≤ n + 1} ⊆ I^⊥_k for all k ∈ N with n + 1 ≤ k.
1. Again, we show this by contradiction, i.e. assume that (i) A ∈ I^⊥_{n+1}, (ii) ℓ(A) ≤ n + 1 and (iii) A ∉ I^⊥_k.
2. By (i), there is A ← body ∈ P, and I_n(body) = ⊥ for all A ← body ∈ P. As P is acyclic, ℓ(L) < ℓ(A) ≤ n + 1, hence ℓ(L) ≤ n, for all literals L appearing in such a body. For at least one L in each body the following holds:
(a) if L = B, then B ∈ I^⊥_n and, by (2), B ∈ I^⊥_{k−1};
(b) if L = ¬B, then B ∈ I^>_n and, by (1), B ∈ I^>_{k−1}.
3. By 2(a) and 2(b), I_{k−1}(body) = ⊥ for all A ← body ∈ P. Accordingly, A ∈ I^⊥_k, which contradicts (iii).

Proposition 12.
Computing lfp ΦP can be done in polynomial time for acyclic contextual programs P.

Proof. By [2, Corollary 4], the least fixed point can be obtained by finitely many applications of ΦP, i.e. there is an n such that ΦP ↑ n = ΦP ↑ m for all m > n. We show that n is polynomially bounded in the size of P as follows: The number of atoms appearing in P is polynomially bounded in the length of the string P. Consequently, we can assume a maximum level m such that ℓ(A) ≤ m for all atoms A appearing in P. We now compute ΦP ↑ m, which can be done in polynomial time. By Lemma 11, we know that ΦP is monotonic after m steps. Afterwards, each application of ΦP can only add atoms to I^> or I^⊥, of which there are polynomially many. Hence, after polynomially many iterations we have reached the least fixed point.

Theorem 13. Deciding whether a contextual abductive problem ⟨P, A, O⟩ has an explanation is NP-complete.

Proof. We first show NP-membership, and afterwards NP-hardness. For NP-membership, observe that explanations are polynomially bounded by the abductive framework. It then suffices to show that checking whether a set E is an explanation can be done in polynomial time. This is done as follows:
1. E is a consistent subset of A: this can be checked in polynomial time [5].
2. P ∪ E |=wcs O: computing M = lfp Φ_{P ∪ E} can be done in polynomial time (Proposition 12). Checking whether P ∪ E |=wcs L for all L ∈ O is then done as follows: for each literal L ∈ O, if L = A, check whether A ∈ M^>, and if L = ¬A, check whether A ∈ M^⊥.
3. For all A ← > ∈ E and all A ← ⊥ ∈ E, respectively, there exists an L ∈ O such that L strongly depends on A: the strongly depends on relation for any two literals can be checked in polynomially many steps, and thus the computation can be done in polynomial time.

It remains to show that consistency is NP-hard. As consistency without the context connective is already NP-hard [5], it follows immediately that consistency is NP-hard.
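Step 3 of the membership proof relies on computing the strongly-depends-on relation of Definition 4. This can be sketched as a transitive closure over pairs of signed atoms (an illustrative sketch under our own encoding; only the plain body literals L1, ..., Lm induce dependence, ctxt-literals do not):

```python
def strongly_depends(program):
    """Definition 4 as a set of (literal, literal) pairs, a literal being
    (atom, positive?). Direct edges go from the head (both signs, property 2)
    to every non-ctxt body literal's atom (both signs, property 3); then the
    transitive closure is taken."""
    rel = set()
    for head, body in program:
        for lit in body:
            if isinstance(lit, tuple) and lit[0] == "ctxt":
                continue                      # ctxt(L) induces no dependence
            atom = lit[1] if isinstance(lit, tuple) else lit
            for hs in (True, False):          # head and not head
                for ls in (True, False):      # atom and not atom
                    rel.add(((head, hs), (atom, ls)))
    changed = True
    while changed:                            # transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

# Example 5 from the text: P = {p <- r, p <- ctxt(q)}
P = [("p", ["r"]), ("p", [("ctxt", "q")])]
R = strongly_depends(P)
print((("p", True), ("r", True)) in R)   # True: p strongly depends on r
print((("p", True), ("q", True)) in R)   # False: p does not depend on q
```

Since all four sign combinations are added for every direct edge, the naive closure loop runs at most polynomially many rounds in the number of atoms, which is consistent with the polynomial bound claimed in the proof.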
5 Complexity of Skeptical Reasoning with Abductive Explanations

We are not only interested in deciding whether an observation can be explained, but also in what can be inferred from the possible explanations. We distinguish between skeptical and credulous reasoning: Given an abductive problem AP = ⟨P, A, O⟩, F follows skeptically from AP iff AP is consistent and for all explanations E for AP it holds that P ∪ E |=wcs F. F follows credulously from AP iff there exists an explanation E for AP such that P ∪ E |=wcs F.

Proposition 14. Deciding whether P ∪ E |=wcs F does not hold for all explanations E of a given AP is NP-complete.

Proof. To show that the problem is in NP, we guess an E ⊆ A for AP and check in polynomial time whether E is an explanation for O and whether P ∪ E ⊭wcs F. To show that the problem is NP-hard, we use the result of Theorem 13 and reduce consistency to the problem above, i.e. we reduce the question whether a contextual abductive problem ⟨P, A, O⟩ has an explanation to the question whether there exists an explanation E such that P ∪ E ⊭wcs ¬(A → A) for all A ∈ atoms(P), given AP. The correctness of the construction follows because for every interpretation I it holds that I ⊭ ¬(A → A).

Proposition 15. Let L ⊆ Σ* be a language. Then L is NP-complete iff its complement is coNP-complete.

Proof. See [7, Proposition 10.1].

Proposition 16. Deciding whether P ∪ E |=wcs F holds for all explanations E of a given AP is coNP-complete.

Proof. The complementary problem is NP-complete by Proposition 14. By Proposition 15, the problem above is therefore coNP-complete.

Theorem 17. The question whether F follows skeptically from an abductive problem ⟨P, A, O⟩ is DP-complete.

Proof. We first show that the problem belongs to DP, and afterwards that it is DP-hard. Let AP = ⟨P, A, O⟩ be an abductive problem and F a formula. F follows skeptically from AP iff (i) AP is consistent and (ii)
F follows from all explanations E for AP. By Theorem 13, (i) is in NP, and by Proposition 16, (ii) is in coNP. Hence, deciding whether F follows skeptically from AP is in DP.

For hardness, let Q be a decision problem in DP. By the definition of the class DP, Q consists of two decision problems Q1 and Q2, where Q1 ∈ NP and Q2 ∈ coNP. By Theorem 13, (i) is NP-complete, thus Q1 is polynomially reducible to consistency. By Proposition 16, (ii) is coNP-complete, thus Q2 is polynomially reducible to it. Hence, Q can be polynomially reduced to the combined problem of (i) and (ii). Therefore, deciding whether F follows skeptically from ⟨P, A, O⟩ is DP-hard.

6 Skeptical Reasoning with Minimal Abductive Explanations

Often, one is interested in reasoning w.r.t. minimal explanations, i.e. explanations E for which there is no other contextual explanation E′ ⊂ E for the observation O. If explanations are monotonic, i.e. if any superset of an explanation within the set of abducibles is still an explanation, then checking minimality can be done in polynomial time [5]: it is enough to check that neither E \ {A ← >} nor E \ {A ← ⊥} is an explanation, for all A ← > ∈ E and A ← ⊥ ∈ E. However, we cannot guarantee that explanations are monotonic, even for logic programs without the context operator, as Example 18 shows. Yet, if the set of abducibles is restricted to the facts and assumptions for the undefined atoms in P, i.e. A = {A ← > | A ∈ undef(P)} ∪ {A ← ⊥ | A ∈ undef(P)}, then explanations are indeed monotonic [5].

Example 18. Given P = {p ← q ∧ r, p ← ¬q, q ← ⊥} and the observation O = {p}, E1 = {q ← >, r ← >} is an explanation for O. The subset {q ← >} ⊂ E1 is not an explanation for O, but E2 = ∅ ⊂ {q ← >} is again an explanation for O.

Yet, restricting the set of abducibles does not make explanations monotonic if we consider contextual programs, as Example 19 shows.

Example 19. Given P = {p ← q, p ← ctxt(r)} and the observation O = {p}.
Then E = {q ← >} is a contextual explanation for O, but {q ← >, r ← >} ⊃ E is not, because p does not strongly depend on r. As Example 20 shows, given that E is a contextual explanation for O, we cannot simply iterate over all A ← > ∈ E (A ← ⊥ ∈ E, resp.) and check whether E \ {A ← >} (E \ {A ← ⊥}, resp.) is a contextual explanation for O. If this were possible, we could decide whether E is a minimal contextual explanation in polynomial time [5]. Instead, we might have to check all subsets of E, of which there are 2^|E| many, i.e. this might require exponential time.

Example 20. Given P = {p ← r ∧ ¬t, t ← ctxt(q), t ← ctxt(s), p ← r ∧ q ∧ s} and O = {p}: E1 = {r ← >} and E2 = {r ← >, q ← >, s ← >} are both contextual explanations for O. As E1 ⊂ E2 holds, E1 is a minimal contextual explanation, whereas E2 is not. However, note that none of E2 \ {r ← >}, E2 \ {q ← >} and E2 \ {s ← >} is a contextual explanation for O.

Still, we can show an upper bound for the complexity of deciding minimality:

Theorem 21. The question whether a set E is a minimal explanation for an abductive problem ⟨P, A, O⟩ is in PSPACE.

Proof. Given an abductive problem ⟨P, A, O⟩, we need to check all subsets of E in order to decide whether E is a minimal explanation for O. As we do not need to store a subset of E once we have tested it, deciding whether E is minimal can be done in polynomial space.

7 Conclusion

This paper investigates contextual abductive reasoning, a new approach embedded within the Weak Completion Semantics. We first show, with the help of an example, the limitations of the Weak Completion Semantics when we want to express a preference of the usual case over the exception cases. Furthermore, we cannot syntactically specify contextual knowledge in logic programs as they have been presented so far.
After that, we introduce contextual programs together with contextual abduction and show how the previous limitations can be overcome. This contextual reasoning approach allows us to indicate contextual knowledge and to express preferences among explanations, depending on the context. However, as has already been shown in [2], some advantageous properties which hold for programs under the Weak Completion Semantics do not hold for contextual programs. For instance, the ΦP operator is not necessarily monotonic. Further, if a contextual program contains a cycle, it might not even have a fixed point. In this paper, we first show that even though ΦP is not monotonic, its least fixed point can still be computed in polynomial time for acyclic contextual programs. Thereafter, we show that deciding whether an observation has a contextual explanation is NP-complete. Furthermore, by examining the complexity of skeptical reasoning, we show that deciding whether something follows skeptically from an observation is DP-complete. Unfortunately, explanations might not be monotonic in contextual abduction anymore, a property that holds in abduction for non-contextual programs [5]. We can, however, show that deciding whether a contextual explanation is minimal lies in PSPACE. The approach discussed here brings up a number of interesting questions: At the end of Section 2.3, we have shown that the weak completion of contextual programs might have more than one minimal model. It seems that a possible characterization of the model computed by the ΦP operator is that it is the only minimal model in which all undefined atoms in P are mapped to unknown. Another question, arising from Section 6, is whether skeptical reasoning with minimal explanations is PSPACE-hard. Further, we would like to investigate how a neural network perspective for reasoning with contextual programs could be developed.
Acknowledgements We are grateful for the constructive comments and ideas of the three reviewers. The Graduate Academy at TU Dresden supported Tobias Philipp. References 1. E.-A. Dietz, S. Hölldobler, and C. Wernhard. Modeling the suppression task under weak completion and well-founded semantics. Journal of Applied Non-Classical Logics, 24(1-2):61–85, 2014. 2. E.-A. Dietz Saldanha, S. Hölldobler, and L. M. Pereira. Contextual reasoning: Usually birds can abductively fly. In Proceedings of the 14th International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR 2017), 2017. To appear. 3. S. Hölldobler. Weak completion semantics and its applications in human reasoning. In U. Furbach and C. Schon, editors, Proceedings of the 1st Workshop on Bridging the Gap between Human and Automated Reasoning (Bridging-2015), CEUR Workshop Proceedings, pages 2–16. CEUR-WS.org, 2015. 4. S. Hölldobler and C. D. Kencana Ramli. Logic programs under three-valued Łukasiewicz semantics. In P. M. Hill and D. S. Warren, editors, Proceedings of the 25th International Conference on Logic Programming (ICLP 2009), volume 5649 of LNCS, pages 464–478, Heidelberg, 2009. Springer. 5. S. Hölldobler, T. Philipp, and C. Wernhard. An abductive model for human reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the AAAI 2011 Spring Symposium, AAAI Spring Symposium Series Technical Reports, pages 135–138, Cambridge, MA, 2011. AAAI Press. 6. J. W. Lloyd. Foundations of Logic Programming. Springer-Verlag, New York, NY, USA, 1984. 7. C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. 8. L. M. Pereira and A. M. Pinto. Inspecting side-effects of abduction in logic programs. In M. Balduccini and T. C. Son, editors, Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning: Essays in Honour of Michael Gelfond, volume 6565 of LNAI, pages 148–163. Springer, 2011. 9. R. Reiter. A logic for default reasoning.
Artificial Intelligence, 13:81–132, 1980. 10. K. Stenning, L. Martignon, and A. Varga. Adaptive reasoning: Integrating fast and frugal heuristics with a logic of interpretation. Decision, in press. 11. K. Stenning and M. van Lambalgen. Human Reasoning and Cognitive Science. A Bradford Book. MIT Press, Cambridge, MA, 2008. 12. A. Van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):619–649, 1991.