=Paper=
{{Paper
|id=Vol-1868/p2
|storemode=property
|title=Generalized Answer Set Planning with Incomplete Information
|pdfUrl=https://ceur-ws.org/Vol-1868/p2.pdf
|volume=Vol-1868
|authors=Javier Romero,Torsten Schaub,Tran Cao Son
|dblpUrl=https://dblp.org/rec/conf/lpnmr/0003SS17
}}
==Generalized Answer Set Planning with Incomplete Information==
Generalized Answer Set Planning with Incomplete Information

Javier Romero (1), Torsten Schaub (1,2), Tran Cao Son (3)
(1) University of Potsdam, (2) INRIA Rennes, (3) New Mexico State University, Las Cruces

Abstract. Answer Set Planning was one of the first challenging applications of Answer Set Programming (ASP). However, when putting plans into practice, their execution and monitoring lead to an increased complexity. Foremost, this is due to the inherent incompleteness of information faced in real-world scenarios. This fundamental problem has already been addressed in various ways, as in conformant, assertional, and assumption-based planning, often combined with additional sensing actions. Our objective is to conceive a framework for planning and execution under incomplete information by combining the key concepts of the aforementioned approaches in a uniform ASP-based framework. This paper reports on our first step in this direction. More precisely, we define a series of increasingly complex ASP problems reflecting the successive integration of the key concepts. Our prime concern lies in obtaining a uniform characterization that matches the expressive realm of ASP. This is reflected by developing appropriate approximations that are implemented by means of meta-encoding techniques.

1 Introduction

Being able to operate in a dynamic environment has been one of the critical traits of intelligent agents. This includes the ability to create plans that transform the state of the world from an initial state to a desirable state. Planning is computationally expensive, and it is harder when the initial state is incomplete and sensing actions are needed. Assuming that actions are deterministic and the plan's length is polynomially bounded by the size of the problem, the complexity of the planning problem in this situation is PSPACE-complete.
By not considering sensing actions, the complexity of the planning problem, often referred to as the conformant planning problem, is Σ₂ᴾ-complete; using an incomplete approximation semantics lowers the complexity of the problem to NP-complete (e.g., using the 0-approximation) [6].

Planning using logic programs under the answer set semantics, or answer set planning (ASP, [15]), was one of the earlier applications of answer set programming. It has been extended to deal with incomplete information about the initial state and/or sensing actions. More specifically, ASP has been extended to solve conformant planning [10, 18, 20] and conditional planning problems [19]. It is worth noticing that, except for [10], previous approaches to planning with incomplete information and sensing actions using logic programming have to rely on the approximation semantics. The main reason lies in its low complexity. The experimental results in these works show that ASP is competitive with other approaches to planning with incompleteness and sensing actions.

ASP has been an integral part of the intelligent agent architecture AAA proposed in [5] and further developed in [3, 4], whose key component is the Observe-Think-Act loop. However, the planning problem considered in applications employing this architecture (solved during the Think phase of the loop) does not consider incomplete information and sensing actions.

Alternative approaches for intelligent agents to deal with incomplete information and sensing actions include assumption-based planning (ABP) [9] and continual planning [7]. In ABP, an agent acts as though it knows the truth values of unknown fluents when it needs to create a plan; during the execution of the plan, the agent adjusts and replans when some assumptions turn out to be incorrect.
Continual planning addresses the problem by providing an agent with a skeleton of a plan (an incompletely specified plan that can only be completed when additional information is available) that allows the agent to interleave planning and execution while guaranteeing that the intended goal is achieved. This approach is similar to that of using GOLOG [14] (or its extensions) in robotics [8]. Although there has been an attempt to extend ASP to employ GOLOG in plan generation [17], it does not deal with incomplete information and/or sensing actions; neither has it been integrated into an implementation of the AAA architecture.

Our goal in this paper is to further investigate the use of logic programming in dealing with incomplete information in dynamic domains. Instead of focusing on the encoding of a specific planning problem (e.g., conformant, conditional, assumption-based, or continual planning), we develop an abstract framework that can eventually combine the features present in all types of planning problems. Our long-term goal is to employ this framework in robotic environments and integrate it with appropriate ASP reasoning engines such as ROSoClingo [1].

2 Foundations

A logic program P over a set A of atoms is a set of rules of the form

a1 ; ... ; am ← am+1 , ... , an , ∼an+1 , ... , ∼ao    (1)

where each ai is an atom in A. As usual, ∼ denotes (default) negation. Given a rule r as in (1), we call {a1, ..., am} and {am+1, ..., an, ∼an+1, ..., ∼ao} its head and body and denote them by H(r) and B(r). Moreover, we define B+(r) = {am+1, ..., an} and B−(r) = {an+1, ..., ao}. A rule is called a fact if m = o = 1, normal if m = 1, and an integrity constraint if m = 0. Semantically, logic programs induce sets of stable models, which are distinguished models of the program determined by the stable models semantics; see [13] for details. To ease the use of ASP in practice, several extensions have been developed.
First of all, rules with variables are viewed as shorthands for the set of their ground instances. For a ∈ A, a choice rule r is of the form {a} ← B(r) and stands for the pair of rules a ← B(r), ∼a′ and a′ ← B(r), ∼a, where a′ is a new symbol associated with a. We define H(r) = {a} for a choice rule r. Further language constructs include conditional literals and cardinality constraints [16]. The former are of the form a : b1, ..., bm; the latter can be written as s {d1; ...; dn} t, where a and bi are possibly negated (regular) literals and each dj is a conditional literal; s and t provide optional lower and upper bounds on the number of satisfied literals in the cardinality constraint. We refer to b1, ..., bm as a condition. The practical value of both constructs becomes apparent when used with variables. For instance, a conditional literal like a(X) : b(X) in a rule's antecedent expands to the conjunction of all instances of a(X) for which the corresponding instance of b(X) holds. Similarly, 2 {a(X) : b(X)} 4 is true whenever at least two and at most four instances of a(X) (subject to b(X)) are true. Finally, assignments are of the form t = {d1; ...; dn}. E.g., Y = {a(X) : b(X)} is true if Y equals the number of satisfied instances of a(X) : b(X).

We identify the choice rules, normal rules, and integrity constraints in a logic program P by means of c(P), n(P), and i(P), respectively. We define a program P as normalized if it can be partitioned into c(P), n(P), and i(P), such that B(r) = ∅ for each r ∈ c(P) and n(P) is stratified [2]. For simplicity, in this paper, we restrict ourselves to normalized programs, although the implementation handles arbitrary programs.
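The pair-of-rules reading of a choice rule described above can be spelled out mechanically. A minimal Python sketch (the tuple representation of rules and the primed-atom naming are our assumptions, not part of the paper):

```python
# A rule is a triple (head, positive_body, negated_body).
def translate_choice(a, body):
    """Expand the choice rule {a} <- body into the pair of rules
    a <- body, ~a'  and  a' <- body, ~a, where a' is a fresh symbol."""
    a_prime = a + "'"                      # fresh auxiliary atom associated with a
    return [(a, list(body), [a_prime]),    # a  <- body, ~a'
            (a_prime, list(body), [a])]    # a' <- body, ~a

print(translate_choice("a", []))  # the choice rule {a} <- of program P1 below
```

The auxiliary atom a′ is suppressed in the stable models reported to the user, which is why H(r) = {a} is the only head atom recorded for a choice rule.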
In what follows, we rely on reification for translating a logic program P into a set of facts

R(P) = {choice(a) ← | r ∈ c(P), a ∈ H(r)}                   (2)
     ∪ {rule(r) ← | r ∈ n(P) ∪ i(P)}                        (3)
     ∪ {head(r, a) ← | r ∈ n(P), a ∈ H(r)}                  (4)
     ∪ {body(r, +, a) ← | r ∈ n(P) ∪ i(P), a ∈ B+(r)}       (5)
     ∪ {body(r, −, a) ← | r ∈ n(P) ∪ i(P), a ∈ B−(r)}       (6)
     ∪ {scc(c, a) ← | a ∈ Cc, Cc ∈ SCC(P)}                  (7)

where SCC(P) contains all strongly connected components of the positive atom dependency graph of P. (This information is also obtained by gringo's reified output.)

As an example, consider the program P1 consisting of the rules

{a} ←        q ← a, ∼b        ← ∼q

along with its reification R(P1):

choice(a) ←, rule(1) ←, head(1, q) ←, body(1, +, a) ←, body(1, −, b) ←, rule(2) ←, body(2, −, q) ←

We interpret such reified programs by means of the meta encoding M consisting of the rules in (8) to (24), where predicates t/1 and f/1 represent whether an atom is true or false, respectively; cf. [12]:

1 {t(X); f(X)} 1 ← choice(X)                                                (8)
tbody(R) ← rule(R), t(X) : body(R, +, X); f(Y) : body(R, −, Y)              (9)
fbody(R) ← rule(R), f(X), body(R, +, X)                                     (10)
fbody(R) ← rule(R), t(Y), body(R, −, Y)                                     (11)
t(X) ← head(R, X), tbody(R)                                                 (12)
f(X) ← atom(X), ∼ab(X), fbody(R) : head(R, X)                               (13)
internal(C, R) ← scc(C, X), head(R, X), scc(C, X′), body(R, +, X′)          (14)
wait(C, X, 0) ← scc(C, X), ∼ab(X), fbody(R) : head(R, X), ∼internal(C, R)   (15)
wait(C, R, I) ← internal(C, R), body(R, +, X), wait(C, X, I)                (16)
wait(C, X, I) ← wait(C, X, 0), Z = {scc(C, _)}, I = 1..Z,                   (17)
                wait(C, R, I − 1) : head(R, X), internal(C, R)              (18)
f(X) ← scc(C, X), wait(C, X, Z), Z = {scc(C, _)}                            (19)
normal(R) ← head(R, X)                                                      (20)
← rule(R), ∼normal(R), ∼fbody(R)                                            (21)
atom(X) ← head(R, X)                                                        (22)
atom(X) ← body(R, V, X)                                                     (23)
ab(X) ← choice(X)                                                           (24)

Note that the rules in lines (9) to (13) amount to Fitting's operator [11]. This is extended to the well-founded operator [21] by computing unfounded atoms in lines (14) to (19). Finally, the rules in lines (20) and (21) account for integrity constraints.

Proposition 1. The stable models of a normalized logic program P correspond one-to-one to the stable models of R(P) ∪ M.

For instance, the unique stable model of P1 is {a, q}; it corresponds to the only stable model of R(P1) ∪ M containing t(a), f(b), and t(q). Accordingly, neither P1 ∪ {b ←} nor R(P1) ∪ {rule(3) ←, head(3, b) ←} ∪ M have a stable model.

Our goal in this paper is to develop an abstract framework for planning with incomplete information that builds on a basic ASP encoding. As such, we present this encoding through an example.

Example 1. Consider a cleaning robot in a house with two rooms 1 and 2 connected through a door. Initially, the robot is at 1, the door is open, and both rooms are not clean. The robot can move between rooms if the door is open. It can also sweep the room it is in. The goal is to clean both rooms. Obviously, the robot can start with sweeping room 1, then go to room 2, and sweep room 2.

A logic program encoding this problem would use atoms of the form holds(f, t), holds(neg(f), t), and occ(a, t) where f is a fluent in {clean(1), clean(2), at(1), at(2)}, a is an action in {move(1, 2), move(2, 1), sweep(1), sweep(2)}, t is an integer in the range 0..max, and max is an integer denoting the maximal desirable plan length.
The basic ASP encoding contains rules for

– expressing the preconditions of actions, e.g., to state that the robot can only sweep a room R if it is in the room R:
possible(sweep(R), T) ← holds(at(R), T)

– expressing the direct effects of actions, e.g., to state that if the robot sweeps room R at step T then room R is clean at step T + 1:
holds(clean(R), T + 1) ← occ(sweep(R), T)

– expressing the indirect effects of actions, e.g., to state that the robot is not at one room if it is at another room:
holds(neg(at(R)), T) ← holds(at(R′), T), R ≠ R′

The program also contains rules for expressing inertia, rules for generating action occurrences, rules for goal verification, and rules encoding the initial states. For this problem, the initial state is encoded by

holds(at(1), 0) ←                                              (25)
holds(neg(clean(1)), 0) ←        holds(neg(clean(2)), 0) ←     (26)

For max = 3, the program returns the plan sweep(1), move(1, 2), and sweep(2).

3 ∃∀ASP

In this section, we introduce the notion of an ∃∀ASP problem, similar to that of a quantified Boolean formula in propositional logic, to capture conformant planning problems. More specifically, a conformant planning problem is viewed as a program with two sets of atoms, one characterizing the set of possible plans, and the other the incomplete initial state. This notion allows us to deal with incompleteness while keeping the ASP encoding of the planning problem intact, except for the rules representing the initial state. For example, if we assume that the robot in Example 1 does not know whether any room is clean, we only need to declare that the problem is concerned with the situation where clean(1) and clean(2) are initially unknown. In other words, the notion of an ∃∀ASP problem provides an elaboration-tolerant way of dealing with incompleteness that is different from previous works such as [18, 20], and from those that develop a specific language for this purpose, as in [10].

Definition 1 (∃∀ASP Problem).
An ∃∀ASP problem is a tuple (P, A, X, Y, Z) where P is a logic program over A, and (X, Y, Z) is a partition of A.

For simplicity, we sometimes use ∃X∀Y P to refer to the ∃∀ASP problem (P, A, X, Y, Z), leaving implicit the set A of atoms appearing in P and Z = A − (X ∪ Y). Let Pr be the program in Example 1 without the rules in (26), A be its set of atoms, X = {occ(a, t) | a is an action, t is a step}, Y = {holds(clean(1), 0), holds(clean(2), 0)}, and Z be the other atoms in A which are not in X or Y; then (Pr, A, X, Y, Z), also written ∃X∀Y Pr, represents the conformant planning problem for the cleaning robot in Example 1. We sometimes abuse notation and identify a set X of atoms with the set of facts {a ← | a ∈ X}. Next we define the solutions to ∃∀ASP problems.

Definition 2 (∃∀ASP Solution). A set of atoms X′ ⊆ X is a solution to the ∃∀ASP problem (P, A, X, Y, Z) if for all sets Y′ ⊆ Y, the program X′ ∪ Y′ ∪ P has some stable model.

There is a unique solution to the problem ∃X∀Y Pr of Example 1. It contains the atoms occ(sweep(1), 0), occ(move(1, 2), 1), and occ(sweep(2), 2). In the following, we illustrate our approach with simple examples, where letters a, b, ... stand for actions, letters f, g, ... stand for properties of the domain, u is another property which is typically unknown, and q is some query that has to be achieved.

Example 2. Let P2 be the logic program

f ← u        g ← a, f        h ← a, ∼f        q ← g        q ← h        ← ∼q

The solutions to the ∃{a, u}∀{}P2 problem are {a} and {a, u}, and the unique solution to the ∃{a}∀{u}P2 problem is {a}.

Example 3. Let P3 be the result of extending P2 with the set of rules

h ← b        ← a, b

The solutions to ∃{a, b}∀{u}P3 are both {a} and {b}.

We compute the solutions to ∃X∀Y P with an encoding extending R(P) ∪ M.
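Before turning to the encoding, note that Definition 2 can be checked directly by brute force on small ground programs such as P2. A naive Python sketch (the rule representation and helper names are ours; this enumerates all candidate sets, which is only feasible for toy programs):

```python
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# A rule is (head, pos, neg): head is an atom or None (integrity constraint),
# pos/neg are the sets of atoms in its positive/negative body.
def stable_models(rules, atoms):
    models = []
    for cand in map(set, powerset(atoms)):
        # Gelfond-Lifschitz reduct: drop rules whose negative body meets cand
        reduct = [(h, p) for h, p, n in rules if not (n & cand)]
        # a surviving constraint whose positive body holds kills the candidate
        if any(h is None and p <= cand for h, p in reduct):
            continue
        # least model of the remaining definite rules
        lm, changed = set(), True
        while changed:
            changed = False
            for h, p in reduct:
                if h is not None and p <= lm and h not in lm:
                    lm.add(h)
                    changed = True
        if lm == cand:
            models.append(cand)
    return models

def is_solution(P, X_prime, Y, atoms):
    # Definition 2: X' is a solution iff for every Y' subset of Y the
    # program X' + Y' + P has some stable model
    facts = lambda s: [(a, set(), set()) for a in s]
    return all(stable_models(P + facts(set(X_prime) | set(Yp)), atoms)
               for Yp in map(set, powerset(Y)))

# P2 from Example 2: f <- u, g <- a,f, h <- a,~f, q <- g, q <- h, <- ~q
P2 = [('f', {'u'}, set()), ('g', {'a', 'f'}, set()), ('h', {'a'}, {'f'}),
      ('q', {'g'}, set()), ('q', {'h'}, set()), (None, set(), {'q'})]
A2 = {'a', 'u', 'f', 'g', 'h', 'q'}
print(is_solution(P2, {'a'}, {'u'}, A2))   # True: {a} solves the problem
print(is_solution(P2, set(), {'u'}, A2))   # False: the empty set does not
```

This confirms the claim of Example 2 that {a} is a solution to ∃{a}∀{u}P2: for Y′ = {} the rule h ← a, ∼f fires and yields q, and for Y′ = {u} the rules f ← u and g ← a, f do.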
For simplicity, we assume that atoms in X and Y do not appear in any rule heads in P, i.e., (X ∪ Y) ∩ {a ∈ H(r) | r ∈ c(P) ∪ n(P)} = ∅, although this restriction may easily be lifted. Let F(X, Y) be the set of facts {exists(a) ← | a ∈ X} ∪ {forall(a) ← | a ∈ Y} reifying the atoms in X and Y, and let EF be the set of rules

choice(X) ← exists(X)     (27)
ab(X) ← forall(X)         (28)

The rule in (27) allows for choosing the truth values of the existentially quantified atoms. Without the one in (28) we would derive f(a) for all atoms a in Y, since we are assuming that atoms in Y are excluded from rule heads. Observe that now in the meta encoding M no rule can infer either t(a) or f(a) for any a in Y. Hence, every atom derived using the meta encoding is independent of the truth value of the atoms a ∈ Y. Then, the rules in lines (20) and (21) guarantee that the integrity constraints of the original program are not violated for any truth value of the atoms a ∈ Y. For normalized programs, this implies that the choices made on atoms from X always lead to a stable model, for every possible Y′ ⊆ Y.

In the following, we write E(∃X∀Y P) to denote the encoding R(P) ∪ M ∪ F(X, Y) ∪ EF. The next theorem establishes that it can be used to compute solutions of ∃X∀Y P.

Theorem 1. Let (P, A, X, Y, Z) be an ∃∀ASP problem; then for every stable model S of the logic program E(∃X∀Y P), {a | t(a) ∈ S} ∩ X is a solution to the problem.

Example 4. (Cont. Example 2). The encoding E(∃{a, u}∀{}P2) has two stable models, one containing the atoms t(a), f(u), f(f), f(g), t(h), t(q), and the other t(a), t(u), t(f), t(g), f(h), t(q). They correspond to the solutions {a} and {a, u}.

The next examples show that in general the encoding E(∃X∀Y P) is not complete, i.e., there might be solutions it cannot compute.

Example 5. (Cont. Example 2). The problem ∃{a}∀{u}P2 has one solution {a}, but E(∃{a}∀{u}P2) has no stable model.
This example shows that our encoding cannot reason by cases, which is needed to find the solution: if u is true, using a we obtain q after deriving f and g, and if u is false, we obtain q after deriving h, so in every case we could obtain q.

Example 6. (Cont. Example 3). The encoding E(∃{a, b}∀{u}P3) has one stable model with f(a), t(b), f(g), t(h), t(q), corresponding to the solution {b} of ∃{a, b}∀{u}P3, but there is no stable model for the solution {a}.

All our encodings extend E(∃X∀Y P) and are therefore incomplete. For this reason we say that they are approximations to the problems they aim to solve.

3.1 Extended Solutions to ∃∀ASP

The stable models of the encoding E(∃X∀Y P) may contain atoms t(a) or f(a) for some a ∈ Z. To capture their meaning we define the extended solutions to an ∃∀ASP problem, which may assign atoms a ∈ Z to true or false. To this end, we need some extra notation. Let X, Y ⊆ A be sets of atoms. By X_Y we denote the set X ∩ Y. A partial interpretation over X is a pair ⟨T, F⟩ where T, F ⊆ X and T ∩ F = ∅. We say that ⟨T, F⟩ is total on X if T_X ∪ F_X = X. We define a preorder ⪰ over partial interpretations, where ⟨T, F⟩ is more specific than ⟨T′, F′⟩, written ⟨T, F⟩ ⪰ ⟨T′, F′⟩, if T ⊇ T′ and F ⊇ F′.

Definition 3 (∃∀ASP Extended Solution). A partial interpretation ⟨T, F⟩ over X ∪ Z being total on X is an extended solution to the ∃∀ASP problem (P, A, X, Y, Z) if for all sets Y′ ⊆ Y, the program T_X ∪ Y′ ∪ P ∪ {← ∼a | a ∈ T_Z} ∪ {← a | a ∈ F_Z} has some stable model.

It turns out that, considering the atoms t(a) and f(a) for a ∈ X ∪ Z, every stable model of E(∃X∀Y P) corresponds to an extended solution to the problem. Note that in Theorem 1 we considered only the atoms t(a) for a ∈ X.

Theorem 2. Let (P, A, X, Y, Z) be an ∃∀ASP problem; then for every stable model S of the logic program E(∃X∀Y P), ⟨{a | t(a) ∈ S}_{X∪Z}, {a | f(a) ∈ S}_{X∪Z}⟩ is an extended solution to the problem.
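The specificity preorder is just componentwise set inclusion, so maximal extended solutions are easy to compute once the set of extended solutions is known. A minimal Python sketch (partial interpretations as pairs of sets; the helper names are ours):

```python
def more_specific(s1, s2):
    """<T, F> is more specific than <T', F'> iff T >= T' and F >= F'."""
    (T, F), (T2, F2) = s1, s2
    return T >= T2 and F >= F2

def maximal(solutions):
    """Keep the solutions that no other solution is strictly more specific than."""
    return [s for s in solutions
            if not any(t != s and more_specific(t, s) for t in solutions)]

# the two extended solutions to E{a}A{u}P2 listed in Example 8 below
sols = [({'a', 'q'}, set()), ({'a'}, set())]
print(maximal(sols))  # only <{a, q}, {}> survives as maximal
```

Since ⪰ is only a preorder in general, two distinct extended solutions can be equally specific; the filter above keeps all elements that are not strictly dominated.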
The following propositions establish the relation between solutions and extended solutions to an ∃∀ASP problem.

Proposition 2. Let X′ be a solution to the ∃∀ASP problem (P, A, X, Y, Z); then ⟨X′, X − X′⟩ is an extended solution to the problem.

Proposition 3. Let ⟨T, F⟩ be an extended solution to the ∃∀ASP problem (P, A, X, Y, Z); then T_X is a solution to the problem.

We say that an extended solution ⟨T, F⟩ is maximal if it is maximal among all extended solutions wrt the ordering ⪰. It is easy to see that if ⟨T, F⟩ is a maximal extended solution, then all partial interpretations ⟨T′, F′⟩ with ⟨T′_X, F′_X⟩ = ⟨T_X, F_X⟩ and ⟨T_Z, F_Z⟩ ⪰ ⟨T′_Z, F′_Z⟩ are also extended solutions. In particular, ⟨T_X, F_X⟩ is an extended solution.

Example 7. (Cont. Example 4). The ∃∀ASP problem ∃{a, u}∀{}P2 has two maximal extended solutions, ⟨{a, h, q}, {u, f, g}⟩ and ⟨{a, u, f, g, q}, {h}⟩, which correspond to the two stable models of E(∃{a, u}∀{}P2) described in Example 4. Some non-maximal extended solutions are, for example, ⟨{a}, {u}⟩ and ⟨{a, u, q}, {}⟩.

Example 8. (Cont. Example 5). The extended solutions to ∃{a}∀{u}P2 are ⟨{a, q}, {}⟩ and ⟨{a}, {}⟩; only the first one is maximal. Neither of them is computed by E(∃{a}∀{u}P2), which has no stable models.

Example 9. (Cont. Example 6). The maximal extended solutions to ∃{a, b}∀{u}P3 are ⟨{a, q}, {b}⟩ and ⟨{b, h, q}, {a, g}⟩. The latter corresponds to the unique stable model of the encoding E(∃{a, b}∀{u}P3), described in Example 6.

In general, not all stable models of E(∃X∀Y P) correspond to maximal extended solutions. For example, deleting the constraint ← ∼q from P2, the unique stable model of E(∃{a}∀{u}P2) would contain t(a) and correspond to the extended solution ⟨{a}, {}⟩, which is less specific than the maximal extended solution ⟨{a, q}, {}⟩.
4 Programs with Knowledge

∃∀ASP is sufficiently expressive for representing conformant planning problems, but it cannot be used to represent problems that require sensing actions, i.e., reasoning about beliefs/knowledge. For instance, consider a variation of the problem in Example 1. Assume that initially the door is closed. The door may either have a key lock or a card lock. The robot has a key and a card, but the type of the lock is initially unknown. If the robot tries to open the door with the wrong object, the lock gets stuck so that afterwards there is no way to open the door. The robot can sense the type of the lock with a sensing action check. It is easy to see that this problem has no conformant plan, while there is a conditional plan with two branches after the execution of the sensing action check.

Another approach for solving this problem is to introduce an assertion action assert(open) [7] establishing that the door is open if the type of the lock is known. Then there is a plan that contains the actions check and assert(open) and achieves the goal. Assertions are abstract actions that by definition cannot be executed in the real world. So after sensing the type of the lock, assert(open) cannot be executed, and the robot must replan. Observe that now the type of the lock is already known, so there is again a plan achieving the goal, using the appropriate object to open the door. This example shows the method for continual planning using assertions: first, a plan is found containing a sensing and an assertion action; then, after executing the sensing action in the real world, the system replans, and using the sensed information it finds a plan without assertions that can be executed normally.

To represent such an approach, we introduce the notion of a logic program with knowledge and extend the notion of ∃∀ASP to this new class of programs. If X is a set of atoms, by X^kw we denote the set {a^kw | a ∈ X} of know-whether atoms of X.
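The know-whether convention, together with the head restriction imposed by Definition 4 below, can be mirrored mechanically. A small Python sketch (the "_kw" suffix naming and the rule-tuple representation are our assumptions):

```python
def kw(X):
    # X^kw = { a^kw | a in X }; "_kw" suffix is an assumed naming scheme
    return {a + "_kw" for a in X}

def heads_free_of_kw(rules, A):
    """Check the condition of Definition 4: know-whether atoms must not
    occur in any rule head.  Rules are (head, pos_body, neg_body) triples."""
    heads = {h for h, _pos, _neg in rules if h is not None}
    return not (kw(A) & heads)

# h <- u and k <- ~h^kw: legal, since h_kw occurs only in a rule body
P = [('h', {'u'}, set()), ('k', set(), {'h_kw'})]
print(heads_free_of_kw(P, {'h', 'k', 'u'}))  # True
```

Restricting know-whether atoms to rule bodies is what lets their truth values be supplied externally by the partial interpretation, rather than derived by the program itself.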
Definition 4 (Logic program with knowledge). A logic program with knowledge P over A is a logic program over A ∪ A^kw where A^kw ∩ {a | r ∈ P, a ∈ H(r)} = ∅.

The last condition expresses that the know-whether atoms A^kw are not allowed to appear in the heads of rules. Given a partial interpretation ⟨T, F⟩ over A, we say that an atom a ∈ A is (known to be) true if a ∈ T, it is (known to be) false if a ∈ F, it is known(-whether) if a ∈ T ∪ F, and otherwise it is unknown(-whether). The know-whether atoms of logic programs with knowledge allow expressing that an atom a is known or unknown in a partial interpretation, via a^kw or ∼a^kw, respectively. They are restricted to the bodies of rules to guarantee that their truth value is determined by the partial interpretation they are referring to. We next define the notions of a problem with knowledge and the corresponding solution.

Definition 5 (∃∀ASP Problem with Knowledge). An ∃∀ASP problem with knowledge is a tuple (P, A, X, Y, Z) where P is a logic program with knowledge over A, and (X, Y, Z) is a partition of A.

Definition 6 (∃∀ASP Solution with Knowledge). A partial interpretation ⟨T, F⟩ over X ∪ Z being total on X is a solution to the ∃∀ASP problem with knowledge (P, A, X, Y, Z) if for all sets Y′ ⊆ Y, the program T_X ∪ Y′ ∪ P ∪ {← ∼a | a ∈ T_Z} ∪ {← a | a ∈ F_Z} ∪ {a^kw ← a | a ∈ T} ∪ {a^kw ← ∼a | a ∈ F} has some stable model.

As before, we say that a solution to an ∃∀ASP problem with knowledge is maximal if it is a maximal element of the set of solutions to the problem ordered by ⪰.

Example 10. The problem with knowledge ∃{}∀{u}P4 where P4 is the following logic program

f ←        ← g        h ← u
i ← f^kw, g^kw        j ← ∼g^kw        k ← ∼h^kw

has a unique maximal solution ⟨{f, i, k}, {g, j}⟩. From the rules in the first line we obtain that f is known to be true, g is known to be false, and h is unknown.
From the second line, given that f and g are known, we derive i but not j, and we obtain k since h is unknown.

Given a partial interpretation ⟨T, F⟩, the rules {a^kw ← a | a ∈ T} ∪ {a^kw ← ∼a | a ∈ F} of Definition 6 infer the atoms a^kw for every a ∈ T ∪ F. In Example 10, they derive the atoms f^kw and g^kw, which are used to obtain i and block the derivation of j. Note that simply adding the set of rules {a^kw ← a | a ∈ A} ∪ {a^kw ← ∼a | a ∈ A} to Definition 6 would not work, because then a^kw would always be true for all a ∈ A.

The know-whether atoms a^kw express that either a is known to be true or a is known to be false in a partial interpretation. We can introduce new atoms a^kt and a^kf for expressing that a is known to be true in a partial interpretation, and that a is known to be false in a partial interpretation, respectively. They can be defined with the rules

{a^kt ← a, a^kw | a ∈ A} ∪ {a^kf ← ∼a, a^kw | a ∈ A}    (29)

Example 11. (Cont. Example 10). The problem ∃{}∀{u}P4 ∪ {← h} has no solution, since we cannot prove that h is false for all values of u. However, adding (29) and replacing ← h by ← h^kt, the problem ∃{}∀{u}P4 ∪ (29) ∪ {← h^kt} has some solution again, since now the constraint only requires that h is not known to be true.

The next theorem states the relation between the solutions of a problem with knowledge and the solutions defined in the previous section.

Theorem 3. A partial interpretation ⟨T, F⟩ over X ∪ Z being total on X is a solution to the ∃∀ASP problem with knowledge (P, A, X, Y, Z) if ⟨T, F⟩ is an extended solution to the ∃∀ASP problem (without knowledge) (P′, A ∪ A^kw, X, Y, Z ∪ A^kw) where P′ = P ∪ {a^kw ← a | a ∈ T} ∪ {a^kw ← ∼a | a ∈ F}.

The theorem reduces a problem with knowledge to a problem without it, where the know-whether atoms are treated as regular atoms, and the original program is extended with the rules to derive them appropriately.

Example 12.
The problem with knowledge ∃{}∀{u}P5 where P5 is the following logic program

f ← u        f ← ∼u        sense ← ∼f^kw

has two maximal solutions: ⟨{f}, {sense}⟩ and ⟨{sense}, {}⟩. In the first solution the atom f is true and sense cannot be derived, while in the second solution f is unknown and sense can be obtained.

Usually, approaches to knowledge consider that an atom is known if it holds for all possible values of the unknown atoms. Under this interpretation, in the previous example, only ⟨{f}, {sense}⟩ would be a valid solution, since no matter the value of u, the atom f is always true, so we can conclude that it is known and therefore sense is false. However, for an agent using an approximation, the solution ⟨{sense}, {}⟩ is also reasonable. The agent using our encoding actually does not know the value of f, given that it cannot reason by cases. Then it is reasonable for the agent to derive sense, so that afterwards it senses the value of f. In this respect, our approach to knowledge does not capture the notion of what may be known by an agent using a logic program, but rather it represents what is actually known by that agent using an approximation.

Now let us consider an example with an assertion action.

Example 13. Let P6 be the program that results after replacing in P2 (from Example 2) the rule h ← a, ∼f by these two:

h ← b, ∼f        ← a, b

The problem ∃{a, b}∀{u}P6 has no solution. Intuitively, there is a conditional solution where if u is true, then a is made true to achieve q, while if u is false, then b is made true instead. This simple type of conditional solution can be modeled with an assertion. Let P7 extend P6 with the following rules

f′ ← f^kw        f′ ← sense        q ← c, f′

where f′ is a new atom representing that f is known either because we know-whether it or because we have sensed it, sense senses the fluent f, and c stands for an assertion that derives q when f′ is true.
The problem ∃{c, sense}∀{u}P7 has one maximal solution ⟨{c, sense, f′, q}, {a, b, g, h}⟩. The atoms f and f′ are unknown and true, respectively, representing the case where although f is not known-whether, it is known because it has been sensed. In the real world, the agent would first sense f, and afterwards, given that the assertion c is an abstract action that cannot be executed, the agent would have to search for a new solution using the knowledge gathered by sensing.

For computing the solutions to an ∃∀ASP problem with knowledge (P, A, X, Y, Z), we extend the encoding E(∃X∀Y P) with the set of rules K

{t(a^kw) ← t(a) | a ∈ A} ∪ {t(a^kw) ← f(a) | a ∈ A}
∪ {ab(a^kw) ← t(a) | a ∈ A} ∪ {ab(a^kw) ← f(a) | a ∈ A}

adding appropriately the rules {a^kw ← a | a ∈ T} ∪ {a^kw ← ∼a | a ∈ F} of Definition 6. When an atom a is known to be true or false in a partial interpretation ⟨T, F⟩, the encoding E(∃X∀Y P) derives t(a) or f(a), respectively. Then the rules in the first line infer t(a^kw), and the rules in the second line block the derivation of f(a^kw) by the meta encoding. On the other hand, if a is unknown in ⟨T, F⟩, then the rules in K are not applied, while the rules in the meta encoding infer f(a^kw), given that know-whether atoms do not appear in the heads of rules.

Theorem 4. Let (P, A, X, Y, Z) be an ∃∀ASP problem with knowledge; then for each stable model S of the logic program E(∃X∀Y P) ∪ K, ⟨{a | t(a) ∈ S}_{X∪Z}, {a | f(a) ∈ S}_{X∪Z}⟩ is a solution to the problem.

Example 14. (Cont. Example 10). The encoding E(∃{}∀{u}P4) ∪ K has one stable model, corresponding to the unique maximal solution to the problem.

Example 15. (Cont. Example 11). The encoding E(∃{}∀{u}P4 ∪ {← h}) ∪ K has no stable model, while E(∃{}∀{u}P4 ∪ (29) ∪ {← h^kt}) ∪ K has one, corresponding to the unique maximal solution to the problem.

Example 16. (Cont. Example 12).
The encoding E(∃{}∀{u}P5) ∪ K has only one stable model, corresponding to the maximal solution ⟨{sense}, {}⟩ to the problem. There is no stable model for the solution ⟨{f}, {sense}⟩, given that it would require reasoning by cases.

Example 17. (Cont. Example 13). The encoding E(∃{a, b}∀{u}P6) ∪ K has no stable model, while E(∃{c, sense}∀{u}P7) ∪ K has one stable model corresponding to the unique maximal solution to the problem.

5 Solutions with Assumptions

In general, for dealing with incomplete information and sensing actions, agents may have to devise conditional plans, or plans with assertions. However, as argued in [9], this might not be necessary in many practical situations, where agents can still achieve their goals by making reasonable assumptions, planning, and executing the generated plan with the understanding that replanning might sometimes be needed. Let us reconsider the variation of the problem in Example 1 discussed in Section 4 with a slight change: when the robot tries to open the door with the wrong object, the door does not open, and there is a signal indicating the error. In this domain, the robot could proceed by assuming that the key can be used to open the door, and generating a plan involving this assumption (e.g., a reasonable plan is the sequence: sweep room 1, open the door with the key, move to room 2, and sweep room 2). The assumption is justified because the robot is familiar with this type of arrangement, and it has encountered far more doors with the key lock than with the card lock. When the robot executes the plan, if it realizes that the assumption was incorrect, it can address the issue by replanning.

To represent this approach, we introduce the notion of a solution with assumptions, extending the notion of a solution of an ∃∀ASP problem.

Definition 7 (Solution with Assumptions of ∃∀ASP).
A partial interpretation ⟨T, F⟩ over A being total on X is a solution with assumptions ⟨T_Y, F_Y⟩ to the ∃∀ASP problem (P, A, X, Y, Z) if ⟨T, F⟩ is a solution to the ∃∀ASP problem (P ∪ T_Y ∪ {← a | a ∈ F_Y}, A, X, Y − (T_Y ∪ F_Y), Z ∪ T_Y ∪ F_Y).

A solution with assumptions assumes truth values for a subset of the universally quantified atoms, and reduces the problem to one without assumptions. This is achieved by extending the original program with facts for the atoms that are assumed to be true, and constraints for the atoms that are assumed to be false. Furthermore, the selected atoms are no longer quantified. The implementation only requires allowing choices over the universally quantified atoms. This can be done with the following program A:

{choice(X)} ← forall(X)

Theorem 5. Let (P, A, X, Y, Z) be an ∃∀ASP problem. Then for each stable model S of the logic program E(∃X∀Y P) ∪ A, ⟨{a | t(a) ∈ S}_A, {a | f(a) ∈ S}_A⟩ is a solution with assumptions ⟨{a | t(a) ∈ S}_Y, {a | f(a) ∈ S}_Y⟩ to the problem.

Example 18. (Cont. Example 2). Consider the problem ∃{a}∀{u}P2 of Example 2. As shown in Example 5, the solution ⟨{a, q}, {}⟩ cannot be computed by the program E(∃{a}∀{u}P2), but using E(∃{a}∀{u}P2) ∪ A two solutions with assumptions can be computed. The first is ⟨{a, f, g, q, u}, {h}⟩ with assumptions ⟨{u}, {}⟩, and the second is ⟨{a, h, q}, {f, g, u}⟩ with assumptions ⟨{}, {u}⟩.

Example 19. (Cont. Example 13). Consider the problem ∃{a, b}∀{u}P6 of Example 13, which has no solutions. There are two solutions with assumptions that can be computed with E(∃{a, b}∀{u}P6) ∪ A. The first is ⟨{a, f, g, q, u}, {b, h}⟩ with assumptions ⟨{u}, {}⟩, and the second is ⟨{b, h, q}, {a, f, g, u}⟩ with assumptions ⟨{}, {u}⟩.
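The reduction in Definition 7 is itself purely syntactic and can be sketched as follows. The representation of the program as a list of rule strings, and the sample rule used below, are assumptions for illustration; the actual problems P2 and P6 from the examples are not reproduced here.

```python
def reduce_with_assumptions(P, A, X, Y, Z, T_Y, F_Y):
    """Reduce (P, A, X, Y, Z) with assumptions <T_Y, F_Y> to a plain
    EA-ASP problem (Definition 7): assumed-true atoms become facts,
    assumed-false atoms become integrity constraints, and all assumed
    atoms move from the universal set Y to the fixed set Z."""
    assumed = set(T_Y) | set(F_Y)
    P2 = list(P) + [f"{a}." for a in T_Y] + [f":- {a}." for a in F_Y]
    return P2, A, X, set(Y) - assumed, set(Z) | assumed

# Assuming u to be true over a hypothetical one-rule program:
P2, A2, X2, Y2, Z2 = reduce_with_assumptions(
    ["f :- u."], {"a", "f", "u"}, {"a"}, {"u"}, set(), ["u"], [])
print(P2, Y2, Z2)
```

Here the fact `u.` is appended to the program, and u moves out of the universal set Y into the fixed set Z, so the resulting problem has no universally quantified atoms left and can be solved as before.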
Among the solutions with assumptions computed by E(∃X∀Y P) ∪ A, we can select those with a minimal number of assumptions simply by adding a minimize statement:

#minimize{1, X : choice(X), forall(X)}

Finally, the previous notions are seamlessly applied to logic programs with knowledge.

Definition 8 (Solution with Assumptions to ∃∀ASP with Knowledge). A partial interpretation ⟨T, F⟩ over A being total on X is a solution with assumptions ⟨T_Y, F_Y⟩ to the ∃∀ASP problem with knowledge (P, A, X, Y, Z) if ⟨T, F⟩ is a solution to the ∃∀ASP problem with knowledge (P ∪ T_Y ∪ {← a | a ∈ F_Y}, A, X, Y − (T_Y ∪ F_Y), Z ∪ T_Y ∪ F_Y).

Theorem 6. Let (P, A, X, Y, Z) be an ∃∀ASP problem with knowledge. Then for each stable model S of the logic program E(∃X∀Y P) ∪ K ∪ A, ⟨{a | t(a) ∈ S}_A, {a | f(a) ∈ S}_A⟩ is a solution with assumptions ⟨{a | t(a) ∈ S}_Y, {a | f(a) ∈ S}_Y⟩ to the problem.

6 Discussion

Starting from the basic problem of ∃∀ASP, we provided a series of characterizations of increasing complexity in order to capture concepts for planning with incomplete information. We addressed the various problems by means of a metaencoding that allows us to approximate the actual solutions. This is driven by the desire to stay within the realm of ASP's computational complexity. The current implementation deals with logic programs whose decision problem is NP-complete and whose rule heads are free of quantified atoms. The latter are captured via the predicates exists/1 and forall/1. The predicate kw/1 is used to represent know-whether atoms. The actual metaencoding generalizes the one described previously, and works for full clingo programs (and not just normalized ones). Alternatively, a metaencoding based on [22] can be used. The input program is reified by clingo, and passed along with the metaencodings once more to clingo for solving.
The current implementation is available at https://github.com/javier-romero/oclingo/tree/master/approximation (but will move to https://github.com/potassco). The approximation can be seen as an attempt to generalize [20] from action theories to ASP; it is closely related to the work on FO(ID) of [22]. The knowledge part generalizes the approach of [7] from the action language SAS to ASP. The idea of planning with assumptions comes from [9], which uses an action language with conditional effects and proposes a translation to classical planning.

Acknowledgement. This work was partially funded by DFG grant SCHA 550/9.

References

1. B. Andres, D. Rajaratnam, O. Sabuncu, and T. Schaub. Integrating ASP into ROS for reasoning in robots. In LPNMR'15, pages 69–82. Springer, 2015.
2. K. Apt, H. Blair, and A. Walker. Towards a theory of declarative knowledge. In Foundations of Deductive Databases and Logic Programming, pages 89–148. Morgan Kaufmann, 1987.
3. M. Balduccini and M. Gelfond. Diagnostic reasoning with A-Prolog. Theory and Practice of Logic Programming, 3(4-5):425–461, 2003.
4. M. Balduccini and M. Gelfond. The autonomous agent architecture: An overview. In Architectures for Intelligent Theory-Based Agents, pages 1–6. AAAI Press, 2008.
5. C. Baral and M. Gelfond. Reasoning agents in dynamic domains. In Logic-Based Artificial Intelligence, pages 257–279. Kluwer, 2000.
6. C. Baral, V. Kreinovich, and R. Trejo. Computational complexity of planning and approximate planning in the presence of incompleteness. Artificial Intelligence, 122:241–267, 2000.
7. M. Brenner and B. Nebel. Continual planning and acting in dynamic multiagent environments. Autonomous Agents and Multi-Agent Systems, 19(3):297–331, 2009.
8. W. Burgard, A. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun. The interactive museum tour-guide robot. In AAAI'98, pages 11–18. AAAI Press, 1998.
9. S. Davis-Mendelow, J. Baier, and S. McIlraith.
Assumption-based planning: Generating plans and explanations under incomplete knowledge. In AAAI'13, pages 209–216. AAAI Press, 2013.
10. T. Eiter, W. Faber, N. Leone, G. Pfeifer, and A. Polleres. A logic programming approach to knowledge-state planning. Artificial Intelligence, 144(1-2):157–211, 2003.
11. M. Fitting. A Kripke-Kleene semantics for logic programs. Journal of Logic Programming, 2(4):295–312, 1985.
12. M. Gebser, R. Kaminski, and T. Schaub. Complex optimization in answer set programming. Theory and Practice of Logic Programming, 11(4-5):821–839, 2011.
13. M. Gelfond and V. Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing, 9:365–385, 1991.
14. H. Levesque, R. Reiter, Y. Lespérance, F. Lin, and R. Scherl. GOLOG: A logic programming language for dynamic domains. Journal of Logic Programming, 31(1-3):59–83, 1997.
15. V. Lifschitz. Answer set programming and plan generation. Artificial Intelligence, 138(1-2):39–54, 2002.
16. P. Simons, I. Niemelä, and T. Soininen. Extending and implementing the stable model semantics. Artificial Intelligence, 138(1-2):181–234, 2002.
17. T. Son, C. Baral, T. Nam, and S. McIlraith. Domain-dependent knowledge in answer set planning. ACM Transactions on Computational Logic, 7(4):613–657, 2006.
18. T. Son, P. Tu, and C. Baral. Planning with sensing actions and incomplete information using logic programming. In LPNMR'04, pages 261–274. Springer, 2004.
19. P. Tu, T. Son, and C. Baral. Reasoning and planning with sensing actions, incomplete information, and static causal laws using answer set programming. Theory and Practice of Logic Programming, 7:377–450, 2007.
20. P. Tu, T. Son, M. Gelfond, and A. Morales. Approximation of action theories and its application to conformant planning. Artificial Intelligence, 175(1):79–119, 2011.
21. A. Van Gelder, K. Ross, and J. Schlipf. The well-founded semantics for general logic programs. Journal of the ACM, 38(3):620–650, 1991.
22. H.
Vlaeminck, J. Vennekens, M. Denecker, and M. Bruynooghe. An approximative inference method for solving ∃∀SO satisfiability problems. Journal of Artificial Intelligence Research, 45:79–124, 2012.