=Paper=
{{Paper
|id=Vol-2659/schueller
|storemode=property
|title=A new OWLAPI interface for HEX-programs applied to explaining contingencies in production planning
|pdfUrl=https://ceur-ws.org/Vol-2659/schueller.pdf
|volume=Vol-2659
|authors=Peter Schüller
|dblpUrl=https://dblp.org/rec/conf/ecai/Schuller20
}}
==A new OWLAPI interface for HEX-programs applied to explaining contingencies in production planning==
A new OWLAPI Interface for HEX-Programs applied to Explaining Contingencies in Production Planning

Peter Schüller (Technische Universität Wien, Institute of Logic and Computation, Knowledge-Based Systems Group, Austria, peter.schueller@tuwien.ac.at)

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract. Description Logics (DLs) and Logic Programs are formalisms for describing knowledge about the world. In industrial settings, DLs are often used to describe factories and inventories, and logic programs could be useful for planning production, if they can be integrated with DLs. We here introduce a variation of the DL-Program formalism for integrating description logics with Answer Set Programming (ASP). We present a new implementation based on OWLAPI and a use case in Explainable AI for production planning. Our integration is based on the HEX formalism and the Hexlite solver. Different from previous work, we extend the integration interface to permit an arbitrary number of parallel modifications of the OWL ontology to be considered for the computation of a single answer set. This is useful to model changes of the planning domain over time, where each time step is a separate modification of the ontology. We present a case study towards the EU recommendations on 'Human-centered, trustworthy AI systems'; concretely, we present a production planning framework that can be challenged by users with alternative scenarios. With our case study we argue that Explainability in the context of planning is not about the rationale behind decisions (those are defined by formal semantics and by the program); rather, Explainability should mainly focus on the possibility for human operators to challenge the system and request alternative solutions under alternative assumptions. We provide examples to show that Logic Programming requires only very small modifications to provide insights by means of the following what-if questions: (i) which machines could be deactivated without preventing the production goal from being reached, and (ii) which product could be skipped to reach a reduced production goal in a shorter time limit.

1 Introduction

Description Logics (DLs) are useful tools for describing knowledge about objects in the world, and they are in practical usage in industrial settings, where they are the basis for describing factory inventories and production processes. These descriptions are static and usually considered timeless.

In production planning, processes in a factory change products and materials. For planning it is useful to consider changes in the assertional knowledge of an ontology. One method for planning is the usage of Answer Set Programming (ASP) [4], a non-monotonic knowledge representation formalism based on rules, with solver tools based on SAT solving techniques. In ASP planning, time is represented as a sequence of steps, and in each step different truth values can hold in the world.

In practice, representing planning in DLs or representing ontological knowledge in ASP is usually impractical (sometimes it is possible, depending on complexity results). Therefore, it is natural to integrate DLs and ASP in a way that both formalisms are used in parallel within one integrated reasoning method. HEX is an ASP formalism for integrating ASP with external computations of all kinds, and one existing method for integrating ASP with DLs, DL-programs [8], actually uses HEX as an underlying implementation formalism.

As a running example, consider an ontology about robots that can manipulate and paint boxes: the TBox is shown in Figure 1, the ABox of a concrete factory is shown in Figure 2. The goal of our production planning is to find a sequence of actions that paints all boxes, where only robots in the class PRobot can paint and only closed boxes afford painting. (Affordances are a popular concept in robotics regarding prototypical actions: if an object affords a certain action, it provides the possibility to be affected by that action. An affordance is a property of an object, not an agent, and some agents might not be capable of performing the action due to geometric constraints or missing actuators.) Finally, robots in the class MRobot can open and close boxes.

  ClosedBox ⊑ Box                  closed boxes are boxes
  OpenBox ⊑ Box                    open boxes are boxes
  ClosedBox ⊓ OpenBox ⊑ ⊥          no box is both open and closed
  MRobot ⊑ Robot                   there are manipulation robots ...
  PRobot ⊑ Robot                   ... and painting robots
  AffordsClosing ⊑ Affordance
  AffordsOpening ⊑ Affordance
  AffordsPainting ⊑ Affordance
  OpenBox ⊑ AffordsClosing         open boxes can be closed
  ClosedBox ⊑ AffordsOpening       closed boxes can be opened
  ClosedBox ⊑ AffordsPainting      closed boxes can be painted

Figure 1. Production Planning Terminology (OWL TBox).

  r1 : MRobot    r2 : MRobot    r2 : PRobot
  b1 : OpenBox   b2 : OpenBox   b3 : ClosedBox

Figure 2. Production Planning Assertions (OWL ABox): two robots r1 and r2 and boxes b1, b2, and b3.

Consider now the following information we would like to gain by querying this planning domain: (I) is there a plan that finishes the production within n1 steps, using as few actions as possible, and what is that plan; (II) if we had n2 > n1 steps available, which robots could we omit from the domain and still finish the job in time; (III) if we have only n3 < n1 steps available, which product could we omit from production to finish the other products in time.

DL-programs [8] are well-suited to express planning domains, including the above explanatory queries to the planning domain. DL-programs permit queries to ontologies based on a modification of an ontology that depends on the answer set of the solver. Therefore, DL-programs provide a bidirectional integration between ASP and DLs. Unfortunately, the concrete syntactic realization of DL-programs makes it cumbersome to encode production planning problems, because it is necessary to represent each time step using distinct logical predicates. Moreover, there is no current working implementation of DL-programs available, as the licensing and the API of the underlying DL reasoner have changed.

In the following, we use the idea of DL-programs but we develop and present the problem directly within the HEX formalism. This removes one layer of abstraction and simplifies presentation.

ASP is often praised for its Elaboration Tolerance [18], which is the property of creating modular programs with a separation of concerns among program parts, and the possibility to modify each concern separately and easily without a need to modify other parts of the program. ASP also provides multiple answers and alternative solutions. This makes AI systems based on ASP suitable for addressing several concerns of the EU recommendation on 'human-centered, trustworthy AI systems', in particular

• Human Agency, concretely to 'self-assess and challenge the system'; and
• Technical Robustness, concretely to provide 'Fallback Plans'.

We next address the above challenges and make the following contributions.

• We introduce a new way of interfacing with DLs from HEX where it is easily possible to describe multiple ontology modifications in the extension of one predicate. This facilitates the encoding of domains where multiple alternative worlds must be considered and we do not know beforehand how many such alternative worlds exist. Planning is such a domain.
• We implement this integration using the OWLAPI interface and the HEXLITE solver [21], which provides a free and universal API for OWL DLs to be used together with answer set programs. HEXLITE is based on CLINGO [12] and therefore uses the state of the art in ASP solving. OWLAPI interfaces with many existing DL reasoners.
• We present a use case of production planning where action preconditions are defined by the ontology, action effects are realized as ontology modifications in the HEX program, and the initial state of the world is taken from the ontology, including in particular all individual names, which are not known to the HEX program prior to solving.
• We formulate our thoughts about how Explainability can be realized in Logic Programs and argue that the classical notion of Explainability does not apply to Logic Programming; instead, the possibility to challenge the system with alternative scenarios is an appropriate way to make a Logic Program more human-centric.
• We discuss a possible usage of the production planning encoding for obtaining explanations about feasible and necessary changes in the production process to reach given production goals. Such reasoning can be useful for optimizing the factory and for hardening it against equipment malfunctions.

In the following, Section 2 provides preliminaries of HEX-programs and Description Logics, Section 3 describes the new framework for integrating OWLAPI in HEX and comments on the implementation, Section 4 uses the framework for demonstrating how to explain contingencies in production planning, Section 5 discusses Explainability, and Section 6 concludes with a brief discussion.

2 Preliminaries

2.1 HEX: Answer Set Programming with External Computations

We give syntax and semantics of the HEX formalism [5], which generalizes logic programs under answer set semantics [13] with external computations. A comprehensive introduction to HEX is given in [10]. HEXLITE [21] (github.com/hexhex/hexlite, www.ai4eu.eu/resource/hexlite) is a solver for the HEX formalism which is based on Clingo [12] and enables the implementation of external computations in Python.

HEX Syntax. Let C, X, and G be mutually disjoint sets whose elements are called constant names, variable names, and external predicate names, respectively. Usually, elements from X and C are denoted with first letter in upper case and lower case, respectively, while elements from G are prefixed with '&'. Elements from C ∪ X are called terms. An (ordinary) atom is a tuple p(Y1, ..., Yn) where p ∈ C is a predicate name, Y1, ..., Yn are terms, and n ≥ 0 is the arity of the atom. An atom is ground if all its terms are constants. An external atom is of the form &g[Y1, ..., Yn](X1, ..., Xm), where Y1, ..., Yn and X1, ..., Xm are two lists of terms, called input and output lists, respectively, and &g ∈ G is an external predicate name. We assume that input and output lists have fixed lengths in(&g) = n and out(&g) = m. With each term Yi in the input list, 1 ≤ i ≤ n, we associate a type ti ∈ {cons} ∪ ℕ. We call the term constant input iff ti = cons; otherwise we call it predicate input of arity ti.

A rule r is of the form

  α1 ∨ ... ∨ αk ← β1, ..., βn, not βn+1, ..., not βm

with m, k ≥ 0, where all αi are atoms and all βj are either atoms or external atoms. We let H(r) = {α1, ..., αk} and B(r) = B+(r) ∪ B−(r), where B+(r) = {β1, ..., βn} and B−(r) = {βn+1, ..., βm}. A HEX-program is a finite set P of rules.

HEX Semantics. Given a rule r, the grounding grnd(r) of r is obtained by systematically replacing all variables with constants from C. Given a HEX-program P, the Herbrand base HB_P of P is the set of all possible ground versions of atoms and external atoms occurring in P, obtained by replacing variables with constants from C. The grounding grnd(P) of P is given by grnd(P) = ⋃_{r ∈ P} grnd(r). Importantly, the set of constants C that is used for grounding a program is only partially given by the program itself: external computations may introduce new constants; for example, external computations can import individual names from an ontology.

The Extensional Semantics [9, 5] of external atoms is defined as follows: we associate an (n+1)-ary extensional evaluation function F_&g with every external predicate name &g ∈ G. Given an interpretation I ⊆ HB_P and a ground input tuple (y1, ..., yn), F_&g(I, y1, ..., yn) returns a set of ground output tuples (x1, ..., xm). The external computation is restricted to depend (a) for constant inputs, i.e., ti = cons, only on the constant value of yi; and (b) for predicate inputs, i.e., ti ∈ ℕ, only on the extension of predicate yi of arity ti in I (formally, this is the set {yi(v1, ..., v_ti) ∈ I}).

An interpretation I ⊆ HB_P is a model of an ordinary atom a, denoted I ⊨ a, if a ∈ I. I is a model of a ground external atom a = &g[y1, ..., yn](x1, ..., xm) if (x1, ..., xm) ∈ F_&g(I, y1, ..., yn). Given a ground rule r, I ⊨ H(r) if I ⊨ a for some a ∈ H(r); I ⊨ B(r) if I ⊨ a for all a ∈ B+(r) and I ⊭ a for all a ∈ B−(r); and I ⊨ r if I ⊨ H(r) whenever I ⊨ B(r). Given a HEX-program P, I ⊨ P if I ⊨ r for all r ∈ grnd(P). The FLP-reduct [11] of P with respect to I ⊆ HB_P, denoted fP^I, is the set of all r ∈ grnd(P) such that I ⊨ B(r); I ⊆ HB_P is an answer set of P if I is a minimal model of fP^I, and we denote by AS(P) the set of all answer sets of P.

An important shortcut notation that we use in this work is the guess in the head of a rule: {α1; ...; αk} ← β1, ..., βn, not βn+1, ..., not βm with m, k ≥ 0 is rewritten into a set of rules such that, given that the rule body is satisfied, candidate answer sets are generated which contain all subsets of {α1, ..., αk}.

2.2 Description Logics and OWLAPI

Description Logics (DLs) [1] are logics of limited expressivity with decidable reasoning. In particular, a DL usually permits only unary and binary predicates, called concepts and roles, respectively. DL knowledge is usually organized in a TBox (terminological axioms) and an ABox (assertional axioms).

For the purposes of this paper it is sufficient to introduce concept inclusion axioms, concept intersection, the empty concept, and concept assertions. Concept inclusion axioms of form C ⊑ D denote that ∀x: C(x) → D(x), i.e., every instance in concept C is also in concept D. Concept intersection C ⊓ D is a concept expression that contains the intersection of two concepts, i.e., all instances that are both in C and in D. The empty concept, ⊥, denotes an impossible class that is always empty. A concept membership axiom of form i : C denotes that the individual/constant i is in concept C.

Example 1. In our running example TBox in Figure 1, the meaning of each axiom is written next to the axiom, except for the affordances, which can be read as 'AffordsClosing is an Affordance', etc. The ABox in Figure 2 introduces individual constants into the theory and assigns concept memberships to them.

Answer Set Programs and DLs have been combined in Description Logic Programs (DL-Programs) [8]. This formalism paved the way for the development of HEX-programs. In brief, DL-programs are Answer Set Programs with special DL-atoms of form

  DL[C1 op1 p1, ..., Cm opm pm; Q](t)

which evaluate the DL query Q on an ontology modified by operations of form C op p, and evaluate to true for all tuples t in the result of query Q. The modifications can add (op = ⊎) or subtract (op = ∪-) the extension of predicate p from the assertions of concept C. Note that one DL-atom always takes the whole extension of all predicates p1, ..., pm to modify the ontology.

3 Parameterized Multi-Modification Integration of OWLAPI in HEX

Our novel integration interface has two major components: a representation for ontology modifications in ASP atoms (Section 3.1) and external atoms that query the ontology based on these modifications (Section 3.2).

3.1 Ontology Modifications

Ontology modifications are represented as atoms in ASP as follows.

Definition 1. A modification atom is of the form

  δ(τ, µ)     (1)

with δ some predicate name, τ ∈ ℕ, and µ one of the following terms:

  addc(C, I)   addop(OP, I1, I2)   adddp(DP, I, D)     (2)
  delc(C, I)   delop(OP, I1, I2)   deldp(DP, I, D)     (3)

where C, OP, and DP are OWL Class, Object Property, and Data Property IRIs, respectively, I, I1, and I2 are OWL Individual IRIs, and D is either an integer or a string parseable as an OWL Data value. Intuitively, the modifications in (2) add ABox assertions and those in (3) remove ABox assertions.

3.2 External Atoms

External atoms for querying the ontology are of the following form, where ω is a specifier for the ontology and δ/τ are as above.

  &dlConsistent[ω, δ, τ]()       for consistency of the ontology   (4)
  &dlC[ω, δ, τ, C](I)            for querying class instances      (5)
  &dlDP[ω, δ, τ, DP](I, D)       for querying data properties      (6)
  &dlOP[ω, δ, τ, OP](I1, I2)     for querying object properties    (7)

Intuitively, these external atoms query the ontology ω after it has been modified using all modifications in the extension of δ in the current answer set candidate I, selected by τ. Formally, the above external atoms perform reasoning on the ontology modified by

  {µ | δ(τ, µ) ∈ I}.

Example 2. In our running example, the external atom

  &dlC[onto, delta, T, "ex:AffordsOpening"](B)

with T=0 and an empty extension of delta is true for B=b3 and false for B=b1, because the ontology contains the assertions b3 : ClosedBox and b1 : OpenBox, from which the reasoner infers that b3 affords opening (via axiom ClosedBox ⊑ AffordsOpening) but b1 does not. The HEX-program defines for each time step T a modification of the ontology that corresponds to the planning state, which is represented in predicate s(Box, State, T). Therefore, if at T=2 some action has caused a change in the state such that s(b1,open,2) and s(b2,open,2) are true, then the above external atom, with T=2, is false for both B=b1 and B=b2.
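The selection {µ | δ(τ, µ) ∈ I} can be illustrated with a small Python sketch. This is our own illustration, not the plugin's implementation: a toy ABox is represented as a set of (class, individual) pairs, and only the addc/delc modifications from Definition 1 are handled.

```python
# Sketch of how modification atoms delta(tau, mu) in an answer set
# candidate I select ABox edits (Section 3.2). Illustrative only.

def apply_modifications(abox, interpretation, delta, tau):
    """Return a copy of `abox` with every modification delta(tau, mu)
    occurring in `interpretation` applied (addc adds, delc removes)."""
    result = set(abox)
    for (pred, t, mu) in interpretation:
        if pred != delta or t != tau:
            continue  # only modifications selected by tau take effect
        op, cls, ind = mu
        if op == "addc":
            result.add((cls, ind))
        elif op == "delc":
            result.discard((cls, ind))
    return result

# Initial ABox from Figure 2 (concept memberships only).
abox = {("OpenBox", "b1"), ("OpenBox", "b2"), ("ClosedBox", "b3")}

# Fragment of an answer set candidate: at step 1, b1 has been closed.
I = {("delta", 1, ("delc", "OpenBox", "b1")),
     ("delta", 1, ("addc", "ClosedBox", "b1"))}

modified = apply_modifications(abox, I, "delta", 1)
assert ("ClosedBox", "b1") in modified
assert ("OpenBox", "b1") not in modified
```

Evaluating &dlC[onto, delta, 1, C](X) then amounts to reasoning over `modified` instead of `abox`; the actual plugin performs the analogous edits on the OWL ontology via OWLAPI and hands the result to the DL reasoner.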
The ontology specifier ω is a string pointing to a JSON file that holds the location of the OWL ontology file and provides namespaces for usage in ASP.

Example 3. In our running example, the file meta.json has the following content.

  {
    "load-uri": "sample.owl",
    "namespaces": {
      "owl": "http://www.w3.org/2002/07/owl#",
      "ex": "http://www.kr.tuwien.ac.at/hexlite/example#"
    }
  }

3.3 Implementation

The external atoms (4)–(7) have been implemented in the OWLAPI Plugin (github.com/hexhex/hexlite-owlapi-plugin/, www.ai4eu.eu/resource/hexlite-owlapi-plugin/) for the HEXLITE [21] solver. The plugin uses JPype (github.com/jpype-project/jpype/) to access OWLAPI (github.com/owlcs/owlapi) [17] and currently uses HermiT (www.hermit-reasoner.com/) [16] as reasoner.

To facilitate value invention from the ontology, i.e., to allow importing all individual names from the ontology instead of specifying them in the HEX program, the OWLAPI plugin contains 'read-only' external atoms of the form

  &dlCro[ω, C](I)
  &dlDPro[ω, DP](I, D)
  &dlOPro[ω, OP](I1, I2)

which correspond to (5)–(7) but operate on the unmodified ontology.

4 Use Case: Explaining Contingencies in Production Planning

Here we concretely develop the use case described in the Introduction as a HEX-program. Figure 3 shows a HEX-program that integrates action planning with reasoning over the ontologies in Figures 1 and 2.

  #const onto="meta.json".
  % time steps
  step(0..finaltimestep).
  next(T,T+1) :- step(T), step(T+1).
  actStep(T) :- next(T,_).
  % robots and boxes are taken from the ontology
  robot(Robot) :- &dlCro[onto,"ex:Robot"](Robot).
  box(Box) :- &dlCro[onto,"ex:Box"](Box).
  % derive initial s from the ontology
  s(Box,open,0) :- &dlCro[onto,"ex:OpenBox"](Box).
  s(Box,closed,0) :- &dlCro[onto,"ex:ClosedBox"](Box).
  % fluent inertia, mutual exclusivity of open and closed
  s(A,B,T') :- s(A,B,T), next(T,T'), not -s(A,B,T').
  -s(Box,closed,T) :- s(Box,open,T).
  -s(Box,open,T) :- s(Box,closed,T).
  % the state of open/closed is propagated to the ontology
  delta(T,addc("ex:ClosedBox",Box)) :- s(Box,closed,T).
  delta(T,delc("ex:OpenBox",Box)) :- s(Box,closed,T).
  delta(T,addc("ex:OpenBox",Box)) :- s(Box,open,T).
  delta(T,delc("ex:ClosedBox",Box)) :- s(Box,open,T).
  % require consistent ontology at each time step
  :- not &dlConsistent[onto,delta,T], step(T).
  % actions are licensed through the ontology
  { do(act(open,Robot,Box),T) } :- actStep(T),
    &dlC[onto,delta,T,"ex:AffordsOpening"](Box), box(Box),
    &dlC[onto,delta,T,"ex:MRobot"](Robot), robot(Robot).
  { do(act(close,Robot,Box),T) } :- actStep(T),
    &dlC[onto,delta,T,"ex:AffordsClosing"](Box), box(Box),
    &dlC[onto,delta,T,"ex:MRobot"](Robot), robot(Robot).
  { do(act(paint,Robot,Box),T) } :- actStep(T),
    &dlC[onto,delta,T,"ex:AffordsPainting"](Box), box(Box),
    &dlC[onto,delta,T,"ex:PRobot"](Robot), robot(Robot).
  % only one action per robot per time point
  :- step(T), robot(R), 2<=#count{ A,B : do(act(A,R,B),T) }.
  % only one action per box per time point
  :- step(T), box(B), 2<=#count{ A,R : do(act(A,R,B),T) }.
  % action effects
  s(Box,open,T') :- do(act(open,_,Box),T), next(T,T').
  s(Box,closed,T') :- do(act(close,_,Box),T), next(T,T').
  s(Box,painted,T') :- do(act(paint,_,Box),T), next(T,T').

Figure 3. Production Planning Domain (HEX-program).

The following predicates are particularly important in this encoding:

• the state of the world is represented in a fluent predicate of the form s(Item, State, TimeStep), which represents which item is in which state at a given time step;
• actions in the world are represented in a predicate of the form do(Action, TimeStep), which represents what is done at which time step.

Conceptually, actions are applied to a state at time step T and their effect is realized in time step T+1.

4.1 Encoding

From top to bottom, the encoding contains the following sections (see also the comments in the figure).

• The constant onto is defined, which points to the JSON file describing the ontology location and namespaces;
• a sequence of time steps, starting from 0 until finaltimestep, is defined, and all but the last time step are steps where actions can happen (actStep);
• the set of known robot and box constants is imported read-only from the ontology into the domain predicates box and robot;
• the initial state of the planning domain is obtained from the ontology;
• all fluents are inertial, i.e., the fluent state is maintained unless otherwise specified; moreover, a box cannot be open and closed at the same time;
• the fluents are represented in delta predicates so that the OWLAPI integration can represent the fluent values not only in ASP but also in the ontology (note that we remove the OWL assertions that correspond to non-true fluent values and add assertions for true fluent values; this ensures that the initial state, which is represented in the ontology in the form of OpenBox and ClosedBox assertions, does not interfere with reasoning in time steps T > 0; we could get rid of all 'del' operations by using a separate ontology for specifying the initial state, which is possible using the plugin but would complicate the example);
• we require consistency of the ontology for all time steps, using the &dlConsistent external atom;
• we guess which action is applied, where actions are licensed by certain affordances on boxes and can be performed by certain robot types;
• we constrain actions so that each robot can only perform one action at each step, and each box can be affected by only one action at each step; finally
• we define the effect of actions on the fluents.
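The action-licensing idea can be sketched in a few lines of Python. This is a hypothetical stand-in for what the DL reasoner computes: the TBox subsumptions of Figure 1 are encoded as a naive parent map, and an action is licensed when the box has the required affordance and the robot has the required class.

```python
# Illustrative sketch of action licensing (Figure 3): an action is
# guessable only if the ontology entails the needed affordance for the
# box and the needed robot class. Naive upward closure replaces the
# real DL reasoner (HermiT) used by the plugin.

SUBSUMPTIONS = {  # child concept -> direct parent concepts (Figure 1)
    "OpenBox": ["Box", "AffordsClosing"],
    "ClosedBox": ["Box", "AffordsOpening", "AffordsPainting"],
    "MRobot": ["Robot"],
    "PRobot": ["Robot"],
}

def instances_of(abox, concept):
    """Individuals entailed to be in `concept` via subsumption closure."""
    result = set()
    for cls, ind in abox:
        stack = [cls]
        while stack:
            c = stack.pop()
            if c == concept:
                result.add(ind)
            stack.extend(SUBSUMPTIONS.get(c, []))
    return result

def licensed_actions(abox):
    """All (verb, robot, box) triples licensed at this state."""
    acts = []
    for verb, aff, rcls in [("open", "AffordsOpening", "MRobot"),
                            ("close", "AffordsClosing", "MRobot"),
                            ("paint", "AffordsPainting", "PRobot")]:
        for box in instances_of(abox, aff):
            for robot in instances_of(abox, rcls):
                acts.append((verb, robot, box))
    return acts

abox = {("OpenBox", "b1"), ("OpenBox", "b2"), ("ClosedBox", "b3"),
        ("MRobot", "r1"), ("MRobot", "r2"), ("PRobot", "r2")}
acts = licensed_actions(abox)
assert ("close", "r1", "b1") in acts      # open boxes can be closed
assert ("paint", "r2", "b3") in acts      # closed boxes can be painted
assert ("paint", "r1", "b3") not in acts  # r1 is not a PRobot
```

In the real encoding this check is delegated to the &dlC external atoms, so adding new affordance axioms to the TBox changes the licensed actions without touching the ASP rules.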
Given a value for finaltimestep, this encoding does not contain a planning goal; that means it will produce all possible plans and all resulting states without a restriction on the final state. For example, the resulting plans include the plan without actions, and the plan that repeatedly opens and closes a single box.

4.2 Queries and Results

Figure 4 contains HEX-program fragments (C) and (I)–(III) that can be added to the encoding in Figure 3 to query the domain for certain planning goals.

  (c) % make actions as early as possible
      % and as few as possible
      :~ do(A,T). [T+1@1,A]

  (i) % aim to finish in time step 3
      #const finaltimestep=3.
      % require boxes to be painted in the final step
      :- box(B), not s(B,painted,finaltimestep).

  (ii) % aim to finish in time step 5
      #const finaltimestep=5.
      % deactivate as many robots as possible
      { explanation(deactivate(R)) } :- robot(R).
      :~ not explanation(deactivate(R)), robot(R). [1@2,R]
      % deactivated robots cannot do actions
      :- explanation(deactivate(R)), actStep(T), do(act(_,R,_),T).
      % require boxes to be painted in the final step
      :- box(B), not s(B,painted,finaltimestep).

  (iii) % aim to finish in time step 2
      #const finaltimestep=2.
      % guess which boxes to skip for painting
      { explanation(skip(B)) } :- box(B).
      % skip as few as possible
      :~ explanation(skip(B)), box(B). [1@2,B]
      % require non-skipped boxes to be painted
      :- box(B), not explanation(skip(B)),
         not s(B,painted,finaltimestep).

Figure 4. Production Planning Queries (HEX-program fragments): (c) common optimization constraint, (i) planning query "paint all boxes within 3 time steps", (ii) explanation query "which robots could be omitted when achieving (i) within 5 time steps", (iii) explanation query "which boxes need to remain unpainted when attempting (i) within 2 time steps".

(C) The weak constraint in (C) incurs a cost of T+1 at priority level @1 for each action at time step T; that means the solution will use as few actions as possible, performed as early as possible.

(I) Fragment (I) defines a query for plans that end at time step 3. The constraint requires that all boxes must be painted at the final time step.

(II) Fragment (II) queries for plans with final time step 5 and guesses for each robot whether it shall be deactivated or not. A maximum number of robots is deactivated at priority level @2. Moreover, for a deactivated robot all actions are forbidden, and we again require all boxes to be painted in the final step.

(III) Fragment (III) defines a query for plans that end at time step 2. A guess is done for skipping certain boxes. The weak constraint incurs a cost for each skipped box at priority level @2. Finally, we require all non-skipped boxes to be painted at the final time step.

When we compute answer sets using the above program fragments integrated with the above ontology, we obtain the following results.

(C)+(I) Painting all boxes until time step 3: Table 1 shows two optimal answer sets for this query. We observe that the integration successfully permitted only those actions that are licensed by affordances in the ontology. Multiple plans of the same quality exist; they differ in the order of closed and painted boxes. Table 2 provides the ontology modifications that are used in each time step of the planning to obtain the first plan in Table 1.

Table 1. Two optimal plans (states + actions) for query (C)+(I): painting all boxes until step 3.

  Step  b1              b2              b3              Actions
  0     open            open            closed          r1 closes b1, r2 paints b3
  1     closed          open            closed painted  r1 closes b2, r2 paints b1
  2     closed painted  closed          closed painted  r2 paints b2
  3     closed painted  closed painted  closed painted

  Step  b1              b2              b3              Actions
  0     open            open            closed          r1 closes b2, r2 paints b3
  1     open            closed          closed painted  r1 closes b1, r2 paints b2
  2     closed          closed painted  closed painted  r2 paints b1
  3     closed painted  closed painted  closed painted

Table 2. Ontology states corresponding to time steps for the first plan in Table 1.

  Step  Effective Ontology Modifications
  0     delc("ex:ClosedBox",b1)  addc("ex:OpenBox",b1)
        delc("ex:ClosedBox",b2)  addc("ex:OpenBox",b2)
        delc("ex:OpenBox",b3)    addc("ex:ClosedBox",b3)
  1     delc("ex:OpenBox",b1)    addc("ex:ClosedBox",b1)
        delc("ex:ClosedBox",b2)  addc("ex:OpenBox",b2)
        delc("ex:OpenBox",b3)    addc("ex:ClosedBox",b3)
  2     delc("ex:OpenBox",b1)    addc("ex:ClosedBox",b1)
        delc("ex:OpenBox",b2)    addc("ex:ClosedBox",b2)
        delc("ex:OpenBox",b3)    addc("ex:ClosedBox",b3)
  3     Ontology is not evaluated (step 3 is not an actStep)

(C)+(II) Explaining how to reduce the number of robots by achieving the goal only at time step 5: Table 3 shows one of the optimal answer sets: an explanation together with a witnessing plan. We observe that the robot r2, which can both paint and manipulate boxes, was chosen to do the whole work, and r1, which can only manipulate boxes, was deactivated. As in the previous example, multiple plans of the same quality exist. However, all plans deactivate r1, because without r2 the goal cannot be reached.

Table 3. Optimal plan for (C)+(II): explaining how to reduce the number of robots by postponing the goal to time step 5.

  Explanation: 'deactivate r1'

  Step  b1              b2              b3              Actions
  0     open            open            closed          r2 closes b1
  1     closed          open            closed          r2 closes b2
  2     closed          closed          closed          r2 paints b2
  3     closed          closed painted  closed          r2 paints b3
  4     closed          closed painted  closed painted  r2 paints b1
  5     closed painted  closed painted  closed painted

(C)+(III) Explaining how to finish the work until time step 2 by skipping a product: Table 4 displays one of the optimal explanations together with its witnessing plan. Note that b2 is not touched by the robots, although r1 is idle in Step 1 and could close it. This can be explained by constraint (C), which requires robots to perform only the minimally required actions for reaching the goal.

Table 4. Optimal plan for (C)+(III): explaining how to finish the work until time step 2 by skipping a product.

  Explanation: 'skip b2'

  Step  b1              b2    b3              Actions
  0     open            open  closed          r1 closes b1, r2 paints b3
  1     closed          open  closed painted  r2 paints b1
  2     closed painted  open  closed painted
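The planning results above can be cross-checked independently of the HEX toolchain. The following brute-force search in Python is our own illustration over a simplified model of the box domain (inertia, one action per robot and per box per step); it confirms that three action steps suffice for query (I) but one step does not.

```python
# Brute-force sanity check for the box domain of Figures 1-3.
# State: box -> (open/closed, painted). Simplified model, not the encoding.
from itertools import product

ROBOTS = {"r1": {"open", "close"}, "r2": {"open", "close", "paint"}}
INIT = {"b1": ("open", False), "b2": ("open", False), "b3": ("closed", False)}

def licensed(state, verb, box):
    oc, _ = state[box]
    # open boxes afford closing; closed boxes afford opening and painting
    return (verb == "close" and oc == "open") or \
           (verb in ("open", "paint") and oc == "closed")

def successors(state):
    """All states after one joint step (<=1 action per robot and box)."""
    choices = []
    for robot, verbs in ROBOTS.items():
        opts = [None]  # a robot may also stay idle
        for verb in verbs:
            for box in state:
                if licensed(state, verb, box):
                    opts.append((robot, verb, box))
        choices.append(opts)
    for joint in product(*choices):
        acts = [a for a in joint if a]
        boxes = [b for (_, _, b) in acts]
        if len(set(boxes)) != len(boxes):
            continue  # at most one action per box per step
        new = dict(state)
        for _, verb, box in acts:
            oc, painted = new[box]
            if verb == "open":  new[box] = ("open", painted)
            if verb == "close": new[box] = ("closed", painted)
            if verb == "paint": new[box] = (oc, True)
        yield new

def reachable_goal(state, steps):
    """Can all boxes be painted within the given number of action steps?"""
    if all(painted for _, painted in state.values()):
        return True
    if steps == 0:
        return False
    return any(reachable_goal(s, steps - 1) for s in successors(state))

assert reachable_goal(INIT, 3)      # query (I): goal reachable in 3 steps
assert not reachable_goal(INIT, 1)  # a single step cannot paint all boxes
```

Changing the goal test (e.g., ignoring one box) reproduces the (III)-style what-if question; in the HEX encoding the same change is a two-line program fragment rather than new search code.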
5 closed closed closed painted painted painted 6 Discussion, Related and Future Work can be explained by constraint (C) which requires robots to perform only the minimally required actions for reaching the goal. We have described a new interface for integrating Description Log- ics, in particular every DL reasoner that is compatible with OWLAPI, Table 4. Optimal plan for (C)+(II): explaining how to reduce the number with logic programming under Answer Set Semantics, concretely us- of robots by postponing the goal to time step 5. ing the HEX formalism. The benefit of such an integration is, that the ontology is used in its original form, and that the ASP planning do- Explanation: ‘skip b1’ main permits all freedoms that ASP provides for planning, as demon- Step b1 b2 b3 Actions strated by our example use cases with explanatory queries. An alter- native solution for integrating ontologies with logic programs would 0 open open closed r1 closes b1 r2 paints b3 be to convert the ontology into a logic program, but this creates the need for a conversion process and for maintenance of not only the 1 closed open closed r2 paints b1 painted ontology but also the conversion process. Using logic programming for planning is only one possibility 2 closed open closed painted painted among many others. The benefit of using Answer Set Program- ming is the power and freedom that comes with a guess-and-check paradigm where guesses and constraints and optimizations can be combined freely. In particular, Answer Set Programming permits 5 Explainability expressive representations such as indirect effects/ramifications and domain-global constraints and optimization criteria that are difficult In this section, we argue that Explainability as it is usually under- or impossible to model in STRIPS and PDDL. stood is not applicable to Answer Set Programming and propose an alternative. 
The classical notion of Explanation in AI aims to provide to the user of a system at least some parts of the rationale, assumptions, data, and reasoning process that lead to a decision. In the case of Logic Programming, decisions are made (i) on the basis of a mathematical framework, i.e., based on the mathematical definition of Logic Programming, (ii) based on the program given by the programmer, and (iii) based on the query we pose to the program, again in the shape of program rules that have clearly defined syntax and semantics. Therefore, explanations of the classical form would amount to teaching formal semantics and presenting the program that is reasoned upon by the solver software, which is neither feasible for an end-user nor does it serve the intention of Explainability. Classical explanations apply mostly to Machine Learning systems, where the system makes a decision based on input data and a model learned from training data, and where usually a big amount of input data (such as audio or video data) is fed to an algorithm that makes a decision based on a small part of that input data.

Classical Logic Programming makes no decisions in the presence of uncertainty; it produces multiple solutions that are equally viable. In case of optimization criteria, it is also clearly determined which solution(s) are preferred.

Teaching end-users the formal semantics of ASP so that they can understand the reasoning process and programs would require making end-users experts in order to use explanations. This approach would be too fine-grained to have any practical value. Therefore, a more macroscopic approach for Explainability is necessary.

We therefore argue that the best way to gain Explainability in Logic Programming is an interaction possibility where users can (i) challenge the system with alternative scenarios, (ii) explore differences between alternative solutions to a single scenario, and (iii) change optimization criteria to explore their effect on solutions. The short query snippets we presented in the previous section (Figure 4) provide such a way of challenging the system using concise program additions or modifications.

The ASP literature contains several studies about such types of Explainability under the names Diagnosis [3], Debugging [20], and Explaining Inconsistency [22].

6 Discussion, Related and Future Work

We have described a new interface for integrating Description Logics, in particular every DL reasoner that is compatible with OWLAPI, with logic programming under Answer Set Semantics, concretely using the HEX formalism. The benefit of such an integration is that the ontology is used in its original form, and that the ASP planning domain permits all freedoms that ASP provides for planning, as demonstrated by our example use cases with explanatory queries. An alternative solution for integrating ontologies with logic programs would be to convert the ontology into a logic program, but this creates the need for a conversion process and for maintenance of not only the ontology but also the conversion process.

Using logic programming for planning is only one possibility among many others. The benefit of using Answer Set Programming is the power and freedom that comes with a guess-and-check paradigm where guesses, constraints, and optimizations can be combined freely. In particular, Answer Set Programming permits expressive representations such as indirect effects/ramifications and domain-global constraints and optimization criteria that are difficult or impossible to model in STRIPS and PDDL.

Related Work

DL-Programs [8] permit the integration of DL reasoning into an ASP planning domain similar to our domain above; however, for each time step a separate DL atom with a separate delta predicate would be required (i.e., for n time steps we would need to define delta_1, ..., delta_n and create n instances of all rules in Figure 3 where δ is used). Therefore, our new integration interface does not increase expressivity, but it greatly improves conciseness, readability, and maintainability of the encoding, because ASP variables can be used to express selection of a certain sub-extension of delta to be used as ontology modification.

Action Languages, e.g., PDDL [19] or AR [14], are custom languages for representing planning domains. While basic PDDL does not provide support for non-deterministic effects, its extension PPDDL supports representation of probabilistic effects, and AR (and several other planning languages) supports representation of possibilistic effects (i.e., effects that can happen without providing a probability). These action languages usually come with specific reasoning methods and do not support what-if scenarios out of the box, unless they can be represented directly in the respective planning input language. ASP planning has the advantage that everything is represented as an ASP rule, and the programmer is free to modify rules in order to represent additional what-if scenarios, just by replacing a deterministic rule by a rule with a guessing construction in the head, or by weakening or strengthening the rule in other ways. However, the advantages of dedicated action languages and the freedom of representation in ASP can be combined by rewriting a planning domain to ASP rules for evaluating it in a generic ASP solver. This has been done, e.g., for the K [6, 7] and C+ [2, 15] planning languages.
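To illustrate the rule modification described above, a deterministic effect rule can be turned into a what-if variant by putting a guessing construction in the head. This is an illustrative sketch with assumed predicate names (closes/3, closed/2, step/1), not taken from the paper's encoding.

```asp
% deterministic effect: closing a box always succeeds
closed(B,T+1) :- closes(R,B,T), step(T).

% what-if variant: the same effect with a guess in the head,
% so that closing may nondeterministically fail
1 { closed(B,T+1) ; closeFailed(R,B,T) } 1 :- closes(R,B,T), step(T).
```

The rest of the planning encoding stays unchanged; the solver now also explores plans in which the action fails.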
Future Work

As future work on the API side of the integration, we see an extension of the interface to permit arbitrary DL queries using the OWLAPI DL Query Parser, and additional ontology modifications that permit adding and removing axioms. As future work on the implementation side, it will be necessary to generate on-demand constraints that speed up the reasoning. Due to time constraints it was not possible to implement this technique so far, therefore we here do not report timing results.

ACKNOWLEDGEMENTS

We would like to acknowledge fruitful discussions about Description Logics and Production Planning with Magdalena Ortiz, Ivan Gocev, Raoul Blankertz, Sonja Zillner, and Stephan Grimm. We would also like to thank the referees and chairs for their feedback. This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 825619 (AI4EU).

REFERENCES

[1] Franz Baader, Diego Calvanese, Deborah McGuinness, Daniele Nardi, and Peter Patel-Schneider, The Description Logic Handbook, Cambridge University Press, 2003.
[2] Joseph Babb and Joohyung Lee, 'cplus2asp: Computing action language C+ in answer set programming', in International Conference on Logic Programming and Nonmonotonic Reasoning, pp. 122–134. Springer, (2013).
[3] Marcello Balduccini and Michael Gelfond, 'Diagnostic reasoning with A-Prolog', Theory and Practice of Logic Programming, 3(4-5), 425–461, (2003).
[4] Gerd Brewka, Thomas Eiter, and Miroslaw Truszczynski, AI Magazine: Special Issue on Answer Set Programming, vol. 37(3), AAAI Press, 2016.
[5] T. Eiter, M. Fink, G. Ianni, T. Krennwallner, C. Redl, and P. Schüller, 'A model building framework for Answer Set Programming with external computations', Theory and Practice of Logic Programming, 16(4), (2016).
[6] Thomas Eiter, Wolfgang Faber, Nicola Leone, Gerald Pfeifer, and Axel Polleres, 'Planning under incomplete knowledge', in International Conference on Computational Logic, pp. 807–821. Springer, (2000).
[7] Thomas Eiter, Wolfgang Faber, Nicola Leone, Gerald Pfeifer, and Axel Polleres, 'The DLVK planning system: Progress report', in European Workshop on Logics in Artificial Intelligence, pp. 541–544. Springer, (2002).
[8] Thomas Eiter, Giovambattista Ianni, Thomas Lukasiewicz, Roman Schindlauer, and Hans Tompits, 'Combining Answer Set Programming with Description Logics for the Semantic Web', Artificial Intelligence, 172(12-13), 1495–1539, (2008).
[9] Thomas Eiter, Giovambattista Ianni, Roman Schindlauer, and Hans Tompits, 'Effective integration of declarative rules with external evaluations for Semantic-Web reasoning', in European Semantic Web Conference (ESWC), pp. 273–287, (2006).
[10] Thomas Eiter, Tobias Kaminski, Christoph Redl, Peter Schüller, and Antonius Weinzierl, 'Answer Set Programming with external source access', in Reasoning Web International Summer School, pp. 204–275, (2017).
[11] Wolfgang Faber, Gerald Pfeifer, and Nicola Leone, 'Semantics and complexity of recursive aggregates in Answer Set Programming', Artificial Intelligence, 175(1), 278–298, (2011).
[12] Martin Gebser, Roland Kaminski, Benjamin Kaufmann, and Torsten Schaub, 'Multi-shot ASP solving with clingo', Theory and Practice of Logic Programming, 1–56, (2018).
[13] Michael Gelfond and Vladimir Lifschitz, 'Classical negation in logic programs and deductive databases', New Generation Computing, 9, 365–385, (1991).
[14] Enrico Giunchiglia, G. Neelakantan Kartha, and Vladimir Lifschitz, 'Representing action: Indeterminacy and ramifications', Artificial Intelligence, 95(2), 409–438, (1997).
[15] Enrico Giunchiglia, Joohyung Lee, Vladimir Lifschitz, Norman McCain, and Hudson Turner, 'Nonmonotonic causal theories', Artificial Intelligence, 153(1-2), 49–104, (2004).
[16] Birte Glimm, Ian Horrocks, Boris Motik, Giorgos Stoilos, and Zhe Wang, 'HermiT: an OWL 2 reasoner', Journal of Automated Reasoning, 53(3), 245–269, (2014).
[17] Matthew Horridge and Sean Bechhofer, 'The OWL API: a Java API for OWL ontologies', Semantic Web, 2(1), 11–21, (2011).
[18] John McCarthy, 'Elaboration tolerance', in Common Sense, volume 98, (1998).
[19] Drew McDermott, Malik Ghallab, Adele Howe, Craig Knoblock, Ashwin Ram, Manuela Veloso, Daniel Weld, and David Wilkins, 'PDDL – The planning domain definition language', Technical Report CVC TR-98-003/DCS TR-1165, 1998.
[20] Johannes Oetsch, Jörg Pührer, and Hans Tompits, 'Catching the ouroboros: On debugging non-ground answer-set programs', Theory and Practice of Logic Programming, 10(4-6), 513–529, (2010).
[21] Peter Schüller, 'The Hexlite solver', in European Conference on Logics in Artificial Intelligence (JELIA), pp. 593–607. Springer, (2019).
[22] Claudia Schulz, Ken Satoh, and Francesca Toni, 'Characterising and explaining inconsistency in logic programs', in International Conference on Logic Programming and Nonmonotonic Reasoning, pp. 467–479. Springer, (2015).