=Paper= {{Paper |id=Vol-1350/paper-43 |storemode=property |title=Integrating Ontologies and Planning for Cognitive Systems |pdfUrl=https://ceur-ws.org/Vol-1350/paper-43.pdf |volume=Vol-1350 |dblpUrl=https://dblp.org/rec/conf/dlog/BehnkeBBGPS15 }} ==Integrating Ontologies and Planning for Cognitive Systems== https://ceur-ws.org/Vol-1350/paper-43.pdf
    Integrating Ontologies and Planning for Cognitive
                        Systems

         Gregor Behnke1 , Pascal Bercher1 , Susanne Biundo1 , Birte Glimm1 ,
                    Denis Ponomaryov2 , and Marvin Schiller1
                 1
                   Institute of Artificial Intelligence, Ulm University, Germany
             2
                 A.P. Ershov Institute of Informatics Systems, Novosibirsk, Russia



       Abstract. We present an approach for integrating ontological reasoning and plan-
       ning within cognitive systems. Patterns and mechanisms that suitably link plan-
       ning domains and interrelated knowledge in an ontology are devised. In partic-
       ular, this enables the use of (standard) ontology reasoning for extending a (hi-
       erarchical) planning domain. Furthermore, explanations of plans generated by
       a cognitive system benefit from additional explanations relying on background
       knowledge in the ontology and inference. An application of this approach in the
       domain of fitness training is presented.


1   Introduction

Cognitive systems aim to perform complex tasks by imitating the cognitive capabil-
ities of human problem-solvers. This is typically achieved by combining specialised
components, each of which is suited to fulfil a specific function within that system,
such as planning, reasoning, and interacting with the user or the environment. Each
component requires extensive knowledge of the domain at hand. Traditionally, several
representations of knowledge are used by the different components, each well suited
for their respective components. As a result, domain knowledge is distributed across
various parts of such a system, often in different formalisms. Thus, redundancy and
the maintenance of consistency pose a challenge.
     In this paper, we present an approach for using an ontology as the central source
of domain knowledge for a cognitive system. The main issue we address is how the
planning domain (representing procedural knowledge) and other (ontological) domain
knowledge can be suitably combined. The approach uses the ontology and ontological
reasoning to automatically generate knowledge models for different components of a
cognitive system, such as the planning component or an explanation facility. In par-
ticular, the planning domain is automatically extended using ontological background
knowledge. The same ontology is used by an explanation mechanism providing coher-
ent textual explanations for the generated plans and associated background knowledge
to the user. The planning component uses Hierarchical Task Network (HTN) planning
[7, 9], which is well-suited for human-oriented planning.
     This paper is organised as follows. We commence with the relevant preliminar-
ies in Section 2. A general outline of the approach and its foundations is presented in
Section 3, illustrated by a case study using a real-world fitness training scenario. In
Section 4, the explanation mechanism is described by using accompanying examples
from the case study. Related work is discussed in Section 5 and Section 6 concludes the
paper.


2   Preliminaries
In this paper, we refer to the Description Logic ALC [24], with which we assume the
reader is familiar. We briefly introduce the relevant concepts of HTN planning that
help to understand the context of our work. HTN planning [7, 9] is a discipline of au-
tomated planning where tasks are hierarchically organised; tasks are either “primitive”
(they can be executed directly) or abstract (also called complex or compound in the
literature), i.e., they must be decomposed into sets of subtasks which can in turn be ab-
stract or primitive. Plans are represented by so-called task networks, which are partially
ordered sets of tasks. Planning is done in a top-down manner, such that abstract tasks in
a task network are refined stepwise into more concrete courses of action. This approach
to planning is deemed similar to human problem solving, and thus considered appropri-
ate for use in cognitive systems [4]. HTN planning problems consist of an initial task
network, a planning domain – a set of operators (specifying the preconditions and ef-
fects of primitive actions) together with a set of decomposition methods (which specify
how each abstract task can be decomposed by replacing it with a network of subtasks)
– and an initial state (specifying the state of the world before the tasks in the plan are
carried out). A decomposition method m is denoted as A ↦≺ B1, ..., Bn, stipulating
that the abstract task A may be decomposed into the plan containing the subtasks B1 to
Bn adhering to the partial order ≺. The subscript ≺ is omitted if no order is defined on
the subtasks. Applying a method m to a plan containing A refines it into a plan where
A is replaced by the subtasks of m with the given order ≺. All orderings imposed on A
are inherited by its subtasks Bi . A plan is a solution for a planning problem if it consists
of a fully executable network of tasks.
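As a concrete illustration (not part of the formalism's original presentation), the effect of applying a decomposition method can be sketched in Python, representing a plan as a list of tasks plus a set of ordering constraints; the function name and tuple encoding are our own illustrative choices:

```python
def apply_method(plan, task, subtasks, order):
    """Refine a plan (tasks, ordering constraints) by replacing the abstract
    `task` with the method's `subtasks` (with internal partial order `order`).
    Orderings imposed on `task` are inherited by its subtasks."""
    tasks, prec = plan
    new_tasks = [t for t in tasks if t != task] + list(subtasks)
    new_prec = set(order)  # the method's own ordering among its subtasks
    for (a, b) in prec:
        if a == task:      # task was before b: every subtask is before b
            new_prec |= {(s, b) for s in subtasks}
        elif b == task:    # a was before task: a is before every subtask
            new_prec |= {(a, s) for s in subtasks}
        else:
            new_prec.add((a, b))
    return new_tasks, new_prec
```

For example, decomposing Workout1 in a plan where a warm-up precedes it makes the warm-up precede both resulting exercises.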


3   Integrating Ontologies and Planning
When bringing ontologies and planning together, a key challenge is to find a represen-
tation that suitably links general ontological knowledge with information in the plan-
ning domain. The aim is to enable both a specialised reasoner and a planner to play
to their strengths in a common application domain of interest, while the consistency of
the shared domain model remains ensured. The first part of our approach is to embed
planning knowledge (a hierarchical planning domain) into a general ontology of the
application domain. To address differences in the formalisms of planning and DL, we
present suitable modelling patterns and translation mechanisms. As a second step, an
off-the-shelf DL reasoner is applied to infer new knowledge about the planning domain
in terms of new decomposition methods. Thanks to the integrated model, the struc-
ture of decompositions in the planning domain is accessible to DL reasoning, i.e., new
methods are implied using knowledge of both domains. This modelling yields a stan-
dard planning domain such that an off-the-shelf planner can be used. We have applied
this approach to a fitness scenario and have developed a system which creates a training
plan for a user pursuing some fitness objective. Such a plan defines a training schedule,
comprising training and rest days, as well as the exercises and their duration which are
necessary to achieve a certain goal, e.g., to train the lower body. A similar scenario is
used by Pulido et al. [21], where exercises for physiotherapy of upper-limb injuries are
arranged by a planner.


3.1   Embedding Planning Methods into Ontologies

In this section we describe how the knowledge contained in a hierarchical planning do-
main can be represented in an ontology such that its contents are described declaratively
and thus become amenable to logical reasoning.
    First, a link between the planning domain and corresponding/additional information
in the ontology is defined by a common vocabulary – there is a distinguished set of task
concepts in the ontology which correspond to planning tasks. If T is a planning task,
then a corresponding task concept is denoted as T. In addition to task concepts, the
ontology allows for concepts to model further aspects of the application domain. Task
concepts are provided with definitions/told subsumptions in the ontology which are de-
termined by the set of predefined decomposition methods for the counterpart planning
tasks. Based on these predefined methods, new ones are derived from the ontology,
as further described in Section 3.2. A simple decomposition method A ↦ B is inter-
preted as a subsumption B ⊑ A between task concepts A, B. This pattern, however,
works only for such simple decomposition methods. In general, a method has the form
A ↦ B1, ..., Bn and can be viewed as an instruction that A is achieved by “executing”
the tasks B1 , ..., Bn . When representing such decomposition methods in an ontology
one needs to take into account that a decomposition method is a precise definition of
the tasks created. It specifies the subtasks required to achieve an abstract task, but si-
multaneously states that these tasks are also sufficient. Ontologies, on the other hand,
are built on the open world assumption – representing a task decomposition by the tasks
B1 , ..., Bn is insufficient. The ontology must also contain the explicit information that
only these tasks belong to the decomposition. For this purpose we use the onlysome
construct (see, e.g., Horridge et al. [13]), which is a short-hand notation for a pattern
combining the existential and universal restrictions to represent collections of concepts.

Definition 1 Let r be a role, I an index set, and {Ci}i∈I concepts. We define the
onlysome restriction Cr.({Ci}i∈I) by Cr.({Ci}i∈I) := ⨅i∈I ∃r.Ci ⊓ ∀r.(⨆i∈I Ci).

With the onlysome restriction we can give a decomposition method in a definition
stating that a collection of tasks corresponds to a task concept to be decomposed. Thus, a
method A ↦ B1, ..., Bn can be expressed by using the axiom
A ≡ Cincludes.(B1, ..., Bn).
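To make this translation concrete, here is a minimal sketch (our own illustration, not code from the paper) that builds the onlysome pattern and the corresponding definitional axiom, using nested tuples to stand in for DL concept expressions:

```python
def onlysome(role, concepts):
    """Build the onlysome restriction Cr.(C1, ..., Cn):
    (∃r.C1 AND ... AND ∃r.Cn) AND ∀r.(C1 OR ... OR Cn)."""
    assert concepts, "onlysome needs at least one filler concept"
    existentials = [("some", role, c) for c in concepts]
    closure = ("only", role, ("or",) + tuple(concepts))
    return ("and",) + tuple(existentials) + (closure,)

def method_axiom(abstract_task, subtasks, role="includes"):
    """Translate a decomposition method A -> B1, ..., Bn into the
    definitional axiom A ≡ Cincludes.(B1, ..., Bn)."""
    return ("equiv", abstract_task, onlysome(role, subtasks))
```

For instance, `method_axiom("Workout1", ["FrontSquat", "SumoDeadlift"])` yields the axiom defining Workout1 shown below.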
    Let us now turn to our application domain of fitness training. Here, elementary train-
ing exercises are represented as planning tasks whose preconditions and effects follow
certain rules (e.g. muscles must be warmed up before being exercised, intense exer-
cises must precede lighter ones, ...). The ontology further encompasses concepts for
training exercises, equipment, workouts, training objectives, and a part of the NCICB
corpus [17], describing muscles etc. Contained in the ontology are four planning-related
types of concepts: exercises, workouts, workout templates, and trainings. Workouts are
predefined (partially ordered) sets of exercises, modelled by a domain expert using
onlysome-definitions. This implicitly defines decomposition methods for each work-
out. Workouts are, e.g., defined by:

                  Workout1 ≡ Cincludes.(FrontSquat, SumoDeadlift)
 This axiom postulates that Workout1 includes front squats and sumo deadlifts but noth-
ing else. At a more abstract level, workout templates serve to specify groups of work-
outs with similar properties. For example, there exist many similar variants of squats
and deadlifts with similar training effects, which can be grouped together. For instance,
consider WorkoutTemplate1:

                   WorkoutTemplate1 ≡ Cincludes.(Squat, Deadlift)
This workout template subsumes Workout1 wrt the ontology (since FrontSquat ⊑
Squat and SumoDeadlift ⊑ Deadlift). As detailed in the next subsection, these sub-
sumptions are the basis for generating corresponding decomposition methods, e.g., in
this case a method decomposing the task WorkoutTemplate1 into Workout1 which again
is decomposed into its concrete subtasks. Finally, trainings define abstract training ob-
jectives, such as improving strength or training the lower body. Here, the requirements
that need to be met are formulated using onlysome restrictions. For instance, the fol-
lowing axiom postulates that lower body training contains at least one and only exer-
cises targeting muscles in the lower body:

      LowerBodyTraining ≡ Cincludes.∃engages target.(∃part of.LowerBody)
The cornerstone of our approach is to establish a correspondence between subsumptions
among task concepts in the ontology and corresponding decompositions in the planning
domain. For two task concepts representing collections of tasks using onlysome, we
require that one is subsumed by the other if and only if the tasks defined by the first
serve to achieve all requirements specified by the second. In more detail, a collection
C1 (representing a set of tasks) should be subsumed by a collection C2 if and only if, for
any task concept (requirement) from C2, there is a task concept in C1 which achieves
it (i.e., is subsumed by it), and there are only those task concepts in C1 that meet
some requirement from C2 . For this property to hold, we require that in onlysome
restrictions Cr.({Ci }i∈I ) the role r is independent of concepts Ci , i.e., has no semantic
relationship with them, as captured by the following definition.
Definition 2 Let O be an ontology, r a role, and C1 , ..., Cn concepts. We call r inde-
pendent of C1 , ..., Cn wrt O if, for any model I of O and any binary relation [s] on the
domain of I, there is a model J of O with the same domain such that r is interpreted
as [s] in J and the interpretation of Ci , 1 ≤ i ≤ n, in I and J coincides.
Next, we show that the given intuition holds for the collections defined with the
onlysome restriction.

Theorem 1 Let O be an ontology, C1 , ..., Cm concepts satisfiable wrt O, D1 , ..., Dn
concepts, and r a role independent of C1 , ..., Cm , D1 , ..., Dn wrt O.
Then it holds O |= Cr.(C1, ..., Cm) ⊑ Cr.(D1, ..., Dn) if and only if
(1) ∀i, 1 ≤ i ≤ m, ∃j, 1 ≤ j ≤ n, such that O |= Ci ⊑ Dj and
(2) ∀j, 1 ≤ j ≤ n, ∃i, 1 ≤ i ≤ m, such that O |= Ci ⊑ Dj.

Proof (Sketch). The if direction can be shown using monotonicity of existential/universal
restrictions and Conditions (1) and (2). Using contraposition, we can show the only-if
direction by constructing a countermodel for the subsumption. Since each Ci is satisfi-
able, there are models of O with some instance ci of Ci . Using the negation of Condi-
tion (1), there is further a model with an instance x of some Ci that is not an instance
of any Dj. We now build a new model as the disjoint union of these models and take
an arbitrary element d in the constructed model. Using independence of r, we obtain a
model in which r contains ⟨d, x⟩ and the tuples ⟨d, ci⟩. We obtain the desired contradic-
tion since d is an instance of Cr.(C1, ..., Cm), but not an instance of ∀r.(D1 ⊔ ... ⊔ Dn)
(due to ⟨d, x⟩) and, hence, of Cr.(D1, ..., Dn). We can proceed similarly for the case
of Condition (2) not holding.                                                        □
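Under the independence assumption, Theorem 1 thus reduces an onlysome subsumption to pairwise subsumption checks. This reduction can be sketched as follows (our own illustration; `subsumed(X, Y)` stands in for a reasoner call deciding O |= X ⊑ Y):

```python
def collection_subsumed(cs, ds, subsumed):
    """Decide O |= Cr.(C1, ..., Cm) ⊑ Cr.(D1, ..., Dn) via Theorem 1:
    (1) every Ci is subsumed by some Dj (it meets some requirement), and
    (2) every Dj subsumes some Ci (every requirement is achieved)."""
    cond1 = all(any(subsumed(c, d) for d in ds) for c in cs)
    cond2 = all(any(subsumed(c, d) for c in cs) for d in ds)
    return cond1 and cond2
```

With the toy taxonomy FrontSquat ⊑ Squat and SumoDeadlift ⊑ Deadlift, the collection (FrontSquat, SumoDeadlift) comes out subsumed by (Squat, Deadlift), matching the Workout1/WorkoutTemplate1 example above.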

     Until now, only the tasks contained in a decomposition method have been considered,
while their partial ordering has been ignored. To incorporate it into the ontology, the notion of
collections must be extended to partially ordered sets of concepts. Unfortunately, many
DLs (also ALC) are not well suited to represent partial orders, since their expressivity
is limited by the tree-model property [28], stating that non-tree structures cannot fully
be axiomatized. Since our aim is to completely represent the planning domain in the
ontology, we propose a syntactic encoding for this information that is opaque to DL
reasoners and has no influence on the semantics. A task A following a task B is ex-
pressed by replacing the concept A in onlysome expressions by A ⊔ (⊥ ⊓ ∃after.B). The
latter disjunct is trivially unsatisfiable and the given expression is semantically equiv-
alent to just A. This makes order opaque to any reasoner. Preconditions and effects of
primitive tasks are modelled using auxiliary roles, making them accessible for logical
reasoning, too.
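This syntactic encoding can be sketched as follows (our own illustration, generalised to several predecessors via one unsatisfiable disjunct per predecessor; the tuple encoding and function names are hypothetical):

```python
def annotate_order(concept, predecessors):
    """Encode 'concept follows each B in predecessors' as
    A ⊔ (⊥ ⊓ ∃after.B1) ⊔ ... : each added disjunct is unsatisfiable,
    so the result is semantically equivalent to A for any DL reasoner,
    while a planner-side parser can still recover the ordering."""
    disjuncts = [("and", "BOTTOM", ("some", "after", b)) for b in predecessors]
    return ("or", concept, *disjuncts) if disjuncts else concept

def recover_order(expr):
    """Parse the predecessors back out of an annotated concept."""
    if not (isinstance(expr, tuple) and expr[0] == "or"):
        return expr, []
    concept, *rest = expr[1:]
    return concept, [d[2][2] for d in rest]
```

The round trip `recover_order(annotate_order("A", ["B"]))` returns the original concept together with its predecessor list, illustrating that the ordering is preserved purely syntactically.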


3.2   Extending Planning Domains by DL Inference

Embedding the planning domain into the ontology enables us to infer new decomposi-
tion methods using off-the-shelf DL reasoners. More precisely, subsumption relations
between task concepts inferred from an ontology O result in decomposition methods
being added to the planning domain. Suppose there are task concepts A and B such that
O |= B ⊑ A and there is no other task concept C such that O |= {B ⊑ C, C ⊑ A}.
Then, a decomposition method A ↦ B is created in analogy to the way such methods
are encoded in the ontology. This simple scheme provides only for methods decom-
posing into a single subtask. Further, we interpret onlysome-definitions provided by
the ontology as told decompositions that provide collections of tasks. Besides these
two simple cases, we are also interested in knowing whether an abstract task A can
be achieved by combining some task concepts B1 , ..., Bn into a new decomposition
method A ↦ B1, ..., Bn. If so, some concept expression E describing this combination
should be subsumed by A. In keeping with the principle of matching task collections
described by Theorem 1, we consider combining concepts using onlysome restrictions.
The concepts B1 , ..., Bn describe requirements, which another concept A might fulfil.
If for some tasks A, B1, ..., Bn it holds O |= Cr.(B1, ..., Bn) ⊑ A, then a decompo-
sition method of A into the collection B1, ..., Bn is created. Let us once again consider
our use-case to discuss an example. Consider the definition of lower body training in-
troduced in the previous subsection:

       LowerBodyTraining ≡ Cincludes.∃engages target.(∃part of.LowerBody)
Since our ontology respects the requirement that the role includes is independent of
concepts occurring in onlysome expressions, Theorem 1 applies and we know that any
workout solely comprised of lower body exercises is subsumed by LowerBodyTraining.
This results in decomposition methods for LowerBodyTraining into every possible
workout for the lower body. Furthermore, consider the following definition of full body
training:

                      FullBodyTraining ≡ Cincludes.(
                             ∃engages target.(∃part of.LowerBody),
                             ∃engages target.(∃part of.UpperBody))
Using reasoning, one can now establish whether combinations of different workouts
achieve such a training objective, such that a corresponding decomposition method is
automatically introduced into the planning domain. For instance, we can infer that a
workout solely training the lower body and a workout solely training the upper body
in combination constitute this training. Hence a decomposition method for a full body
training into these two workouts is added to the planning domain. The following theo-
rem clarifies the relationship between the obtained decomposition methods and entailed
concept inclusions.
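The two generation schemes described above can be sketched as follows (our own illustration; `subsumed` and `onlysome_subsumed` stand in for calls to a DL reasoner, and the bound k reflects the pragmatic restriction on combination size used in the case study):

```python
from itertools import combinations

def direct_methods(task_concepts, subsumed):
    """Create a method A -> B for every entailed subsumption B ⊑ A between
    distinct task concepts with no other task concept C strictly in
    between (A -> C and C -> B would already cover that case)."""
    methods = []
    for a in task_concepts:
        for b in task_concepts:
            if a == b or not subsumed(b, a):
                continue
            if not any(c not in (a, b) and subsumed(b, c) and subsumed(c, a)
                       for c in task_concepts):
                methods.append((a, (b,)))
    return methods

def combination_methods(task_concepts, onlysome_subsumed, k=2):
    """Create a method A -> B1, ..., Bn whenever the oracle reports
    O |= Cr.(B1, ..., Bn) ⊑ A; the bound k tames the exponential
    number of candidate combinations (k = 2 in the case study)."""
    methods = []
    for size in range(2, k + 1):
        for combo in combinations(task_concepts, size):
            for a in task_concepts:
                if a not in combo and onlysome_subsumed(combo, a):
                    methods.append((a, combo))
    return methods
```

For example, with Workout1 ⊑ WorkoutTemplate1 the first function yields the method WorkoutTemplate1 ↦ Workout1, and an oracle confirming that a lower-body and an upper-body workout together achieve FullBodyTraining makes the second function emit the corresponding two-subtask method.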
Theorem 2 Let A be an abstract task, which can be refined into some plan P by ap-
plying decomposition methods created from the ontology O. Let the plan P contain the
tasks B1, ..., Bn, n ≥ 1. Then there is a concept P such that O |= P ⊑ A, B1, ..., Bn
occur in P , and P is either of the form:
 1. a task concept from O, or
 2. an expression Cr.(F1 , ..., Fm ) with each Fi a concept of the form 1 or 2.
Proof. The claim is proved by induction on the number m of (decomposition) steps
made to obtain P using O. In the induction base, for m = 0, A is the required con-
cept. For m > 0, let P′ = {C1, ..., Ck} be a set of tasks obtained in m − 1 decom-
position steps and let P′ be a concept satisfying the claim for P′. Let P be obtained
from P′ by decomposing some task Ci, for i ∈ {1, ..., k}. Assume that a decompo-
sition method for Ci was created from the concept inclusion D ⊑ Ci entailed by O.
Then D is one of the task concepts B1, ..., Bn, we have O |= P′_{Ci/D} ⊑ P′, hence,
O |= P′_{Ci/D} ⊑ A and thus, P′_{Ci/D} is the required concept (denoting P′ with every
occurrence of Ci substituted with D). If a decomposition method for Ci was created
from a concept inclusion Cr.(D1, ..., Dℓ) ⊑ Ci entailed by O (and possibly obtained
from an axiom Ci ≡ Cr.(D1, ..., Dℓ)), then every Dj, j = 1, ..., ℓ, is a task concept
among B1, ..., Bn and we have O |= P′_{Ci/Cr.(D1, ..., Dℓ)} ⊑ A, so P′_{Ci/Cr.(D1, ..., Dℓ)} is
the required concept.                                                                □
    Taking into account all possible combinations of tasks in the ontology presents a
problem, since there are exponentially many. We propose a pragmatic solution. First,
the maximal number of task concepts to be combined in onlysome expressions can be
restricted by some number k. Second, most real-world domains (including our appli-
cation example) have restrictions on which tasks can be combined. In our case study
using the fitness training scenario, we only considered combinations of two task con-
cepts defined by an onlysome axiom. This already enabled a considerable number of
methods to be inferred.
    Essentially, the ontology used in our system consists of two parts O1 and O2 , with
O1 containing definitions as above and O2 representing the core knowledge about the
subject domain: exercises (being task concepts), training equipment, body anatomy,
etc. The ontology is built in such a way that it guarantees independence of the role
includes from any concept occurring under Cincludes, which means that the principle
of matching task collections outlined in Section 3.1 correctly applies. The general shape
of our ontology is formally described in the following theorem, where the role includes
is abbreviated as r.


Theorem 3 Let r be a role and O = O1 ∪ O2 an ontology such that O1 is an acyclic
terminology consisting of definitions A ≡ Cr.(C1 , ..., Cm ), where r does not occur in
C1 , ..., Cm and A, r do not occur in O2 . Then the role r is independent of any concepts
appearing under Cr in O1 .


Proof (Sketch). Let A be the set of concept names occurring on the left-hand side of
axioms in O1 . Let I be a model of O and [s] a binary relation on the domain of I. Let
I 0 be an interpretation obtained from I by changing the interpretation of r to [s]. Since
r and A-concepts do not occur in O2 , we have I 0 |= O2 . It remains to consider an
expansion of the terminology O1 (cf. [2, Prop. 2.1]) to verify that there exists a model
J of O1 ∪ O2 obtained from I 0 by changing the interpretation of A-concepts (and
leaving the interpretation of other symbols unchanged). For any concepts C1 , ..., Cm
appearing under Cr in O1 , their interpretations in J and I coincide, which shows the
required statement.                                                                  □

    The initial planning domain of our case-study scenario encompasses 310 differ-
ent tasks and a few methods. Its formalisation in the ontology consists of 1 230 con-
cepts and 2 903 axioms, of which 613 concepts and 664 axioms are imported from
the NCICB corpus. Initially, 9 different training objectives and 24 workout templates
are specified. For extending the ontology with inferred decompositions, we employ
the OWL reasoner FaCT++ [27], which requires 3.6 seconds on an up-to-date laptop
computer (Intel® Core™ i5-4300U). After being extended, the planning domain con-
tains 471 tasks and 967 methods, of which 203 are created based on subsumptions
between workouts and workout templates, and three further methods have been created
by onlysome-combinations of task concepts. In addition, 59 decomposition methods
for training objectives into workout templates are created of which 24 are onlysome-
combinations of task concepts.
[Figure omitted: a chain of justifications linking the lying gluteus stretch (which
ensures that the gluteus maximus is warmed up) to the sumo deadlift, which is obtained
by decomposing Workout1, in turn obtained by decomposing Strength Training.]

            Fig. 1. Example of a plan explanation for the task lying gluteus stretch


4   Explanations

Cognitive systems that interact with users need to be able to adequately communicate
their solutions and actions. In the field of HCI and dialog modelling, it was shown
that systems that provide additional explanations receive increased trust from their
users [16, 20]. Bercher et al. [3] empirically investigated the role of plan explanations
in an interactive companion system in a real-world scenario.
    In our integrated approach, both DL reasoning and planning work together, thereby
using the procedural and declarative information contained in the integrated knowledge
model, and generating new information artefacts of different flavours (in particular, in-
ferred facts and refined plans). We now describe how explanations are generated from
this information by combining techniques from plan explanation (specifically, an ap-
proach for explaining hybrid plans [25]) and an approach for explaining ontological
inferences [23]. Together they offer complimentary views on a given application do-
main.
    Plan explanation focuses on dependencies between tasks in a plan, in particular how
tasks are decomposed into subtasks, and how these tasks are linked by preconditions and
effects. To provide an explanation why a particular task is part of a generated plan, the
information relevant to justify its purpose is extracted from the causal dependencies and
the decompositions applied to the plan. Technically, this information (which internally
is represented in a logic formalism) is considered the “explanation”. It is guaranteed
that plan explanations are always linear chains of arguments, each based on its prede-
cessor. For instance, consider that in our application scenario, the user asks to justify
a particular action, for example “Why do I have to do a lying gluteus stretch?” Fig.
1 shows the dependencies that establish the use of the lying gluteus stretch within a
plan that achieves a strength training. This information is further converted into text to
be communicated to the user by using simple templates. Causal dependencies between
two tasks A and B, where A provides a precondition l for B, are verbalised as “A is
necessary as it ensures that l, which is needed by B.” Similarly, decompositions are
justified by patterns such as “Task A is necessary, since it is part of B.” In the running
example, this yields:

The lying gluteus stretch is necessary as it ensures that
the gluteus maximus is warmed up, which is needed by the sumo
deadlift. The sumo deadlift is necessary, since it is part
of the Workout1. The Workout1 is necessary, since it is part
of the strength training. ...
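The template-based verbalisation above can be sketched as follows (illustrative Python; the step encoding and function names are our own, and the templates are those quoted in the text):

```python
def verbalise_causal_link(producer, literal, consumer):
    # Template: "A is necessary as it ensures that l, which is needed by B."
    return (f"The {producer} is necessary as it ensures that "
            f"{literal}, which is needed by the {consumer}.")

def verbalise_decomposition(task, parent):
    # Template: "Task A is necessary, since it is part of B."
    return f"The {task} is necessary, since it is part of the {parent}."

def explain_chain(steps):
    """Render a linear chain of ('causal', a, l, b) / ('decomp', a, b)
    justification steps as one continuous text."""
    parts = []
    for step in steps:
        if step[0] == "causal":
            parts.append(verbalise_causal_link(step[1], step[2], step[3]))
        else:
            parts.append(verbalise_decomposition(step[1], step[2]))
    return " ".join(parts)
```

Feeding it the causal link and decompositions of Fig. 1 reproduces the beginning of the sample explanation quoted above.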

The mechanism presented so far represents “traditional” plan explanation.
Here, method decompositions are treated as facts (e.g. “Workout1 is part of strength
[Figure omitted: the inference rule RvDef derives X ⊑ Z from the premises X ⊑ Y and
Y ≡ Z, provided that Y is an atomic concept; its text template reads “... ⟪X ⊑ Y⟫.
Thus, ⟪X ⊑ Z⟫ according to the definition of ⟪Y⟫.”]

Fig. 2. Sample inference rule (left) together with corresponding text template (right). Guillemets
indicate the application of a function translating DL formulas into text.


training”); however, in our approach, they can be justified further, since they correspond
to subsumptions inferred from background knowledge in the ontology. Therefore, the
reasoning behind the subsumption can be used as an explanation to justify the decom-
position. For example, the user may ask the question “Why is Workout1 a strength
training?” For this purpose, we use the second explanation mechanism, which has been
implemented as a prototype. Its aim is to generate stepwise textual explanations for
ontological inferences. In the running example, it outputs:
According to its definition, Workout1 includes front squat
and sumo deadlift. Furthermore, since sumo deadlift is an
isotonic exercise, it follows that Workout1 includes an isotonic
exercise. Given that something that includes an isotonic
exercise has strength as an intended health outcome, Workout1
has strength as an intended health outcome. Thus, Workout1
is a strength training according to the definition of strength
training.

To generate such explanations, first a consequence-based inference mechanism is used
to construct a derivation tree for a given subsumption. This is done in two stages. First
the relevant axioms in the ontology (the “justifications”) are identified using the im-
plementation provided by Horridge [12], which is done efficiently using a standard
tableau-based reasoner. Then, a (slower) proof search mechanism using consequence-
based rules is applied for building a derivation tree. This tree is then linearised to yield
a sequence of explanation steps. The ordering of the explanation steps corresponds to
a post-order traversal in the tree structure of inference rule applications (where the in-
ference step that yields the conclusion is taken to be the root). As an example of a
consequence-based inference rule used for explanation generation and its correspond-
ing text template, consider Fig. 2. This template generates the last statement in the
sample text shown above. Note that even though the presented inference rule is quite
simple, it represents a (logical) shortcut, since the conclusion X ⊑ Z could also be
obtained in two steps by inferring Y ⊑ Z from the equivalence axiom and then us-
ing the transitivity of ⊑. For the generation of explanations, this (logically redundant)
rule is given precedence over the “standard” inference rules, in the interest of smaller
proofs and greater conciseness of the generated text. By contrast, some inference rules
are specified to never generate output, since it would be considered uninformative for
a user. Consider the rule deriving X ⊑ (Y ⊔ Z) from X ⊑ Y, which is always ignored
during text generation. Such considerations provide ample scope for further work into
adjusting the generated texts according to pragmatics and user preferences.
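The post-order linearisation of the derivation tree can be sketched as follows (our own illustration of the traversal; proof nodes are encoded as hypothetical (conclusion, rule, children) triples):

```python
def linearise(proof):
    """Post-order traversal of a derivation tree: the premises of an
    inference step are explained before the step itself, so the final
    conclusion (the root) comes last. A node is (conclusion, rule, children)."""
    conclusion, rule, children = proof
    steps = []
    for child in children:
        steps.extend(linearise(child))
    steps.append((rule, conclusion))
    return steps
```

Applied to a derivation of “Workout1 is a strength training”, the told axiom about Workout1 is verbalised first and the defining inference last, matching the order of the sample explanation above.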
    To answer the general question of how well people understand automatically gener-
ated verbalisations of ontological axioms and inferences, studies have already been per-
formed as part of related work. For example, in experiments, Nguyen [18] found that
the understandability of verbalised inference steps depends on the employed inference
rules, where some kinds of inference rules were found considerably more difficult than
others. Furthermore, if the more difficult-to-understand inference rules were verbalised
in a more elaborate manner, understanding was improved. Such work hints at a general
challenge for empirical evaluations of the general “usefulness” of such verbalisations
(to be addressed as future work); their accessibility partially depends on the complexity
of the formalised domain and on the prerequisites of the user.


5   Related Work

Past research on coupling ontological reasoning and planning mainly addressed the aim
of increasing expressivity/efficiency. There is a large body of research on integrating
ontology and action formalisms, see e.g., the overview in Calvanese et al. [6], which
aims at bringing together the benefits of static and dynamic knowledge representation.
Gil [10] provides a survey of approaches joining classical planning and ontologies and
names a number of planners that use ontological reasoning to speed-up plan genera-
tion. Hartanto and Hertzberg [11] use a domain-specific ontology to prune a given HTN
model. However, additional content in the domain cannot be inferred in their paradigm.
A number of approaches use ontologies to enrich the structure of the planning domain.
Typically, ontologies provide hierarchies of tasks and plans and are often used to repre-
sent states under the open world assumption [22, 26]. For a survey, we refer to Sirin [26,
Chapter 8]. Further approaches (e.g. [14, 8]) use OWL as a representation language for
HTN planning domains but do not employ ontology reasoning to extend these domains.
Sirin [26] describes HTN-DL, combining HTN and description logics to solve Web
Service composition problems. HTN-DL planning domains are encoded in an ontology
by representing tasks as concepts and decomposition methods as individuals. Both are
augmented with axioms describing their preconditions and effects. Sirin’s view of meth-
ods differs from standard HTN: methods define a partially ordered list of actions, but
are not attached to an abstract task that they decompose. Further, preconditions and
effects are assigned to the methods themselves and are not necessarily related to the
contents of the plan. Although Sirin’s approach and ours share the idea of using an
ontology and DL reasoning to generate planning domains, there are conceptual differ-
ences. In HTN-DL, all decomposition methods are provided in the domain by a mod-
eller, while our approach infers new decomposition methods. Sirin applies reasoning to
determine whether a decomposition method can be applied to an abstract task, based
on their respective preconditions and effects. His approach does not support inference
based on the tasks within a method, their decompositions, or other properties, which is
the cornerstone of ours.
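To make the encoding idea concrete, an ontology along these lines might declare a task as a concept and a decomposition method as an individual annotated with precondition and effect axioms. The following is a purely illustrative, simplified sketch of this pattern (the names and axioms are hypothetical and not Sirin’s actual axiomatisation):

```latex
% Task as a concept (TBox):
\mathit{PrepareExercisePlan} \sqsubseteq \mathit{Task}

% Decomposition method as an individual (ABox), annotated with
% hypothetical precondition and effect assertions:
\mathit{Method}(m_1) \qquad
\mathit{hasPrecondition}(m_1, \mathit{userAssessed}) \qquad
\mathit{hasEffect}(m_1, \mathit{planAvailable})
```

Under such an encoding, standard DL reasoning over the concept hierarchy and the assertions about method individuals becomes applicable, which is the basis for the kind of applicability checks Sirin describes.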
     Related work on the generation of explanations from ontologies encompasses a
number of approaches that verbalise formalised content in natural language. Systems
targeting non-expert users include, e.g., the NaturalOWL system [1] and the OntoVerbal
verbaliser [15]. By contrast, the generation of explanations for entailments is addressed
only by some approaches, e.g., by Horridge [12],
whose approach targets expert users. The generation of stepwise explanations of entail-
ments for non-expert users is considered by Borgida et al. [5], Nguyen et al. [19, 18] and
Schiller and Glimm [23], whose work provided a basis for the approach to explanations
presented in this paper.


6   Conclusion
We presented an approach to integrating hierarchical planning knowledge into ontolo-
gies that encompass a general representation of the application domain. A central con-
tribution of this paper is establishing the semantic correspondence between the con-
structs of the planning domain (in particular, decomposition methods) and their repre-
sentation in the ontology. Our results enable a cognitive system to use a coherent knowl-
edge model for both planning and reasoning, which in turn enables coherent and de-
tailed explanations for the user, as demonstrated in the application scenario. Our inves-
tigation also highlights avenues for future work. One topic is incorporating mixed-
initiative planning into the approach, such that communication with and participation
of the user benefit from the system’s explanations. A second is selecting the right level
of verbosity for the generated explanations; future work should address this from the
viewpoints of pragmatics (e.g., omitting explanations for “obvious” inferences) and
user modelling (e.g., taking the user’s prior knowledge into account). Finally, while our
approach enables DL reasoning to exploit the hierarchical structure of a planning do-
main, the partial order of tasks is not amenable to DL reasoning; how to address this
limitation remains an open question for future work.

Acknowledgements. This work was done within the Transregional Collaborative Re-
search Centre SFB/TRR 62 “A Companion-Technology for Cognitive Technical Sys-
tems” funded by the German Research Foundation (DFG).
                                  Bibliography


 [1] Androutsopoulos, I., Lampouras, G., Galanis, D.: Generating natural language de-
     scriptions from OWL ontologies: The NaturalOWL system. JAIR 48, 671–715
     (2013)
 [2] Baader, F., Calvanese, D., McGuinness, D., Nardi, D., Patel-Schneider, P. (eds.):
     The Description Logic Handbook: Theory, Implementation, and Applications.
     Cambridge University Press (2003)
 [3] Bercher, P., Biundo, S., Geier, T., Hoernle, T., Nothdurft, F., Richter, F., Schat-
     tenberg, B.: Plan, repair, execute, explain - How planning helps to assemble your
     home theater. In: Proc. of the Int. Conf. on Automated Planning and Scheduling.
     pp. 386–394. AAAI Press (2014)
 [4] Biundo, S., Bercher, P., Geier, T., Müller, F., Schattenberg, B.: Advanced user
     assistance based on AI planning. Cognitive Systems Research 12(3-4), 219–236
     (2011), Special Issue on Complex Cognition
 [5] Borgida, A., Franconi, E., Horrocks, I.: Explaining ALC subsumption. In: Proc.
     of the European Conf. on Artificial Intelligence. pp. 209–213. IOS Press (2000)
 [6] Calvanese, D., De Giacomo, G., Montali, M., Patrizi, F.: Verification and synthesis
     in description logic based dynamic systems. In: Proc. of the 7th Int. Conf. on Web
     Reasoning and Rule Systems (RR 2013). LNCS, vol. 7994, pp. 50–64. Springer
     (2013)
 [7] Erol, K., Hendler, J.A., Nau, D.S.: Complexity results for HTN planning. Annals
     of Mathematics and Artificial Intelligence 18(1), 69–93 (1996)
 [8] Freitas, A., Schmidt, D., Panisson, A., Meneguzzi, F., Vieira, R., Bordini, R.H.:
     Semantic representations of agent plans and planning problem domains. In: Engi-
     neering Multi-Agent Systems, pp. 351–366. Springer (2014)
 [9] Geier, T., Bercher, P.: On the decidability of HTN planning with task insertion. In:
     Proc. of the 22nd International Joint Conference on Artificial Intelligence (IJCAI
     2011). pp. 1955–1961. AAAI Press (2011)
[10] Gil, Y.: Description logics and planning. AI Magazine 26(2), 73–84 (2005)
[11] Hartanto, R., Hertzberg, J.: Fusing DL reasoning with HTN planning. In: KI 2008:
     Advances in Artificial Intelligence. LNCS, vol. 5243, pp. 62–69. Springer (2008)
[12] Horridge, M.: Justification Based Explanations in Ontologies. Ph.D. thesis, Uni-
     versity of Manchester, Manchester, UK (2011)
[13] Horridge, M., Drummond, N., Goodwin, J., Rector, A., Stevens, R., Wang, H.:
     The Manchester OWL syntax. In: Proc. of the Workshop on OWL Experiences
     and Directions. Athens, GA, USA (2006)
[14] Ko, R.K.L., Lee, E.W., Lee, S.G.: Business-OWL (BOWL) – A hierarchical task
     network ontology for dynamic business process decomposition and formulation.
     IEEE Transactions on Services Computing 5(2), 246–259 (2012)
[15] Liang, S.F., Scott, D., Stevens, R., Rector, A.: OntoVerbal: a generic tool and
     practical application to SNOMED CT. Int. J. of Advanced Computer Science and
     Applications 4(6), 227–239 (2013)
[16] Lim, B.Y., Dey, A.K., Avrahami, D.: Why and why not explanations improve the
     intelligibility of context-aware intelligent systems. In: Proc. of the SIGCHI Conf.
     on Human Factors in Comp. Systems. pp. 2119–2128 (2009)
[17] NCICB: NCI Thesaurus (2015), http://ncicb.nci.nih.gov/xml/owl/EVS/
     Thesaurus.owl (accessed February 9, 2015)
[18] Nguyen, T.A.T.: Generating Natural Language Explanations For Entailments In
     Ontologies. Ph.D. thesis, The Open University, Milton Keynes, UK (2013)
[19] Nguyen, T.A.T., Power, R., Piwek, P., Williams, S.: Predicting the understandabil-
     ity of OWL inferences. In: The Semantic Web: Semantics and Big Data, LNCS,
     vol. 7882, pp. 109–123. Springer (2013)
[20] Nothdurft, F., Richter, F., Minker, W.: Probabilistic human-computer trust han-
     dling. In: Proc. of the Annual Meeting of the Special Interest Group on Discourse
     and Dialogue. pp. 51–59. Association for Computational Linguistics (2014)
[21] Pulido, J.C., González, J.C., González-Ferrer, A., García, J., Fernández, F., Ban-
     dera, A., Bustos, P., Suárez, C.: Goal-directed generation of exercise sets for
     upper-limb rehabilitation. In: Proc. of the Workshop on Knowledge Engineering
     for Planning and Scheduling (2014)
[22] Sánchez-Ruiz, A.A., González-Calero, P.A., Díaz-Agudo, B.: Abstraction in
     knowledge-rich models for case-based planning. In: Case-Based Reasoning Re-
     search and Development, LNCS, vol. 5650, pp. 313–327. Springer (2009)
[23] Schiller, M., Glimm, B.: Towards explicative inference for OWL. In: Proc. of the
     Int. Description Logic Workshop. vol. 1014, pp. 930–941. CEUR (2013)
[24] Schmidt-Schauss, M., Smolka, G.: Attributive concept descriptions with comple-
     ments. AIJ 48, 1–26 (1991)
[25] Seegebarth, B., Müller, F., Schattenberg, B., Biundo, S.: Making hybrid plans
     more clear to human users – a formal approach for generating sound explanations.
     In: Proc. of the Int. Conf. on Automated Planning and Scheduling. pp. 225–233.
     AAAI Press (2012)
[26] Sirin, E.: Combining Description Logic Reasoning with AI Planning for Com-
     position of Web Services. Ph.D. thesis, University of Maryland at College Park
     (2006)
[27] Tsarkov, D., Horrocks, I.: FaCT++ description logic reasoner: System description.
     In: Proc. of the Third Int. Joint Conf. on Automated Reasoning (IJCAR). pp. 292–
     297. Springer (2006)
[28] Vardi, M.Y.: Why is modal logic so robustly decidable? In: Descriptive Com-
     plexity and Finite Models. vol. 31, pp. 149–184. American Mathematical Society
     (1997)