              Non-Deterministic Planning with Temporally Extended Goals:
            Completing the Story for Finite and Infinite LTL (Amended Version*)

    Alberto Camacho†, Eleni Triantafillou†, Christian Muise⋆, Jorge A. Baier‡, Sheila A. McIlraith†

                        † Department of Computer Science, University of Toronto
                           ⋆ CSAIL, Massachusetts Institute of Technology
        ‡ Departamento de Ciencia de la Computación, Pontificia Universidad Católica de Chile
        † {acamacho,eleni,sheila}@cs.toronto.edu, ⋆ cjmuise@mit.edu, ‡ jabaier@ing.puc.cl

Abstract

Temporally extended goals are critical to the specification of a diversity of real-world planning problems. Here we examine the problem of planning with temporally extended goals over both finite and infinite traces where actions can be non-deterministic, and where temporally extended goals are specified in linear temporal logic (LTL). Unlike existing LTL planners, we place no restrictions on our LTL formulae beyond those necessary to distinguish finite from infinite trace interpretations. We realize our planner by compiling temporally extended goals, represented in LTL, into Planning Domain Definition Language problem instances, and exploiting a state-of-the-art fully observable non-deterministic planner to compute solutions. The resulting planner is sound and complete. Our approach exploits the correspondence between LTL and automata. We propose several different compilations based on translations of LTL to (Büchi) alternating or non-deterministic finite state automata, and evaluate various properties of the competing approaches. We address a diverse spectrum of LTL planning problems that, to this point, had not been solvable using AI planning techniques. We do so while demonstrating competitive performance relative to the state of the art in LTL planning.

* A version of this paper was presented at the Heuristics and Search for Domain-Independent Planning Workshop, ICAPS 2016.

1 Introduction

Most real-world planning problems involve complex goals that are temporally extended, require adherence to safety constraints and directives, necessitate the optimization of preferences or other quality measures, and/or require or may benefit from following a prescribed high-level script that specifies how the task is to be realized. In this paper we focus on the problem of planning for temporally extended goals, constraints, directives or scripts that are expressed in Linear Temporal Logic (LTL) for planning domains in which actions can have non-deterministic effects, and where LTL is interpreted over either finite or infinite traces.

Planning with deterministic actions and LTL goals has been well studied, commencing with the works of Bacchus and Kabanza [2000] and Doherty and Kvarnström [2001]. Significant attention has been given to compilation-based approaches (e.g., [Rintanen, 2000; Cresswell and Coddington, 2004; Edelkamp, 2006; Baier and McIlraith, 2006; Patrizi et al., 2011]), which take a planning problem with an LTL goal and transform it into a classical planning problem for which state-of-the-art classical planning technology can often be leveraged. The more challenging problem of planning with non-deterministic actions and LTL goals has not been studied to the same extent; Kabanza, Barbeau, and St.-Denis [1997], and Pistore and Traverso [2001] have proposed their own LTL planners, while Patrizi, Lipovetzky, and Geffner [2013] have proposed the only compilation-based approach that exists. Unfortunately, the latter approach is limited to the proper subset of LTL for which there exists a deterministic Büchi automaton. In addition, it is restricted to the interpretation of LTL over infinite traces, and the compilation is worst-case exponential in the size of the goal formula. Finally, it is subject to a double-exponential blowup, since there exist LTL formulae of size n for which the recognizing deterministic Büchi automaton has 2^(2^n) states [Kupferman and Rosenberg, 2010].

In this paper, we propose a number of compilation-based approaches for LTL planning with non-deterministic actions. Specifically, we present two approaches for LTL planning with non-deterministic actions over infinite traces and two approaches for LTL planning with non-deterministic actions over finite traces¹. In each case, we exploit translations from LTL to (Büchi) alternating or non-deterministic finite state automata. All of our compilations are sound and complete and result in Planning Domain Definition Language (PDDL) encodings suitable for input to standard fully observable non-deterministic (FOND) planners. Our compilations based on alternating automata are linear in time and space with respect to the size of the LTL formula, while those based on non-deterministic finite state automata are worst-case exponential in time and space (although optimizations in the implementation avoid this in our experimental analysis).

¹ Subtleties relating to the interpretation of LTL over finite traces are discussed in [De Giacomo and Vardi, 2013].

Our approaches build on methods for finite LTL planning
with deterministic actions by Baier and McIlraith [2006] and Torres and Baier [2015], and for the infinite non-deterministic case, on the work of Patrizi, Lipovetzky, and Geffner [2013]. While in the finite case the adaptation of these methods was reasonably straightforward, the infinite case required non-trivial insights and modifications to Torres and Baier's approach. We evaluate the relative performance of our compilation-based approaches using the state-of-the-art FOND planner PRP [Muise, McIlraith, and Beck, 2012], demonstrating that they are competitive with state-of-the-art LTL planning techniques.

Our work presents the first realization of a compilation-based approach to planning with non-deterministic actions where the LTL is interpreted over finite traces. Furthermore, unlike previous approaches to LTL planning, our compilations make it possible, for the first time, to solve the complete spectrum of FOND planning with LTL goals interpreted over infinite traces. Indeed, all of our translations capture the full expressivity of the LTL language. Table 1 summarizes existing compilation-based approaches and the contributions of this work. Our compilations enable a diversity of real-world planning problems as well as supporting a number of applications outside planning proper, ranging from business process analysis and web service composition to narrative generation, automated diagnosis, and automated verification. Finally and importantly, our compilations can be seen as a practical step towards the efficient realization of a class of LTL synthesis tasks using planning technology (e.g., [Pnueli and Rosner, 1989; De Giacomo and Vardi, 2015]). We elaborate further with respect to related work in Section 5.

2 Preliminaries

2.1 FOND Planning

Following Ghallab, Nau, and Traverso [2004], a Fully Observable Non-Deterministic (FOND) planning problem consists of a tuple ⟨F, I, G, A⟩, where F is a set of propositions that we call fluents, I ⊆ F characterizes what holds in the initial state, and G ⊆ F characterizes what must hold for the goal to be achieved. Finally, A is the set of actions. The set of literals of F is Lits(F) = F ∪ {¬f | f ∈ F}. Each action a ∈ A is associated with ⟨Pre_a, Eff_a⟩, where Pre_a ⊆ Lits(F) is the precondition and Eff_a is a set of outcomes of a. Each outcome e ∈ Eff_a is a set of conditional effects, each of the form (C → ℓ), where C ⊆ Lits(F) and ℓ ∈ Lits(F). Given a planning state s ⊆ F and a fluent f ∈ F, we say that s satisfies f, denoted s |= f, iff f ∈ s. In addition, s |= ¬f if f ∉ s, and s |= L for a set of literals L if s |= ℓ for every ℓ ∈ L. Action a is applicable in state s if s |= Pre_a. We say s′ is a result of applying a in s iff, for one outcome e in Eff_a, s′ is equal to s \ {p | (C → ¬p) ∈ e, s |= C} ∪ {p | (C → p) ∈ e, s |= C}. The determinization of a FOND problem ⟨F, I, G, A⟩ is the planning problem ⟨F, I, G, A′⟩, where each non-deterministic action a ∈ A is replaced by a set of deterministic actions, a_i, one action corresponding to each of the distinct non-deterministic effects of a. Together these deterministic actions comprise the set A′.

Solutions to a FOND planning problem P are policies. A policy p is a partial function from states to actions such that if p(s) = a, then a is applicable in s. The execution of a policy p in state s is an infinite sequence s_0, a_0, s_1, a_1, . . . or a finite sequence s_0, a_0, . . . , s_{n-1}, a_{n-1}, s_n, where s_0 = s, and all of its state-action-state substrings s, a, s′ are such that p(s) = a and s′ is a result of applying a in s. Finite executions ending in a state s are such that p(s) is undefined. An execution σ yields the state trace π that results from removing all the action symbols from σ.

Alternatively, solutions to P can be represented by means of finite-state controllers (FSCs). Formally, an FSC is a tuple Π = ⟨C, c_0, Γ, Λ, ρ, Ω⟩, where C is the set of controller states, c_0 ∈ C is the initial controller state, Γ = S is the input alphabet of Π, Λ = A is the output alphabet of Π, ρ : C × Γ → C is the transition function, and Ω : C → Λ is the controller output function (cf. [Geffner and Bonet, 2013; Patrizi, Lipovetzky, and Geffner, 2013]). In a planning state s, Π outputs action Ω(c_i) when the controller state is c_i. Then, the controller transitions to state c_{i+1} = ρ(c_i, s′) if s′ is the new planning state, assumed to be fully observable, that results from applying Ω(c_i) in s. The execution of an FSC Π in controller state c (assumed to be c = c_0) and state s is an infinite sequence s_0, a_0, s_1, a_1, . . . or a finite sequence s_0, a_0, . . . , s_{n-1}, a_{n-1}, s_n, where s_0 = s, and such that all of its state-action-state substrings s_i, a_i, s_{i+1} are such that Ω(c_i) = a_i, s_{i+1} is a result of applying a_i in s_i, and c_{i+1} = ρ(c_i, s_i). Finite executions ending in a state s_n are such that Ω(c_n) is undefined. An execution σ yields the state trace π that results from removing all the action symbols from σ.

Following Geffner and Bonet [2013], an infinite execution σ is fair iff whenever s, a occurs infinitely often within σ, then so does s, a, s′, for every s′ that is a result of applying a in s. A solution is a strong cyclic plan for ⟨F, I, G, A⟩ iff each of its executions in I is either finite and ends in a state that satisfies G, or is (infinite and) unfair.

2.2 Linear Temporal Logic

Linear Temporal Logic (LTL) was first proposed for verification [Pnueli, 1977]. An LTL formula is interpreted over an infinite sequence, or trace, of states. Because the execution of a sequence of actions induces a trace of planning states, LTL can be naturally used to specify temporally extended planning goals when the execution of the plan naturally yields an infinite state trace, as may be the case in non-deterministic planning.

In classical planning – i.e., planning with deterministic actions and final-state goals – plans are finite sequences of actions which yield finite execution traces. As such, approaches to planning with deterministic actions and LTL goals (e.g., [Baier and McIlraith, 2006]), including the Planning Domain Definition Language (PDDL) version 3 [Gerevini and Long, 2005], use a finite semantics for LTL, whereby the goal formula is evaluated over a finite state trace. De Giacomo and Vardi [2013] formally described and analyzed such a version of LTL, which they called LTLf, noting the distinction with LTL [De Giacomo, Masellis, and Montali, 2014].

LTL and LTLf allow the use of the modal operators next (◯) and until (U), from which it is possible to define the well-known operators always (□) and eventually (◇).
  Infinite LTL, deterministic actions:      [Albarghouthi et al., 2009] (EXP); [Patrizi et al., 2011] (EXP)
  Infinite LTL, non-deterministic actions:  [Patrizi et al., 2013] (limited LTL) (2EXP); [this paper (BAA)] (LIN); [this paper (NBA)] (EXP)
  Finite LTL, deterministic actions:        [Edelkamp, 2006] (EXP); [Cresswell & Coddington, 2006] (EXP); [Baier & McIlraith, 2006] (EXP); [Torres & Baier, 2015] (LIN)
  Finite LTL, non-deterministic actions:    [this paper (NFA)] (EXP); [this paper (AA)] (LIN)

Table 1: Automata-based compilation approaches for LTL planning. (2EXP): worst-case double exponential. (EXP): worst-case exponential. (LIN): linear.


LTLf, in addition, allows a weak next (◯_w) operator. An LTLf formula over a set of propositions P is defined inductively: a proposition in P is a formula, and if φ1 and φ2 are formulae, then so are ¬φ1, (φ1 ∧ φ2), (φ1 U φ2), ◯φ1, and ◯_w φ1. LTL is defined analogously.

The semantics of LTL and LTLf is defined as follows. Formally, a state trace π is a sequence of states, where each state is an element in 2^P. We assume that the first state in π is s_1, that the i-th state of π is s_i, and that |π| is the length of π (which is ∞ if π is infinite). We say that π satisfies φ (π |= φ, for short) iff π, 1 |= φ, where for every natural number i ≥ 1:

  • π, i |= p, for a propositional variable p ∈ P, iff p ∈ s_i,
  • π, i |= ¬φ iff it is not the case that π, i |= φ,
  • π, i |= (φ1 ∧ φ2) iff π, i |= φ1 and π, i |= φ2,
  • π, i |= ◯φ iff i < |π| and π, i + 1 |= φ,
  • π, i |= (φ1 U φ2) iff for some j in {i, . . . , |π|}, it holds that π, j |= φ2 and for all k ∈ {i, . . . , j - 1}, π, k |= φ1,
  • π, i |= ◯_w φ iff i = |π| or π, i + 1 |= φ.

Observe that operator ◯_w is equivalent to ◯ iff π is infinite. Therefore, henceforth we allow ◯_w in LTL formulae; we do not use the acronym LTLf, but we are explicit regarding which interpretation we use (either finite or infinite) when not obvious from the context. As usual, ◇φ is defined as (true U φ), and □φ as ¬◇¬φ. We use the release operator, defined by (φ R ψ) := ¬(¬φ U ¬ψ).

2.3 LTL, Automata, and Planning

Regardless of whether the interpretation is over an infinite or finite trace, given an LTL formula φ there exists an automaton A_φ that accepts a trace π iff π |= φ. For infinite interpretations of φ, a trace π is accepting when the run of (a non-deterministic Büchi automaton) A_φ on π visits accepting states infinitely often. For finite interpretations, π is accepting when the final automaton state is accepting. For the infinite case such automata may be either Büchi non-deterministic or Büchi alternating [Vardi and Wolper, 1994], whereas for the finite case such automata may be either non-deterministic [Baier and McIlraith, 2006] or alternating [De Giacomo, Masellis, and Montali, 2014; Torres and Baier, 2015]. Alternation allows the generation of compact automata; specifically, A_φ is linear in the size of φ (both in the infinite and finite case), whereas the size of non-deterministic (Büchi) automata is worst-case exponential.

These automata constructions have been exploited in deterministic and non-deterministic planning with LTL via compilation approaches that allow us to use existing planning technology for non-temporal goals. The different state-of-the-art automata-based approaches for deterministic and FOND LTL planning are summarized in Table 1. Patrizi, Lipovetzky, and Geffner [2013] present a Büchi automata-based compilation for that subset of LTL which relies on the construction of a deterministic Büchi automaton. It is a well-known fact that deterministic Büchi automata are not equivalent to non-deterministic Büchi automata, and thus this last approach is applicable to a limited subset of LTL formulae.

3 FOND Planning with LTL Goals

An LTL-FOND planning problem is a tuple ⟨F, I, φ, A⟩, where F, I, and A are defined as in FOND problems, and φ is an LTL formula. Solutions to an LTL-FOND problem are FSCs, as described below.

Definition 1 (Finite LTL-FOND). An FSC Π is a solution for ⟨F, I, φ, A⟩ under the finite semantics iff every execution of Π over I is such that either (1) it is finite and yields a state trace π such that π |= φ or (2) it is (infinite and) unfair.

Definition 2 (Infinite LTL-FOND). An FSC Π is a solution for ⟨F, I, φ, A⟩ under the infinite semantics iff (1) every execution of Π over I is infinite and (2) every fair (infinite) execution yields a state trace π such that π |= φ.

Below we present two general approaches to solving LTL-FOND planning problems by compiling them into standard FOND problems. Each exploits correspondences between LTL and either alternating or non-deterministic automata, and each is specialized, as necessary, to deal with LTL interpreted over either infinite (Section 3.1) or finite (Section 3.2) traces. We show that FSC representations of strong-cyclic solutions to the resultant FOND problem are solutions to the original LTL-FOND problem. Our approaches are the first to address the full spectrum of FOND planning with LTL interpreted over finite and infinite traces. In particular, our work is the first to solve full LTL-FOND with respect to infinite trace interpretations, and represents the first realization of a compilation approach for LTL-FOND with respect to finite trace interpretations.

3.1 From Infinite LTL-FOND to FOND

We present two different approaches to infinite LTL-FOND planning. The first approach exploits Büchi alternating automata (BAA) and is linear in time and space with respect to the size of the LTL formula. The second approach exploits Büchi non-deterministic automata (NBA), and is worst-case exponential in time and space with respect to the size of the LTL formula. Nevertheless, as we see in Section 4, the second compilation does not exhibit this worst-case complexity in practice, generating high quality solutions with reduced compilation run times and competitive search performance.
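For illustration, the finite-trace semantics of Section 2.2 can be read directly as a recursive check over a trace. The following Python sketch is illustrative only: the tuple-based formula encoding, the helper names, and the convention of adding a dedicated "true" proposition to every state are assumptions of the sketch and are not part of the compilations described in this paper.

    # Minimal sketch of the finite-trace (LTLf) semantics given in Section 2.2.
    # Formulas are nested tuples (encoding is an assumption of this sketch):
    #   ("atom", "p") | ("not", f) | ("and", f, g) | ("or", f, g)
    #   ("X", f)  strong next | ("WX", f)  weak next | ("U", f, g)  until

    def holds(trace, i, f):
        """Does the trace (a list of sets of propositions, positions 1..n) satisfy f at position i?"""
        n = len(trace)                        # |pi|, finite here
        tag = f[0]
        if tag == "atom":
            return f[1] in trace[i - 1]
        if tag == "not":
            return not holds(trace, i, f[1])
        if tag == "and":
            return holds(trace, i, f[1]) and holds(trace, i, f[2])
        if tag == "or":
            return holds(trace, i, f[1]) or holds(trace, i, f[2])
        if tag == "X":                        # strong next: requires a successor state
            return i < n and holds(trace, i + 1, f[1])
        if tag == "WX":                       # weak next: trivially true at the last state
            return i == n or holds(trace, i + 1, f[1])
        if tag == "U":                        # exists j >= i with f2 at j and f1 at i..j-1
            return any(holds(trace, j, f[2]) and
                       all(holds(trace, k, f[1]) for k in range(i, j))
                       for j in range(i, n + 1))
        raise ValueError(f"unknown operator {tag}")

    def eventually(f): return ("U", ("atom", "true"), f)   # true U f ("true" added to every state below)
    def always(f):     return ("not", eventually(("not", f)))

    trace = [{"true"}, {"true", "p"}, {"true"}]
    print(holds(trace, 1, eventually(("atom", "p"))))   # True
    print(holds(trace, 1, always(("atom", "p"))))       # False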
3.1.1 A BAA-based Compilation

Our BAA-based compilation builds on ideas by Torres and Baier [2015] for alternating automata (AA) based compilation of finite LTL planning with deterministic actions (henceforth, TB15), and from Patrizi, Lipovetzky, and Geffner's compilation [2013] (henceforth, PLG13) of LTL-FOND to FOND. Combining these two approaches is not straightforward. Among other reasons, TB15 does not yield a sound translation for the infinite case, and thus we needed to modify it significantly. This is because the accepting condition for BAAs is more involved than that of regular AAs.

The first step in the compilation is to build a BAA for our LTL goal formula φ over propositions F, which we henceforth assume to be in negation normal form (NNF). Transforming an LTL formula φ to NNF can be done in linear time in the size of φ. The BAA we use below is an adaptation of the BAA by Vardi [1995]. Formally, it is represented by a tuple A_φ = (Q, Σ, δ, q_φ, Q_Fin), where the set of states Q is the set of subformulae of φ, sub(φ) (including φ), Σ contains all sets of propositions in P, Q_Fin = {α R β ∈ sub(φ)}, and the transition function δ is given by:

    δ(ℓ, s)     = ⊤ if s |= ℓ, and ⊥ otherwise   (ℓ a literal)
    δ(α ∧ β, s) = δ(α, s) ∧ δ(β, s)
    δ(α ∨ β, s) = δ(α, s) ∨ δ(β, s)
    δ(◯α, s)    = α
    δ(α U β, s) = δ(β, s) ∨ (δ(α, s) ∧ α U β)
    δ(α R β, s) = δ(β, s) ∧ (δ(α, s) ∨ α R β)

As a note for the reader unfamiliar with BAAs, the transition function for these automata takes a state and a symbol and returns a positive Boolean formula over the set of states Q. Furthermore, a run of a BAA over an infinite string π = s_1 s_2 . . . is characterized by a tree with labeled nodes, in which (informally): (1) the root node is labeled with the initial state, (2) level i corresponds to the processing of symbol s_i, and (3) the children of a node labeled by q at level i are the states appearing in a minimal model of δ(q, s_i). As such, multiple runs for a certain infinite string are produced when selecting different models of δ(q, s_i). A special case is when δ(q, s_i) reduces to ⊤ or ⊥, where there is one child labeled by ⊤ or ⊥, respectively. A run of a BAA is accepting iff all of its finite branches end on ⊤ and in each of its infinite branches there is an accepting state that repeats infinitely often. Figure 1 shows a run of the BAA for □◇p ∧ □◇¬p, a formula whose semantics forces an infinite alternation, which is not necessarily immediate, between states that satisfy p and states that do not satisfy p.

[Figure 1: An accepting run of a BAA for □◇p ∧ □◇¬p over an infinite sequence of states in which the truth value of p alternates. Double-line ovals are accepting states/conditions.]

In our BAA translation for LTL-FOND we follow a similar approach to that developed in the TB15 translation: given an input problem P, we generate an equivalent problem P′ in which we represent the configuration of the BAA with fluents (one fluent q_ψ for each state ψ of the BAA). P′ contains the actions in P plus additional synchronization actions whose objective is to update the configuration of the BAA. In P′, there are special fluents to alternate between so-called world mode, in which only one action of P is allowed, and synchronization mode, in which the configuration of the BAA is updated.

Before providing details of the translation we overview the main differences between our translation and that of TB15. TB15 recognizes an accepting run (i.e., a satisfied goal) by observing that all automaton states at the last level of the (finite) run are accepting states. In the infinite case, such a check does not work. As can be seen in the example of Figure 1, there is no single level of the (infinite) run that only contains final BAA states. Thus, when building a plan with our translation, the planner is given the ability to "decide" at any moment that an accepting run can be found, and then the objective is to "prove" this is the case by showing the existence of a loop or lasso in the plan in which any non-accepting state may turn into an accepting state. To keep track of those non-accepting states that we require to eventually "turn into" accepting states, we use special fluents that we call tokens.

For an LTL-FOND problem P = ⟨F, I, φ, A⟩, where φ is an NNF LTL formula with BAA A_φ = (Q, Σ, δ, q_φ, Q_Fin), the translated FOND problem is P′ = ⟨F′, I′, G′, A′⟩, where each component is described below.

Fluents  P′ has the same fluents as P plus fluents for the representation of the states of the automaton, F_Q = {q_ψ | ψ ∈ Q}, and flags copy, sync, and world for controlling the different modes. Finally, it includes the set F_Q^S = {q_ψ^S | ψ ∈ Q}, which are copies of the automaton fluents, and the tokens F_Q^T = {q_ψ^T | ψ ∈ Q}. We describe both sets below. Formally, F′ = F ∪ F_Q ∪ F_Q^S ∪ F_Q^T ∪ {copy, sync, world, goal}.

    Sync Action       Effect
    tr(q_ℓ^S)         {¬q_ℓ^S, q_ℓ^T → ¬q_ℓ^T}
    tr(q_{α∧β}^S)     {q_α^S, q_β^S, ¬q_{α∧β}^S, q_{α∧β}^T → {q_α^T, q_β^T, ¬q_{α∧β}^T}}
    tr1(q_{α∨β}^S)    {q_α^S, ¬q_{α∨β}^S, q_{α∨β}^T → {q_α^T, ¬q_{α∨β}^T}}
    tr2(q_{α∨β}^S)    {q_β^S, ¬q_{α∨β}^S, q_{α∨β}^T → {q_β^T, ¬q_{α∨β}^T}}
    tr(q_{◯α}^S)      {q_α, ¬q_{◯α}^S, q_{◯α}^T → {q_α^T, ¬q_{◯α}^T}}
    tr1(q_{αUβ}^S)    {q_β^S, ¬q_{αUβ}^S, q_{αUβ}^T → {q_β^T, ¬q_{αUβ}^T}}
    tr2(q_{αUβ}^S)    {q_α^S, q_{αUβ}, ¬q_{αUβ}^S, q_{αUβ}^T → q_α^T}
    tr1(q_{αRβ}^S)    {q_β^S, q_α^S, ¬q_{αRβ}^S, q_{αRβ}^T → ¬q_{αRβ}^T}
    tr2(q_{αRβ}^S)    {q_β^S, q_{αRβ}, ¬q_{αRβ}^S, q_{αRβ}^T → ¬q_{αRβ}^T}
    tr1(q_{◇α}^S)     {q_α^S, ¬q_{◇α}^S, q_{◇α}^T → {q_α^T, ¬q_{◇α}^T}}
    tr2(q_{◇α}^S)     {q_{◇α}, ¬q_{◇α}^S}
    tr(q_{□α}^S)      {q_α^S, q_{□α}, ¬q_{□α}^S, q_{□α}^T → ¬q_{□α}^T}

Table 2: Synchronization actions. The precondition of tr(q_ψ^S) is {sync, q_ψ^S}, plus ℓ when ψ = ℓ is a literal.
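For illustration, the synchronization actions of Table 2 mirror the transition function δ defined above: each way of satisfying δ(ψ, s) yields one tr action. The following Python sketch computes δ recursively over an NNF formula; the tuple encoding of formulas and all names are assumptions of the sketch, not part of our translator.

    # Minimal sketch of the BAA transition function delta for NNF formulas built
    # from literals, "and", "or", "X" (next), "U" (until), and "R" (release).
    TRUE, FALSE = ("true",), ("false",)

    def delta(f, state):
        """Return a positive Boolean formula over automaton states (subformulas of the goal)."""
        tag = f[0]
        if tag == "lit":                       # ("lit", proposition, polarity)
            _, p, positive = f
            return TRUE if (p in state) == positive else FALSE
        if tag == "and":
            return ("and", delta(f[1], state), delta(f[2], state))
        if tag == "or":
            return ("or", delta(f[1], state), delta(f[2], state))
        if tag == "X":                         # next: the obligation moves to the subformula
            return f[1]
        if tag == "U":                         # delta(a U b) = delta(b) or (delta(a) and (a U b))
            a, b = f[1], f[2]
            return ("or", delta(b, state), ("and", delta(a, state), f))
        if tag == "R":                         # delta(a R b) = delta(b) and (delta(a) or (a R b))
            a, b = f[1], f[2]
            return ("and", delta(b, state), ("or", delta(a, state), f))
        raise ValueError(f"unexpected operator {tag} (formula must be in NNF)")

    # Example: goal  p U q  evaluated in a state where p holds but q does not.
    goal = ("U", ("lit", "p", True), ("lit", "q", True))
    print(delta(goal, {"p"}))
    # ('or', ('false',), ('and', ('true',), ('U', ('lit', 'p', True), ('lit', 'q', True))))
    # i.e., the automaton must keep the obligation  p U q  in the next configuration,
    # which is exactly what the tr2 action for an until formula encodes in Table 2.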
The set of actions A′ is the union of the sets A_w and A_s plus the continue action.

World Mode  A_w contains the actions in A with preconditions modified to allow execution only in world mode. Effects are modified to allow the execution of the copy action, which initiates the synchronization phase, described below. Formally, A_w = {a′ | a ∈ A}, and for all a′ in A_w:

    Pre_a′ = Pre_a ∪ {world},
    Eff_a′ = Eff_a ∪ {copy, ¬world}.

Synchronization Mode  This mode has three phases. In the first phase, the copy action is executed, adding a copy q_ψ^S for each fluent q_ψ that is currently true, and deleting q_ψ. Intuitively, q_ψ^S defines the state of the automaton prior to synchronization. The precondition of copy is {copy}, while its effect is:

    Eff_copy = {q_ψ → {q_ψ^S, ¬q_ψ} | q_ψ ∈ F_Q} ∪ {sync, ¬copy}

As soon as the sync fluent becomes true, the second phase of synchronization begins. Here the only executable actions are those that update the state of the automaton, which are defined in Table 2. These actions update the state of the automaton following the definition of the transition function δ. In addition, each synchronization action for a formula ψ that has an associated token q_ψ^T propagates such a token to its subformulae, unless ψ corresponds to either an accepting state (i.e., ψ is of the form α R β) or to a literal ℓ whose truth can be verified with respect to the current state via action tr(q_ℓ^S).

When no more synchronization actions are possible, we enter the third phase of synchronization. Here only two actions are executable: world and continue. The objective of the world action is to reestablish world mode. Its precondition is {sync} together with ¬q_ψ^S for every ψ ∈ Q (i.e., no copy fluent remains to be processed), and its effect is {world, ¬sync}.

The continue action also reestablishes world mode, but in addition "decides" that an accepting BAA configuration can be reached in the future. This is reflected by the non-deterministic effect that makes the fluent goal true. As such, it "tokenizes" all states that are not final states in F_Q, by adding q_ψ^T for each BAA state q_ψ that is non-final and currently true. Formally,

    Pre_continue = {sync} ∪ {¬q_ψ^T | ψ ∉ Q_Fin}
    Eff_continue = {{goal}, {q_ψ → q_ψ^T | ψ ∉ Q_Fin} ∪ {world, ¬sync}}

The set A_s is defined as the one containing the actions copy, world, and all actions defined in Table 2.

Initial and Goal States  The resulting problem P′ has initial state I′ = I ∪ {q_φ, copy}, and goal G′ = {goal}.

In summary, our BAA-based approach builds on TB15 while integrating ideas from PLG13. Like PLG13, our approach uses a continue action to find plans with lassos, but unlike PLG13, our translation does not directly use the accepting configuration of the automaton. Rather, the planner "guesses" that such a configuration can be reached. The token fluents F_Q^T, which did not exist in TB15, are created for each non-accepting state and can only be eliminated when a non-accepting BAA state becomes accepting.

Now we show how, given a strong-cyclic policy p for P′, we can generate an FSC for P. Observe that every state ξ, which is a set of fluents in F′, can be written as the disjoint union of the sets s_w = ξ ∩ F and s_q = ξ ∩ (F′ \ F). Abusing notation, we use s_w ∈ 2^F to represent a state in P. For a planning state ξ = s_w ∪ s_q in which p(ξ) is defined, we define Ω(ξ) to be the action in A whose translation is p(ξ). Recall now that executions of a strong-cyclic policy p for P′ in state ξ generate plans of the form a_1 α_1 a_2 α_2 . . ., where each a_i is a world action in A_w and the α_i are sequences of actions in A′ \ A_w. Thus Ω(ξ) can be generated by taking out the fluents world and copy from the precondition and effects of p(ξ). If state s′_w is a result of applying Ω(ξ) in s_w, we define ρ(ξ, s′_w) to be the state ξ′ that results from the composition of the consecutive non-world actions α_1 mandated by an execution of p in s′_w ∪ s_q. Despite non-determinism in the executions, the state ξ′ = ρ(ξ, s′_w) is well-defined.

The BAA translation for LTL-FOND is sound and complete. Throughout the paper, the soundness property guarantees that FSCs obtained from solutions to the compiled problem P′ are solutions to the LTL-FOND problem P, whereas the completeness property guarantees that a solution to P′ exists if one exists for P.

Theorem 1. The BAA translation for Infinite LTL-FOND planning is sound, complete, and linear in the size of the goal formula.

A complete proof is not included, but we present some of the intuitions our proof builds on. Consider a policy p′ for P′. p′ yields three types of executions: (1) finite executions that end in a state where goal is true, (2) infinite executions in which the continue action is executed infinitely often, and (3) infinite, unfair executions. We do not need to consider (3) because of Definition 2. Because the precondition of continue does not admit token fluents, if continue executes infinitely often we can guarantee that any state that was not a BAA accepting state turns into an accepting state. This in turn means that every branch of the run contains an infinite repetition of final states. The plan p for P is obtained by removing all synchronization actions from p′, and the FSC that is a solution to P is obtained as described above. In the other direction, a plan p′ for P′ can be built from a plan p for P by adding synchronization actions. Theorem 1 follows from the argument given above and reuses most of the argument that TB15 uses to show their translation is correct.
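For illustration, the following Python sketch shows how a generator might emit the world-mode wrapping of a domain action and the continue action described above. The dictionary-based action format, the string encoding of (possibly negated) fluents, and all helper names are assumptions of the sketch and do not reflect our PDDL generator.

    # Minimal sketch of the world-mode wrapping and the continue action (Section 3.1.1).
    # Conditional effects are pairs (condition tuple, literal string); "-f" denotes "not f".

    def world_mode_action(a):
        """A_w: the original action, executable only in world mode; it hands control to copy."""
        return {
            "name": a["name"],
            "pre": set(a["pre"]) | {"world"},
            "outcomes": [list(o) + [((), "copy"), ((), "-world")] for o in a["outcomes"]],
        }

    def continue_action(baa_states, accepting):
        """continue: executable only when no token remains on a non-accepting state; one
        outcome achieves goal, the other tokenizes the currently true non-accepting states."""
        non_final = [q for q in baa_states if q not in accepting]
        return {
            "name": "continue",
            "pre": {"sync"} | {f"-{q}_T" for q in non_final},
            "outcomes": [
                [((), "goal")],                                      # guess: an accepting lasso exists
                [((q,), f"{q}_T") for q in non_final]                # tokenize pending states
                + [((), "world"), ((), "-sync")],
            ],
        }

    # Example with two BAA states, where "a_R_b" (a release formula) is accepting.
    print(continue_action(["p_U_q", "a_R_b"], accepting={"a_R_b"})["pre"])
    # e.g. {'sync', '-p_U_q_T'}  (set order may vary)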
3.1.2 An NBA-based Compilation

This compilation relies on the construction of a non-deterministic Büchi automaton (NBA) for the goal formula, and builds on translation techniques for finite LTL planning with deterministic actions developed by Baier and McIlraith [2006] (henceforth, BM06). Given a deterministic planning problem P with LTL goal φ, the BM06 translation runs in two phases: first, φ is transformed into a non-deterministic finite-state automaton (NFA), A_φ, such that it accepts a finite sequence of states σ if and only if σ |= φ. In the second phase, it builds an output problem P′ that contains the same fluents as in P plus additional fluents of the form F_q, for each state q of A_φ. Problem P′ contains the same actions as in P, but each action may contain additional effects which model the dynamics of the F_q fluents. The goal of P′ is defined as the disjunction of all fluents of the form F_f, where f is an accepting state of A_φ. The initial state of P′ contains F_q iff q is a state that A_φ would reach after processing the initial state of P. The most important property of BM06 is the following: let σ = s_0 s_1 . . . s_{n+1} be a state trace induced by some sequence of actions a_0 a_1 . . . a_n in P′; then F_q is satisfied by s_{n+1} iff there exists a run of A_φ over σ that ends in q. This means that a single sequence of planning states encodes all runs of the NFA A_φ. The important consequence of this property is that the angelic semantics of A_φ is immediately reflected in the planning states and does not need to be handled by the planner (unlike TB15).

For an LTL-FOND problem P = ⟨F, I, φ, A⟩, our NBA-based compilation constructs a FOND problem P′ = ⟨F′, I′, G′, A′⟩ via the following three phases: (i) construct an NBA, A_φ, for the NNF LTL goal formula φ, (ii) apply the modified BM06 translation to the determinization of P (see Section 2.1), and (iii) construct the final FOND problem P′ by undoing the determinization, i.e., reconstruct the original non-deterministic actions from their determinized counterparts. More precisely, the translation of a non-deterministic action a in P is a non-deterministic action a′ in P′ that is constructed by first determinizing a into a set of actions, a_i, that correspond to each of the non-deterministic outcomes of a, applying the BM06-based translation to each a_i to produce a′_i, and then reassembling the a′_i back into a non-deterministic action, a′. In so doing, Eff_a′ is the set of outcomes of each of the deterministic actions a′_i, and Pre_a′ is similarly the precondition of any of these a′_i.

The modification of the BM06 translation used in the second phase leverages ideas present in PLG13 and our BAA-based compilations to capture infinite runs via induced non-determinism. In particular, it includes a continue action whose precondition is the accepting configuration of the NBA (a disjunction of the fluents representing accepting states). Unlike our BAA-based compilation, the tokenization is not required, because accepting runs are those that achieve accepting states infinitely often, no matter which ones. As before, one non-deterministic effect of continue is to achieve goal, while the other is to force the planner to perform at least one action. This is ensured by adding an extra precondition to continue, can_continue, which is true in the initial state, is made true by every action but continue, and is deleted by continue.

In order to construct a solution Π to P from a strong-cyclic solution p to P′ = ⟨F′, I′, G′, A′⟩, it is useful to represent states ξ in P′ as the disjoint union of s = ξ ∩ F and q = ξ ∩ (F′ \ F). Intuitively, s represents the planning state in P, and q represents the automaton state. The controller Π = ⟨C, c_0, Γ, Λ, ρ, Ω⟩ is defined as follows: c_0 = I′ is the initial controller state; Γ = 2^F; Λ = A; ρ(ξ, s′) = s′ ∪ q′, where q′ is the automaton state that results from applying action p(ξ) in ξ; Ω(ξ) = p(ξ); and C ⊆ 2^{F′} is the domain of p. Actions in P are non-deterministic and have conditional effects, but the automaton state q′ that results from applying action p(ξ) in state ξ = s ∪ q is deterministic, and thus ρ is well-defined.

Theorem 2. The NBA translation for Infinite LTL-FOND planning is sound, complete, and worst-case exponential in the size of the LTL formula.

Theorem 2 follows from the soundness, completeness, and complexity of the BM06 translation, this time using an NBA, and an argument similar to that of Theorem 1. This time, if continue executes infinitely often we can guarantee that accepting NBA states are reached infinitely often.

3.2 From Finite LTL-FOND to FOND

Our approach to finite LTL-FOND extends the BM06 and TB15 translations, originally intended for finite LTL planning with deterministic actions, to the non-deterministic action setting. Both the original BM06 and TB15 translations share two general steps. In step one, the LTL goal formula is translated to an automaton – in the case of BM06 an NFA, in the case of TB15 an AA. In step two, a planning problem P′ is constructed by augmenting P with additional fluents and action effects to account for the integration of the automaton. In the case of BM06 these capture the state of the automaton and how domain actions cause the state of the automaton to be updated. In the case of the TB15 translation, P must also be augmented with synchronization actions. Finally, in both cases the original problem goals must be modified to capture the accepting states of the automata.

When BM06 and TB15 are exploited for LTL-FOND, the non-deterministic nature of the actions must be taken into account. This is done in much the same way as with the NBA- and BAA-based compilations described in the previous section. In particular, non-deterministic actions in the LTL-FOND problem are determinized, the BM06 (resp. TB15) translation is applied to these determinized actions, and then the non-deterministic actions are reconstructed from their translated determinized counterparts (as done in the NBA-based compilation) to produce the FOND problem P′. An FSC solution Π to the LTL-FOND problem P can be obtained from a solution to P′. When the NFA-based translations are used, the FSC Π is obtained from policy p following the approach described for NBA-based translations. When the AA-based translations are used, the FSC Π is obtained from p following the approach described for BAA-based translations.

Theorem 3. The NFA (resp. AA) translation for Finite LTL-FOND is sound, complete, and exponential (resp. linear) in the size of the LTL formula.

Soundness and completeness in Theorem 3 follow from the soundness and completeness of the BM06 and TB15 translations. Fair executions of Π yield finite plans for P′, and therefore state traces (excluding intermediate synchronization states) satisfy φ. Conversely, our approach is complete, as for every plan in P one can construct a plan in P′. Finally, the run-time complexity and size of the translations is that of the original BM06 and TB15 translations – worst-case exponential in time and space for the NFA-based approach and linear in time and space for the AA approach.
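For illustration, the determinize–translate–reassemble step shared by the NBA- and NFA-based compilations can be sketched as follows in Python. The action format, the dummy per-outcome translation (a stand-in for the BM06-style automaton-tracking effects), and all names are assumptions of the sketch, not our implementation.

    # Minimal sketch of pipeline step (iii): determinize a non-deterministic action,
    # translate each determinized copy independently, then reassemble the copies.
    from typing import Callable, Dict, List

    Action = Dict  # {"name": str, "pre": set, "outcomes": list of effect lists}

    def determinize(a: Action) -> List[Action]:
        """One deterministic action per non-deterministic outcome."""
        return [{"name": f'{a["name"]}_{i}', "pre": set(a["pre"]), "outcomes": [o]}
                for i, o in enumerate(a["outcomes"])]

    def reassemble(translated: List[Action], name: str) -> Action:
        """Undo the determinization: collect the (single) outcome of each translated copy."""
        return {
            "name": name,
            "pre": set(translated[0]["pre"]),            # all copies share the precondition
            "outcomes": [t["outcomes"][0] for t in translated],
        }

    def compile_action(a: Action, translate: Callable[[Action], Action]) -> Action:
        """translate is a deterministic-planning translation applied to each outcome."""
        return reassemble([translate(d) for d in determinize(a)], a["name"])

    # Example with a dummy per-outcome translation that just tags each effect list.
    dummy_translate = lambda d: {**d, "outcomes": [d["outcomes"][0] + ["track_automaton"]]}
    flip = {"name": "flip", "pre": {"holding_coin"}, "outcomes": [["heads"], ["-heads"]]}
    print(compile_action(flip, dummy_translate)["outcomes"])
    # [['heads', 'track_automaton'], ['-heads', 'track_automaton']]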
4 Experiments

We evaluate our framework on a selection of benchmark domains with LTL goals from [Baier and McIlraith, 2006; Patrizi, Lipovetzky, and Geffner, 2013; Torres and Baier, 2015], modified to include non-deterministic actions. Experiments were conducted on an Intel Xeon E5-2430 CPU @2.2GHz Linux server, using a 4GB memory limit and a 30-minute time limit.

[Figure 2: Performance of our planning system using AA- and NFA-based translations in different problems with deterministic and non-deterministic actions and finite LTL goals. Panels: (a) translation run times; (b) number of new fluents; (c) PRP run times; (d) PRP policy size; (e) preferred world plan length.]

LTL-FOND Planning over Finite Traces: We evaluated the performance of our BM06 (NFA) and TB15 (AA) translators with respect to a collection of problems with deterministic and non-deterministic actions and LTL goals, interpreted on finite traces. We used the state-of-the-art FOND planner PRP [Muise, McIlraith, and Beck, 2012] to solve the translated problems. NFA-based translation times increased when the LTL formula had a large number of conjunctions and nested modal operators, whereas AA-based translation times remain negligible. However, the AA translation included a number of new fluents that was, in some cases, up to one order of magnitude larger than with the NFA (Figures 2a and 2b). This seems to translate into more complex problems, as PRP run times become almost consistently greater in problems translated with AA (Figure 2c). The sizes of the policies obtained from the AA compilations were considerably greater than those obtained with NFA compilations (Figure 2d). This is expected, as AA translations introduce a number of synchronization actions, whereas the number of actions in NFA translations remains unchanged. To assess the quality of the plans obtained from each translation, we compared the number of world actions (i.e., excluding automaton-state synchronization actions) in the shortest plans of the policies obtained (Figure 2e). This is a crude estimator of the quality of plans, since these plans are not necessarily the ones that minimize the number of world actions, as they also contain synchronization actions. The number of world actions that we obtained in both compilations was very similar.

Interestingly, whereas the size of the AA translations is linear in the size of the original LTL formula and NFA translations are worst-case exponential, in practice we observed that the size of the NFA-based translated problems is smaller. Furthermore, PRP performs better when problems are compiled using NFAs, generating similar quality policies in lower search run-times.

We did not experience any untoward decrease in performance in deterministic problems that were extended with non-deterministic actions, suggesting that AA- and NFA-based translations remain competitive in LTL-FOND.

[Figure 3: Performance of our planning system using BAA-based translations in different LTL-FOND domains. We report PRP run-times (in seconds) and policy sizes, excluding synchronization actions. Panels: (a) run-time in Waldo problems; (b) run-time in Lift problems; (c) run-time in Clerk problems; (d) policy size (world actions).]

LTL-FOND Planning over Infinite Traces: The relative performance observed between NBA- and BAA-based translations for LTL-FOND planning, interpreted over infinite traces, is reflective of the finite case. NBA translation run times are greater, but result in lower planner run times and smaller policy sizes. For reference, we compared BAA translations with the so-called sequential and parallel translations developed by Patrizi, Lipovetzky, and Geffner [2013], subsequently referred to as PLG13seq and PLG13par, respectively. The former alternates between world and sync actions (that update the automaton state), whereas the latter parallelizes this process in a single action. The current implementation of the PLG13 translations forced us to perform such comparisons only in the three domains that appear in [Patrizi, Lipovetzky, and Geffner, 2013], namely the Waldo, Lift, and Clerk
mains. All problems have LTL goals that can be compiled              count. Policy sizes with BAA-based translations are similar,
into deterministic Büchi automata. Unfortunately, we could          but consistently smaller than those from PLG13par transla-
not include a fair comparison with NBA translations in the           tions, except in the Lift problems where the former results in
Lift and Clerk domains, due to a specific encoding that forced       considerably smaller policies. Finally, we evaluated the valid-
transitions to synchronization phases (existing in PLG13 and         ity of our system with LTL goals that could not be handled by
BAA translations, but not in NBA). In the Waldo problems,            PLG13. In particular, we solved Waldo problems with goals
however, NBA translations generated smaller solutions (by a          of the form       ↵.
half) with roughly half the run time required by BAA. On the            Overall, our system proves very competitive with (as good
other hand, NBA translation times timed out after the 12th in-       as or better than) the previous state-of-the-art LTL-FOND
stance (possibly due to lack of optimization of the translator).     planning methods, while supporting a much broader spectrum
   The Waldo problems require construction of a controller           (the full spectrum) of LTL formulae.
for a robot that moves around n rooms and finds Waldo
infinitely often. Waldo may or may not appear in the n-              5   Summary and Discussion
th and n/2-th rooms when these are visited. The dynam-
ics of the problem preclude visiting a room twice before             We have proposed four compilation-based approaches to fully
visiting the remaining ones, in which case the predicate             observable non-deterministic planning with LTL goals that
search again becomes true. The LTL goal of the problem               are interpreted over either finite or infinite traces. These com-
is      search again _ Waldo. The Lift problems requires             pilations support the full expressivity of LTL, in contrast to
construction of a controller for an n-floor building that serves     much existing work. In doing so, we address a number of
all requests. The dynamics of the problem require alternation        open problems in planning with LTL with non-deterministic
between move and push fi actions, i = 1, . . . , n. Fluents          actions, as noted in Table 1. Our LTL planning techniques are
ati and reqi model, respectively, whether the lift is at the i-      directly applicable to a number of real-world planning prob-
th floor, and whether a request from the i-th floor has been         lems as well as capturing a diversity of applications beyond
issued and not served. The lift can only move up if some             standard planning, including but not limited to genomic re-
request is issued. The push fi actions non-deterministically         arrangement [Uras and Erdem, 2010], program test genera-
request the lift to service the i-th floor. Initially, the lift is   tion [Razavi, Farzan, and McIlraith, 2014], story generation
at floor 1, and noVrequest is issued. The LTL goal of the            [Haslum, 2012], automated diagnosis [Grastien et al., 2007;
problem is ' = i=1
                      n
                               (reqi ! ati ). Finally, the Clerk     Sohrabi, Baier, and McIlraith, 2010], business process man-
problems require construction of a controller that serves all        agement [De Giacomo et al., 2014], verification [Albargh-
clients in a store. Clients can order one of n packages pi .         outhi, Baier, and McIlraith, 2009; Patrizi et al., 2011], and
If the package is not available, the clerk has to buy it from        robot motion planning [Plaku, 2012].
a supplier, pick it up, and store it in its correct location.           Our work has focused on the specification of temporally
In order to serve the client, the clerk has to find the pack-        extended goals as rich LTL path constraints, which is in
age, pick it up, and sell it. The LTL goal of the problem is         keeping with the planning community’s integration of a sub-
   (active request ! (item served _ item stored )).                  set of LTL into the Planning Domain Definition Language
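For reference, the three goal formulae, written with the fluents just introduced, are collected below; this is merely a restatement of the goals above, with the Lift goal additionally unfolded for n = 3 to make explicit the conjunctive structure that grows with the instance size.

    φ_Waldo = □◇(search_again ∨ Waldo)
    φ_Lift  = ⋀_{i=1}^{n} □(req_i → ◇at_i)
    φ_Clerk = □(active_request → ◇(item_served ∨ item_stored))

    For n = 3, φ_Lift unfolds to □(req_1 → ◇at_1) ∧ □(req_2 → ◇at_2) ∧ □(req_3 → ◇at_3).
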
The results of the experiments are summarized in Figure 3. In the Waldo problems, the planner run times using BAA-based translations fall between the run times obtained with PLG13seq and PLG13par. In the Lift problems, the BAA translations demonstrate significantly greater scalability. The Lift problems contain an increasingly large number of conjunctive LTL goals. We conjecture that the poor scalability of the PLG13seq (runs out of time) and PLG13par (runs out of memory) translations is due to their poor handling of conjunctive goals, which results in an exponentially large number of different state transitions. On the other hand, PRP handles conjunctive goals much better with the BAA translations, thanks to the AA progression of the LTL formula. In the Clerk problems, PRP scales slightly worse with the BAA translation than with the PLG13seq and PLG13par translations, which solve one and two more problems, respectively. The run times with all translations seem to show the same exponential trend, and differ only by a small offset that corresponds to the increase in problem complexity.

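The LTL progression just mentioned can be illustrated with a small, self-contained sketch. The code below implements standard LTL progression in the style of Bacchus and Kabanza [2000], with a tuple-based formula encoding of our own; it is not the alternating-automaton compilation used in this paper, and is included only to convey why a conjunctive goal such as the Lift goal can be tracked conjunct by conjunct.

# Minimal sketch of LTL formula progression (Bacchus and Kabanza [2000] style).
# This is NOT the compilation used in this paper; formulas are encoded as nested
# tuples of our own devising, e.g. ('always', ('atom', 'p')).

def progress(f, state):
    """Progress formula f through one state, given as a set of true fluents."""
    op = f[0]
    if op == 'atom':                                   # ('atom', p)
        return ('true',) if f[1] in state else ('false',)
    if op == 'not':                                    # negation on atoms only (NNF)
        return ('false',) if f[1][1] in state else ('true',)
    if op in ('true', 'false'):
        return f
    if op == 'and':                                    # progresses conjunct-by-conjunct
        return ('and', progress(f[1], state), progress(f[2], state))
    if op == 'or':
        return ('or', progress(f[1], state), progress(f[2], state))
    if op == 'next':                                   # prog(X f1) = f1
        return f[1]
    if op == 'always':                                 # prog(f1) and (always f1)
        return ('and', progress(f[1], state), f)
    if op == 'eventually':                             # prog(f1) or (eventually f1)
        return ('or', progress(f[1], state), f)
    if op == 'until':                                  # prog(f2) or (prog(f1) and (f1 U f2))
        return ('or', progress(f[2], state),
                ('and', progress(f[1], state), f))
    raise ValueError('unknown operator: %s' % op)

# One conjunct of the Lift goal, always(req_1 -> eventually at_1), with the
# implication rewritten as (not req_1) or (eventually at_1).
conjunct = ('always', ('or', ('not', ('atom', 'req_1')),
                             ('eventually', ('atom', 'at_1'))))
print(progress(conjunct, {'at_1'}))
# A full implementation would also simplify true/false subformulas so that the
# progressed formula stays small after each state.
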
Figure 3d compares the size of the policies found by PRP for problems compiled with the BAA and PLG13par translations. PLG13seq translations resulted in slightly larger policies, due to their separate world and sync action phases. We account only for world actions, excluding synchronization actions from the count. Policy sizes with BAA-based translations are similar to, but consistently smaller than, those obtained with PLG13par translations, except in the Lift problems, where the former results in considerably smaller policies. Finally, we evaluated the validity of our system with LTL goals that could not be handled by PLG13. In particular, we solved Waldo problems with goals of the form ◇□α.

Overall, our system proves very competitive with (as good as or better than) the previous state-of-the-art LTL-FOND planning methods, while supporting a much broader spectrum (the full spectrum) of LTL formulae.

5   Summary and Discussion

We have proposed four compilation-based approaches to fully observable non-deterministic planning with LTL goals that are interpreted over either finite or infinite traces. These compilations support the full expressivity of LTL, in contrast to much existing work. In doing so, we address a number of open problems in planning with LTL and non-deterministic actions, as noted in Table 1. Our LTL planning techniques are directly applicable to a number of real-world planning problems, and they capture a diversity of applications beyond standard planning, including but not limited to genomic rearrangement [Uras and Erdem, 2010], program test generation [Razavi, Farzan, and McIlraith, 2014], story generation [Haslum, 2012], automated diagnosis [Grastien et al., 2007; Sohrabi, Baier, and McIlraith, 2010], business process management [De Giacomo et al., 2014], verification [Albarghouthi, Baier, and McIlraith, 2009; Patrizi et al., 2011], and robot motion planning [Plaku, 2012].

Our work has focused on the specification of temporally extended goals as rich LTL path constraints, which is in keeping with the planning community's integration of a subset of LTL into the Planning Domain Definition Language (PDDL). Nevertheless, some goals are best described procedurally, using regular expressions. The EAGLE goal language [Lago, Pistore, and Traverso, 2002] and variants of the Golog language (e.g., [Baier, Fritz, and McIlraith, 2007; Fritz, Baier, and McIlraith, 2008]) are prominent among several attempts to support planning with action-centric procedural control and goals based on regular expressions. Shaparau, Pistore, and Traverso (2008) extend EAGLE to combine both declarative and procedural goals in a non-deterministic planning setting, while more recently Triantafillou, Baier, and McIlraith (2015) combine LTL with regular expressions to plan with goals specified in linear dynamic logic (LDL).

We evaluated the effectiveness of our FOND compilations using the state-of-the-art FOND planner, PRP. An interesting observation is that our worst-case exponential NFA-based translations run faster and return smaller policies than the linear AA-based translations. This seems to be due to the larger number of fluents (and actions) required in the AA-based translations. Compared to the existing approach of [Patrizi, Lipovetzky, and Geffner, 2013], experiments indicate that our approaches scale up better.

Finally, we observe that LTL-FOND is related to the problem of LTL synthesis [Pnueli and Rosner, 1989].
Informally, it is the problem of computing a policy that satisfies an LTL formula, assuming that an adversary (which we can associate with the non-deterministic environment) may change some fluents after the execution of each action. Recently, De Giacomo and Vardi [2015] showed how to map a finite LTL-FOND problem into a synthesis problem. Sardiña and D'Ippolito [2015] go further, showing how FOND plans can be synthesized using LTL synthesis algorithms. An open question is whether any existing planning technology can be used for LTL synthesis as well. LTL synthesis is not an instance of strong cyclic FOND planning, since synthesis adversaries are not fair.

Acknowledgements: We gratefully acknowledge funding from the Natural Sciences and Engineering Research Council of Canada and from Fondecyt grant number 1150328.

References
[Albarghouthi, Baier, and McIlraith, 2009] Albarghouthi, A.; Baier, J. A.; and McIlraith, S. A. 2009. On the use of planning technology for verification. In Proceedings of the Validation and Verification of Planning and Scheduling Systems Workshop (VVPS).
[Bacchus and Kabanza, 2000] Bacchus, F., and Kabanza, F. 2000. Using temporal logics to express search control knowledge for planning. Artificial Intelligence 116(1–2):123–191.
[Baier and McIlraith, 2006] Baier, J. A., and McIlraith, S. A. 2006. Planning with first-order temporally extended goals using heuristic search. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 788–795.
[Baier, Fritz, and McIlraith, 2007] Baier, J. A.; Fritz, C.; and McIlraith, S. A. 2007. Exploiting procedural domain control knowledge in state-of-the-art planners. In Proceedings of the 17th International Conference on Automated Planning and Scheduling (ICAPS), 26–33.
[Cresswell and Coddington, 2004] Cresswell, S., and Coddington, A. M. 2004. Compilation of LTL goal formulas into PDDL. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI), 985–986.
[De Giacomo and Vardi, 2013] De Giacomo, G., and Vardi, M. Y. 2013. Linear temporal logic and linear dynamic logic on finite traces. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), 854–860.
[De Giacomo and Vardi, 2015] De Giacomo, G., and Vardi, M. Y. 2015. Synthesis for LTL and LDL on finite traces. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 1558–1564.
[De Giacomo et al., 2014] De Giacomo, G.; Masellis, R. D.; Grasso, M.; Maggi, F. M.; and Montali, M. 2014. Monitoring business metaconstraints based on LTL and LDL for finite traces. In Proceedings of the 12th International Conference on Business Process Management (BPM), volume 8659 of Lecture Notes in Computer Science, 1–17. Springer.
[De Giacomo, Masellis, and Montali, 2014] De Giacomo, G.; Masellis, R. D.; and Montali, M. 2014. Reasoning on LTL on finite traces: Insensitivity to infiniteness. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), 1027–1033.
[Doherty and Kvarnström, 2001] Doherty, P., and Kvarnström, J. 2001. TALplanner: A temporal logic-based planner. AI Magazine 22(3):95–102.
[Edelkamp, 2006] Edelkamp, S. 2006. Optimal symbolic PDDL3 planning with MIPS-BDD. In 5th International Planning Competition Booklet (IPC-2006), 31–33.
[Fritz, Baier, and McIlraith, 2008] Fritz, C.; Baier, J. A.; and McIlraith, S. A. 2008. ConGolog, sin Trans: Compiling ConGolog into basic action theories for planning and beyond. In Proceedings of the 11th International Conference on Knowledge Representation and Reasoning (KR), 600–610.
[Geffner and Bonet, 2013] Geffner, H., and Bonet, B. 2013. A Concise Introduction to Models and Methods for Automated Planning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers.
[Gerevini and Long, 2005] Gerevini, A., and Long, D. 2005. Plan constraints and preferences for PDDL3. Technical Report 2005-08-07, Department of Electronics for Automation, University of Brescia, Brescia, Italy.
[Ghallab, Nau, and Traverso, 2004] Ghallab, M.; Nau, D.; and Traverso, P. 2004. Automated Planning: Theory & Practice. Elsevier.
[Grastien et al., 2007] Grastien, A.; Anbulagan; Rintanen, J.; and Kelareva, E. 2007. Diagnosis of discrete-event systems using satisfiability algorithms. In Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI), 305–310.
[Haslum, 2012] Haslum, P. 2012. Narrative planning: Compilations to classical planning. Journal of Artificial Intelligence Research 44:383–395.
[Kabanza, Barbeau, and St.-Denis, 1997] Kabanza, F.; Barbeau, M.; and St.-Denis, R. 1997. Planning control rules for reactive agents. Artificial Intelligence 95(1):67–113.
[Kupferman and Rosenberg, 2010] Kupferman, O., and Rosenberg, A. 2010. The blowup in translating LTL to deterministic automata. In van der Meyden, R., and Smaus, J., eds., 6th International Workshop on Model Checking and Artificial Intelligence (MoChArt), volume 6572 of Lecture Notes in Computer Science, 85–94. Springer.
[Lago, Pistore, and Traverso, 2002] Lago, U. D.; Pistore, M.; and Traverso, P. 2002. Planning with a language for extended goals. In Proceedings of the 18th National Conference on Artificial Intelligence (AAAI), 447–454.
[Muise, McIlraith, and Beck, 2012] Muise, C.; McIlraith, S. A.; and Beck, J. C. 2012. Improved non-deterministic planning by exploiting state relevance. In Proceedings of the 22nd International Conference on Automated Planning and Scheduling (ICAPS), 172–180.
[Patrizi et al., 2011] Patrizi, F.; Lipovetzky, N.; De Giacomo, G.; and Geffner, H. 2011. Computing infinite plans for LTL goals using a classical planner. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), 2003–2008.
[Patrizi, Lipovetzky, and Geffner, 2013] Patrizi, F.; Lipovetzky, N.; and Geffner, H. 2013. Fair LTL synthesis for non-deterministic systems using strong cyclic planners. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), 2343–2349.
[Pistore and Traverso, 2001] Pistore, M., and Traverso, P. 2001. Planning as model checking for extended goals in non-deterministic domains. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI), 479–484.
[Plaku, 2012] Plaku, E. 2012. Planning in discrete and continuous spaces: From LTL tasks to robot motions. In Advances in Autonomous Robotics - Joint Proceedings of the 13th Annual TAROS Conference and the 15th Annual FIRA RoboWorld Congress, Bristol, UK, August 20-23, 2012, 331–342.
[Pnueli and Rosner, 1989] Pnueli, A., and Rosner, R. 1989. On the synthesis of a reactive module. In Proceedings of the 16th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, 179–190.
[Pnueli, 1977] Pnueli, A. 1977. The temporal logic of programs. In Proceedings of the 18th IEEE Symposium on Foundations of Computer Science (FOCS), 46–57.
[Razavi, Farzan, and McIlraith, 2014] Razavi, N.; Farzan, A.; and McIlraith, S. A. 2014. Generating effective tests for concurrent programs via AI automated planning techniques. International Journal on Software Tools for Technology Transfer 16(1):49–65.
[Rintanen, 2000] Rintanen, J. 2000. Incorporation of temporal logic control into plan operators. In Proceedings of the 14th European Conference on Artificial Intelligence (ECAI), 526–530.
[Sardiña and D'Ippolito, 2015] Sardiña, S., and D'Ippolito, N. 2015. Towards fully observable non-deterministic planning as assumption-based automatic synthesis. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 3200–3206.
[Shaparau, Pistore, and Traverso, 2008] Shaparau, D.; Pistore, M.; and Traverso, P. 2008. Fusing procedural and declarative planning goals for nondeterministic domains. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI), 983–990.
[Sohrabi, Baier, and McIlraith, 2010] Sohrabi, S.; Baier, J. A.; and McIlraith, S. A. 2010. Diagnosis as planning revisited. In Proceedings of the 12th International Conference on the Principles of Knowledge Representation and Reasoning (KR), 26–36.
[Torres and Baier, 2015] Torres, J., and Baier, J. A. 2015. Polynomial-time reformulations of LTL temporally extended goals into final-state goals. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), 1696–1703.
[Triantafillou, Baier, and McIlraith, 2015] Triantafillou, E.; Baier, J. A.; and McIlraith, S. A. 2015. A unifying framework for planning with LTL and regular expressions. In Workshop on Model-Checking and Automated Planning (MOCHAP) at ICAPS.
[Uras and Erdem, 2010] Uras, T., and Erdem, E. 2010. Genome rearrangement: A planning approach. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI).
[Vardi and Wolper, 1994] Vardi, M. Y., and Wolper, P. 1994. Reasoning about infinite computations. Information and Computation 115(1):1–37.
[Vardi, 1995] Vardi, M. Y. 1995. An automata-theoretic approach to linear temporal logic. In Banff Higher Order Workshop, volume 1043 of Lecture Notes in Computer Science, 238–266. Springer.