LTL𝑓 Synthesis Under Environment Specifications
(Short Paper)
Antonio Di Stasio
Sapienza University of Rome, Italy


                                      Abstract
                                      In this communication we present recent advances in the field of synthesis for agent goals specified in
                                      Linear Temporal Logic on finite traces under environment specifications. In synthesis, environment
                                      specifications are constraints on the environment that rule out certain environment behaviors. To solve
                                      synthesis of LTL𝑓 goals under environment specifications, we could reduce the problem to LTL synthesis.
                                      Unfortunately, while synthesis in LTL𝑓 and in LTL have the same worst-case complexity (both are
                                      2EXPTIME-complete), the algorithms available for LTL synthesis are much less effective in practice than
                                      those for LTL𝑓 synthesis. We report recent results showing that when the environment specifications take
                                      the form of fairness, stability, or GR(1) formulas, we can avoid this detour to LTL and keep the simplicity
                                      of LTL𝑓 synthesis. Furthermore, even when the environment specifications are expressed in full LTL, we
                                      can partially avoid this detour.

                                      Keywords
                                      Linear Temporal Logic on Finite Traces, Synthesis, Automata-Theoretic Approach




1. Introduction
Program synthesis is one of the most ambitious and challenging problems in CS and AI. Reactive
synthesis is a class of program synthesis problems that aims at synthesizing a program for
interactive/reactive ongoing computations [1, 2]. We have two sets of Boolean variables 𝒳 and
𝒴, controlled by the environment and the agent, respectively, and a specification φ over 𝒳 ∪ 𝒴
expressed in Linear Temporal Logic (ltl) [3]. Synthesis then has to generate a program, aka a
strategy, for the agent such that, for every environment strategy, the simultaneous execution of
the two strategies generates a trace that satisfies φ [2, 4, 5].
   Recently, the problem of reactive synthesis has been studied in the case where the specification is
expressed in ltl over finite traces (ltl𝑓) and its variants [6]. This logic is particularly fitting for
expressing temporally-extended goals in Planning, since it retains the fact that ultimately the
goal must be achieved and the plan terminated [7].
   In the classical formulation of reactive synthesis the environment is free to
choose an arbitrary move at each step, but in AI we typically have a model of the world, i.e., of the
environment behavior, e.g., encoded in a planning domain [8, 9, 10], or more generally expressed directly in
temporal logic [11, 12, 13, 14]. In other words, we are interested in synthesis under environment
specifications [15, 16, 17, 18, 19, 20, 21], which can be reduced to standard synthesis of the
Proceedings of the 23rd Italian Conference on Theoretical Computer Science, Rome, Italy, September 7-9, 2022
✉ distasio@diag.uniroma1.it (A. Di Stasio)
ORCID: 0000-0001-5475-2978 (A. Di Stasio)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), http://ceur-ws.org, ISSN 1613-0073
implication Env → Goal, where Env is the specification of the environment (the environment
specification) and Goal is the specification of the task of the agent [13, 16]. The agent has
to realize its task Goal only on those traces that satisfy the environment specification Env.
Specifically, we focus on synthesis under environment specifications for ltl𝑓 goals. However,
while it is natural to consider the task specification Goal as an ltl𝑓 formula, requiring that
Env also be an ltl𝑓 formula is often too strong, since accomplishing the agent task may require
an unbounded number of environment moves, and that number is decided by the agent, which
determines when its task is finished. As a result, Env typically needs to be expressed in ltl,
not ltl𝑓 [17, 21]. So, even when focusing on ltl𝑓, what we need to study is the case where we
have the task Goal expressed in ltl𝑓 and the environment specification Env expressed in ltl.
   One way to handle this case is to translate Goal into ltl [22] and then perform ltl synthesis for
Env → Goal, see e.g. [17]. But, while synthesis in ltl𝑓 and in ltl have the same worst-case
complexity, both being 2exptime-complete [2, 6], the algorithms available for ltl synthesis are
much less effective in practice than those for ltl𝑓 synthesis, as shown experimentally [23, 24, 25].
   For these reasons, several specific forms of ltl environment specifications have been consid-
ered. In this communication we report recent results showing how to avoid the detour to ltl
synthesis in cases where the environment specifications are in the form of fairness or stability
formulas [21], or gr(1) formulas [20], or are specified in both ltl𝑓 (e.g., for representing nondeterministic
planning domains or other safety properties) and ltl (e.g., for liveness/fairness), separating
the contributions of the two and limiting the latter as much as possible [18].


2. ltl and ltl𝑓
ltl is one of the most popular logics for temporal properties [3]. Given a set of propositions 𝒫,
the formulas of ltl are generated by the following grammar:

                          φ ::= p | (φ1 ∧ φ2) | (¬φ) | (○φ) | (φ1 𝒰 φ2)

where p ∈ 𝒫. We use the common abbreviations eventually ◇φ ≡ true 𝒰 φ and always
□φ ≡ ¬◇¬φ.
   ltl formulas are interpreted over infinite traces π ∈ (2^𝒫)^ω. A trace π = π_0 π_1 . . . is a
sequence of propositional interpretations (sets), where for every i ≥ 0, π_i ∈ 2^𝒫 is the i-th
interpretation of π. Intuitively, π_i is interpreted as the set of propositions that are true at instant
i. Given π, we define when an ltl formula φ holds at position i, written π, i |= φ, inductively
on the structure of φ, as follows:

    • π, i |= p iff p ∈ π_i (for p ∈ 𝒫);

    • π, i |= φ1 ∧ φ2 iff π, i |= φ1 and π, i |= φ2;

    • π, i |= ¬φ iff it is not the case that π, i |= φ;

    • π, i |= ○φ iff π, i + 1 |= φ;

    • π, i |= φ1 𝒰 φ2 iff there exists j ≥ i such that π, j |= φ2, and for all k, i ≤ k < j, we
      have π, k |= φ1.

We say that π satisfies φ, written π |= φ, if π, 0 |= φ.
   ltl𝑓 is a variant of ltl interpreted over finite traces instead of infinite traces [22]. The syntax
of ltl𝑓 is the same as the syntax of ltl. We define π, i |= φ, stating that φ holds at position i,
as for ltl, except that for the temporal operators we have:

    • π, i |= ○φ iff i < last(π) and π, i + 1 |= φ;

    • π, i |= φ1 𝒰 φ2 iff there exists j such that i ≤ j ≤ last(π) and π, j |= φ2, and for all
      k, i ≤ k < j, we have π, k |= φ1;

where last(π) denotes the last position (i.e., index) in the finite trace π. In addition, we define
the weak next operator ∙ as the abbreviation ∙φ ≡ ¬○¬φ. Note that, over finite traces, it does
not hold that ¬○φ ≡ ○¬φ; instead, ¬○φ ≡ ∙¬φ. We say that a trace π satisfies an ltl𝑓
formula φ, written π |= φ, if π, 0 |= φ.


3. ltl𝑓 Synthesis Under Environment Specifications
Let 𝒳 and 𝒴 be sets of Boolean variables, controlled by the environment and the agent, respectively.
An agent strategy is a function σ_ag : (2^𝒳)* → 2^𝒴, and an environment strategy is a function
σ_env : (2^𝒴)+ → 2^𝒳. A trace is a sequence (X_0 ∪ Y_0)(X_1 ∪ Y_1) · · · ∈ (2^{𝒳∪𝒴})^ω. A trace
(X_i ∪ Y_i)_i is consistent with an agent strategy σ_ag if σ_ag(ε) = Y_0 and σ_ag(X_0 X_1 · · · X_j) =
Y_{j+1} for every j ≥ 0. A trace (X_i ∪ Y_i)_i is consistent with an environment strategy σ_env if
σ_env(Y_0 Y_1 · · · Y_j) = X_j for every j ≥ 0. For an agent strategy σ_ag and an environment
strategy σ_env, let play(σ_ag, σ_env) denote the unique trace induced by both σ_ag and σ_env, and
play_k(σ_ag, σ_env) = (X_0 ∪ Y_0), . . . , (X_k ∪ Y_k) the finite trace up to k.
   Let Goal be an ltl𝑓 formula over 𝒳 ∪ 𝒴. An agent strategy σ_ag realizes Goal if for every
environment strategy σ_env there exists k ≥ 0, chosen by the agent, such that the finite trace
play_k(σ_ag, σ_env) |= Goal; in that case we say that Goal is agent realizable.
   In standard synthesis the environment is free to choose an arbitrary move at each step, but
in AI typically the agent has some knowledge of how the environment works, which it can
exploit in order to enforce its goal, specified as an ltl𝑓 formula Goal. Here, we specify the
environment behaviour by an ltl formula Env, called the environment specification. In particular,
Env denotes the set of environment strategies that enforce Env [16]. Moreover, we require
that Env be environment realizable, i.e., that the set of environment strategies enforcing Env
is not empty. Formally, given an ltl formula φ, we say that an environment strategy σ_env enforces φ,
written σ_env ▷ φ, if for every agent strategy σ_ag we have play(σ_ag, σ_env) |= φ.
   The problem of ltl𝑓 synthesis under environment specifications is to find an agent strategy
σ_ag such that
                          ∀σ_env ▷ Env : ∃k. play_k(σ_ag, σ_env) |= Goal.
As shown in [16], this can be reduced to solving the synthesis problem for the implication
Env → LTL(Goal), where LTL(Goal) is a suitable ltl𝑓-to-ltl transformation of Goal [22]; this
problem is 2exptime-complete [2].
3.1. GR(1) Environment Specifications
There have been two success stories with ltl synthesis, both having to do with the form of the
specification. The first is the GR(1) approach: use safety conditions to determine the possible
transitions in a game between the environment and the agent, plus one powerful notion of
fairness, Generalized Reactivity(1), or GR(1). The second, inspired by AI planning, is focusing
on finite-trace temporal synthesis, with ltl𝑓 as the specification language. In [20] we take
these two lines of work and bring them together, devising an approach to solve ltl𝑓 synthesis
under gr(1) environment specifications. In more detail, to solve the problem we observe that
the agent's goal is to satisfy ¬Env_GR(1) ∨ Goal, while the environment's goal is to satisfy
Env_GR(1) ∧ ¬Goal. Moreover, Goal can be represented by a deterministic finite automaton (dfa)
[6]. Then, taking the environment's point of view, we show that the problem can be reduced
to a gr(1) game in which the game arena is the complement of the dfa for Goal and Env_GR(1)
is the gr(1) winning condition. Since we want a winning strategy for the agent, we need to deal
with the complement of the gr(1) game to obtain a winning strategy for the antagonist, i.e., the agent.
   This framework can be enriched by adding safety conditions for both the environment and
the agent, obtaining a highly expressive yet still scalable form of ltl synthesis. These two kinds
of safety conditions differ: the environment needs to maintain its safety conditions indefinitely (as
usual for safety), while the agent has to maintain its safety conditions only until it fulfills its
ltl𝑓 goal, i.e., within a finite horizon. Again, the problem can be reduced to a gr(1) game in
which the game arena is the product of the dfa for the environment safety condition and the
complement of the dfa obtained as the product of the deterministic automaton for the agent safety
condition and the one for the agent task.
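Both reductions above manipulate dfas by product and complementation. As a reminder of how cheap complementation is for a complete dfa (total transition function), here is a toy sketch; the tuple encoding is our own illustrative convention:

```python
def complement_dfa(dfa):
    """Complement a complete DFA by flipping its accepting set.

    dfa = (states, alphabet, delta, initial, accepting), with delta a dict
    mapping (state, symbol) to the successor state."""
    states, alphabet, delta, init, accepting = dfa
    return (states, alphabet, delta, init, states - accepting)

def accepts(dfa, word):
    """Run the DFA on the word and check acceptance."""
    states, alphabet, delta, init, accepting = dfa
    q = init
    for a in word:
        q = delta[(q, a)]
    return q in accepting

# Toy DFA over {a, b} accepting the words containing at least one a.
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q1"}
dfa = ({"q0", "q1"}, {"a", "b"}, delta, "q0", {"q1"})
assert accepts(dfa, "ab") and not accepts(complement_dfa(dfa), "ab")
assert accepts(complement_dfa(dfa), "bb")
```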
Tool. The two approaches were implemented in a tool called GFSynth, which builds on two
existing tools: Syft [25] for the construction of the corresponding dfas and Slugs [26] to solve
a gr(1) game and compute a strategy.

3.2. Fairness and Stability Environment Specifications
We now consider two different basic forms of environment specifications: a basic form of
fairness ░♒𝛼 (always eventually 𝛼), and a basic form of stability ♒░𝛼 where in both cases
the truth value of 𝛼 is under the control of the environment, and hence the environment
specifications are trivially realizable by the environment. Note that due to the existence of ltl𝑓
goals, synthesis under both kinds of environment specifications does not fall under known easy
forms of synthesis, such as gr(1) formulas [27]. For these kinds of environment specifications,
[21] develops a specific algorithm based on using the dfa for the ltl𝑓 goal as the arena and
then computing 2-nested fixpoint properties over that arena.
   The algorithm proceeds as follows. First, translate the ltl𝑓 goal Goal into a dfa 𝒒. Then, in the case
of fairness environment specifications, solve the fair dfa game on 𝒒, i.e., a game over the dfa 𝒒, in
which the environment wins if it can remain in a region where α holds infinitely often while the
accepting states are avoided forever (and the agent wins otherwise), by applying a nested
fixpoint computation on 𝒒.
   Likewise, for stability environment specifications, solve the stable dfa game on 𝒒, in which the
environment wins if it can reach a region where α holds forever while the accepting states are
avoided forever (and the agent wins otherwise), again by applying a nested fixpoint
computation on 𝒒.
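To give a feel for the 2-nested fixpoints involved, here is the classical Büchi-game recurrence νZ.μY.Cpre((B ∩ Z) ∪ Y) on a small explicit turn-based arena. This is a generic sketch of the pattern, not the symbolic algorithm of [21], and the graph encoding is our own assumption:

```python
def cpre(V, V_p, E, W):
    """Controllable predecessor: states from which player p can force the
    next state into W (p chooses the successor at states in V_p; the
    opponent chooses it elsewhere; dead-ends count for the opponent)."""
    return {v for v in V
            if (v in V_p and any(u in W for u in E[v]))
            or (v not in V_p and E[v] and all(u in W for u in E[v]))}

def buchi_win(V, V_p, E, B):
    """States from which player p can force visiting B infinitely often,
    via the nested fixpoint νZ.μY.Cpre((B ∩ Z) ∪ Y)."""
    Z = set(V)
    while True:                           # outer ν-fixpoint over Z
        Y = set()
        while True:                       # inner μ-fixpoint over Y
            Y_next = cpre(V, V_p, E, (B & Z) | Y)
            if Y_next == Y:
                break
            Y = Y_next
        if Y == Z:
            return Z
        Z = Y

# Arena: 1 ⇄ 2, 3 → 3; all states owned by player p; B = {1}.
V, E = {1, 2, 3}, {1: {2}, 2: {1}, 3: {3}}
assert buchi_win(V, V, E, {1}) == {1, 2}
```

The inner μ-fixpoint computes the states from which B ∩ Z can be forcibly reached in at least one step; the outer ν-fixpoint prunes B-states from which B cannot be revisited.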
Tool. The fixpoint-based techniques for solving ltl𝑓 synthesis under fairness and stability
environment specifications are implemented in two tools called FSyft and StSyft, both based on
Syft, a tool for symbolic ltl𝑓 synthesis.

3.3. General ltl Environment Specifications
We now consider the general case where the environment specifications are expressed in both
ltl𝑓 and ltl. For this case, in [18] we develop two-stage technique to effectively handle general
ltl environment specifications. This technique takes advantage of the simpler way to handle
ltl𝑓 goals in the first stage and confines the difficulty of handling ltl environment specification
to the bare minimum in stage 2. In particular, given an ltl environment specification and an
ltl𝑓 formula πΊπ‘œπ‘Žπ‘™ that specifies the agent goal, the problem is to find a strategy for the agent
πœŽπ‘Žπ‘” that realizes πΊπ‘œπ‘Žπ‘™ under ltl environment specification Env 𝐿𝑇 𝐿 . The algorithm proceeds
by taking the following stages.

    • Stage 1: Build the corresponding deterministic finite automaton 𝒜_Goal of Goal and solve
      the reachability game for the agent over 𝒜_Goal. If the agent has a winning strategy in
      𝒜_Goal, then return it.

    • Stage 2: Otherwise, perform the following steps:
         1. remove from 𝒜_Goal the agent winning region, obtaining 𝒜'_Goal;
         2. take the product of 𝒜'_Goal with the corresponding deterministic parity automaton
            (dpa) 𝒟 of Env_LTL, obtaining ℬ = 𝒜'_Goal × 𝒟, and solve the parity game for the
            environment over it [28, 29, 30];
         3. if the agent has a winning strategy in ℬ, then the synthesis problem is realizable;
            return the agent winning strategy as a combination of the agent winning
            strategies in the two stages.
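Stage 1 is a plain reachability game over the dfa, solvable by the classical attractor construction. A sketch over an explicit turn-based arena follows; the explicit encoding is our simplifying assumption, since the actual tools work symbolically:

```python
def attractor(V, V_ag, E, target):
    """States from which the agent can force reaching `target`.

    V_ag: agent-owned states (the agent picks the successor there; the
    environment picks it elsewhere); E maps each state to its successor set."""
    attr = set(target)
    changed = True
    while changed:                        # least fixpoint: grow until stable
        changed = False
        for v in V - attr:
            succ = E[v]
            if (v in V_ag and any(u in attr for u in succ)) or \
               (v not in V_ag and succ and all(u in attr for u in succ)):
                attr.add(v)
                changed = True
    return attr

# Toy arena: from 0 the agent can move towards the accepting state 3 via 2,
# or into the trap 1, which loops away from the target forever.
V, E = {0, 1, 2, 3}, {0: {1, 2}, 1: {1}, 2: {3}, 3: {3}}
assert attractor(V, {0, 2}, E, {3}) == {0, 2, 3}
```

The agent wins exactly from the attractor of the accepting states; the remaining states form the region passed on to Stage 2.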

   An interesting observation in [18] concerns the case when part of the environment specification is
expressed in ltl𝑓, i.e., the environment specification has the form Env = Env_∞ ∧ Env_f,
where Env_∞ is expressed as an ltl formula and Env_f as an ltl𝑓 formula. In this case the
synthesis problem Env → Goal becomes (Env_∞ ∧ Env_f) → Goal, which is equivalent to
Env_∞ → (Env_f → Goal), where (Env_f → Goal) is expressible in ltl𝑓. In this way Env_f
does not contribute to the resulting dpa and can be handled during Stage 1 instead of Stage 2 of our
technique. Specifically, the algorithm builds a dfa as the union of the complement of the
dfa for Env_f and the dfa 𝒜_Goal for the goal.
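The rewriting above rests on the propositional law (a ∧ b) → c ≡ a → (b → c), which carries over to the temporal setting. A quick exhaustive check of the propositional core:

```python
from itertools import product

def implies(p, q):
    """Material implication p → q."""
    return (not p) or q

# (a ∧ b) → c ≡ a → (b → c), checked over all eight truth assignments.
for a, b, c in product([False, True], repeat=3):
    assert implies(a and b, c) == implies(a, implies(b, c))
```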
Tool. The two-stage technique was implemented in the tool 2SLS, which is based on two
existing tools: Syft [25] for building the corresponding dfas and OWL [31], a tool for translating
ltl into different types of automata, including dpas.
Acknowledgments
We thank our co-authors on the publications mentioned in this communication: Giuseppe De
Giacomo, Lucas Tabajara, Moshe Y. Vardi and Shufang Zhu. This work is partially supported
by the ERC Advanced Grant WhiteMech (No. 834228), by the EU ICT-48 2020 project TAILOR
(No. 952215), by the PRIN project RIPER (No. 20203FFYLK), and by the JPMorgan AI Faculty
Research Award "Resilience-based Generalized Planning and Strategic Reasoning".


References
 [1] A. Church, Logic, arithmetic, and automata, in: Proc. Int. Congress of Mathematicians,
     1963.
 [2] A. Pnueli, R. Rosner, On the synthesis of a reactive module, in: POPL, 1989.
 [3] A. Pnueli, The temporal logic of programs, in: FOCS, 1977, pp. 46–57.
 [4] R. Ehlers, S. Lafortune, S. Tripakis, M. Y. Vardi, Supervisory control and reactive synthesis:
     a comparative introduction, Discrete Event Dynamic Systems 27 (2017) 209–260.
 [5] B. Finkbeiner, Synthesis of Reactive Systems., Dependable Software Systems Eng. 45 (2016)
     72–98.
 [6] G. De Giacomo, M. Y. Vardi, Synthesis for LTL and LDL on finite traces, in: IJCAI, 2015.
 [7] J. A. Baier, S. A. McIlraith, Planning with first-order temporally extended goals using
     heuristic search, in: AAAI, 2006, pp. 788–795.
 [8] G. De Giacomo, S. Rubin, Automata-theoretic foundations of FOND planning for LTL𝑓
     and LDL𝑓 goals, in: IJCAI, 2018, pp. 4729–4735.
 [9] H. Geffner, B. Bonet, A Concise Introduction to Models and Methods for Automated
     Planning, Morgan & Claypool, 2013.
[10] C. Green, Theorem proving by resolution as basis for question-answering systems, in:
     Machine Intelligence, volume 4, American Elsevier, 1969, pp. 183–205.
[11] R. Bloem, R. Ehlers, S. Jacobs, R. KΓΆnighofer, How to handle assumptions in synthesis, in:
     SYNT, 2014.
[12] B. Bonet, G. De Giacomo, H. Geffner, S. Rubin, Generalized planning: Non-deterministic
     abstractions and trajectory constraints, in: IJCAI, 2017, pp. 873–879.
[13] K. Chatterjee, T. A. Henzinger, B. Jobstmann, Environment assumptions for synthesis, in:
     CONCUR, 2008, pp. 147–161.
[14] N. D’Ippolito, N. RodrΓ­guez, S. SardiΓ±a, Fully observable non-deterministic planning as
     assumption-based reactive synthesis, J. Artif. Intell. Res. 61 (2018) 593–621.
[15] B. Aminof, G. De Giacomo, A. Murano, S. Rubin, Synthesis under assumptions, in: KR,
     2018, pp. 615–616.
[16] B. Aminof, G. De Giacomo, A. Murano, S. Rubin, Planning under LTL environment
     specifications, in: ICAPS, 2019, pp. 31–39.
[17] A. Camacho, M. Bienvenu, S. A. McIlraith, Finite LTL synthesis with environment assump-
     tions and quality measures, in: KR, 2018, pp. 454–463.
[18] G. De Giacomo, A. Di Stasio, M. Y. Vardi, S. Zhu, Two-stage technique for LTL𝑓 synthesis
     under LTL assumptions, in: KR 2020, 2020, pp. 304–314.
[19] G. De Giacomo, A. Di Stasio, G. Perelli, S. Zhu, Synthesis with mandatory stop actions, in:
     KR 2021, 2021, pp. 237–246.
[20] G. De Giacomo, A. Di Stasio, L. M. Tabajara, M. Y. Vardi, S. Zhu, Finite-trace and generalized-
     reactivity specifications in temporal synthesis, in: IJCAI 2021, 2021, pp. 1852–1858.
[21] S. Zhu, G. De Giacomo, G. Pu, M. Vardi, LTL𝑓 synthesis with fairness and stability
     assumptions, in: AAAI, 2020.
[22] G. De Giacomo, M. Y. Vardi, Linear temporal logic and linear dynamic logic on finite traces,
     in: IJCAI, 2013, pp. 854–860.
[23] S. Bansal, Y. Li, L. M. Tabajara, M. Y. Vardi, Hybrid compositional reasoning for reactive
     synthesis from finite-horizon specifications, in: AAAI, 2020.
[24] A. Camacho, J. A. Baier, C. J. Muise, S. A. McIlraith, Finite LTL synthesis as planning, in:
     ICAPS, 2018, pp. 29–38.
[25] S. Zhu, L. M. Tabajara, J. Li, G. Pu, M. Y. Vardi, Symbolic LTL𝑓 synthesis, in: IJCAI, 2017,
     pp. 1362–1369.
[26] R. Ehlers, V. Raman, Slugs: Extensible GR(1) synthesis, in: CAV, 2016, pp. 333–339.
[27] R. Bloem, B. Jobstmann, N. Piterman, A. Pnueli, Y. Sa’ar, Synthesis of reactive(1) designs,
     J. Comput. Syst. Sci. 78 (2012) 911–938.
[28] A. Di Stasio, A. Murano, G. Perelli, M. Vardi, Solving parity games using an automata-based
     algorithm, in: CIAA 2016, 2016, pp. 64–76.
[29] M. Jurdzinski, Small progress measures for solving parity games, in: STACS 2000, 2000,
     pp. 290–301.
[30] A. Di Stasio, A. Murano, M. Y. Vardi, Solving parity games: Explicit vs symbolic, in: CIAA
     2018, 2018, pp. 159–172.
[31] J. Křetínský, T. Meggendorfer, S. Sickert, Owl: A library for ω-words, automata, and LTL,
     in: ATVA, 2018, pp. 543–550.