<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A game-based model for human-robots interaction∗</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aniello Murano</string-name>
          <email>aniello.murano@unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Napoli Federico II</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>17</fpage>
      <lpage>19</lpage>
      <abstract>
        <p>Game theory has proven to be a fruitful metaphor for reasoning about multi-player systems. Two kinds of games are mainly studied and adopted: turn-based and concurrent. They differ in the way the players are allowed to move. However, real scenarios exhibit very simple interplays among players whose modeling does not fit well in either of these settings. In this paper we introduce a novel game-based framework to model and reason about the interaction between robots and humans. This framework combines the positive features of both turn-based and concurrent games. Over this game model we study the reachability problem. To give evidence of the usefulness of the introduced framework, we use it to model the interaction between a human and a team of two robots, in which the former tries to run away from the latter. We also provide an algorithm that decides in polynomial time whether at least one robot catches the human.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>II. INTRODUCTION</title>
      <p>In recent years, game theory has proven to be a fruitful metaphor for modeling and reasoning about multi-agent systems, where the evolution of the entire system emerges from the coordination of the moves taken by all agents involved [1], [20], [21], [22], [15], [23], [7], [8]. In the simplest setting, we consider finite-state games consisting of just two players (or agents), conventionally named Player 1 and Player 2. Depending on the possible interactions among the players, games can be either turn-based or concurrent. In the former case, the moves of the players are interleaved; in the latter case, the players move simultaneously. In a turn-based game, the states of the game are partitioned into those belonging to Player 1 and those belonging to Player 2. When the game is at a state of Player i, only Player i determines the next state. In a concurrent game, instead, the two players choose their moves simultaneously and independently, and the next state of the game depends on the combination of these moves.</p>
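The two move disciplines can be sketched in code. The following is our illustrative sketch, not part of the paper; all names (`turn_based_step`, `concurrent_step`, `choose`, `transition`) are hypothetical.

```python
# Illustrative sketch (not from the paper) of the two move disciplines.

def turn_based_step(state, owner, choose):
    # Turn-based: only the player owning the current state picks the
    # successor; `choose` maps each player to its move function.
    return choose[owner](state)

def concurrent_step(state, transition, action1, action2):
    # Concurrent: both players pick actions simultaneously, and only
    # the combination of the two actions determines the successor.
    return transition[(state, action1, action2)]

# Tiny concurrent arena: from s0 the outcome depends on both choices.
transition = {
    ("s0", "a", "a"): "win1",
    ("s0", "a", "b"): "win2",
    ("s0", "b", "a"): "win2",
    ("s0", "b", "b"): "win1",
}
```

In the concurrent arena above, neither player alone can fix the outcome from `s0`, which is exactly what distinguishes the setting from the turn-based one.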
      <p>A game consists of two main objects: the arena and the winning condition. The arena describes the players, the game states (configurations), and the possible evolution of the game in accordance with the moves the players can take. The winning condition expresses the objective the players aim to achieve.</p>
      <p>∗ This work is partially supported by the FP7 EU project 600958-SHERPA.</p>
      <p>Solving a game corresponds to checking whether a designated player has a winning strategy in the game, i.e., a sequence of moves that lets him satisfy the winning condition no matter how the other players behave. In the literature, we distinguish between the case where the condition is given as an “external entity”, for example via a formula of a logic [1], [14], and the case where it is given internally as a condition over the states of the arena. While external conditions offer some modularity and allow one to formalize very intricate targets, they require solution algorithms of very high complexity [12]. Internal conditions instead offer a good balance between expressiveness and complexity, and this is the setting we consider in this paper.</p>
      <p>A very simple and largely used (internal) winning condition consists of labeling some states of the arena as “good” and then taking as target the reachability of at least one of them. The resulting setting is called a reachability game. These games have been exploited in both the (two-player) turn-based and (multi-player) concurrent settings and fruitfully applied in several interesting real scenarios. However, there are specific situations that cannot be cast in either of these settings. In particular, this happens when we want to model the interplay among agents of a different essence. This is the case, for example, in human-robot interaction. To give evidence of this necessity, we discuss along the paper a scenario involving the interaction between a human and two robots. The shape of the arena is a maze, and the three players are initially placed in three different positions. The goal of the human is to run away from the robots, while the robots have the opposite goal. Therefore, looking at the game from the robots' side, the good states are those in which at least one of the robots and the human sit in the same place at the same moment. We assume that whenever the human decides to move, none of the robots can interfere, and vice versa. In other words, the human uses the game on his side as turn-based, while the robots use it as concurrent. We introduce a novel model framework that is able to represent such a scenario efficiently, and provide a solution algorithm that decides in polynomial time whether the robots have a winning strategy, i.e., a sequence of moves that reaches a desired state no matter how the human behaves.</p>
      <p>Related work. Robotic technology is advancing quickly, and this rapid progress is inevitably having a huge effect on people. The interaction between humans and robots is a complex issue that challenges both engineering and cognitive science. In several settings, such an interaction has been modeled in terms of a suitable interplay between all actors involved (see [5] for a survey). Several models in this context take inspiration from the field of open-system formal verification [9], [19], [11], [16]. A system is considered open if it has an ongoing interaction with an external unpredictable environment and the system behavior fully depends on this interaction. To verify an open system means to check that the system is correct no matter how the environment behaves. Several models based on two-player games (system vs. environment) have been proposed to model such an interaction, as well as suitable algorithms to check whether the system is correct (i.e., wins the game) [3], [24]. In this context, multi-agent games have also been proposed to model and reason about multiple players able to play in a cooperative or adversarial manner [1], [14].</p>
      <p>The game setting we consider in this paper has several connections with planning problems as well [2], [4], [6]. Indeed, planning can be rephrased as the problem of synthesizing a strategy (the plan) for an agent to achieve a determined task within an environment. Often the environment is hostile and consists of an aggregation of several entities working together. By casting such a scenario in our model setting, one can see the environment as the team of cooperative agents working against the one doing the planning. Since the environment can be seen as an adversarial player, the correlation with our setting follows.</p>
    </sec>
    <sec id="sec-2">
      <title>III. CASE STUDY</title>
      <p>In this section we introduce a simple but effective human-robot interaction scenario that will be used along the paper as an application for the game-model framework we introduce. The scenario is described in the following and depicted in Figure 1.</p>
      <p>The interaction takes place in a maze and involves three players: a human H and two robots R1 and R2. The goal of the human is to escape from both robots; the robots work in a team and aim at just the opposite. For simplicity, we assume the maze is divided into square rooms, and we start by considering that the players all sit in different rooms. From every room, each player can access an adjacent one unless there is a wall (drawn with a bold line in the figure). We assume an interleaving of moves between the human and the team of robots; hence, the robots move simultaneously. We assume that all players can move in the four directions up, down, left, and right, unless the shape of the maze forbids it.</p>
      <p>Let us now discuss how such a human-robot interaction can be rephrased in terms of a game; we make this reasoning more formal in the next section. Starting from an initial position in the maze, the players change their positions by taking the allowed moves. Each possible placing of the players can be seen as a configuration of the game, and the starting one is usually called the initial configuration. Clearly, we can move in one step from one configuration to another only if there are moves that allow it. In particular, as seen from the human, moves are interleaved in a turn-based fashion, while they are taken in a concurrent way as seen by the robots. All legal moves can be collected in an opportune data structure or a table. Following the target of the robots, the human loses if, along an interplay, the game reaches a configuration in which he and one of the robots sit in the same room. Accordingly, we label all such configurations as good ones (as seen by the robots). Thus, deciding whether the team of robots can defeat the human corresponds to solving a reachability game.</p>
      <p>(Figure 1 depicts a 5 × 5 maze grid with the initial positions of H, R1, and R2.)</p>
      <p>It is important to note that the scenario we have discussed is neither (just) turn-based, as the robots move simultaneously, nor concurrent, as the human moves independently from the robots. Moreover, the discussed game involves three players, and it is not trivially reducible to a two-player one because of the particular target: at least one of the robots has to catch the human.</p>
      <p>To model this scenario, we introduce in the next section a novel game model in which all players, except a designated one, work in a team and move simultaneously. The designated one instead moves alternately and independently from the other players.</p>
    </sec>
    <sec id="sec-3">
      <title>IV. THE CONCEIVED MODEL</title>
      <p>In this section we introduce a novel multi-player game-based framework that is suitable to represent, under a unique structure, both players moving turn-based and players acting concurrently. This framework, which we call hybrid, opportunely combines and extends the main features behind concurrent game structures [1] and two-player turn-based games (see [18] for an introduction).</p>
      <p>Definition 1 (Hybrid Game models): A hybrid reachability game structure is a tuple G = &lt;Ag, Ac, St, s0, tr, Stf&gt;, where Ag is a finite non-empty set of agents, partitioned into two teams Ag0 and Ag1. Ac and St are enumerable non-empty sets of actions and states, respectively, and s0 ∈ St is a designated initial state. The set of states is partitioned into St0 and St1, the states belonging to the teams Ag0 and Ag1, respectively. For i ∈ {0, 1}, let Dci = Agi ⇀ Ac be the set of decisions of team i, i.e., the partial functions describing the choice of an action by each agent in Agi. Then, tr : Dci × Sti → St1−i denotes the transition function mapping every pair of a decision δ ∈ Dci and a state s ∈ Sti of team i to a successor state, over the deterministic graph, belonging to the adversary team. Finally, we define Stf ⊆ St as the subset of terminal (or accepting) states.</p>
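Definition 1 can be transcribed into a small data structure. The following Python encoding is our own sketch (the paper gives no code); decisions are represented as tuples of (agent, action) pairs so they can serve as dictionary keys, a representation choice of ours.

```python
from dataclasses import dataclass

@dataclass
class HybridGame:
    """A hybrid reachability game structure G = <Ag, Ac, St, s0, tr, Stf>."""
    agents0: frozenset   # team Ag0
    agents1: frozenset   # team Ag1
    actions: frozenset   # Ac
    states0: set         # St0, states where team Ag0 moves
    states1: set         # St1, states where team Ag1 moves
    s0: object           # designated initial state
    tr: dict             # (decision, state) -> state of the other team
    accepting: set       # Stf, terminal (accepting) states

    def successors(self, s):
        # States reachable from s in one step, under any decision.
        return {t for (delta, q), t in self.tr.items() if q == s}
```

A decision here is, e.g., `(("R1", "l"), ("R2", "r"))`: one action per agent of the moving team, mirroring the partial functions Dci of the definition.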
      <p>Fig. 2. A Reduced Maze Game: a 2 × 3 grid with the initial positions of H, R1, and R2.</p>
      <p>We now show how the case study we have presented in
the previous section can be easily and formally described by
means of a hybrid game model G.</p>
      <p>For the sake of clarity, instead of considering the scenario depicted in Figure 1, we consider a reduced one, as reported in Figure 2. Also, we allow the robots to move only horizontally (left and right), while the human is still able to move in all directions. While this new scenario may look too naive, it avoids a number of tedious calculations in the sequel. We model the human-robots interaction over this maze by setting Ag0 as the team consisting of the unique player human and Ag1 as the team of the robots R1 and R2. We consider the maze as a grid-board made of K × J positions, with K = {0, 1} and J = {0, 1, 2}. In each position, zero, one, or more than one player can sit. States are just the positions of the three players plus a flag we use to recall which team is playing in that state.</p>
      <p>Formally, we have as set of states St = ((K × J)^3) × {0, 1}. Sti contains those states having the flag equal to i. The initial state is just the initial position of the players.</p>
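The state space of the reduced maze is small enough to enumerate explicitly. The sketch below is ours (not the authors' code) and builds St = ((K × J)^3) × {0, 1} for K = {0, 1} and J = {0, 1, 2}; the tuple layout (pos_H, pos_R1, pos_R2, flag) is an assumption.

```python
from itertools import product

K, J = range(2), range(3)              # rows and columns of the 2 x 3 maze
rooms = list(product(K, J))            # the 6 positions

# A state is (pos_H, pos_R1, pos_R2, flag), where flag records which
# team moves next (0 = human, 1 = robots).
states = [(h, r1, r2, flag)
          for h, r1, r2 in product(rooms, repeat=3)
          for flag in (0, 1)]

initial = ((0, 0), (1, 0), (0, 2), 1)  # initial state of the case study
```

The enumeration yields 6^3 · 2 = 432 states, confirming that the reduced model stays tractable by hand.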
      <p>According to the picture depicted in Figure 2, and assuming R1 and R2 are the next to move, the state is ((0, 0), (1, 0), (0, 2), 1). We assume in our example that this is the initial state. The possible actions for the robots are r for right and l for left. For the possible actions of the human we set u for up, d for down, l for left, and r for right. Decisions are defined accordingly and must respect the limitations imposed by the shape of the maze. As far as the set of accepting states is concerned, recall that the target of the robots is to reach a configuration in which at least one of them catches the human, both sitting in the same position.</p>
      <p>This means that Stf must contain all those states in which the first pair of coordinates (corresponding to the position of the human) is equal to the second or the third pair. Formally, Stf = {((a, b), (c, d), (e, f), i) | (a = c and b = d) or (a = e and b = f)}. Finally, it remains to define the transition relation. For the sake of exposition, we only report the part corresponding to the team Ag0; note that it is coherent with the shape of the maze.</p>
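Written as a predicate, the definition of Stf reads as follows. This is our sketch; the tuple layout (pos_H, pos_R1, pos_R2, flag) is an assumption.

```python
def is_accepting(state):
    # A state is accepting iff the human shares a room with some robot,
    # i.e., (a = c and b = d) or (a = e and b = f) in the paper's notation.
    (a, b), (c, d), (e, f), _flag = state
    return (a == c and b == d) or (a == e and b == f)
```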
      <p>tr(δ, (((i, j), (k, l), (m, n)), 0)) =
(((i − 1, j), (k, l), (m, n)), 1), if δ = u and i &gt; 0;
(((i, j − 1), (k, l), (m, n)), 1), if δ = l and j &gt; 0;
(((i, j + 1), (k, l), (m, n)), 1), if δ = r and j &lt; 2;
(((i + 1, j), (k, l), (m, n)), 1), if δ = d and i &lt; 1.</p>
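The four cases above translate directly into code. The following is our sketch of the human's part of the transition function on the 2 × 3 grid; returning `None` for a forbidden move is a representation choice of ours, not of the paper.

```python
def tr_human(delta, state):
    # tr for team Ag0: the human moves, the flag passes to the robots.
    (i, j), r1, r2, flag = state
    assert flag == 0, "the human only moves in flag-0 states"
    target = {"u": (i - 1, j), "d": (i + 1, j),
              "l": (i, j - 1), "r": (i, j + 1)}[delta]
    ni, nj = target
    if 0 <= ni <= 1 and 0 <= nj <= 2:  # the guards i > 0, j > 0, j < 2, i < 1
        return ((ni, nj), r1, r2, 1)   # successor belongs to team Ag1
    return None                        # move forbidden by the grid boundary
```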
      <p>To better clarify the meaning of the above formalization, let us discuss some examples over the maze. From the initial state, which belongs to the team Ag1, by using the decision lr, the game moves to the state (((0, 0), (0, 1), (1, 1)), 0). In accordance with the flag, this is now a state belonging to the team Ag0, and thus this team (the human) takes the turn to move. From this state, the human agent has two available moves, namely d and r. In the first case the game moves to the state (((1, 0), (0, 1), (1, 1)), 1), and in the second case it moves to the state (((0, 1), (0, 1), (1, 1)), 1), both belonging to the players of the coalition Ag1, and so on. In Figure 3, we report the computations of the game. It is easy to observe that the accepting states are (((1, 0), (0, 2), (1, 0)), 0), (((1, 0), (0, 0), (1, 0)), 0), and (((0, 1), (0, 1), (1, 1)), 1), since in them one of the two robots and the human are both in the same room.</p>
      <p>(Figure 3 depicts the game tree of the reduced maze game: the states reachable from the initial state, with edges labeled by the decisions lr, ll, rl of the robots and d, r of the human.)</p>
      <p>Under the conceived model, we can handle all possible targets that can be represented in terms of reachability, i.e., the players of the coalition Ag1 mark some configurations of the game as “good” and aim to reach them. These configurations are those represented by the states Stf in the model. The coalition Ag1 wins the game if its players have a winning strategy, i.e., if they can force the game, by means of a sequence of moves, to reach a state in Stf no matter how the players of the team Ag0 behave. Deciding a game means deciding whether the coalition Ag1 wins the game.</p>
      <p>In this section we provide an algorithm to decide a game under the hybrid framework we have introduced. The algorithm computes the set of states of the model from which the players of the coalition Ag1 win the game, that is, the set reach1(Stf). As the complement of this set contains the states from which the players of the coalition Ag0 win the game, we obtain as a corollary that our game model is zero-sum (i.e., from each node exactly one team can win the game).</p>
      <p>The algorithm proceeds as follows. We start from the
set Stf containing all winning states for players in Ag1.</p>
      <p>Then, in a backward manner, we recursively build the sets reach1^i(Stf) containing all states s ∈ St such that the players in Ag1 can force a visit from s to a state in the set Stf in less than i steps. Formally, we have the following:</p>
      <p>reach1^0(Stf) = Stf;</p>
      <p>reach1^(i+1)(Stf) = reach1^i(Stf) ∪ {s ∈ St1 | ∃δ ∈ Dc1 : tr(δ, s) ∈ reach1^i(Stf)} ∪ {s ∈ St0 | ∀δ ∈ Dc0 : tr(δ, s) ∈ reach1^i(Stf)}.</p>
      <p>In words, from the set reach1^i(Stf) we select all states that have edges incident to this set. Each of these states, say s, if it belongs to the coalition Ag1, is immediately added to reach1^(i+1)(Stf) (as the team may move from s into reach1^i(Stf) and thus reach Stf). Conversely, if s is a state belonging to the coalition Ag0, then it is added to reach1^(i+1)(Stf) only if all outgoing edges from s fall in reach1^i(Stf) (i.e., from s, the players in Ag0 are forced to move into reach1^i(Stf)).</p>
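The backward fixpoint can be sketched as follows. This is our illustration, not the authors' code; we assume the game is given by a hypothetical successor map `succ[s]` listing the outcomes of all decisions available at s.

```python
def reach1(states1, states0, succ, accepting):
    # Backward fixpoint computing the states from which team Ag1 can
    # force a visit to the accepting set Stf.
    win = set(accepting)
    changed = True
    while changed:                      # converges in at most |St| rounds
        changed = False
        for s in states1 | states0:
            if s in win:
                continue
            succs = succ[s]
            # Ag1 states: one decision leading into `win` suffices.
            if s in states1 and any(t in win for t in succs):
                win.add(s)
                changed = True
            # Ag0 states: every decision must lead into `win`.
            elif s in states0 and succs and all(t in win for t in succs):
                win.add(s)
                changed = True
    return win
```

Each round adds at least one state or stops, so the loop runs at most |St| times, matching the linear iteration bound used below.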
    </sec>
    <sec id="sec-4">
      <title>V. DECIDING THE GAME</title>
      <p>Finally, we have that reach1(Stf) = reach1^|St|(Stf).</p>
      <p>As the calculation of reach1(Stf) requires a number of iterations linear in the number of states in St, the whole algorithm requires at most quadratic time in the size of the model, as reported in the following theorem.</p>
      <p>Theorem 1: A hybrid reachability game can be decided in quadratic time.</p>
      <p>By direct calculation, one can see that, by applying the above algorithm to our reduced case study, the coalition Ag1 wins the game from every state. In fact, for each state of the model, there always exists a winning strategy for the players of the team Ag1.</p>
    </sec>
    <sec id="sec-5">
      <title>VI. DISCUSSION AND FUTURE WORKS</title>
      <p>In recent years, human-robot interaction has been receiving large attention in several research fields. An important aspect of this study is to come up with appropriate models to design and reason about how such interactions can take place and how they affect the future behavior of the involved actors. In this setting, game theory is a powerful framework that is able to formalize the interplay between the human and the robots in a very natural way.</p>
      <p>In this paper, we have introduced a game-model framework that allows one to represent and reason about scenarios in which the interaction between the humans and the robots results in a hybrid two-team game: the game between the two teams is turn-based, while all players within each team play concurrently among them. We have observed that classic turn-based and concurrent games are not powerful enough to model such a setting. Over the conceived model, we have studied the reachability problem and shown that it is solvable in quadratic time, therefore with no extra cost with respect to classic turn-based and concurrent games. We have given evidence of the usefulness of the introduced framework by means of a case study.</p>
      <p>This work opens several future directions. First, it would be interesting to reason about quantitative aspects of the human-robots interaction, for example, to determine the best strategy to perform. Another direction would be to consider other possible winning conditions. In particular, one could import some winning conditions studied in formal verification, such as the Büchi and the parity ones (see [17], [10] for an introduction), or external winning conditions given in terms of formulas of a suitable logic. Some scenarios along these lines have already been investigated in the case of turn-based and concurrent game settings [9], [19] and have shown some useful applications [13], [14]. We plan, as future work, to investigate them in the settings of multi-robot and human-robot interactions.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Alur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Henzinger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          .
          <article-title>Alternating-Time Temporal Logic</article-title>
          .
          <source>Journal of the ACM</source>
          ,
          <volume>49</volume>
          (
          <issue>5</issue>
          ):
          <fpage>672</fpage>
          -
          <lpage>713</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>Reasoning about actions and planning in LTL action theories</article-title>
          .
          <source>KR</source>
          ,
          <volume>2</volume>
          :
          <fpage>593</fpage>
          -
          <lpage>602</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>de Alfaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Henzinger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          .
          <article-title>Concurrent reachability games</article-title>
          .
          <source>In Foundations of Computer Science</source>
          ,
          <year>1998</year>
          .
          <source>Proceedings. 39th Annual Symposium on</source>
          , pages
          <fpage>564</fpage>
          -
          <lpage>575</lpage>
          . IEEE,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>Automata-theoretic approach to planning for temporally extended goals</article-title>
          . pages
          <fpage>226</fpage>
          -
          <lpage>238</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Goodrich</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Schultz</surname>
          </string-name>
          .
          <article-title>Human-robot interaction: a survey</article-title>
          .
          <source>Foundations and Trends in Human-Computer Interaction</source>
          ,
          <volume>1</volume>
          (
          <issue>3</issue>
          ):
          <fpage>203</fpage>
          -
          <lpage>275</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Geffner</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Bonet</surname>
          </string-name>
          .
          <article-title>A concise introduction to models and methods for automated planning</article-title>
          .
          <source>Synthesis Lectures on Artificial Intelligence and Machine Learning</source>
          ,
          <volume>8</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>141</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <article-title>Equilibria of concurrent games on event structures</article-title>
          .
          <source>In Proceedings of the Joint Meeting of the TwentyThird EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS)</source>
          ,
          <source>page 46. ACM</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J. Van</given-names>
            <surname>Benthem</surname>
          </string-name>
          .
          <article-title>Logic games: From tools to models of interaction. Institute for Logic, Language and Computation (ILLC)</article-title>
          , University of Amsterdam,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W.</given-names>
            <surname>Jamroga</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          .
          <article-title>On Module Checking and Strategies</article-title>
          . In Autonomous Agents and MultiAgent Systems'
          <volume>14</volume>
          , pages
          <fpage>701</fpage>
          -
          <lpage>708</lpage>
          . International Foundation for Autonomous Agents and Multiagent Systems,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Wolper</surname>
          </string-name>
          .
          <article-title>An Automata Theoretic Approach to Branching-Time Model Checking</article-title>
          .
          <source>Journal of the ACM</source>
          ,
          <volume>47</volume>
          (
          <issue>2</issue>
          ):
          <fpage>312</fpage>
          -
          <lpage>360</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Wolper</surname>
          </string-name>
          .
          <article-title>Module Checking</article-title>
          .
          <source>Information and Computation</source>
          ,
          <volume>164</volume>
          (
          <issue>2</issue>
          ):
          <fpage>322</fpage>
          -
          <lpage>344</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mogavero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Perelli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>Reasoning About Strategies: On the Model-Checking Problem</article-title>
          .
          <source>Transactions On Computational Logic</source>
          ,
          <volume>15</volume>
          (
          <issue>4</issue>
          ):
          <volume>34</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mogavero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Sorrentino</surname>
          </string-name>
          .
          <article-title>On Promptness in Parity Games</article-title>
          .
          <source>In Logic for Programming Artificial Intelligence and Reasoning'13, LNCS 8312</source>
          , pages
          <fpage>601</fpage>
          -
          <lpage>618</lpage>
          . Springer,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mogavero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          .
          <article-title>Reasoning About Strategies</article-title>
          .
          <source>In Foundations of Software Technology and Theoretical Computer Science'10, LIPIcs 8</source>
          , pages
          <fpage>133</fpage>
          -
          <lpage>144</lpage>
          . Leibniz-Zentrum fuer Informatik,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Parsons</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <article-title>Game theory and decision theory in multi-agent systems</article-title>
          .
          <source>Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>5</volume>
          (
          <issue>3</issue>
          ):
          <fpage>243</fpage>
          -
          <lpage>254</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Schobbens</surname>
          </string-name>
          .
          <article-title>Alternating-Time Logic with Imperfect Recall</article-title>
          .
          <source>Electronic Notes in Theoretical Computer Science</source>
          ,
          <volume>85</volume>
          (
          <issue>2</issue>
          ):
          <fpage>82</fpage>
          -
          <lpage>93</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>W.</given-names>
            <surname>Thomas</surname>
          </string-name>
          .
          <article-title>Automata on Infinite Objects</article-title>
          . In
          <source>Handbook of Theoretical Computer Science</source>
          ,
          <volume>2</volume>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>W.</given-names>
            <surname>Thomas</surname>
          </string-name>
          .
          <source>Monadic Logic and Automata: Recent Developments</source>
          .
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>W.</given-names>
            <surname>Jamroga</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          .
          <article-title>Module checking of strategic ability</article-title>
          .
          <source>In Autonomous Agents and Multiagent Systems'15</source>
          , pages
          <fpage>227</fpage>
          -
          <lpage>235</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <article-title>Intelligent Agents</article-title>
          . In G. Weiss, editor,
          <source>Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence</source>
          . MIT Press: Cambridge, MA,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <article-title>Computationally Grounded Theories of Agency</article-title>
          .
          <source>In International Conference on MultiAgent Systems'00</source>
          , pages
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          . IEEE Computer Society,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          .
          <source>An Introduction to MultiAgent Systems</source>
          . John Wiley &amp; Sons,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hu</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Lin</surname>
          </string-name>
          .
          <article-title>Decision making of networked multiagent systems for interaction structures</article-title>
          .
          <source>IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans</source>
          ,
          <volume>41</volume>
          (
          <issue>6</issue>
          ):
          <fpage>1107</fpage>
          -
          <lpage>1121</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zielonka</surname>
          </string-name>
          .
          <article-title>Infinite Games on Finitely Coloured Graphs with Applications to Automata on Infinite Trees</article-title>
          .
          <source>Theoretical Computer Science</source>
          ,
          <volume>200</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>135</fpage>
          -
          <lpage>183</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>