<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards a Formal Verification of Attack Graphs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Davide Catta</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jean Leneutre</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vadim Malvone</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LTCI, Télécom Paris, Institut Polytechnique de Paris</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
<p>In this perspective paper, we propose different formalizations of games that are played over Attack Graphs between an Attacker and a Defender. For all such games we propose a formal approach (based on logics and automata theory) to check whether the Attacker has a strategy to win the game. Modern systems are inherently complex and security plays a crucial role. The main challenge in developing a secure system is to come up with tools able to detect vulnerabilities and unexpected behaviors at a very early stage of its life-cycle. These methodologies should be able to measure the degree of resilience to external attacks. Crucially, the cost of repairing a system flaw during maintenance is at least two orders of magnitude higher than fixing it at an early design stage [1]. In the past fifty years several solutions have been proposed, and a success story is the use of formal-methods techniques [1]. They allow checking whether a system is correct by formally checking whether a mathematical model of it meets a formal representation of its desired behavior. Recently, classic approaches such as model checking and automata-theoretic techniques, originally developed for monolithic systems [2, 3], have been meaningfully extended to handle multi-agent systems [4, 5, 6, 7, 8]. These are systems that encapsulate the behavior of two or more rational agents interacting with each other in a cooperative or adversarial way, aiming at a designated goal [9]. In system security checking, a malicious attack can be seen as an attempt of an Attacker to gain unauthorized resource access or compromise the system's integrity. In this setting, the Attack Graph [10] is one of the most prominent attack models developed, and it has received much attention in recent years. It encompasses a graph where each state represents an Attacker at a specified network location and edges represent state transitions, i.e., attack actions by the Attacker.
In this paper, we sketch three different methodologies to reason about Attack Graphs by introducing game models in which a player (called Attacker) travels through adjacent edges of an Attack Graph and tries to reach a certain target state, and another player (called Defender) tries to prevent him from doing so.</p>
      </abstract>
      <kwd-group>
        <kwd>Attack Graphs</kwd>
        <kwd>Sabotage Games</kwd>
        <kwd>Logics in Games</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The Defender tries to prevent the Attacker from reaching his goal by dynamically deploying countermeasures. All the game models that we will introduce can be handled by means of formal methods, i.e., to reason about such games we will use logical methods or automata-theoretic approaches.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Attack Graphs</title>
      <p>
        The term Attack Graph was first introduced by Phillips and Swiler [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Attack Graphs represent the possible sequences of attacks on a system as a graph. An Attack Graph may be generated by using the following information: a description of the system architecture (topology, configurations of components, etc.), the list of the known vulnerabilities of the system, the Attacker's profile (his capabilities, password knowledge, privileges, etc.) and attack templates (the Attacker's atomic actions, including preconditions and postconditions). An attack path in the graph corresponds to a sequence of atomic attacks. Several works have developed this approach, each introducing its own Attack Graph model with its own specificities; thus there is no standard definition of an Attack Graph, see [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for a survey. However, all introduced models
can be mapped into a canonical Attack Graph as introduced in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. It is a labelled oriented
graph, where:
1. each node represents both the state of the system (including existing vulnerabilities) and
the state of the Attacker including constants (Attacker skills, financial resources, etc.) and
variables (knowledge on the network topology, privilege level, obtained credentials, etc.);
2. each edge represents an action of the Attacker (a scan of the network, the execution of an
exploit based on a given vulnerability, access to a device, etc.) that changes the state of
the network or the states of the Attacker; an edge is labelled with the name of the action
(several edges of the Attack Graph may have the same label).
      </p>
      <p>We recall that a rooted Kripke structure M is given by a non-empty set S of states, a designated initial state s0 ∈ S, a finite non-empty set Σ of labels, a ternary relation R ⊆ S × Σ × S and a labeling function L that maps any element of S to a set of atomic propositions. By abstracting all the data in the above discussion, one can define an Attack Graph as follows.
Definition 1. Let Σ be a finite set of labels, and Ap a finite set of atomic propositions, such that win ∈ Ap. An Attack Graph is a tuple A = ⟨S, s0, Σ, R, L, W⟩ where:
– ⟨S, s0, Σ, R, L⟩ is a rooted Kripke structure where the set S of states is finite and L(s) ⊆ Ap for any s ∈ S;
– W is a non-empty subset of S such that win ∈ L(s) for all s ∈ W.</p>
      <p>The set W represents the set of target states of the Attacker.</p>
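      <p>As a minimal illustration, Definition 1 can be rendered as a small data structure. The following is a sketch with illustrative names of our own (states, edges, valuation), not notation from the paper:</p>

```python
from dataclasses import dataclass

# A sketch of Definition 1 with illustrative names: an Attack Graph is a
# rooted, edge-labelled Kripke structure plus a non-empty set W of target
# states, each labelled with the distinguished proposition "win".
@dataclass
class AttackGraph:
    states: set      # S, finite set of states
    initial: str     # s0, the designated initial state
    labels: set      # Sigma, the edge labels (atomic attacks)
    edges: set       # R, a subset of S x Sigma x S
    valuation: dict  # L: S -> set of atomic propositions
    targets: set     # W, the Attacker's target states

    def is_well_formed(self) -> bool:
        # W must be a non-empty subset of S, and every target carries "win"
        return bool(self.targets
                    and self.targets <= self.states
                    and all("win" in self.valuation[s] for s in self.targets))
```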
      <p>Let us make things more concrete. Consider the following scenario: an enterprise has a local area network (LAN) that features a Server, a Workstation and two databases A and B. The LAN also provides a Web Server. Internet access to the LAN is controlled by a firewall. Such a scenario is depicted in Figure 1. Suppose that we know some vulnerabilities and that we have established that a malevolent user can make the attacks listed in the table of Figure 1; e.g., by performing attack 2, an Attacker can exploit a vulnerability related to the Server: as a precondition the Attacker needs to have root access to the Web Server and, as a postcondition, he will obtain root access to the Server. Then we can construct an Attack Graph built from this set of atomic attacks, collecting possible attack paths, as depicted in Figure 1. The Attacker's initial state is a node in the Attack Graph. Let us suppose that the Attacker is in state 1 and wants to reach state 4. To get to this target, he can perform the sequence of atomic attacks 2, 4 or the sequence 3, 2, 4.</p>
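      <p>Since Figure 1 itself is not reproduced here, the following sketch hard-codes a plausible fragment of the example (state and attack names are our assumptions) and enumerates the attack paths from the initial state to the target by depth-first search:</p>

```python
# A rough stand-in for the example of Figure 1 (the figure is not
# reproduced, so the state and attack names below are assumptions): from
# state s1 the Attacker can reach the target s4 either via attacks (2, 4)
# or via attacks (3, 2, 4).
edges = {
    ("s1", "a2", "s2"), ("s1", "a3", "s3"),
    ("s3", "a2", "s2"), ("s2", "a4", "s4"),
}

def attack_paths(edges, source, target):
    """Enumerate the label sequences of simple attack paths source -> target."""
    paths, stack = [], [(source, [], {source})]
    while stack:
        state, trace, seen = stack.pop()
        if state == target:
            paths.append(tuple(trace))
            continue
        for (s, a, t) in edges:
            if s == state and t not in seen:
                stack.append((t, trace + [a], seen | {t}))
    return sorted(paths)

print(attack_paths(edges, "s1", "s4"))
# → [('a2', 'a4'), ('a3', 'a2', 'a4')]
```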
      <p>The table of Figure 1 lists, for each atomic attack 1–5, its location (Web Server, Server, Workstation, Database A and Database B, respectively), its precondition, its postcondition, and the associated countermeasure (attacks 2, 4 and 5 have countermeasures 2, 4 and 5, respectively; attacks 1 and 3 have none).</p>
      <p>1 This Attack Graph is not complete w.r.t. our previous description, since some possible sequences of atomic attacks are not listed: for instance 1, 2, 3, 5 is not taken into account.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Attack Graph Games</title>
      <p>In the previous section, we saw that an Attack Graph represents the sequences of possible actions made by an Attacker to reach a specific goal. Let us add another character to this story, the Defender, whose objective is to counter the attack. Suppose that she has the power to dynamically deploy a predefined set of countermeasures: for instance, by reconfiguring the firewall filtering rules, or by patching some vulnerabilities, that is, by removing one or several preconditions of an atomic attack. A given countermeasure c will prevent the Attacker from launching a given attack a. One can imagine a game played over the Attack Graph in which the Attacker tries to reach one of his goal states, and the Defender tries to prevent the Attacker from reaching a goal state by deploying countermeasures. Given such a game, one natural question to ask is the following: is there a winning Attacker strategy? We will now define three possible settings in which such games can be formalized.</p>
      <sec id="sec-3-1">
        <title>3.1. Attack Graph Games &amp; (Subset) Sabotage Modal Logic</title>
        <p>
          Above, we introduced a Defender whose power is to deploy certain countermeasures, and we introduced the concept of a game over an Attack Graph. To make things less abstract, consider a two-player turn-based game over an Attack Graph in which the Defender starts the game by selecting a certain countermeasure c. By choosing such a countermeasure, she deletes a subset of the edges of the Attack Graph: all the edges that are labeled by c. The Attacker then takes his turn and moves from s0 to one of its successors along the edges that have not been erased (if any). The game evolves following this pattern. The Attacker wins if he can reach one of the states in W in a finite number of moves; the Defender wins otherwise. A run in such a game is nothing more than a run in an insidious sabotage game. Sabotage games were introduced by van Benthem in 2005 [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], with the aim of studying the computational complexity of a special class of graph-reachability problems, namely problems in which the set of edges of the graph becomes thinner and thinner as a path of the graph is constructed. To reason about sabotage games, van Benthem introduced Sabotage Modal Logic. Such a logic is obtained by adding to the ♢a-modality of classical modal logic another modality ♦a.
        </p>
        <p>
          Let G be a directed graph and s one of its vertices (or states); the intended meaning of a formula ♦a φ is: "♦a φ is true at a state s of G if φ is true at s in the graph obtained from G by erasing an a-labeled edge". Our sabotage games differ from the ones introduced by van Benthem because one of the players can erase an entire subset of the edges of a given graph. To reason about such games, in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] we introduce a variant of Sabotage Modal Logic that we call, for lack of wit, Subset Sabotage Modal Logic (SSML, for short). The logic SSML is obtained by adding a modality ♦a⊂ to the language of classical modal logic. Being more precise, let Ap be a non-empty set of atomic propositions and Σ a finite, non-empty set of labels. Formulas of SSML are defined by the following grammar:
        </p>
        <p>φ ::= p | ⊤ | ¬φ | φ ∧ φ | ♢a φ | ♦a⊂ φ
where p is any atomic proposition and a any label. One can define the boolean connectives ∨ and → in the usual way, □a φ as ¬♢a ¬φ, and ■a⊂ φ as ¬♦a⊂ ¬φ.</p>
        <p>The intended meaning of a formula ♦a⊂ φ is: "♦a⊂ φ is true at a state s of a directed graph G if φ is true at s in the graph G′ that is obtained from G by erasing a non-empty set of edges that are labeled by a". We now precisely define the semantics of SSML. If M = ⟨S, s0, Σ, R, L⟩ is a Kripke structure and E ⊆ R, we write M ∖ E to denote the Kripke structure obtained by erasing the subset E from the relation R. If e = (s, a, s′) ∈ R we say that e is a labeled edge or simply an edge, and that a is the label of e. We denote by Ra the subset of labeled edges of R whose label is a, that is Ra = {e ∈ R | e ∈ S × {a} × S}.</p>
        <p>The notion of satisfaction of a formula φ at a given state s of a Kripke structure M (written M, s |= φ) is inductively defined as follows:
M, s |= ⊤ for all s ∈ S;
M, s |= p if p ∈ L(s);
M, s |= ¬φ if not M, s |= φ (notation M, s ̸|= φ);
M, s |= φ1 ∧ φ2 if M, s |= φ1 and M, s |= φ2;
M, s |= ♢a φ if there is a s′ ∈ S such that (s, a, s′) ∈ R and M, s′ |= φ;
M, s |= ♦a⊂ φ if there is a non-empty E ⊆ Ra such that M ∖ E, s |= φ.</p>
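        <p>The satisfaction clauses above can be turned into a naive recursive evaluator. The following is a sketch with a formula encoding of our own; the subset-sabotage clause brute-forces every non-empty subset of Ra, which is exponential but adequate for small examples:</p>

```python
from itertools import chain, combinations

# A naive SSML evaluator following the satisfaction clauses above.
# Formulas are nested tuples: ("top",), ("prop", p), ("not", f),
# ("and", f, g), ("dia", a, f) for ♢a, and ("sab", a, f) for ♦a⊂.
def nonempty_subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(1, len(xs) + 1))

def sat(edges, valuation, s, phi):
    op = phi[0]
    if op == "top":
        return True
    if op == "prop":
        return phi[1] in valuation[s]
    if op == "not":
        return not sat(edges, valuation, s, phi[1])
    if op == "and":
        return sat(edges, valuation, s, phi[1]) and sat(edges, valuation, s, phi[2])
    if op == "dia":                      # ♢a: move along some a-labelled edge
        a, sub = phi[1], phi[2]
        return any(sat(edges, valuation, t, sub)
                   for (u, b, t) in edges if u == s and b == a)
    if op == "sab":                      # ♦a⊂: erase a non-empty E ⊆ Ra, stay put
        a, sub = phi[1], phi[2]
        ra = {e for e in edges if e[1] == a}
        return any(sat(edges - set(E), valuation, s, sub)
                   for E in nonempty_subsets(ra))
    raise ValueError(op)
```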
        <p>We say that a formula φ is true in a rooted Kripke structure M (written M |= φ) if φ is true at the initial state of M. Remark that M, s |= ■a⊂ φ if and only if for any non-empty subset E of Ra we have that M ∖ E, s |= φ, Ra included. This implies that if M, s |= ■a⊂ φ then M ∖ Ra, s |= φ.</p>
        <p>A strategy is a plan of action. As is logical, the plan is winning when it leads me to victory, whatever my opponent's plan of action. Thus, a winning strategy can be expressed as an alternation of universally and existentially quantified sentences: "for all actions of my Opponent, there is an action that I can make that leads me to victory". Let us put ourselves in the villain's shoes: suppose that we are the Attacker and that, by playing, we have reached a certain state s of an Attack Graph A. It is now the Defender's turn. If I have a winning strategy, I must be able to reach a successor state s′ of s, and this for any subset of edges that is removed by the Defender. Said differently, writing Σ = {a1, . . . , an}, we must have that A, s |= ■⊂ ♢ φ = (■a1⊂ ♢ φ) ∧ · · · ∧ (■an⊂ ♢ φ) for some formula φ that expresses the winning condition. First, we define:
■⊂ φ := ⋀a∈Σ ■a⊂ φ and ♢ φ := ⋁a∈Σ ♢a φ.
We can now define a family of formulas expressing the fact that the Attacker has a winning strategy to win a Subset Sabotage Game played over an Attack Graph.</p>
        <p>Definition 2 (Winning Formulas). The family {φn}n∈N of Winning Formulas is defined by induction on n as follows:
φn = win if n = 0, and φn = ♢⊤ ∧ ■⊂ ♢(φn−1 ∨ win) otherwise. (1)
The following theorem can be proved by induction on the size of the Attack Graph.
Theorem 1. For any Attack Graph A = ⟨S, s0, Σ, R, L, W⟩ such that |S| = n we have that A |= win ∨ φn−1 if and only if there is a winning Attacker strategy for the Subset Sabotage Game played over A.</p>
        <p>The model-checking problem for SSML consists in deciding whether M |= φ, given a finite rooted Kripke structure M and an SSML formula φ. The proof of the following theorem is obtained by defining a translation (parametrized by a given Kripke structure) from SSML formulas to SML formulas.</p>
        <p>Theorem 2. The model-checking problem for SSML is decidable: if M is a finite rooted Kripke structure and φ an SSML formula, we can decide whether M |= φ or not.</p>
        <p>As a consequence of the two above theorems, we can decide whether there is a winning Attacker strategy for the Subset Sabotage Game played over A by checking whether A is a formal model of the formula φn−1, where n is the size of A.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Attack Graph Games &amp; Automata</title>
        <p>
          In the SSML setting that we sketched in the previous section, we can decide whether there is a winning Attacker strategy in the subset sabotage game played over A by checking whether A is a formal model of a certain SSML formula φ. Such a solution is quite ad hoc: we have designed the logic SSML exactly to solve our problem. To reason about the kind of games that we described in the previous section, we can use another technique. We have presented such a technique in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], in joint work with other colleagues.
        </p>
        <p>We define a game G as a structure ⟨V, v0, E, F⟩ where V = V1 ∪ V2 is a set of vertices such that V1 ∩ V2 = ∅, v0 ∈ V1 is the initial vertex, E = E1 ∪ E2 is a transition relation such that E1 ⊆ V1 × V2 and E2 ⊆ V2 × V1, and F ⊆ V2 is a non-empty subset representing the set of Attacker's goal states. Given an Attack Graph A = ⟨S, s0, Σ, R, L, W⟩ together with a set of countermeasures C (as shown in Figure 1), we can easily convert the Attack Graph into a game G by using an algorithm. We briefly describe the idea of this algorithm with the following points:
1. Each vertex v of the game produced will be a pair ⟨s, c⟩, where s is an A-state and c is a list of edge labels.
2. We start by setting as initial state of the game the pair ⟨s0, ∅⟩ and, for each s′ such that (s0, a, s′) ∈ R for some a ∈ Σ, we create an edge (⟨s0, ∅⟩, ⟨s′, ∅⟩) ∈ E1.
3. If one of the s′ is in W we create a sink and we add ⟨s′, ∅⟩ to F. Otherwise, for each countermeasure c ∈ C we create a vertex ⟨s′, c⟩ ∈ V1 and we add a transition (⟨s′, ∅⟩, ⟨s′, c⟩) ∈ E2.
4. If in the Attack Graph A there is an edge labeled with a ≠ c such that (s′, a, s′′) ∈ R, we add a vertex ⟨s′′, c⟩ ∈ V2 and an edge (⟨s′, c⟩, ⟨s′′, c⟩) ∈ E1; otherwise we create a sink (⟨s′, c⟩, ⟨s′, c⟩).</p>
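        <p>The unfolding described above can be sketched as follows. This is a best-effort sketch for Attack Graphs without cycles; the string "none" stands in for the empty countermeasure tag ∅, and all names are our assumptions:</p>

```python
# A best-effort sketch of the unfolding above. A pair (state, c) records
# the last countermeasure c deployed by the Defender; an attack labelled c
# is blocked while c is active. "A" marks Attacker (V1) vertices and "D"
# marks Defender (V2) vertices.
def unfold(edges, s0, targets, countermeasures):
    E1, E2, goals = set(), set(), set()
    pending, seen = [("A", (s0, "none"))], {("A", (s0, "none"))}
    while pending:
        side, (s, c) = pending.pop()
        if side == "A":                      # Attacker vertex in V1
            moved = False
            for (u, a, t) in edges:
                if u == s and a != c:        # edge not blocked by c
                    moved = True
                    E1.add(((s, c), (t, c)))
                    if ("D", (t, c)) not in seen:
                        seen.add(("D", (t, c)))
                        pending.append(("D", (t, c)))
            if not moved:                    # stuck Attacker: sink self-loop
                E1.add(((s, c), (s, c)))
        else:                                # Defender vertex in V2
            if s in targets:
                goals.add((s, c))            # sink: the Attacker has won
                continue
            for c2 in countermeasures:       # Defender deploys some c'
                E2.add(((s, c), (s, c2)))
                if ("A", (s, c2)) not in seen:
                    seen.add(("A", (s, c2)))
                    pending.append(("A", (s, c2)))
    return E1, E2, goals
```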
        <p>We repeat this procedure as long as there are states of S left to process. It is clear that if the Attack Graph A does not contain cycles, then the game G given by the procedure is a finite, labeled tree whose root is labeled by r = ⟨s0, ∅⟩. We can thus define a winning strategy for the Attacker as a pruning of G in which each state in V1 has at most one child and whose leaves are all in F, i.e., we can define winning strategies as follows.</p>
        <p>Definition 3. An attacker strategy for a game ⟨V, v0, E, F⟩ is a finite tree whose root is v0, whose branches are plays over G and that satisfies the following properties:
1. For each node v of the strategy: if v ∈ V1 then v has at most one child.
2. For each node v of the strategy: if v ∈ V2 and v ∉ F then v has as many children as there are nodes v′ such that (v, v′) ∈ E2.</p>
        <p>An attacker strategy is winning whenever each leaf of the strategy belongs to F.</p>
        <p>Finite trees can be recognized by finite tree automata.</p>
        <p>Definition 4. A nondeterministic tree automaton (NTA, for short) is a tuple N = ⟨Q, Σ, q0, δ, F⟩, where: Q is a set of states, q0 ∈ Q is an initial state, δ : Q × Σ → 2^Q is a transition function mapping pairs of states and symbols to a set of states, and F ⊆ Q is a set of accepting states.</p>
        <p>An NTA recognizes trees and works as follows. For a tree node labelled by a, the automaton being in a state q, it sends different copies of itself to the node's successors in accordance with δ. Given a nondeterministic tree automaton N, we can check in linear time whether the set of trees accepted by the automaton is empty. If such a set is empty, there is no winning strategy for the Attacker. Since the size (i.e., number of vertices) of a tree constructed by the NTA does not exceed the size of the game, we obtain the following result.</p>
        <p>Theorem 3. Given a game G, it is possible to decide in linear time whether the Attacker has a strategy to win the game.</p>
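        <p>For a finite acyclic game, the decision behind Theorem 3 can also be sketched as a straightforward backward induction, instead of the tree-automata construction used in the paper (all names below are illustrative):</p>

```python
# A sketch of the decision of Theorem 3 by backward induction on a finite
# acyclic game: the Attacker wins at a goal vertex, wins at one of his own
# vertices if SOME successor is winning, and wins at a Defender vertex
# only if EVERY successor is winning.
def attacker_wins(v, succ, goals, turn):
    """succ: vertex -> list of successors; turn: vertex -> 'A' or 'D'."""
    if v in goals:
        return True
    kids = succ.get(v, [])
    if not kids:                 # stuck before reaching a goal
        return False
    if turn[v] == "A":
        return any(attacker_wins(k, succ, goals, turn) for k in kids)
    return all(attacker_wins(k, succ, goals, turn) for k in kids)
```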
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Attack Graph Games &amp; Interpreted System</title>
        <p>
          The games we described in the two previous subsections are two-player turn-based games with perfect information: the two players move in turns and they have perfect information about each other's actions. These two assumptions are quite unrealistic. In particular, it is unrealistic to think that a Defender always has perfect information about the Attacker's actions. Moreover, while in both models we have a clear view of what an Attacker is (we can identify him with the Attack Graph itself), the Defender is relegated to a subordinate role. Such a character does not have a clear ontological status, as it is identified with a set of countermeasures that are given independently of the graph structure. If we want to think in terms of multi-agent systems, we should try to give a clear agent's status to the Defender as well. To tackle all these problems we propose to think about the Attacker and Defender(s) in a game played over an Attack Graph as Agents of an Interpreted System [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. An Interpreted System (IS) is constructed starting from a finite set of Agents Ag = {1, . . . , n}; each agent i is specified by giving:
• a set Li of local states;
• a set Acti of agent i's actions;
• a protocol function Pi associating to each local state a non-empty set of agent i's actions;
• a local transition function ti that takes an agent i state l and an action aj for each agent j in Ag as arguments and outputs an agent i state. The function ti(l, a1, . . . , an) is defined if and only if we have that ai ∈ Pi(l).
        </p>
        <p>
          If Ag is a set of n agents, a global state g is a tuple g = ⟨l1, . . . , ln⟩ where each li is an agent i state. The set of global states is denoted by G. Finally, an Interpreted System is a tuple I = ⟨Ag, g0, τ, Π⟩ where Ag is a set of agents, g0 ∈ G is the global initial state, τ : G × Act → G is the global transition function such that τ(g, a1, . . . , an) = g′ if for every i ∈ Ag, ti(g[i], a1, . . . , an) = g′[i], and Π is a labeling function associating to any global state a subset of atomic propositions. Interpreted systems can be used together with logics for strategic reasoning such as Alternating-time Temporal Logic (ATL) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and Strategy Logic (SL)[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
          Moreover, interpreted systems come with a natural concept of imperfect information: if g is a global state, we denote by g[i] its i-component. Two global states g and g′ are equivalent for the agent i whenever g[i] = g′[i]. We denote such a notion by g ∼i g′. Two histories, i.e. finite sequences of global states, h and h′, are equivalent for the agent i whenever they have the same length k and hj ∼i h′j for any 0 ≤ j ≤ k.
        </p>
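        <p>The global transition function τ can be sketched as the composition of the agents' local functions, firing only when every agent's own action is allowed by its protocol. This is a sketch with illustrative names, not the paper's formalism:</p>

```python
# A minimal sketch of an interpreted system's global step: each agent i
# has a protocol P_i (local state -> set of enabled actions) and a local
# transition t_i (local state, joint action -> local state); the global
# function is defined only when each agent's own action is enabled.
def global_step(local_states, joint_action, protocols, transitions):
    for i, (l, a) in enumerate(zip(local_states, joint_action)):
        if a not in protocols[i](l):
            raise ValueError(f"action {a!r} not enabled for agent {i} in state {l!r}")
    return tuple(t(l, joint_action)
                 for t, l in zip(transitions, local_states))
```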
        <p>Given an Attack Graph A we can define an Attacker Agent whose states are the states of the Attack Graph, whose actions are the labels of the Attack Graph's edges (possibly plus the idle action), and in which, given two states s and s′ of A, if (s, a, s′) ∈ R then t(s, (a, ⃗a)) = s′ for some tuple ⃗a of the other agents' actions. The natural question is: how can we define the Defender? As we can see from the table in Figure 1, each attack in the Attack Graph is identified with a set of preconditions and postconditions. The latter are nothing more than atomic propositions labeling states. If the Attacker is in a state s that is labeled with some preconditions, he can make an attack and move to a state s′ labeled by the postconditions of the attack. We can suppose that the Defender's set C of countermeasures is defined symmetrically: each Defender countermeasure is identified with a pair of sets ⟨pre, post⟩ of atomic propositions that label some states in the Attack Graph. Now, let A = ⟨S, s0, Σ, R, L, W⟩ be an Attack Graph. By definition, L(s) is a set of atomic propositions, for any s ∈ S. In Section 2, we saw that the atomic propositions labeling a state represent two heterogeneous types of information: information about the system (vulnerabilities, state of the system, etc.), and information about the Attacker (his skills, knowledge of the network, access to devices, etc.). We can imagine that a Defender has partial knowledge, e.g., she does not know exactly what the skills of the Attacker are, or she does not know which devices the Attacker can access, or she has only information about the system or, even, about some subset of states of the system. We can thus imagine that the pairs ⟨pre, post⟩ in the set of countermeasures vary over the knowledge K of the Defender, i.e. K ⊆ Ap. Consider the directed graph D = ⟨SD, RD⟩ where the set of states SD is the set of equivalence classes of states of the Attack Graph modulo K (the knowledge of the Defender). That is, X ∈ SD if X is an equivalence class of states modulo the relation s ≡ s′ ⇐⇒ L(s) ∩ K = L(s′) ∩ K, where s, s′ ∈ S. The set of edges RD is given by:
1. (X, X′) ∈ RD if (s, a, s′) ∈ R for some states of the Attack Graph, s belonging to the equivalence class X and s′ belonging to the equivalence class X′; or,
2. (X, X′) ∈ RD if L(s) = pre and L(s′) = post for some states of the Attack Graph, s belonging to the equivalence class X and s′ belonging to the equivalence class X′.</p>
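        <p>The quotient construction just described amounts to grouping states by what the Defender can see of their labels. A minimal sketch, with illustrative names:</p>

```python
# A sketch of the quotient of the Attack Graph's states modulo the
# Defender's knowledge K: two states s and s' are equivalent whenever
# L(s) ∩ K = L(s') ∩ K, i.e. they look the same through K.
def knowledge_classes(states, valuation, knowledge):
    classes = {}
    for s in states:
        view = frozenset(valuation[s] & knowledge)  # what the Defender sees of s
        classes.setdefault(view, set()).add(s)
    return sorted(sorted(c) for c in classes.values())
```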
        <p>Over such a structure, we can construct a Defender Agent whose set of states is SD, having an action for each edge in RD and in which the local transition function mimics the relation RD. We can impose that, whenever the Defender is in a state that is a precondition of a countermeasure, if she plays a certain action then the local transition function will output the associated postcondition, no matter what the actions of the other agents are.</p>
        <p>Remark that by using such a construction, we can easily consider n different Defenders: we can imagine that each Defender knows a different subset of Ap and, as a consequence, generates a different Defender agent. This approach paves the way to the introduction of the strategic reasoning typical of ATL-like logics into the Attack Graph framework. In the two previous subsections, we only asked whether, given an Attack Graph, one can find a winning Attacker strategy, i.e., we were obliged to consider only reachability goals for the Attacker. Thanks to the fact that Interpreted Systems are a model of the logic ATL, we can specify the strategic properties that we are interested in checking via such a logic and, e.g., verify properties about the strategic ability of a coalition of Defenders.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>We have sketched three possible formalizations of Attacker-Defender(s) games that are played over an Attack Graph. All the approaches make use of formal-methods techniques to establish whether one of the players can win the proposed games. Two of the formalizations consider two-player turn-based reachability games with perfect information, and are obtained by means of, respectively, modal logic and automata theory. The third conceptualization, which in our opinion is the most promising, paves the way to a formalization of such games in terms of concurrent n-agent games with imperfect information and the use of formal verification techniques, e.g., model checking for the various strategic logics, to reason about such games.</p>
      <p>
        The amount of work that still needs to be done to make such a formalization attractive is enormous. However, we take the liberty of pointing out a research direction that we feel is a priority. It is known that, under the hypothesis of uniformity, the model-checking problem for ATL⋆ with memoryful strategies and imperfect information is undecidable [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Thus, if we want to use effectively the Interpreted System framework we sketched, we should restrict ourselves to partial solutions to the problem, such as abstractions either on the information [
        <xref ref-type="bibr" rid="ref19">19, 20, 21, 22</xref>
        ]
        ] or on the strategies [23, 24, 25, 26, 27]. In the last decade, ATL has been extended to deal with the concept of strategies to obtain an objective under a limited bound of resources [28]. Such a concept is attractive for the security scenario that we are trying to take into account: an Attacker could have a limited amount of resources to make an attack and, equally, a Defender could have a limited amount of resources to deploy a countermeasure. To the best of our knowledge, quantitative extensions of ATL with imperfect information have not been studied in the literature. Furthermore, in this context, we can also study whether an Attacker has some backup strategies to achieve his objectives, following the line on graded modalities as done in [29, 30, 31]. We believe that it would be interesting and useful for our goals to study such variants.
      </p>
      <p>
[19] … strategic properties in multi-agent systems with imperfect information, in: IAAI 2019, 2019, pp. 6030–6037. doi:10.1609/aaai.v33i01.33016030.
[20] F. Belardinelli, V. Malvone, A three-valued approach to strategic abilities under imperfect information, in: KR 2020, 2020, pp. 89–98. doi:10.24963/kr.2020/10.
[21] A. Ferrando, V. Malvone, Towards the combination of model checking and runtime verification on multi-agent systems, in: PAAMS 2022, 2022, pp. 140–152. doi:10.1007/978-3-031-18192-4_12.
[22] A. Ferrando, V. Malvone, Towards the verification of strategic properties in multi-agent systems with imperfect information, AAMAS (2023) (to appear).
[23] W. Jamroga, V. Malvone, A. Murano, Natural strategic ability, Artif. Intell. 277 (2019). doi:10.1016/j.artint.2019.103170.
[24] W. Jamroga, V. Malvone, A. Murano, Natural strategic ability under imperfect information, in: AAMAS 2019, 2019, pp. 962–970. URL: http://dl.acm.org/citation.cfm?id=3331791.
[25] F. Belardinelli, A. Lomuscio, A. Murano, S. Rubin, Verification of multi-agent systems with public actions against strategy logic, Artif. Intell. 285 (2020) 103302. doi:10.1016/j.artint.2020.103302.
[26] R. Berthon, B. Maubert, A. Murano, S. Rubin, M. Y. Vardi, Strategy logic with imperfect information, ACM Trans. Comput. Log. 22 (2021) 5:1–5:51. doi:10.1145/3427955.
[27] F. Belardinelli, A. Lomuscio, V. Malvone, E. Yu, Approximating perfect recall when model checking strategic abilities: Theory and applications, J. Artif. Intell. Res. 73 (2022) 897–932. doi:10.1613/jair.1.12539.
      </p>
      <p>
[28] N. Alechina, B. Logan, H. N. Nguyen, F. Raimondi, L. Mostarda, Symbolic model-checking for resource-bounded ATL, in: AAMAS 2015, 2015, pp. 1809–1810.
[29] M. Faella, M. Napoli, M. Parente, Graded alternating-time temporal logic, Fundam. Informaticae 105 (2010) 189–210. doi:10.3233/FI-2010-363.
[30] B. Aminof, V. Malvone, A. Murano, S. Rubin, Graded modalities in strategy logic, Inf. Comput. 261 (2018) 634–649. doi:10.1016/j.ic.2018.02.022.
[31] V. Malvone, F. Mogavero, A. Murano, L. Sorrentino, Reasoning about graded strategy quantifiers, Information and Computation 259 (2018) 390–411.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grumberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Peled</surname>
          </string-name>
          , Model Checking, The MIT Press, Cambridge, Massachusetts,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Clarke</surname>
          </string-name>
          , E. Emerson,
          <article-title>Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic</article-title>
          .,
          <source>in: LP</source>
          <year>1981</year>
          ,
          <year>1981</year>
          , pp.
          <fpage>52</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wolper</surname>
          </string-name>
          ,
          <article-title>An Automata-Theoretic Approach to Branching-Time Model Checking</article-title>
          ,
          <source>Journal of the ACM</source>
          <volume>47</volume>
          (
          <year>2000</year>
          )
          <fpage>312</fpage>
          -
          <lpage>360</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wolper</surname>
          </string-name>
          ,
          <article-title>Module Checking</article-title>
          ,
          <source>Information and Computation</source>
          <volume>164</volume>
          (
          <year>2001</year>
          )
          <fpage>322</fpage>
          -
          <lpage>344</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Alur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Henzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          ,
          <article-title>Alternating-Time Temporal Logic</article-title>
          ,
          <source>Journal of the ACM</source>
          <volume>49</volume>
          (
          <year>2002</year>
          )
          <fpage>672</fpage>
          -
          <lpage>713</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lomuscio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Qu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Raimondi</surname>
          </string-name>
          ,
          <article-title>MCMAS: A model checker for the verification of multi-agent systems</article-title>
          ,
          <source>in: (CAV09)</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>682</fpage>
          -
          <lpage>688</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mogavero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Perelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <article-title>Reasoning about strategies: On the model-checking problem</article-title>
          ,
          <source>ACM Trans. Comput. Log.</source>
          <volume>15</volume>
          (
          <year>2014</year>
          )
          <fpage>34:1</fpage>
          -
          <lpage>34:47</lpage>
          . URL: https://doi.org/10.1145/2631917. doi:10.1145/2631917.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Jamroga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          ,
          <article-title>Module checking of strategic ability</article-title>
          ,
          <source>in: AAMAS</source>
          <year>2015</year>
          ,
          <year>2015</year>
          , pp.
          <fpage>227</fpage>
          -
          <lpage>235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Jennings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          ,
          <article-title>Application of intelligent agents</article-title>
          , in: Agent Technology: Foundations, Applications, and Markets, Springer-Verlag,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Lippmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. W.</given-names>
            <surname>Ingols</surname>
          </string-name>
          ,
          <article-title>An annotated review of past papers on attack graphs</article-title>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Swiler</surname>
          </string-name>
          ,
          <article-title>A graph-based system for network-vulnerability analysis</article-title>
          ,
          <source>in: NSPW98</source>
          ,
          <year>1998</year>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kaynar</surname>
          </string-name>
          ,
          <article-title>A taxonomy for attack graph generation and usage in network security</article-title>
          ,
          <source>J. Inf. Secur. Appl</source>
          .
          <volume>29</volume>
          (
          <year>2016</year>
          )
          <fpage>27</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Heberlein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bishop</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ceesay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Danforth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Senthilkumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Stallard</surname>
          </string-name>
          ,
          <article-title>A taxonomy for comparing attack-graph approaches</article-title>
          , [Online] http://netsq.com/Documents/AttackGraphPaper.pdf (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>van Benthem</surname>
          </string-name>
          ,
          <source>An Essay on Sabotage and Obstruction</source>
          , Springer Berlin Heidelberg,
          <year>2005</year>
          , pp.
          <fpage>268</fpage>
          -
          <lpage>276</lpage>
          . URL: https://doi.org/10.1007/978-3-540-32254-2_16. doi:10.1007/978-3-540-32254-2_16.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D.</given-names>
            <surname>Catta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leneutre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malvone</surname>
          </string-name>
          ,
          <article-title>Subset sabotage games &amp; attack graphs</article-title>
          ,
          <source>in: WOA</source>
          <year>2022</year>
          ,
          <year>2022</year>
          , pp.
          <fpage>209</fpage>
          -
          <lpage>218</lpage>
          . URL: http://ceur-ws.org/Vol-3261/paper16.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Catta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Di</given-names>
            <surname>Stasio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leneutre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malvone</surname>
          </string-name>
          ,
          <article-title>A Game Theoretic Approach to Attack Graphs</article-title>
          ,
          <year>2023</year>
          . To appear in
          <source>ICAART 2023</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Fagin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Moses</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <source>Reasoning about Knowledge</source>
          , MIT Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>C.</given-names>
            <surname>Dima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tiplea</surname>
          </string-name>
          ,
          <article-title>Model-checking ATL under imperfect information and perfect recall semantics is undecidable</article-title>
          ,
          <source>CoRR abs/1102.4225</source>
          (
          <year>2011</year>
          ). URL: http://arxiv.org/abs/1102.4225.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Belardinelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lomuscio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malvone</surname>
          </string-name>
          ,
          <article-title>An abstraction-based method for verifying</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>