<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1007/978-3-031-18192-4_12</article-id>
      <title-group>
        <article-title>Towards a Coalition Refinement Approach in the Strategic Verification of Multi-Agent Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Angelo Ferrando</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Informatics</institution>
          ,
          <addr-line>Bioengineering</addr-line>
          ,
          <institution>Robotics and Systems Engineering, University of Genoa</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>13616</volume>
      <fpage>7</fpage>
      <lpage>9</lpage>
      <abstract>
        <p>In the context of formal verification of Multi-Agent Systems, it is common to check whether a subset of agents (also called a coalition) can achieve specific goals of interest, usually expressed as temporal properties. However, each coalition is hand-picked, and there is no guarantee that the agents within it will actually cooperate during system execution. This creates a gap between what the agents are assumed to achieve statically and what they can achieve in practice dynamically. In this paper, we explore and lay the foundation for an engineering approach to guide a coalition refinement technique. Multi-Agent Systems are first statically verified with respect to certain coalitions and then revised based on the actual dynamic behaviour of the agents during runtime.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-Agent Systems</kwd>
        <kwd>Model Checking</kwd>
        <kwd>Runtime Monitoring</kwd>
        <kwd>Strategic Reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Multi-Agent Systems (MAS) are distributed systems composed of intelligent components,
defined as agents. Nowadays, software systems are at the centre of our lives and are becoming
more ubiquitous, decentralised, and complex. MAS provide a good abstraction and engineering
methodology for tackling both the theory and the practice of such software systems [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Nonetheless, guaranteeing correctness is hard for monolithic software systems, and it is even harder for distributed ones. The process of testing [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], debugging [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and verifying [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] such systems
can be quite complex.
      </p>
      <p>
        In the research area of MAS, formal verification is especially used to check whether a subset
of agents (also called a coalition) is capable of achieving a set of goals (usually specified through
temporal logics, like Alternating-Time Temporal Logic (ATL) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]). In such scenarios, we talk
about formal verification of strategic properties, since we are interested (in general) in the
existence of strategies for the agents to achieve their own goals. This is usually considered for
a specific set of agents that are assumed to cooperate. Nonetheless, it is not always possible to
predict which agents will cooperate in reality. Because of that, even though we can verify that,
by collaborating in a coalition, agents can indeed achieve their own goals, it is not guaranteed
that they will decide to do so at execution time (when the system is effectively executed).
This choice affects the knowledge we have of the system (as well as of the agents in it) and can
guide a refinement of which (if any) coalition of agents can be used.
      </p>
      <p>In this paper, we present an approach to guide the refinement of coalitions of agents for the
strategic verification of MAS. We outline the foundational steps required and focus on the
monitoring process used to detect when agents stop cooperating at runtime. By doing so, we lay
the groundwork for further work on the combination of runtime monitoring and strategic verification
of MAS.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Among the logics for strategic reasoning, we may find Strategy Logic (SL). SL is a powerful
formalism for strategic reasoning, extensively covered in the work of Mogavero et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. SL
treats strategies as first-order objects, employing existential (∃x) and universal (∀x) quantifiers
over strategy variables to denote the existence of some strategy and the consideration of all strategies in the reasoning
process.
      </p>
      <p>Strategic reasoning encompasses strategy classification into memoryless and memoryful
categories, where memoryless strategies depend solely on the current game state, and memoryful
strategies take into account the entire game history. To connect strategies with specific agents,
SL employs an explicit binding operator (a, x), which binds agent a to strategy x.</p>
      <p>
        Despite its expressiveness, SL’s computational complexity presents challenges. It has been
shown that the model-checking problem for SL becomes non-elementary complete [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and
its satisfiability becomes undecidable [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. To mitigate this, researchers have explored various
fragments of SL.
      </p>
      <p>
        One such fragment is Strategy Logic with Simple-Goals (SL-SG) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], where strategic operators,
binding operators, and temporal operators are combined. Importantly, SL-SG is demonstrated
to strictly subsume ATL and shares a P-Complete model checking problem with ATL [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>Shifting focus to agents’ information, we differentiate between perfect and imperfect
information games [10]. In perfect information games, agents possess complete knowledge of the
game. However, real-world scenarios often involve agents making decisions without access to
all relevant information, akin to situations where some system variables are private [11, 12]. In
game modelling, imperfect information is typically addressed by defining an indistinguishability
relation over game states [11, 10, 13].</p>
      <p>The presence of imperfect information significantly impacts model checking complexity. For
instance, with imperfect information and memoryful strategies, ATL becomes undecidable [14].
To address these challenges, researchers have developed various approaches, including
approximations to perfect information [15, 16, 17], notions of bounded memory [18, 19], and hybrid
techniques [20, 21, 22].</p>
      <p>To the best of our knowledge, the closest work to this paper is [23], where the authors discuss
how to abstract the notion of coalitions from ATL specifications. In [23], the coalitions are
not hard-coded by the user, but instead are automatically synthesised. This is related to the
work presented herein because, in some sense, we can see [23] as a static coalition refinement
method, while in this paper, we focus on a more dynamic refinement approach. Moreover,
in [23], no consideration is given to the actual implementation of the MAS under analysis, while
we consider its execution and guide the coalition refinement accordingly.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Preliminaries</title>
      <p>
        In this section, we recall some preliminary notions. Given a set U, Ū denotes its complement.
We denote the length of a tuple v as |v|, its i-th element as v_i, and its last element v_{|v|} as last(v).
For i ≤ |v|, let v_{≥i} be the suffix v_i, . . . , v_{|v|} of v starting from v_i, and v_{≤i} the prefix v_1, . . . , v_i
of v.
      </p>
      <p><bold>3.1. Model</bold></p>
      <p>
        We start by presenting a formal model for Multi-Agent Systems via concurrent game structures
with imperfect information [
        <xref ref-type="bibr" rid="ref6">6, 24</xref>
        ].
      </p>
      <p>Definition 1. A Concurrent Game Structure with imperfect information (iCGS) is a tuple M =
⟨Ag, AP, S, s_I, {Act_a}_{a ∈ Ag}, {∼_a}_{a ∈ Ag}, d, δ, V⟩ such that:
• Ag = {1, . . . , m} is a nonempty finite set of agents.
• AP is a nonempty finite set of atomic propositions (atoms).
• S ≠ ∅ is a finite set of states, with initial state s_I ∈ S.
• For every a ∈ Ag, Act_a is a nonempty finite set of actions. Let Act = ⋃_{a ∈ Ag} Act_a be the
set of all actions, and ACT = ∏_{a ∈ Ag} Act_a the set of all joint actions.
• For every a ∈ Ag, ∼_a is a relation of indistinguishability between states. That is, given
states s, s′ ∈ S, s ∼_a s′ if s and s′ are indistinguishable for agent a.
• The protocol function d : Ag × S → (2^Act ∖ {∅}) defines the availability of actions so that,
for every a ∈ Ag, s ∈ S, (i) d(a, s) ⊆ Act_a and (ii) s ∼_a s′ implies d(a, s) = d(a, s′).
• The transition function δ : S × ACT → S assigns a successor state s′ = δ(s, α⃗) to each
s ∈ S, for every joint action α⃗ ∈ ACT such that α_a ∈ d(a, s) for every a ∈ Ag.
• V : S → 2^AP is the labelling function.</p>
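      <p>To make Definition 1 concrete, the tuple can be sketched as a small data structure. The encoding below is our own illustrative Python sketch: the class, field names, and the two-state toy game are hypothetical and not taken from the paper.</p>

```python
from dataclasses import dataclass
from itertools import product

# Illustrative encoding of Definition 1; all names here are ours, not the paper's.
@dataclass
class ICGS:
    agents: list        # Ag
    atoms: set          # AP
    states: set         # S
    initial: str        # s_I, the initial state
    actions: dict       # a -> Act_a
    indist: dict        # a -> set of unordered state pairs (the relation ~_a)
    protocol: dict      # (a, s) -> nonempty subset of Act_a (the function d)
    transition: dict    # (s, joint action tuple) -> successor state (delta)
    labelling: dict     # s -> subset of atoms (the function V)

    def legal(self, agent, state):
        return self.protocol[(agent, state)]

    def joint_actions(self, state):
        # Every joint action enabled in `state`, one component per agent.
        return product(*(sorted(self.legal(a, state)) for a in self.agents))

# Toy game: agent 1 can move the system from s0 to s1; agent 2 can only wait
# and cannot distinguish s0 from s1 (so condition (ii) on d holds for it).
toy = ICGS(
    agents=[1, 2],
    atoms={"done"},
    states={"s0", "s1"},
    initial="s0",
    actions={1: {"wait", "go"}, 2: {"wait"}},
    indist={1: set(), 2: {frozenset({"s0", "s1"})}},
    protocol={(1, "s0"): {"wait", "go"}, (1, "s1"): {"wait"},
              (2, "s0"): {"wait"}, (2, "s1"): {"wait"}},
    transition={("s0", ("go", "wait")): "s1",
                ("s0", ("wait", "wait")): "s0",
                ("s1", ("wait", "wait")): "s1"},
    labelling={"s0": set(), "s1": {"done"}},
)
```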
      <p>
        According to Definition 1, a Concurrent Game Structure with imperfect information (iCGS)
characterises how a collection of agents, denoted as Ag, interact. This interaction originates from
an initial state s_I ∈ S and follows the guidance of a transition function δ. The behaviour of this
function is confined by the feasible actions available to agents, which are determined by the
protocol function d. Additionally, we assume that agents might possess incomplete
information about the game. Consequently, in any given state s, agent a regards all states s′ that
are indistinguishable from s with respect to agent a as being epistemically possible [25]. When
each relation ∼_a reduces to the identity, meaning that s ∼_a s′ only when s = s′, the outcome is
a conventional Concurrent Game Structure (CGS) exhibiting perfect information [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>A history h ∈ S⁺ is a finite (non-empty) sequence of states. The indistinguishability relations
are extended to histories in a synchronous, point-wise way, i.e., histories h, h′ ∈ S⁺ are
indistinguishable for agent a ∈ Ag, or h ∼_a h′, if (i) |h| = |h′| and (ii) for all i ≤ |h|, h_i ∼_a h′_i.</p>
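      <p>The synchronous, point-wise lifting of indistinguishability to histories can be sketched as follows; this is a hypothetical helper of our own, assuming the relation is given as a set of unordered state pairs (with identity implicit).</p>

```python
def hist_indist(indist_a, h1, h2):
    """h1 ~_a h2 iff |h1| = |h2| and the two histories are point-wise
    indistinguishable for agent a (the relation is reflexive on states)."""
    if len(h1) != len(h2):          # condition (i): synchronous
        return False
    return all(s == t or frozenset({s, t}) in indist_a   # condition (ii)
               for s, t in zip(h1, h2))
```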
      <sec id="sec-3-1">
        <title>3.2. Syntax</title>
        <p>
          We use ATL* [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] to reason about the strategic abilities of agents.
        </p>
        <p>Definition 2. State (φ) and path (ψ) formulas in ATL* are defined as follows:
φ ::= q | ¬φ | φ ∧ φ | ⟨⟨Γ⟩⟩ψ
ψ ::= φ | ¬ψ | ψ ∧ ψ | Xψ | (ψ U ψ)
where q ∈ AP and Γ ⊆ Ag.</p>
        <p>Formulas in ATL* are all and only the state formulas.</p>
        <p>As usual, a formula ⟨⟨Γ⟩⟩Φ is read as “the agents in coalition Γ have a strategy to achieve Φ”.
The meaning of the temporal operators next (X) and until (U) is standard [26]. The operators [[Γ]], release
(R), eventually (F), and globally (G) can be introduced as usual.</p>
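        <p>The grammar of Definition 2 can be mirrored by a small abstract syntax tree; the encoding below is our own sketch (derived operators such as F would be introduced as abbreviations over it).</p>

```python
from dataclasses import dataclass

# Our own AST sketch for the ATL* grammar of Definition 2.
@dataclass(frozen=True)
class Atom:
    name: str               # q in AP

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Coalition:            # ⟨⟨Γ⟩⟩ψ
    agents: frozenset
    goal: object

@dataclass(frozen=True)
class Next:                 # Xψ
    sub: object

@dataclass(frozen=True)
class Until:                # ψ U ψ'
    left: object
    right: object

# ⟨⟨{1, 2}⟩⟩ (true U done): coalition {1, 2} can eventually reach `done`
phi = Coalition(frozenset({1, 2}), Until(Atom("true"), Atom("done")))
```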
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Semantics</title>
        <p>We assume that agents employ uniform strategies [24], i.e., they perform the same action
whenever they have the same information.</p>
        <sec id="sec-3-2-1">
          <title>Definition 3.</title>
          <p>A uniform perfect recall strategy for agent a ∈ Ag is a function σ_a : S⁺ → Act_a such
that, for all histories h, h′ ∈ S⁺, (i) σ_a(h) ∈ d(a, last(h)) and (ii) h ∼_a h′ implies σ_a(h) = σ_a(h′).</p>
          <p>As per Definition 3, any strategy adopted by agent a necessitates the selection of actions that
are valid for that specific agent. Additionally, whenever two histories appear indistinguishable
to agent a, the same action is expected to be chosen. It is worth noting that, in cases involving
perfect information, condition (ii) is met by any strategy σ_a. Moreover, memoryless (or imperfect
recall) strategies can be obtained by restricting the domain of σ_a to S; in other words,
σ_a : S → Act_a. In the context of an iCGS M, a path π is an infinite sequence of states. The
collection of such paths over M is denoted as Paths. Given a joint strategy Σ_Γ, which
includes an individual strategy for each agent within the coalition Γ, a path π is considered
Σ_Γ-compatible if, for each i ≥ 1, π_{i+1} = δ(π_i, α⃗) for some joint action α⃗, where for every a ∈ Γ,
α_a = σ_a(π_{≤i}), and for each a ∉ Γ, α_a ∈ d(a, π_i). The set of all Σ_Γ-compatible paths starting from
state s is denoted as Paths(s, Σ_Γ).</p>
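          <p>Conditions (i) and (ii) of Definition 3 can be checked mechanically over a finite set of histories. The helper below is a hypothetical sketch of our own: a full check would enumerate histories up to some bound, and histories are encoded as tuples of states.</p>

```python
from itertools import product

def is_uniform(strategy, protocol, indist_a, agent, histories):
    """Check Definition 3 for `agent` on the given finite set of histories:
    (i) every chosen action is legal in the last state of the history;
    (ii) indistinguishable histories are mapped to the same action."""
    for h in histories:
        if strategy[h] not in protocol[(agent, h[-1])]:   # condition (i)
            return False
    for h1, h2 in product(histories, repeat=2):
        pointwise = len(h1) == len(h2) and all(
            s == t or frozenset({s, t}) in indist_a for s, t in zip(h1, h2))
        if pointwise and strategy[h1] != strategy[h2]:    # condition (ii)
            return False
    return True
```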
          <p>Now, we have all the ingredients to give the semantics of ATL*.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>Definition 4.</title>
          <p>The satisfaction relation |= for an iCGS M, state s ∈ S, path π ∈ Paths, atom q ∈ AP,
and ATL* formula φ is defined as follows (clauses for Boolean connectives are immediate and
thus omitted):
(M, s) |= q iff q ∈ V(s);
(M, s) |= ⟨⟨Γ⟩⟩ψ iff for some joint strategy Σ_Γ, for all π ∈ Paths(s, Σ_Γ), (M, π) |= ψ;
(M, π) |= φ iff (M, π_1) |= φ;
(M, π) |= Xψ iff (M, π_{≥2}) |= ψ;
(M, π) |= ψ U ψ′ iff for some i ≥ 1, (M, π_{≥i}) |= ψ′, and
for all 1 ≤ j &lt; i, (M, π_{≥j}) |= ψ.</p>
          <p>We say that formula φ is true in an iCGS M, or M |= φ, if (M, s_I) |= φ. Now, we state the
model checking problem.</p>
          <p>Definition 5. Given an iCGS M and a formula φ, the model checking problem concerns
determining whether M |= φ.</p>
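          <p>For intuition, in the memoryless perfect-information setting the model checking problem of Definition 5 can be solved by brute force for simple goal shapes. The sketch below, entirely our own encoding and not the paper's algorithm, checks a property of the shape ⟨⟨Γ⟩⟩F goal by enumerating every memoryless strategy of the coalition and verifying that all paths induced by adversarial opponents reach the goal.</p>

```python
from itertools import product

def check_coalition_eventually(states, agents, coalition, protocol, delta, goal, s0):
    """Brute-force ⟨⟨coalition⟩⟩F goal on a tiny perfect-information CGS."""
    others = [a for a in agents if a not in coalition]

    def successors(s, strat):
        # Coalition actions are fixed by the strategy; opponents range freely.
        fixed = {a: strat[(a, s)] for a in coalition}
        free = [sorted(protocol[(a, s)]) for a in others]
        succs = set()
        for combo in product(*free):
            move = dict(zip(others, combo))
            move.update(fixed)
            succs.add(delta[(s, tuple(move[a] for a in agents))])
        return succs

    def forces_goal(strat):
        # Every path from s0 must hit `goal`: equivalently, the goal-avoiding
        # part of the induced graph reachable from s0 contains no cycle.
        GREY, BLACK = 1, 2
        colour = {}
        def dfs(s):
            if s in goal:
                return True
            if colour.get(s) == GREY:
                return False        # cycle avoiding the goal
            if colour.get(s) == BLACK:
                return True
            colour[s] = GREY
            ok = all(dfs(t) for t in successors(s, strat))
            colour[s] = BLACK
            return ok
        return dfs(s0)

    # One legal action per (coalition agent, state): a memoryless strategy.
    keys = [(a, s) for a in sorted(coalition) for s in sorted(states)]
    return any(forces_goal(dict(zip(keys, choice)))
               for choice in product(*(sorted(protocol[k]) for k in keys)))
```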
          <p>
            Given that the interpretation presented in Definition 4 corresponds to the conventional
understanding of ATL* [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], it is a recognised fact that verifying ATL* through model checking
against iCGSs characterised by imperfect information and perfect recall is an undecidable
problem [27]. Since in this paper we are interested in proposing an engineering approach to
revise the coalitions used in the verification of ATL* formulas, we assume to be always in the
decidable fragments, that is, CGSs with perfect recall strategies, or iCGSs with imperfect recall
strategies.
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Towards a coalition refinement approach</title>
      <p>In this section, we overview our envisaged engineering methodology. It consists of the following
steps (as depicted in Figure 1). First, an iCGS (representing a MAS) is verified against one (or
multiple) ATL* properties. Naturally, this requires choosing the coalition we assume the agents
will be in. Afterwards, by verifying these properties, we extract the strategies employed by the
agents in the coalitions (i.e., the strategies that, if properly enacted, enable the agents to achieve
their temporal goals). Once these strategies are extracted, we can synthesise the corresponding
monitors to check at execution time whether the agents adhere to the winning strategies or not.
If that is the case, then the approach concludes. However, if at least one agent is not following
any of the winning strategies as intended (i.e., such an agent is not collaborating with the agents
in its coalition), then two outcomes need to be reported. Firstly, the temporal objective related to
the compromised coalition is no longer guaranteed to be achieved (this information is available
at runtime while the system is still operational and can be used to trigger fail-safe behaviours).
Secondly, the coalition that has been compromised at runtime is updated accordingly for further
verification rounds.</p>
      <p>Remark 1. Note that, when solving the model checking problem, we only receive a Boolean result,
indicating whether the formal specification of interest is satisfied in the model under analysis or
not. However, in our proposed approach, we envision utilising model checking to extract strategies,
similar to what can be accomplished with tools like the MCMAS model checker. This approach
extends beyond conventional formal verification and moves towards formal synthesis. Consequently,
if formal verification is employed, an additional engineering step becomes necessary to reconstruct
the actual joint winning strategy, rather than solely relying on the Boolean verification outcome.</p>
      <p>Please note that in Figure 1 and in the rest of the section, we consider only strategic properties
with a single strategic operator. This choice is made to enhance readability and serves as a
foundation for a more comprehensive approach (which will require further study, as we will
discuss in Section 5).</p>
      <p>We outline the steps envisioned for our approach, deferring detailed analysis for future
exploration and research.</p>
      <p>[Figure 1: Overview of the approach. ATL* formulas φ_1, φ_2, . . . and an iCGS are passed to model checking; on success (⊤), the winning joint strategies Σ_Γ1, Σ_Γ2, . . . are extracted and turned into monitors, which observe the actions and propositions of the Multi-Agent System; on a violation (⊥), the coalitions are refined.]</p>
      <p>1. Step i: Formal Verification. The initial stage of our methodology involves formal
verification. Specifically, we verify one or multiple ATL* properties against an iCGS, representing
the model of the Multi-Agent System under analysis. This verification is accomplished by
solving the corresponding model checking problem, as defined in Definition 5.
2. Step ii: Formal Synthesis. Once the verification step is completed, the joint winning
strategies Σ_Γ can be extracted and analysed, a task that can be performed using the
MCMAS model checker.
3. Step iii: Strategy Violation Detection. With the set Σ_Γ of winning strategies in hand, we
synthesise corresponding monitors to assess the agents’ runtime conformance. These
monitors check whether the agents adhere to their winning strategies, as extracted in the
previous step.
4. Step iv: Coalition Revision. In case a violation is detected, this information can be
leveraged to adjust the agent coalitions employed during the static verification phase. This
adjustment is based on the observation that certain agents may not have adhered to their
winning strategies, potentially jeopardising the attainment of the temporal goals.</p>
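      <p>Steps iii and iv can be sketched as a simple monitor that replays logged events against an extracted memoryless winning strategy and removes non-conforming agents from the coalition. The class, the event format, and all names below are our own assumptions, not MCMAS output.</p>

```python
# Our own sketch of Steps iii-iv: check logged agent actions against an
# extracted (memoryless) winning strategy and drop non-conforming agents.
class StrategyMonitor:
    def __init__(self, coalition, strategy):
        self.coalition = set(coalition)   # Γ
        self.strategy = strategy          # (agent, state) -> expected action
        self.violators = set()

    def observe(self, agent, state, action):
        """Feed one logged event; returns False once the coalition is compromised."""
        if agent in self.coalition and self.strategy.get((agent, state)) != action:
            self.violators.add(agent)     # this agent stopped cooperating
        return not self.violators

    def refined_coalition(self):
        # Step iv: revised coalition for the next verification round.
        return self.coalition - self.violators
```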
      <p>Up to this point, our primary focus has been on verifying the MAS under analysis and
assessing whether the agents (integral to the verification) adhere to expected behaviour (i.e.,
whether they enact winning strategies or not). However, there are two crucial aspects that
require consideration. Firstly, we need to address how the monitors will gather information
about the MAS. Secondly, once a monitor reports a violation, we must determine the appropriate
course of action.</p>
      <p>The first aspect is quite pragmatic, as it pertains to the practical verification of the
MAS runtime execution. To accomplish this, we require a method to map the state of the MAS
to the iCGS state, as well as a means to track the actions executed by the agents within the MAS.
The latter is a straightforward process and can be achieved by logging every time an agent
performs an action during runtime. This logging can be implemented by instrumenting the
software system, much like in runtime verification practices [28], where additional instructions
(typically for logging) are added to the source code. Consequently, when the instrumented
MAS is executed, it produces additional information that the monitors utilise to assess the
agents’ adherence to any winning strategies. Instrumentation can also be employed to gather
information about the agents’ states, such as their beliefs. This information can be mapped to
the corresponding atomic propositions represented in the iCGS at the verification stage. This
alignment allows the runtime execution of the MAS to align with its abstract representation (the
model). It is worth noting that this mapping step may depend on domain-specific knowledge
and may necessitate at least partial hard-coding.</p>
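      <p>The logging described above can be sketched as lightweight instrumentation, e.g., a decorator that records every executed action for the monitors. The Agent class, the act method, and the log format are all hypothetical and not tied to any agent framework.</p>

```python
import functools

# Our own instrumentation sketch: wrap an agent's action method so that every
# executed action is logged as (agent id, observed state, chosen action).
log = []

def instrumented(action_method):
    @functools.wraps(action_method)
    def wrapper(self, *args, **kwargs):
        result = action_method(self, *args, **kwargs)
        log.append((self.name, self.state, result))   # event for the monitors
        return result
    return wrapper

class Agent:
    def __init__(self, name, state):
        self.name, self.state = name, state

    @instrumented
    def act(self):
        return "wait"
```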
      <p>The second aspect we need to address is the coalition revision process. This is, in
our opinion, the most significant and challenging aspect of the approach. Indeed, the revision
of coalitions can impact both the verification of the MAS and the agents’ behaviour during
runtime. In this paper, we have laid the foundation and outlined a potential road-map for this
approach, with much more to be developed. However, we firmly believe that by combining
runtime monitoring and formal verification for strategic reasoning, we can achieve highly
flexible and reliable MAS. Runtime monitoring can detect violations of (winning) strategies
and, consequently, of agent coalitions. This information can be used to revise the coalitions
employed during static verification of strategic properties in the MAS. Furthermore, the presence
of monitors that assess agent behaviour during runtime enhances the system’s reliability [29, 30].
Indeed, when strategy violations are detected, we can trigger fail-safe behaviours to assist the
agents in still achieving their respective temporal goals.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and Future Work</title>
      <p>In this paper, we have highlighted the steps of a potential approach to utilise runtime monitoring
to guide coalition revision in the formal verification of strategic properties in MAS. Our primary
focus has been on presenting the core idea and emphasising the significance of the resulting
approach. We have introduced the high-level concept and outlined its steps.</p>
      <p>Given that this paper serves as a foundational step toward the development of a coalition
refinement methodology, we hope that the insights presented herein can provide a valuable
starting point. Our envisioned future work involves the actual design, implementation and
further investigation of the implications of coalition refinement on MAS verification. In this
concise paper, we have only scratched the surface, but we firmly believe that deeper exploration
will be beneficial for advancing the formal verification of strategic properties in MAS.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>I would like to express my heartfelt gratitude to Vadim Malvone for his invaluable
contributions to this paper. His insightful discussions and thoughtful comments have played a
pivotal role in shaping the content and direction of this work. Vadim’s expertise and dedication
have been instrumental, and I am truly grateful for his support throughout this research
endeavour. His input has enriched the quality of this paper, and I am honoured to acknowledge
his significant role in its development.</p>
      <p>simple goals: Tractable reasoning about strategies, in: S. Kraus (Ed.), Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, ijcai.org, 2019, pp. 88–94. URL: https://doi.org/10.24963/ijcai.2019/13. doi:10.24963/ijcai.2019/13.</p>
      <p>[10] J. H. Reif, The complexity of two-player games of incomplete information, J. Comput. Syst. Sci. 29 (1984) 274–301.</p>
      <p>[11] O. Kupferman, M. Y. Vardi, Module checking revisited, in: CAV’97, Springer, 1997, pp. 36–47.</p>
      <p>[12] R. Bloem, K. Chatterjee, S. Jacobs, R. Könighofer, Assume-guarantee synthesis for concurrent reactive programs with partial information, in: TACAS, 2015, pp. 517–532.</p>
      <p>[13] A. Pnueli, R. Rosner, Distributed reactive systems are hard to synthesize, in: FOCS, 1990, pp. 746–757.</p>
      <p>[14] C. Dima, F. Tiplea, Model-checking ATL under Imperfect Information and Perfect Recall Semantics is Undecidable, Technical Report, 2011.</p>
      <p>[15] F. Belardinelli, A. Lomuscio, V. Malvone, An abstraction-based method for verifying strategic properties in multi-agent systems with imperfect information, in: The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, AAAI Press, 2019, pp. 6030–6037. URL: https://doi.org/10.1609/aaai.v33i01.33016030. doi:10.1609/aaai.v33i01.33016030.</p>
      <p>[16] F. Belardinelli, V. Malvone, A three-valued approach to strategic abilities under imperfect information, in: D. Calvanese, E. Erdem, M. Thielscher (Eds.), Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, KR 2020, Rhodes, Greece, September 12-18, 2020, 2020, pp. 89–98. URL: https://doi.org/10.24963/kr.2020/10. doi:10.24963/kr.2020/10.</p>
      <p>[17] F. Belardinelli, A. Ferrando, V. Malvone, An abstraction-refinement framework for verifying strategic properties in multi-agent systems with imperfect information, Artif. Intell. 316 (2023). URL: https://doi.org/10.1016/j.artint.2022.103847. doi:10.1016/j.artint.2022.103847.</p>
      <p>[18] F. Belardinelli, A. Lomuscio, V. Malvone, Approximating perfect recall when model checking strategic abilities, in: M. Thielscher, F. Toni, F. Wolter (Eds.), Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, KR 2018, Tempe, Arizona, 30 October - 2 November 2018, AAAI Press, 2018, pp. 435–444. URL: https://aaai.org/ocs/index.php/KR/KR18/paper/view/18010.</p>
      <p>[19] F. Belardinelli, A. Lomuscio, V. Malvone, E. Yu, Approximating perfect recall when model checking strategic abilities: Theory and applications, J. Artif. Intell. Res. 73 (2022) 897–932. URL: https://doi.org/10.1613/jair.1.12539. doi:10.1613/jair.1.12539.</p>
      <p>[20] A. Ferrando, V. Malvone, Strategy RV: A tool to approximate ATL model checking under imperfect information and perfect recall, in: F. Dignum, A. Lomuscio, U. Endriss, A. Nowé (Eds.), AAMAS ’21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021, ACM, 2021, pp. 1764–1766. URL: https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p1764.pdf. doi:10.5555/3463952.3464230.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>R. De Benedictis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Castiglioni</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Ferraioli</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Malvone</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Maratea</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Scala</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Serafini</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <string-name>
            <surname>Serina</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Tosello</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Umbrico</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Vallati</surname>
          </string-name>
          ,
          <article-title>Preface to the italian workshop on planning and scheduling, rcra workshop on experimental evaluation of algorithms for solving problems with combinatorial explosion, and spirit workshop on strategies, prediction, interaction, and reasoning in italy (ips-rcra-spirit</article-title>
          <year>2023</year>
          ),
          <source>in: Proceedings of the Italian Workshop on Planning and Scheduling</source>
          , RCRA Workshop on
          <article-title>Experimental evaluation of algorithms for solving problems with combinatorial explosion, and</article-title>
          SPIRIT Workshop on Strategies, Prediction, Interaction, and
          <article-title>Reasoning in Italy (IPS-RCRA-SPIRIT 2023) co-located with 22th International Conference of the Italian Association for Artificial Intelligence (AI* IA</article-title>
          <year>2023</year>
          ),
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bergenti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-P.</given-names>
            <surname>Gleizes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zambonelli</surname>
          </string-name>
          ,
          <article-title>Methodologies and software engineering for agent systems: the agent-oriented software engineering handbook</article-title>
          , volume
          <volume>11</volume>
          ,
          <publisher-name>Springer Science &amp; Business Media</publisher-name>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Winikof</surname>
          </string-name>
          ,
          <article-title>BDI agent testability revisited</article-title>
          ,
          <source>Auton. Agents Multi Agent Syst</source>
          .
          <volume>31</volume>
          (
          <year>2017</year>
          )
          <fpage>1094</fpage>
          -
          <lpage>1132</lpage>
          . URL: https://doi.org/10.1007/s10458-016-9356-2. doi:
          <volume>10</volume>
          .1007/ s10458-016-9356-2.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Winikof</surname>
          </string-name>
          ,
          <article-title>Debugging agent programs with why?: Questions</article-title>
          , in
          <source>: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems</source>
          , AAMAS 2017,
          <string-name>
            <given-names>São</given-names>
            <surname>Paulo</surname>
          </string-name>
          , Brazil, May 8-
          <issue>12</issue>
          ,
          <year>2017</year>
          , ACM,
          <year>2017</year>
          , pp.
          <fpage>251</fpage>
          -
          <lpage>259</lpage>
          . URL: http://dl.acm.org/citation. cfm?id=
          <fpage>3091166</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Webster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Bordini</surname>
          </string-name>
          ,
          <article-title>Model checking agent programming languages</article-title>
          ,
          <source>Autom. Softw. Eng.</source>
          <volume>19</volume>
          (
          <year>2012</year>
          )
          <fpage>5</fpage>
          -
          <lpage>63</lpage>
          . URL: https://doi.org/10.1007/s10515-011-0088-x. doi:10.1007/s10515-011-0088-x.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Alur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Henzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kupferman</surname>
          </string-name>
          ,
          <article-title>Alternating-time temporal logic</article-title>
          ,
          <source>J. ACM</source>
          <volume>49</volume>
          (
          <year>2002</year>
          )
          <fpage>672</fpage>
          -
          <lpage>713</lpage>
          . URL: https://doi.org/10.1145/585265.585270. doi:10.1145/585265.585270.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mogavero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Perelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <article-title>Reasoning about strategies: On the model-checking problem</article-title>
          ,
          <source>ACM Trans. Comp. Log.</source>
          <volume>15</volume>
          (
          <year>2014</year>
          )
          <fpage>34:1</fpage>
          -
          <lpage>34:47</lpage>
          . URL: http://doi.acm.org/10.1145/2631917. doi:10.1145/2631917.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Mogavero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Perelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Vardi</surname>
          </string-name>
          ,
          <article-title>Reasoning about strategies: on the satisfiability problem</article-title>
          ,
          <source>Log. Methods Comput. Sci.</source>
          <volume>13</volume>
          (
          <year>2017</year>
          ). URL: https://doi.org/10.23638/LMCS-13(1:9)2017. doi:10.23638/LMCS-13(1:9)2017.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F.</given-names>
            <surname>Belardinelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Jamroga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kurpiewski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malvone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Murano</surname>
          </string-name>
          , Strategy logic with
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>