<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>High-Performance Computation on a Rust-based distributed ABM engine</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniele De Vinco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Tranquillo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessia Antelmi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carmine Spagnuolo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vittorio Scarano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Salerno</institution>
          ,
          <addr-line>Salerno</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università degli Studi di Torino</institution>
          ,
          <addr-line>Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>An agent-based model (ABM) is a computational model for simulating autonomous agents' actions and interactions to understand a system's behavior and what governs its outcomes. When the volume of data or the number of agents grows, or when multiple runs are necessary, agent-based simulations generally become computationally costly. Therefore, adopting different computing paradigms, such as the distributed one, is essential to manage long-running simulations. The main problem with this approach is finding a way to distribute and balance the simulation field so that the agents can move from one machine to another with the least amount of synchronization overhead. Based on our experiences, we present a Rust-based ABM engine capable of distributing models on high-performance computing resources, gaining remarkable speedup against the sequential version.</p>
      </abstract>
      <kwd-group>
        <kwd>High-performance computing</kwd>
        <kwd>Agent-based modeling</kwd>
        <kwd>Distributed computing</kwd>
        <kwd>Simulation</kwd>
        <kwd>Complex systems</kwd>
        <kwd>Computational social science</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Modeling real-world phenomena is incredibly challenging due to the intricate interactions among
numerous interconnected elements. Understanding these systems is nearly impossible when they are
viewed in isolation. Consequently, such systems are often referred to as complex systems, though a
precise definition of complexity remains elusive [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These systems typically share features such as
non-linearity, decentralized control, and feedback mechanisms. In recent years, Computational Science
has leveraged data-intensive computing and analysis to study such issues. ABMs offer a bottom-up
approach for analyzing complex systems, allowing modelers to design the behaviors of individual
agents (e.g., members of a population) and the environments in which they operate. The interactions
among these agents in the simulated environment produce emergent properties and phenomena that
the modeler aims to examine and understand. These three components (behavior, environment, and
interactions) are the building blocks of an ABM and have proven to be very effective in modeling
different scenarios across a vast corpus of fields [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        ABMs are also a recurring theme in High-Performance Computing (HPC), since these models are
designed to mimic social interactions, the global economy, or natural phenomena. In addition, they can
help predict potential outcomes involving numerous entities. However, when the number of agents
grows considerably, traditional ABM engines falter because the computational power required by a single
execution becomes an unbearable limitation. The literature states that ABMs can intersect with HPC
in two distinct ways: through the outer and inner loops [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The former usually describes optimization
techniques such as model parameters sweeping [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The latter, which is also the focus of this paper,
typically involves distributing a model across different computational nodes using de facto standards
such as MPI (Message Passing Interface) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] or OpenMP (Open Multiprocessing) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        This paper presents the early stages of implementing a distributed version of krABMaga, an ABM
engine written entirely in Rust. We have drawn on our experience in ABMs and the distributed
computing field to enhance the capabilities of the krABMaga engine, assessing the possibilities and
opportunities for deploying a highly optimized tool to an HPC cluster with minimal effort and achieving
good results [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        ABMs have been extensively studied and applied across various domains, providing valuable insights
into complex systems through simulations [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
]. However, while there is a substantial body of work
focused on ABMs in general, research into their distributed versions is relatively limited, with existing
tools either outdated or no longer maintained.
      </p>
      <p>
        Generally, distributed ABMs involve partitioning the simulation across multiple computational nodes
to handle larger-scale models or to achieve higher performance. This approach can significantly improve
the scalability and efficiency of simulations by leveraging parallel processing and distributed computing
resources. Despite its potential, the distributed version of ABMs presents additional challenges, such
as managing communication between nodes, ensuring data consistency, and effectively balancing the
computational load. Several frameworks have explored a distributed approach, such as:
Mason [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Mason is a Java-based ABM simulation toolkit. A distributed version of this library, known
as D-Mason [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], was developed to enhance performance. It uses a Quad-Tree data structure to manage
the simulation field.
      </p>
      <p>
        Flame [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Flame is an ABM system designed to run on a wide range of heterogeneous computing
platforms. It offers a formal framework for model creation using the XMML language, a variant of XML,
along with automatic parallel code generation.
      </p>
      <p>
        Flame-GPU [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Flame-GPU extends Flame to support GPU translation. It simplifies GPU
programming by using an API that leverages the FLAME template to generate CUDA code for target GPU devices,
eliminating the need for users to engage directly with GPU programming languages or optimization
techniques.
      </p>
      <p>Pandora [13]. Pandora is an ABM framework for large-scale distributed simulations. It provides
two identical programming interfaces in different programming languages, one of them including the
automatic generation of parallel and distributed code.</p>
      <p>Repast-HPC [14]. Repast-HPC is a component of the Repast suite, specifically designed for
large-scale simulations in C++. It is tailored for execution on large computing clusters and
supercomputers.</p>
      <p>
        This work revolves around a distributed version of krABMaga, an open-source discrete-event
simulation engine written in Rust for developing ABM simulations [
        <xref ref-type="bibr" rid="ref4">4, 15</xref>
        ]. The distributed engine,
as described in the next sections, simplifies the simulation development process by abstracting the
complexities involved in distributed computing. Therefore, this implementation allows modelers to
focus on the core logic of their simulations without getting bogged down by technical issues and
communication layers. krABMaga aims to be an intuitive toolbox inspired by the popular MASON
library, particularly its modular design that separates the simulation and visualization subsystems. The
Rust programming language underpins this approach [16], thanks to its main principles:
• Performance: Rust offers both speed and memory efficiency. Its memory model eliminates the
need for a garbage collector at runtime, making it well-suited for critical services, embedded
devices, and seamless integration with other languages.
• Reliability: It exploits its ownership and borrowing system to guarantee memory and thread
safety, removing many classes of bugs at compile time.
• Productivity: The language ships with great documentation, a friendly compiler with useful
error messages, and a fast-growing community that has written a large corpus of highly optimized
crates (libraries in the Rust ecosystem).
      </p>
      <p>Rust stands out for performance comparable to C, which can shorten the duration of a single simulation,
and for a programming model that ensures no memory-related bugs occur throughout
long-running experiments, hence enabling simulation reliability [17]. Every ABM engine requires each
simulation to have at least two key components: a state and an agent. The state describes the
environment of the simulation, which contains different elements, such as a field and its dimensions,
the number of steps, the initial number of agents, and others, while the agent contains the behavior
of the population inside the simulation. Thanks to this decoupled structure, the simulation field
can be modified without impacting other parts of the simulation, such as the agent behavior or the
simulation parameters. Finally, the framework provides additional functionalities, such as monitoring,
reproducibility, and visualization tools.</p>
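      <p>The state/agent decoupling described above can be sketched as follows. This is a minimal illustration in Rust, the engine's language; the trait and type names (SimState, Agent, Walker) are ours and do not reproduce krABMaga's actual API.</p>

```rust
// Minimal sketch of the state/agent decoupling: the state owns the
// environment (field dimensions, step counter), while an agent only
// encodes individual behavior. Names are illustrative, not krABMaga's API.

/// Environment of the simulation.
struct SimState {
    width: f32,
    height: f32,
    step: u64,
}

/// Behavior of one individual, decoupled from the environment.
trait Agent {
    fn step(&mut self, state: &SimState);
}

struct Walker {
    x: f32,
    y: f32,
    dx: f32,
    dy: f32,
}

impl Agent for Walker {
    fn step(&mut self, state: &SimState) {
        // Move and wrap on the toroidal field held by the state.
        self.x = (self.x + self.dx).rem_euclid(state.width);
        self.y = (self.y + self.dy).rem_euclid(state.height);
    }
}

/// The scheduler loop only sees the trait, so the field can change
/// without touching agent behavior, and vice versa.
fn run(state: &mut SimState, agents: &mut [impl Agent], steps: u64) {
    for _ in 0..steps {
        for a in agents.iter_mut() {
            a.step(state);
        }
        state.step += 1;
    }
}
```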
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>A key aspect of ABM computation is the communication between agents. Typically, each simulation
agent needs to identify its neighbors to exchange information and perform its tasks. In sequential
execution, this process is routine and has a moderate impact on performance. However, in distributed
execution, where neighbors may be located on different machines, discovering and interacting with
them can become a significant performance bottleneck. Moreover, when an agent moves between
partitions or is removed from the simulation, the process must proceed seamlessly as if all agents were
in a single field. Addressing these challenges is crucial when developing an efficient distributed system
that can manage multiple partitions of the same field across remote machines.</p>
      <p>To facilitate the distribution of the simulation, we began by modifying an existing field in our
framework. Our first attempt takes as a reference the Field2D implementation in the krABMaga
repository (https://github.com/krABMaga/krABMaga/blob/main/src/engine/fields/field_2d.rs),
which is the standard field where an agent can move on a continuous 2D space.
Sequential structure. The Field2D uses a two-dimensional toroidal grid as a simulation field,
characterized by an origin point (x, y), a width, and a height. Each grid coordinate identifies a cell in which an
agent can reside and interact with other agents.</p>
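      <p>A toroidal field implies that distances wrap around the edges, so the nearest copy of a neighbor may lie across the border. A minimal sketch of this metric (function names are ours, not the library's):</p>

```rust
// Shortest distance along one axis of a toroidal field: the direct
// path or the wrap-around path, whichever is shorter.
fn toroidal_delta(a: f32, b: f32, extent: f32) -> f32 {
    let d = (a - b).abs() % extent;
    d.min(extent - d)
}

// Euclidean distance between two points on a toroidal 2D field,
// as a Field2D-like structure would need for neighbor queries.
fn toroidal_distance(p: (f32, f32), q: (f32, f32), width: f32, height: f32) -> f32 {
    let dx = toroidal_delta(p.0, q.0, width);
    let dy = toroidal_delta(p.1, q.1, height);
    (dx * dx + dy * dy).sqrt()
}
```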
      <p>Distributed structure. K-Dimensional Tree (K-D Tree) data structures are widely used in distributed
computing, particularly for tasks like clustering and nearest-neighbor search when a scalable solution
is required [18]. A K-D Tree is a binary tree in which each internal node has exactly two children, and
nodes can be split until the desired number of leaves is reached. For each pair of children created, the parent node
keeps references to them, allowing us to reach any leaf, starting from the root, in a short time [19]. We
implemented a customized version of a K-D Tree, where each child node stores references to all other
nodes created in the tree, referred to as blocks. Each block represents a segment of the simulation field
and includes an ID corresponding to the process rank it will be assigned to, its origin point (x, y), as
well as its width and height, as shown in Figure 1. By maintaining references to every child node, each
node gains access to comprehensive information about all nodes, facilitating efficient neighbor search
and synchronization operations. Although this modification increases the physical space required, the
resulting efficiency in communication justifies this trade-off. Moreover, this approach remains practical
since the number of machines, and thus partitions, does not grow as rapidly as the number of
agents.</p>
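      <p>The partitioning step above can be sketched as a recursive split of the field into one block per process rank, alternating axes in K-D Tree fashion. This is an illustrative reconstruction under the assumption that the rank count is a power of two; the struct and function names are ours, not the engine's.</p>

```rust
// Sketch of the block partitioning: the field is split recursively
// (k-d style, alternating the split axis) until one block per process
// rank exists. Every process keeps the full block list, so any rank can
// locate any partition without extra communication.
#[derive(Clone, Debug, PartialEq)]
struct Block {
    id: usize,  // process rank this block is assigned to
    x: f32,     // origin x
    y: f32,     // origin y
    width: f32,
    height: f32,
}

fn split(x: f32, y: f32, w: f32, h: f32, n: usize, vertical: bool, out: &mut Vec<Block>) {
    if n == 1 {
        let id = out.len();
        out.push(Block { id, x, y, width: w, height: h });
        return;
    }
    // Halve along the alternating axis; assumes n is a power of two.
    if vertical {
        split(x, y, w / 2.0, h, n / 2, false, out);
        split(x + w / 2.0, y, w / 2.0, h, n / 2, false, out);
    } else {
        split(x, y, w, h / 2.0, n / 2, true, out);
        split(x, y + h / 2.0, w, h / 2.0, n / 2, true, out);
    }
}

/// Build the shared block list for `ranks` processes.
fn partition(width: f32, height: f32, ranks: usize) -> Vec<Block> {
    let mut blocks = Vec::new();
    split(0.0, 0.0, width, height, ranks, true, &mut blocks);
    blocks
}
```

For example, partitioning a 100×100 field across 4 ranks yields four 50×50 blocks with IDs 0 through 3, matching the rank each block is assigned to.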
      <p>After the main process has computed each block, it sends a reference to each block to all other
processes. These processes then receive and store the reference in local memory, allowing them to
communicate using the associated ID when necessary.</p>
      <p>When an agent changes position during the simulation and exits from a block’s border, it must be
sent to another block. Since every block knows the exact size and ID of all the others, when an agent
moves outside its border, it can easily calculate the ID of the processor that will be responsible for the
agent based on its position. To make this exchange possible, the object is saved into an array whenever
an agent moves outside the border of its process field. At the end of every step, every process sends all
the agents that have moved off their partition to their respective neighbors and receives all agents sent
by their neighbors. This phase is preceded by a message exchange phase, where each process sends
the number of agents it will exchange to each neighbor and receives the corresponding value from all its neighbors.
When this phase is complete, each process allocates a buffer with adequate slots for the incoming
agents. This communication phase is handled using MPI collective operations, such as scatter and
gather, combined with non-blocking send and receive operations to exchange data across
processors efficiently.</p>
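      <p>The staging step that precedes the exchange can be sketched as follows: look up the owner of an agent's new position in the shared block list and queue the agent for that rank. The MPI calls themselves are omitted here; the per-rank queue lengths are exactly what the count-exchange phase would send before the payload. Type and function names are illustrative, not krABMaga's API.</p>

```rust
// Sketch of migration staging: agents that left `my_rank`'s block are
// grouped into per-rank outboxes. The outbox lengths feed the count
// exchange; the outbox contents feed the payload exchange.
#[derive(Clone)]
struct Block { id: usize, x: f32, y: f32, width: f32, height: f32 }

#[derive(Clone, Debug, PartialEq)]
struct AgentData { id: u64, x: f32, y: f32 }

/// Find the rank owning position (x, y), using the shared block list.
fn owner(blocks: &[Block], x: f32, y: f32) -> Option<usize> {
    blocks.iter()
        .find(|b| x >= b.x && x < b.x + b.width && y >= b.y && y < b.y + b.height)
        .map(|b| b.id)
}

/// Queue agents that moved off `my_rank`'s partition, one outbox per rank.
fn stage_migrations(blocks: &[Block], my_rank: usize, agents: &[AgentData]) -> Vec<Vec<AgentData>> {
    let mut outbox = vec![Vec::new(); blocks.len()];
    for a in agents {
        if let Some(dest) = owner(blocks, a.x, a.y) {
            if dest != my_rank {
                outbox[dest].push(a.clone());
            }
        }
    }
    outbox
}
```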
      <p>Additionally, in many simulations, an agent needs to be aware of nearby agents within a specific
area of interest (AOI), usually defined by a fixed-size radius (see Figure 2). This can be particularly
challenging in distributed simulations because an agent’s AOI may be divided across different processes.
To accomplish this task, it is essential first to identify the agents that could be neighbors of agents from
other processes. This process is facilitated by Halo Regions, an example of which is illustrated in
Figure 2. A Halo Region is a fixed-length zone located near the borders. When an agent moves inside
the field and enters a Halo Region, a copy of the agent is placed inside an array of agents, indexed by the
Halo Region that keeps it. At the end of every step, these agents are sent to the potential neighbors and
received from each one. This operation follows the same principles as the exchange between processors.
Once all agents have been received, the process can calculate the neighbors of each agent, considering
both the local agents and the received agents that fall within the AOI.</p>
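      <p>The test that decides whether an agent must be copied into a Halo Region reduces to a border-distance check: any agent within the AOI radius of a block border may be a neighbor of agents on the adjacent partition. A minimal sketch (names are ours, not the engine's):</p>

```rust
// Sketch of the halo-region membership test: an agent whose position
// lies within `radius` of any border of its block must be copied to the
// neighboring partitions, since it may fall inside their agents' AOI.
struct Block { x: f32, y: f32, width: f32, height: f32 }

/// True when (x, y) falls inside the halo strip of width `radius`
/// along any border of `block`.
fn in_halo(block: &Block, radius: f32, x: f32, y: f32) -> bool {
    x - block.x < radius
        || (block.x + block.width) - x < radius
        || y - block.y < radius
        || (block.y + block.height) - y < radius
}
```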
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>To evaluate the proposed implementation, we have chosen Flockers as the main benchmark model [20].
Tests are based on known parameters for the model [15]. The number of steps for each simulation
is fixed to 200. The model is evaluated using the configurations for the number of agents and field
dimensions listed in Table 1.</p>
      <p>The code to reproduce the model is available in the GitHub repository
(https://github.com/krABMaga/examples/tree/main/flockers_mpi). We have built a cluster on
Microsoft Azure (https://azure.microsoft.com) to make the benchmark as fair as possible, eliminating the noise from background
activities on local machines. Each virtual machine on the Azure cluster was created within a Proximity
Placement Group and has the following specifications:
• Size: Standard_DS1_v2
• Number of vCPUs: 1
• CPU family: Intel Xeon Platinum 8370C (or similar)
• Memory: 3.5 GiB
• OS: Ubuntu Server 22.04 LTS
• Disk: 8 GB Standard SSD</p>
      <p>All numerical results, obtained by averaging the execution time over 10 runs for each setup, are displayed
in Table 2. The curve trend of this model when varying the number of processors is displayed in Figure 3.
It is evident that when computation is the main task of the distributed system, every configuration
performs efficiently, closely approaching the optimal curve. However, it is also apparent that the
curves tend to slow down at specific thresholds and can sometimes deteriorate. The primary culprit
is the communication phase, which becomes a bottleneck when many halo regions are filled with
agents and need to exchange information at each step. The simulation with 1M agents running on 8
processing nodes reveals a speed-up exceeding 8, which is an unexpected result that warrants further
investigation. This anomaly could be due to near-perfect system load balancing or optimal memory
alignment with the machine’s cache. However, these explanations seem improbable, considering the
benchmark was executed multiple times with randomized seed values. These results highlight the need
for load balancing to optimize the size of each partition, thereby reducing the number of agents that
need to be exchanged each step.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>The results demonstrate that the current implementation achieves significant speed-up with an
increasing number of processors, outperforming the sequential algorithm even with just 2 processes.
Additionally, this implementation allows each simulation to utilize the modified K-D Tree with minimal
adjustments required to the sequential model. These changes are related to the state and the launching
parameters of the simulation and involve the necessity, until a new version of the library is released, to
define the Equivalence trait on the agent to make the communication with MPI possible.</p>
      <p>This work comes with two main limitations. First, the field structure does not implement any
load-balancing system. The lack of this feature explains the degradation of the model’s performance as the
number of agents increases. An example of how load balancing should work is shown in Figure 4.
Second, the tests have been evaluated only on the krABMaga engine, not against different engines. The
next step should include a more extensive study comparing it with other distributed ABM engines.</p>
      <p>Future work should concentrate on balancing the distribution of agents across partitions, ensuring
each processor shares a fair amount of work. To provide a more comprehensive assessment, we will
evaluate and compare the engine’s performance against that of current state-of-the-art open-source
engines with available distributed versions. Finally, after a more polished version of the framework is
released, we should focus on abstracting the distributed field as much as possible from the modeler.
This will make it easier to write a new model from scratch without dealing with the complex layer of
the distributed computing paradigm.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work has been partially supported by the spoke “FutureHPC &amp; BigData” of the ICSC – Centro
Nazionale di Ricerca in High-Performance Computing, Big Data and Quantum Computing funded by
the European Union – NextGenerationEU.</p>
      <p>[13] X. Rubio-Campillo, Pandora: a versatile agent-based modelling platform for social simulation, Proceedings of SIMUL (2014) 29–34.</p>
      <p>[14] N. Collier, M. North, Repast HPC: A Platform for Large-Scale Agent-Based Modeling, Large-Scale Computing (2011) 81–109.</p>
      <p>[15] A. Antelmi, G. Cordasco, G. D’Ambrosio, D. De Vinco, C. Spagnuolo, Experimenting with Agent-Based Model Simulation Tools, Applied Sciences 13 (2023).</p>
      <p>[16] W. Bugden, A. Alahmar, Rust: The programming language for safety and performance, arXiv preprint arXiv:2206.05503 (2022).</p>
      <p>[17] R. Jung, J.-H. Jourdan, R. Krebbers, D. Dreyer, RustBelt: Securing the foundations of the Rust programming language, Proceedings of the ACM on Programming Languages 2 (2017) 1–34.</p>
      <p>[18] A. Chakravorty, W. S. Cleveland, P. J. Wolfe, Scalable k-d trees for distributed data, arXiv preprint arXiv:2201.08288 (2022).</p>
      <p>[19] R. A. Brown, Building a Balanced k-d Tree in O(kn log n) Time, Journal of Computer Graphics Techniques (JCGT) 4 (2015) 50–68.</p>
      <p>[20] C. W. Reynolds, Flocks, herds and schools: A distributed behavioral model, in: Proceedings of the 14th annual conference on Computer graphics and interactive techniques, 1987, pp. 25–34.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ladyman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lambert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wiesner</surname>
          </string-name>
          ,
          <article-title>What is a complex system?</article-title>
          ,
          <source>European Journal for Philosophy of Science</source>
          <volume>3</volume>
          (
          <year>2013</year>
          )
          <fpage>33</fpage>
          -
          <lpage>67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-P.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <article-title>All-in-One Framework for Design, Simulation, and Practical Implementation of Distributed Multiagent Control Systems</article-title>
          ,
          <source>IEEE Transactions on Systems, Man, and Cybernetics: Systems</source>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Collier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ozik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. R.</given-names>
            <surname>Tatara</surname>
          </string-name>
          ,
          <article-title>Experiences in Developing a Distributed Agent-based Modeling Toolkit with Python</article-title>
          ,
          <source>in: 2020 IEEE/ACM 9th Workshop on Python for High-Performance and Scientific Computing (PyHPC)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Antelmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Caramante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cordasco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>D'Ambrosio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>De Vinco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Foglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Postiglione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Spagnuolo</surname>
          </string-name>
          ,
          <article-title>Reliable and Efficient Agent-Based Modeling and Simulation</article-title>
          ,
          <source>Journal of Artificial Societies and Social Simulation</source>
          <volume>27</volume>
          (
          <year>2024</year>
          )
          <article-title>4</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] Message Passing Interface Forum,
          <article-title>MPI: A Message-Passing Interface Standard Version 4</article-title>
          .1,
          <year>2023</year>
          . URL:
          <source>https://www.mpi-forum.org/docs/mpi-4.1/mpi41-report.pdf</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Chandra</surname>
          </string-name>
          ,
          <article-title>Parallel programming in OpenMP</article-title>
          , Morgan Kaufmann,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C. C.</given-names>
            <surname>Kerr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Stuart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mistry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Abeysuriya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rosenfeld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Hart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Núñez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Selvaraj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hagedorn</surname>
          </string-name>
          , et al.,
          <article-title>Covasim: an agent-based model of COVID-19 dynamics and interventions</article-title>
          ,
          <source>PLOS Computational Biology</source>
          <volume>17</volume>
          (
          <year>2021</year>
          )
          <article-title>e1009149</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Lohmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bugert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lasch</surname>
          </string-name>
          ,
          <article-title>Analysis of resilience strategies and ripple effect in blockchain-coordinated supply chains: An agent-based simulation study</article-title>
          ,
          <source>International journal of production economics 228</source>
          (
          <year>2020</year>
          )
          <fpage>107882</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Luke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ciofi-Revilla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Panait</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sullivan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Balan</surname>
          </string-name>
          ,
          <article-title>Mason: A multiagent simulation environment</article-title>
          ,
          <source>Simulation</source>
          <volume>81</volume>
          (
          <year>2005</year>
          )
          <fpage>517</fpage>
          -
          <lpage>527</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cordasco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Scarano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Spagnuolo</surname>
          </string-name>
          ,
          <article-title>Distributed mason: A scalable distributed multi-agent simulation environment</article-title>
          ,
          <source>Simulation Modelling Practice and Theory</source>
          <volume>89</volume>
          (
          <year>2018</year>
          )
          <fpage>15</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kiran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Richmond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Holcombe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Chin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Worth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Greenough</surname>
          </string-name>
          ,
          <article-title>Flame: simulating large populations of agents on parallel hardware architectures</article-title>
          ,
          <source>in: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: volume 1-</source>
          Volume
          <volume>1</volume>
          ,
          <year>2010</year>
          , pp.
          <fpage>1633</fpage>
          -
          <lpage>1636</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Richmond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Walker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Coakley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <article-title>High performance cellular level agent-based simulation with FLAME for the GPU</article-title>
          ,
          <source>Briefings in bioinformatics 11</source>
          (
          <year>2010</year>
          )
          <fpage>334</fpage>
          -
          <lpage>347</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>