<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>WOA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Simulating the Law in a Multi-Agent System</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Matteo Cristani</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Olivieri</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guido Governatori</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriele Buriola</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial Intelligence and Cyber Futures Institute, Charles Sturt University</institution>
          ,
          <addr-line>Bathurst</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Computer Science, University of Verona</institution>
          ,
          <addr-line>Verona</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Independent researcher</institution>
          ,
          <addr-line>Brisbane</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>25</volume>
      <fpage>8</fpage>
      <lpage>10</lpage>
      <abstract>
<p>In this paper we define a Multi-Agent System able to simulate an artificial society paired with a normative background. The purpose of this architecture is the simulation of a law in order to devise its impact. We analyse an existing architecture (GAMA) that has already been used for simulating MAS with BDI agents. GAMA technology alone is insufficient to guarantee certain validity properties and actual computational effectiveness, which could instead be provided if we let the rule system interpret Defeasible Deontic Logic, a logical framework that satisfies the aforementioned properties. As a first step of an experimental endeavour aiming at law simulation by design, we provide here a theoretical model of the MAS which simulates the society.</p>
      </abstract>
      <kwd-group>
        <kwd>Defeasible Deontic Logic</kwd>
        <kwd>Multiple Agent Systems</kwd>
        <kwd>Law Simulation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Simulating collective behaviour is a challenging topic of the recent past. A rising
demand emerges from a variety of contexts, including the production of legislation at large,
for the simulation paradigm can be useful for determining the actual effects of introducing a
new norm in a given society. Therefore, the drafters, namely those people who are in charge
of producing a norm, find it helpful to apply the norms to a simulated society in order to
evaluate the impact of the new norm, whether they are designing a new norm for the general
population, providing a normative background for the actual domain of a restricted society (a
company, an association), or governing direct multiparty relationships (as in contracts).</p>
      <p>We then need a method to describe a society, in a way that generates a cycle of multiple agent
system evolution steps able to support the impact evaluation we mentioned above.</p>
      <p>This concept is part of a series of investigations conducted with the purpose of
developing a complex system for law evaluation at diverse points of the production process: by
design (as in the current investigation), after enforcement, and in the application phase.</p>
      <sec id="sec-1-5">
        <title>Application schema</title>
        <p>The schema of the application system is illustrated in Figure 1, whose components include
etGAMA, Houdini, and LegalRuleML. There are essentially five phases of
the application:
1. MAS Simulation - construction: a society is generated, possibly based on real-world
data such as sociological evidence.
2. Normative background - specification: the existing normative background of the
society simulated in Phase 1 is defined within an engine for Legal Reasoning, in particular,
for the implementation of the aforementioned projects, we use the DDL reasoner Houdini
[1, 2].
3. MAS Simulation - execution: the MAS system defined in Phase 1 is run and, technically,
it generates batches of facts for a Deontic Defeasible Theory, implemented to devise the
normative background, as described in Phase 2.
4. Consequence generation - state of affairs: the DDL system generates the simulated
effects of the application of the Normative Background to the MAS system. This
changes the current state of affairs of the MAS.
5. Consequence generation - MAS configuration: the MAS system employs a negative
feedback à la Rosenblueth, Wiener, Bigelow to resettle the MAS by configuring the
parameters again in two levels of feedback layers:
• as a consequence of the normative changes, by modifying the behavioural parameters
of the agents (for instance, when an action which is permitted becomes forbidden,
the probability of individuals willing to do that action may decrease);
• as a consequence of the application of the law, because some agents could have
been punished, and therefore they may have been limited in their permits, or have
had obligations and prohibitions imposed on them that they did not have before.</p>
        <p>The above-mentioned sequence is repeated in cycles, with occasional events of actions
performed in the MAS and rules changed in the normative background. We leave the evaluation
part for further investigation.</p>
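        <p>The cycle of phases described above can be illustrated as a simulation loop. The following Python sketch is ours: the class and function names (Society, NormativeBackground, and so on) are illustrative placeholders and do not correspond to the actual etGAMA or Houdini interfaces.</p>

```python
# Illustrative sketch of the phase cycle (placeholder names, not the actual
# etGAMA/Houdini interfaces).

def run_simulation(years, society, normative_background):
    """Repeat the MAS/DDL cycle once per simulated year."""
    for year in range(years):
        # Phase 3: run the MAS; it emits facts for the Deontic Defeasible Theory.
        facts = society.step(year)
        # Phase 4: the DDL reasoner derives the normative consequences
        # (the new state of affairs).
        consequences = normative_background.derive(facts)
        # Phase 5: negative feedback - reconfigure agent parameters, apply sanctions.
        society.apply_feedback(consequences)
    return society


class Society:
    """Minimal stand-in for the MAS built in Phase 1."""
    def __init__(self):
        self.history = []

    def step(self, year):
        facts = [("year", year)]
        self.history.append(facts)
        return facts

    def apply_feedback(self, consequences):
        pass  # adjust behavioural parameters, record punishments, ...


class NormativeBackground:
    """Minimal stand-in for the DDL engine specified in Phase 2."""
    def derive(self, facts):
        return facts  # a real engine would compute obligations/permissions


final = run_simulation(3, Society(), NormativeBackground())
print(len(final.history))  # three yearly cycles were executed
```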
        <p>
          A key aspect in modeling human behavior with respect to law compliance is given by the
nondeterministic component of actions. The stochastic nature of reality shows up in at least
two moments: the actual violation of a norm by a person and the chance that this violation
is discovered and prosecuted by the authorities. The former phenomenon is formalized in
the model we discuss here through an equation (1) pairing the expected utility from
the violation of a norm with the expected utility from respecting that norm, together with the
person's tendency to comply with the law, encoded in the model discussed in this paper by a
non-negative real number. The latter would call for a probabilistic application of DDL rules
and, given its non-trivial formalization, is postponed to a further and more specific paper.
        </p>
        <p>The research plan is therefore as follows:
• Build the theoretical model that is documented in the current paper.
• Implement it within an actual simulation system. In this case we have chosen to employ
GAMA [3, 4, 5], which has also been shown to adapt to the context of simulating ethically
relevant behaviours [3].
• Develop an extension of the markup language, GAML++, that expresses the aspects we
discuss in this paper, and correspondingly extend GAMA to etGAMA in order to allow
simulation of the law.
• Extend the Houdini technology to allow management of law changes, as well as the
utility functions we shall discuss in this paper.
• Build the whole system, where the communication components are managed via JSON/jQuery
components in order to align the current Java implementations of both GAMA and Houdini.</p>
        <p>With this research plan on the agenda, we now deal with the theoretical model in the rest of this
paper.</p>
        <p>As for the structure: Sec. 2 exposes the population model, in particular how it
changes over time; Sec. 3 presents the salient characteristics we focus on, such as age,
gender and job; Sec. 4 is dedicated to modelling law compliance in this in vitro society; finally, Sec. 5
summarizes the whole paper.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Population model</title>
      <p>In this section we provide the population model. First of all, for what concerns time, we adopt a
discrete time T starting with t = 0 and indexed by natural numbers, which may be interpreted
as years. Let P be the countably infinite set of all the people who may, at some point, be part of
the population model. At any moment, which in our discrete time means a specific year, the
population of the model is given by a finite subset of P; the function Pop : T → ℘(P)¹ associates
to every year t a finite subset of P, the current population Pop(t) in that year, denoted by P_t. The
starting population P_0, as well as its characteristics (see Sec. 3), is given; whereas the population
in t + 1, i.e. P_{t+1}, depends on four factors: births, deaths, immigration and emigration. Births
are given by a birth function b : T ∖ {0} → ℘(P) which, for every year t except the first one,
selects the finite subset of the newborn, denoted by P^0_t; the function b satisfies reasonable
constraints related to births, in particular the following two:
• if t ≠ t′, then b(t) ∩ b(t′) = ∅, i.e. every person can be born at most once;
• |P^0_{t+1}| = β_t · |Adult_t|, where Adult_t is the adult population at time t (see Sec. 3) and
β_t is the birth rate in the year t; namely, the number of newborn depends on the number of
adults via a coefficient.</p>
      <p>Moreover, the birth function establishes the gender of the newborn; namely, there is a function
g_b : P^0_t → {male, female} assigning to each newborn in P^0_t its gender; see below for
more details.</p>
      <p>Deaths are modeled simply by assuming that every person lives exactly 80 years. Thus,
if P^80_t denotes the subset of P_t of people in their eighties, then moving from P_t to P_{t+1} we
simply remove P^80_t; see later regarding how age is encoded in the model.</p>
      <p>Immigration and emigration are treated similarly; namely, we have two functions i : T →
℘(P) and e : T → ℘(P) selecting, for every year t, who is immigrating, I_t := i(t), and who is
emigrating, E_t := e(t). As before, we have some constraints for these functions, in particular:
• E_{t+1} ⊆ P_t, only people in the current population can emigrate the next year;
• I_t ∩ E_t = ∅, a person cannot immigrate and emigrate in the same year.
Moreover, the immigration function i also establishes the age and the gender of the
immigrants; namely, there are two functions age_i : I_t → {1, . . . , 80} and g_i : I_t →
{male, female} assigning to each person in I_t their age and gender; see below for more
details.</p>
      <p>All in all, with respect to t the population satisfies the following equation:</p>
      <p>P_{t+1} = (P_t ∖ P^80_t ∖ E_{t+1}) ∪ P^0_{t+1} ∪ I_{t+1}.</p>
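      <p>As a concrete illustration, the population update equation can be exercised on sets of person identifiers. The following Python sketch is ours; the helper names and the dict-based encoding of ages are assumptions for illustration, not part of the model.</p>

```python
# Sketch of the update P_{t+1} = (P_t \ P^80_t \ E_{t+1}) ∪ P^0_{t+1} ∪ I_{t+1},
# using integer ids for people and a dict for ages (illustrative encoding).

def next_population(pop, age, emigrants, newborn, immigrants, immigrant_age):
    """One yearly step: remove the 80-year-olds and the emigrants,
    then add the newborn (age 1) and the immigrants (with their given age)."""
    died = {p for p in pop if age[p] == 80}
    survivors = (pop - died) - emigrants
    new_pop = survivors | newborn | immigrants
    new_age = {p: age[p] + 1 for p in survivors}
    new_age.update({p: 1 for p in newborn})
    new_age.update({p: immigrant_age[p] for p in immigrants})
    return new_pop, new_age

pop = {1, 2, 3, 4}
age = {1: 80, 2: 30, 3: 45, 4: 20}
new_pop, new_age = next_population(
    pop, age,
    emigrants={2},            # E_{t+1} ⊆ P_t
    newborn={5},              # P^0_{t+1}, disjoint from earlier births
    immigrants={6},           # I_{t+1}
    immigrant_age={6: 33},
)
print(sorted(new_pop))  # [3, 4, 5, 6]: person 1 died at 80, person 2 emigrated
print(new_age[3], new_age[5], new_age[6])  # 46 1 33
```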
    </sec>
    <sec id="sec-3">
      <title>3. Individual Characteristics</title>
      <p>Every person p ∈ P has some individual characteristics, such as age, gender and so on. In this
paper the following are considered:
¹ ℘(P) denotes the powerset of P.</p>
      <p>1. Age: this is a natural number between 1 and 80.
2. Age status: there are three age statuses, young, adult and elderly, depending on the age.
3. Gender: this preliminary model has two genders, male and female, which are fixed from
birth.
4. Marital status: there are three marital statuses, bachelor, married and divorced.²
5. Job status: there are four job statuses, unemployed, public job, private job and retired.
6. Ethical tendency (also called in places inclination): there are three tendencies, legalist,
neutral and opportunist; related to the tendency there is also a parameter λ_p, see later for more
detail.</p>
      <p>We see these characteristics as sets, e.g. Age = {1, 2, . . . , 80} and AgeStatus = {young, adult,
elderly}, presenting them one by one after some general considerations.</p>
      <p>Excluding the ethical tendency, devoted to modeling law compliance, the other statuses have
been chosen since they represent three of the main social categories producing common and
widespread rights and duties (e.g. age for penal responsibility), as well as three of the main
characteristics used in demographic and social studies.</p>
      <p>Except for gender, all the statuses may change over time; thus each status depends on both
the person p ∈ P_t and the current year t ∈ T. Let ℒ = Age × AgeStatus × Gender ×
MaritalStatus × JobStatus × EthicalTendency be the set of all possible status arrays; then
for every year t ∈ T there is a function ℓ_t : P_t → ℒ which associates to each person their status.
Moreover, ℓ_t has, as a function, different components, one for each status, and, for sake of readability,
the function determining each status is denoted with an abbreviation of the status itself; thus, for a
person p and a given year t, ℓ_t(p) = (age_t(p), as_t(p), g_t(p), ms_t(p), js_t(p), et_t(p)).
We now consider each status in turn, showing how it may change over time.</p>
      <sec id="sec-3-0">
        <title>Age</title>
        <p>The age function age_t : P_t → {1, . . . , 80} assigns to every person p in the current population
P_t its age age_t(p). Obviously there is a strict connection between age_t and age_{t+1}; more
precisely, the latter satisfies the following definition:</p>
        <p>age_{t+1}(p) := 1 if p ∈ P^0_{t+1};  age_t(p) + 1 if p ∈ P_t;  age_i(p) if p ∈ I_{t+1}.</p>
        <p>The age function age_0 : P_0 → {1, . . . , 80} for the starting population P_0 is given. For what
concerns notation, given 1 ⩽ n ⩽ 80, we denote P^n_t := {p ∈ P_t | age_t(p) = n}.
² For sake of simplicity we join together divorced and widowhood, and use bachelor for both males and females.</p>
      </sec>
      <sec id="sec-3-1">
        <title>Age status</title>
        <p>The age status of a person p ∈ P_t depends only on the current age of p; more precisely:</p>
        <p>as_t(p) := young if 1 ⩽ age_t(p) ⩽ 20;  adult if 21 ⩽ age_t(p) ⩽ 60;  elderly if 61 ⩽ age_t(p) ⩽ 80.</p>
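        <p>The age-status partition can be written as a simple classifier; the helper below is our own encoding of the thresholds in the text (young 1–20, adult 21–60, elderly 61–80).</p>

```python
# The age-status partition as a classifier (our own helper, mirroring the
# thresholds in the text).

def age_status(age: int) -> str:
    if not 1 <= age <= 80:
        raise ValueError("ages in the model range from 1 to 80")
    if age <= 20:
        return "young"
    if age <= 60:
        return "adult"
    return "elderly"

print(age_status(20), age_status(21), age_status(61))  # young adult elderly
```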
        <p>:= { ∈  | () = } and similarly
We adopt the following notation 
for  . Other characteristics or properties may depend on the age status, in
 and 
particular:
• the transition functions (see below) from one year to the next one for Marital status and</p>
        <p>Job status;
• the number of new born in the next year, |0+1|, depends on the current number of adult
people, ||.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Gender</title>
        <p>Since gender is fixed, it depends only on the initial conditions for the people in P_0, on the birth
gender function g_b for the newborn, and on the immigration gender function g_i for those who
immigrate.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Marital status</title>
        <p>There are three marital statuses: bachelor, married and divorced. Young people, i.e. people whose
current age is less than 21, have only one marital status available, namely bachelor; thus
ms_t(p) = bachelor for all p ∈ Young_t.</p>
        <p>In order to model the marital status over the years, as well as the job status, for adults and
elderly people (who may have statuses other than bachelor) we use Markov chains; more
precisely, different Markov chains depending on the age status. Except for people who have
just become adult, i.e. 21-year-old people, who are automatically bachelor, and people who have
just become elderly, i.e. 61-year-old people, who preserve the previous status,³ the transitions
between these statuses are given by a stochastic process with different probabilities for adults and
elderly people. Denoting with b, m, d respectively bachelor, married and divorced, and
setting to 99%, 95%, 5%, 1% the probabilities involved (which have been arbitrarily chosen for
this paper but, being editable parameters, could be instantiated with real values coming from
statistical investigations), the transition schemes for adults and elderly people are the following:</p>
        <p>For adults, each status is preserved with probability 95%, while with probability 5% a bachelor
gets married (b → m), a married person gets divorced (m → d), and a divorced person gets
married again (d → m); for elderly people the scheme is the same with probabilities 99% and 1%.
This means that if at time t a 35-year-old adult is a bachelor, there is a 95% chance that he is still a
bachelor at t + 1 and a 5% chance that he gets married. If the current population is sufficiently
large, these probabilities can be seen as the fractions of the current population changing their
status; namely, every year 95% of married adults remain married whereas 5% get divorced.
³ Formally, this means that if age_t(p) = 21 then ms_t(p) = bachelor, and if age_{t+1}(p) = 61, then
ms_{t+1}(p) = ms_t(p).</p>
        <p>Moreover, this setting allows for an easy introduction of further constraints. For example, if
there is a waiting period of at least k years between the declaration of divorce and a subsequent
marriage, this can be encoded in the marital status in the following way: if ms_t(p) =
married and ms_{t+1}(p) = divorced, then ms_{t+1+j}(p) = divorced for every j ∈
{1, . . . , k}.</p>
        <p>As before, the marital function ms_0 for the initial population P_0, as well as the marital
status of immigrants, are given.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Job status</title>
        <p>There are four job statuses: unemployed, public job, private job and retired. Overall, the treatment
of the job status is similar to that of the marital status; for example, young people and 21-year-old
people have only one status available, namely unemployed. The only difference, except for the
probabilities of the transition functions, is that adults and elderly people have two different sets
of available statuses. More precisely, adults can have one among unemployed, public job and
private job, whereas elderly people have one among retired, public job and private job. During the
transition between adulthood and old age the status remains unchanged, except for unemployed
people, who become retired; formally, if age_{t+1}(p) = 61 then:</p>
        <p>js_{t+1}(p) := retired if js_t(p) = unemployed;  public if js_t(p) = public;  private if js_t(p) = private.</p>
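        <p>The adulthood-to-old-age job transition can be stated compactly; the function below is our own encoding of it.</p>

```python
# The job transition at age 61 (our own helper): unemployed people retire,
# while public and private jobs are preserved.

def job_at_61(job: str) -> str:
    return "retired" if job == "unemployed" else job

print(job_at_61("unemployed"), job_at_61("public"), job_at_61("private"))
# retired public private
```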
        <p>Moreover, as can be seen from the transition function for elderly people, retirement is
irreversible; namely, if a person retires, then from that year on their status will always be retired.
Abbreviating with u, r, pb, pv respectively unemployed, retired, public job and
private job, the transition functions are given by transition diagrams analogous to those for the
marital status, with editable probabilities.</p>
        <p>As before, the job function js_0 for the initial population P_0, as well as the job status of
immigrants, are given.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Ethical tendency</title>
        <p>Ethical tendency aims to formalize the behavior of people with respect to the violation of the law.
The main idea is that the violation of a norm by a person depends on the expected utility of that
person violating the norm, compared with the expected utility of respecting the norm, together
with the Ethical tendency of that person, i.e. the general propensity to respect the law. Let n be
a norm and p a person; if we denote with U(n+) the expected utility of p in complying with n
and with U(n−) the expected utility of p in violating n, then the probability of p violating n
is given by:</p>
        <p>Pr(p, n) := 0, if U(n+) ⩾ U(n−);   Pr(p, n) := 1 − e^(−λ_p Δ), if Δ = U(n−) − U(n+) &gt; 0.   (1)</p>
        <p>λ_p is a parameter representing the tendency of p to violate the law; there are three cases,
corresponding to the three Ethical tendency statuses:
• λ_p = 0: in this case p does not violate the law, no matter how high the expected utility
of the violation is; in this case p is a legalist.
• 0 &lt; λ_p &lt; +∞: in this case p may violate n if U(n−) &gt; U(n+), and the probability
increases as Δ = U(n−) − U(n+) increases; in this case p is legally neutral.
• λ_p = +∞: in this case p violates n as soon as U(n−) &gt; U(n+); in this case p is an
opportunist.</p>
        <p>For a quantitative estimation of the role of λ_p: if λ_p = 1 and Δ = U(n−) − U(n+) = 1, then
there is a probability of 1 − e^(−1) ≃ 63% that p violates n.</p>
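        <p>Equation (1) can be encoded directly; the function below is our own rendering of it, covering the three cases of λ_p.</p>

```python
import math

# Equation (1) as a function (our encoding): the probability that person p
# violates norm n, given the expected utilities and the personal parameter λ_p.

def violation_probability(u_comply: float, u_violate: float, lam: float) -> float:
    delta = u_violate - u_comply
    if delta <= 0:
        return 0.0
    if math.isinf(lam):       # opportunist: violates as soon as delta > 0
        return 1.0
    return 1.0 - math.exp(-lam * delta)  # legalist (lam = 0) yields 0

print(violation_probability(1.0, 2.0, 0.0))           # 0.0  (legalist)
print(round(violation_probability(1.0, 2.0, 1.0), 2)) # 0.63 (neutral, as in the text)
print(violation_probability(1.0, 2.0, math.inf))      # 1.0  (opportunist)
```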
        <p>The Ethical tendency of a person p may change over time according to a transition function;
in this case we assign the same transition process to the three age statuses. Denoting with l, n, o
respectively legalist, neutral and opportunist, we adopt the following status transitions
(again, the probabilities are editable parameters): each tendency is preserved with probability
90%; with probability 10% a legalist becomes neutral (l → n) and an opportunist becomes
neutral (o → n), while a neutral person becomes a legalist with probability 5% (n → l) and an
opportunist with probability 5% (n → o).</p>
        <p>When the ethical tendency of a person p becomes neutral, the transition function also assigns
to λ_p a strictly positive real value. As before, the legal function et_0 of the starting population
P_0, as well as the ones for the newborn and immigrants, are given.⁴
⁴ Another possibility would be to assign to the newborn legal tendencies that reflect the proportion of legal
tendencies in the adult population or in the whole population.</p>
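        <p>Under our encoding of these transition probabilities, one can also inspect the long-run distribution of tendencies that the chain induces; the following sketch approximates the stationary distribution by power iteration.</p>

```python
# Long-run behaviour of the ethical-tendency chain (our encoding of the 90/10/5
# placeholder probabilities): iterate the transition matrix to approximate the
# stationary distribution over (legalist, neutral, opportunist).

T = {
    "legalist":    {"legalist": 0.90, "neutral": 0.10, "opportunist": 0.00},
    "neutral":     {"legalist": 0.05, "neutral": 0.90, "opportunist": 0.05},
    "opportunist": {"legalist": 0.00, "neutral": 0.10, "opportunist": 0.90},
}
states = list(T)

dist = {"legalist": 1.0, "neutral": 0.0, "opportunist": 0.0}
for _ in range(500):  # power iteration: dist'(s) = sum_r dist(r) * T[r][s]
    dist = {s: sum(dist[r] * T[r][s] for r in states) for s in states}

print({s: round(p, 3) for s, p in dist.items()})
# {'legalist': 0.25, 'neutral': 0.5, 'opportunist': 0.25}
```

With these placeholder values the population settles, in the long run, at roughly one quarter legalists, one half neutral, and one quarter opportunists, regardless of the starting distribution.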
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Law Compliance</title>
      <p>The conceptualisation of law compliance is taken from the current literature on DDL and derived
models. In particular, we presuppose that an agent freely enforces literals that are taken as
feasible actions. On the other hand, when an agent performs a particular action, for which, as
we specified before, she has a specific tendency (a probability of doing that action), the legal
system takes care of it. In particular, we assume that the normative background is actually
constructed as a DDT, and feasible actions are literals of that theory.</p>
      <p>Defeasible Logic [6, 7] is a simple, flexible, and efficient rule-based non-monotonic formalism.
Its strength lies in its constructive proof theory, allowing it to draw meaningful conclusions
from (potentially) conflicting and incomplete knowledge bases. In non-monotonic systems, more
accurate conclusions can be obtained when more pieces of information become available.</p>
      <p>Many variants of Defeasible Logic have been proposed for the logical modelling of different
application areas, specifically agents [8, 9, 10], legal reasoning [11, 12] and workflows from a
business process compliance perspective [13, 14].</p>
      <p>In this research we focus on the Defeasible Deontic Logic (henceforth DDL) framework [15]
that allows us to determine what prescriptive behaviours are in force in a given situation. For
detailed descriptions of how to adopt DDL for legal reasoning we refer the reader to [16].</p>
      <p>We start by defining the language of a Defeasible Deontic Theory (henceforth a DDT).
Let PROP be a set of propositional atoms, and Lab be a set of arbitrary labels (the names of
the rules). We use lower-case Roman letters to denote literals and lower-case Greek letters to
denote rules.</p>
      <p>Accordingly, PLit = PROP ∪ {¬l | l ∈ PROP} is the set of plain literals; the set of deontic
literals is ModLit = {□l, ¬□l | l ∈ PLit ∧ □ ∈ {O, P}} and, finally, the set of literals is
Lit = PLit ∪ ModLit. The complement of a literal l is denoted by ∼l: if l is a positive literal
p then ∼l is ¬p, and if l is a negative literal ¬p then ∼l is p. We will have neither specific rules
nor a modality for prohibitions, as we will treat them according to the standard duality that
something is forbidden if the opposite is obligatory (i.e., O¬l).</p>
      <p>Definition 1 (Defeasible Deontic Theory). A DDT D is a triple (F, R, &gt;), where F is the set
of facts, R is the set of rules, and &gt; is a binary relation over R (called superiority relation).
Specifically, the set of facts F ⊆ PLit denotes simple pieces of information that are always
considered to be true, like “Sylvester is a cat”, formally cat(Sylvester). In this paper, we
subscribe to the distinction between the notions of obligations and permissions, and that of
norms, where the norms in the system determine the obligations and permissions in force in a
normative system. A DDT is meant to represent a normative system, where the rules encode
the norms of the system, and the set of facts corresponds to a case. As we will see below, the
rules are used to conclude the institutional facts, obligations and permissions that hold in a case.
Accordingly, we do not admit obligations and permissions as facts of the theory.</p>
      <p>The set of rules R contains three types of rules: strict rules, defeasible rules, and defeaters.
Rules are also of two kinds:
• Constitutive rules (non-deontic rules) R_C model constitutive statements (count-as rules);
• Deontic rules to model prescriptive behaviours, which are either obligation rules R_O that
determine when and which obligations are in force, or permission rules R_P which represent
strong (or explicit) permissions.</p>
      <p>Lastly, &gt; ⊆ R × R is the superiority (or preference) relation, which is used to solve conflicts in
case of potentially conflicting information.</p>
      <p>A theory is finite if the set of facts and rules are so. We only focus on finite theories.</p>
      <p>A strict (constitutive) rule is a rule in the classical sense: whenever the premises are
indisputable, so is the conclusion.</p>
      <p>On the other hand, defeasible rules are used to conclude statements that can be defeated by contrary
evidence. In contrast, defeaters are special rules whose only purpose is to prevent the derivation
of the opposite conclusion.</p>
      <p>A prescriptive behaviour like “Passing on a zebra crossing is not permitted when the traffic
light for pedestrians is red” can be formalised via the general permissive rule</p>
      <p>AtZebraCross ⇒P Pass
and the exception through the obligation rule</p>
      <p>Pedestrian_traffic_light_red ⇒O ¬Pass.</p>
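      <p>For illustration, such rules can be represented as data; the following minimal encoding is a sketch of ours (the field names are assumptions, and ∼ is written as a prefixed ~), not the actual Houdini representation.</p>

```python
from dataclasses import dataclass

# A minimal data representation of DDT rules (field names are ours, not from
# the paper): mode "C"/"O"/"P", arrow "defeasible" or "defeater", and an
# obligation consequent that may be an ⊗-chain (a tuple of literals).

@dataclass(frozen=True)
class Rule:
    name: str
    antecedent: tuple      # literals, e.g. ("AtZebraCross",)
    mode: str              # "C", "O" or "P"
    consequent: tuple      # single literal, or an ⊗-chain for mode "O"
    arrow: str = "defeasible"

r1 = Rule("r1", ("AtZebraCross",), "P", ("Pass",))
r2 = Rule("r2", ("Pedestrian_traffic_light_red",), "O", ("~Pass",))

print(r1.mode, r2.consequent)  # P ('~Pass',)
```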
      <p>Following the ideas of [17], obligation rules gain more expressiveness with the compensation
operator ⊗ for obligation rules, which is to model reparative chains of obligations. Intuitively,
a ⊗ b means that a is the primary obligation, but if for some reason we fail to obtain, to comply
with, a (by either not being able to prove a, or by proving ∼a), then b becomes the new obligation
in force. This operator is used to build chains of preferences, called ⊗-expressions.</p>
      <p>The formation rules for ⊗-expressions are as follows: (i) every plain literal is an ⊗-expression,
and (ii) if A is an ⊗-expression and b is a plain literal, then A ⊗ b is an ⊗-expression [15].</p>
      <p>In general an ⊗-expression has the form ‘c_1 ⊗ c_2 ⊗ · · · ⊗ c_n’, and it appears as the consequent of
a rule ‘A(r) ˓→O C(r)’ where C(r) = c_1 ⊗ c_2 ⊗ · · · ⊗ c_n; the meaning of the ⊗-expression
is: if the rule is allowed to draw its conclusion, then c_1 is the obligation in force, and only
when c_1 is violated does c_2 become the new obligation in force, and so on for the rest of
the elements in the chain. In this setting, c_n stands for the last chance to comply with the
prescriptive behaviour enforced by r, and in case c_n is violated as well, then we will result in a
non-compliant situation.</p>
      <p>For instance, the previous prohibition to pass at the pedestrian crossing in case of a red light can
foresee a compensatory fine, like</p>
      <p>Pedestrian_traffic_light_red ⇒O ¬Pass ⊗ PayFine</p>
      <p>that has to be paid in case someone passes the pedestrian crossing when the light is red.</p>
      <p>It is worth noticing that we admit ⊗-expressions with only one element. The intuition, in
this case, is that the obligatory condition does not admit compensatory measures or, in other
words, that it is impossible to recover from its violation.</p>
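      <p>The reading of a reparative chain can be sketched operationally; the function below is our illustration of the intuition, not the logic's proof conditions.</p>

```python
# Operational sketch of reading an ⊗-chain c1 ⊗ ... ⊗ cn (our illustration):
# walk the chain, and the first non-violated element is the obligation in force;
# if every element is violated, the situation is non-compliant.

def obligation_in_force(chain, violated):
    """Return the first element of the chain not in `violated`, or None if
    the whole chain is violated (a non-compliant situation)."""
    for c in chain:
        if c not in violated:
            return c
    return None

chain = ("no_pass", "pay_fine")  # e.g. the chain of O(¬Pass ⊗ PayFine), as labels
print(obligation_in_force(chain, set()))                    # no_pass
print(obligation_in_force(chain, {"no_pass"}))              # pay_fine
print(obligation_in_force(chain, {"no_pass", "pay_fine"}))  # None
```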
      <p>In this paper, we focus exclusively on the defeasible part of the logic, ignoring the monotonic
component given by the strict rules; consequently, we limit the language to the cases where the
rules are either defeasible rules or defeaters. From a practical point of view, the restriction does not
effectively limit the expressive power of the logic: a defeasible rule where there are no rules
for the opposite conclusion, or where all rules for the opposite conclusion are weaker than the
given defeasible rule, effectively behaves like a strict rule. Formally, a rule is defined as below.</p>
      <p>Definition 2 (Rule). A rule is an expression of the form r : A(r) ˓→□ C(r), where
1. r ∈ Lab is the unique name of the rule;
2. A(r) ⊆ Lit is the set of antecedents;
3. an arrow ˓→ ∈ {⇒, ↝} denoting, respectively, defeasible rules and defeaters;
4. □ ∈ {C, O, P};
5. its consequent C(r) is either
a) a single plain literal l ∈ PLit, if either (i) ˓→ ≡ ↝ or (ii) □ ∈ {C, P}, or
b) an ⊗-expression, if □ ≡ O.</p>
      <p>If □ = C then the rule is used to derive non-deontic literals (constitutive statements), whilst if
□ is O or P then the rule is used to derive deontic conclusions (prescriptive statements). The
conclusion C(r) is, as before, a single literal in case □ ∈ {C, P}; in case □ = O, the
conclusion is an ⊗-expression. ⊗-expressions can only occur in prescriptive rules, though we
do not admit them in defeaters (Condition 5.(a).i); see [15] for a detailed explanation.</p>
      <p>We use some abbreviations on sets of rules. The set of defeasible rules in R is R_⇒; the set of
defeaters is R_dft. R_□[l] is the set of rules appearing in R with head l and modality □, while R_O[l, i]
denotes the set of obligation rules where l is the i-th element in the ⊗-expression. Given that
the consequent of a rule is either a single literal or an ⊗-expression (that can be understood as
a sequence of elements, and then as an ordered set), in what follows we are going to abuse the
notation and use l ∈ C(r). R_□ is the set of rules r : A(r) ˓→□ C(r) such that r appears in R.
For a theory as determined by Definitions 1 and 2, r appears in R means that r ∈ R; thus R_P
is the set of permissive rules appearing in R. We use R_◇ and R_◇[l] as shorthands for R_O ∪ R_P
and R_O[l] ∪ R_P[l], respectively. The abbreviations can be combined. Finally, a literal l appears
in a theory D, if there is a rule r ∈ R such that l ∈ A(r) ∪ C(r).</p>
      <p>Definition 3 (Tagged modal formula). A tagged modal formula is an expression of the form
±□l, with the following meanings:
• +□l: l is defeasibly provable (or simply provable) with mode □;
• −□l: l is defeasibly refuted (or simply refuted) with mode □.</p>
      <p>Accordingly, the meaning of +Ol is that l is provable as an obligation, and −P¬l is that we
have a refutation for the permission of ¬l. Similarly for the other combinations.</p>
      <p>
        As we will shortly see (Definitions 5 and 6), one of the key ideas of DDL is that we use
tagged modal formulas to determine which formulas are (defeasibly) provable or rejected given
a theory and a set of facts (used as input for the theory). Therefore, when we have asserted
the tagged modal formula +Ol in a derivation (see Definition 4 below), we can conclude that
the obligation of l (Ol) follows from the rules and the facts, and that we used a prescriptive
rule to derive l; similarly for permission (using a permissive rule). However, the C modality
is silent, meaning that we do not put the literal in the scope of the C modal operator; thus for
+Cl, the derivation simply asserts that l holds (and not that Cl holds, even if the two have the
same meaning). For the negative cases (i.e., −□l), the interpretation is that it is not possible
to derive l with a given mode. Accordingly, we read −Ol as: it is impossible to derive l as an
obligation. For □ ∈ {O, P} we are allowed to infer ¬□l, giving a constructive interpretation of
the deontic modal operators. Notice that this is not the case for C, where we cannot assert that
∼l holds (this would require +C∼l); in the logic, failing to prove l does not equate to proving
∼l. We will use the terms conclusions and tagged modal formulas interchangeably.
      </p>
      <p>
        Definition 4 (Proof). Given a DDT D, a proof P of length n in D is a finite sequence P(1), P(2),
. . . , P(n) of tagged modal formulas, where the proof conditions hold.
P(1..n) denotes the first n steps of P, and we also use the notational convention D ⊢ ±□l,
meaning that there is a proof P for ±□l in D.
      </p>
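To fix intuitions, tagged modal formulas and proofs admit a very direct encoding. The following sketch (the tuple representation, the `tagged` and `proves` names) is purely our own illustration, not an implementation from the paper:

```python
# Illustrative encoding of tagged modal formulas (Definition 3) and proofs
# (Definition 4). '~q' denotes the complement of the literal 'q'.

def tagged(sign, mode, lit):
    """Build a tagged modal formula as a (sign, mode, literal) tuple."""
    assert sign in ('+', '-') and mode in ('C', 'O', 'P')
    return (sign, mode, lit)

# A proof is a finite sequence of tagged modal formulas.
proof = [tagged('+', 'C', 'p'), tagged('+', 'O', 'q'), tagged('-', 'P', '~q')]

def proves(proof, sign, mode, lit):
    """The tagged formula occurs at some step of the proof."""
    return (sign, mode, lit) in proof

print(proves(proof, '+', 'O', 'q'))  # True: q was derived as an obligation
print(proves(proof, '+', 'P', 'p'))  # False: no step asserts the permission of p
```

In a full reasoner the sequence would be built step by step, each new step licensed by the proof conditions of Definitions 7–9.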
      <p>Core notions in DL are those of applicability and discardability. As knowledge in a defeasible theory
is circumstantial, given a defeasible rule like ‘r : a, b ⇒□ c’, there are four possible scenarios:
the theory defeasibly proves both a and b, the theory proves neither, or the theory proves one but
not the other. Naturally, only in the first case, where both a and b are proved, can we use r
to support/try to conclude □c. Briefly, we say that a rule is applicable when every literal in its
antecedent has been proved at a previous derivation step. Symmetrically, a rule is discarded when
one of such literals has been previously refuted. Formally:
Definition 5 (Applicability). Assume a deontic defeasible theory D = (F, R, &gt;). We say that
rule r ∈ RC ∪ RP is applicable at P(n + 1) if, for all a ∈ A(r):
1. if a ∈ PLit, then +Ca ∈ P(1..n),
2. if a = □b, then +□b ∈ P(1..n), with □ ∈ {O, P},
3. if a = ¬□b, then −□b ∈ P(1..n), with □ ∈ {O, P}.</p>
      <p>We say that rule  ∈ O is applicable at index  and  ( + 1) if Conditions 1–3 above hold and
4. ∀ ∈ ( ),  &lt; , then +O ∈  (1..) and +C∼  ∈  (1..).5
Definition 6 (Discardability). Assume a deontic defeasible theory , with  = (, , &gt;). We
say that rule  ∈ C ∪ P is discarded at  ( + 1), if there exists  ∈ ( ) such that
1. if  ∈ PLit, then − C ∈  (1..), or
2. if  = □, then − □ ∈  (1..), with □ ∈ {O, P}, or
3. if  = ¬□, then +□ ∈  (1..), with □ ∈ {O, P}.</p>
      <p>We say that rule  ∈ O is discarded at index  and  ( + 1) if either at least one of the
Conditions 1–3 above hold, or</p>
      <p>4. ∃ ∈ ( ),  &lt;  such that − O ∈  (1..), or − C∼  ∈  (1..).</p>
      <p>Discardability is obtained by applying the principle of strong negation to the definition of
applicability. The strong negation principle applies the function that simplifies a formula by
moving all negations to an innermost position in the resulting formula, replacing the positive
tags with the respective negative tags, and the other way around; see [19]. Positive proof
tags ensure that there are effective decidable procedures to build proofs; the strong negation
principle guarantees that the negative conditions provide a constructive and exhaustive method
to verify that a derivation of the given conclusion is not possible. Accordingly, Condition 3 of
Definition 5 allows us to state that ¬□b holds when we have a (constructive) failure to prove b
with mode □ (for obligation or permission); thus it corresponds to a constructive version of
negation as failure.</p>
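The clause structure of Definitions 5 and 6 (Conditions 1–3) can be sketched procedurally. The encoding below, where each antecedent literal is a triple (mode, negated, literal), is our own assumption for illustration:

```python
# Sketch of applicability/discardability for constitutive and permissive
# rules. 'derived' is the set of tagged conclusions proved so far, as
# (sign, mode, literal) tuples; mode 'C' marks a plain literal.

def applicable(antecedent, derived):
    """Definition 5: every antecedent literal proved at an earlier step."""
    for mode, negated, lit in antecedent:
        if mode == 'C':
            ok = ('+', 'C', lit) in derived          # Condition 1
        elif not negated:
            ok = ('+', mode, lit) in derived         # Condition 2
        else:
            ok = ('-', mode, lit) in derived         # Condition 3
        if not ok:
            return False
    return True

def discarded(antecedent, derived):
    """Definition 6: some antecedent literal refuted at an earlier step."""
    for mode, negated, lit in antecedent:
        if mode == 'C' and ('-', 'C', lit) in derived:
            return True
        if mode in ('O', 'P') and not negated and ('-', mode, lit) in derived:
            return True
        if mode in ('O', 'P') and negated and ('+', mode, lit) in derived:
            return True
    return False

# A rule with body p, Oq is applicable once +Cp and +Oq are derived.
derived = {('+', 'C', 'p'), ('+', 'O', 'q')}
body = [('C', False, 'p'), ('O', False, 'q')]
print(applicable(body, derived))  # True
print(discarded(body, derived))   # False
```

Note that the two predicates are not complements: while some antecedent literal is still undecided, a rule is neither applicable nor discarded, mirroring the constructive reading of the negative proof tags.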
      <p>We are ready to formalise the proof conditions, as in [15]. We start with positive proof
conditions for constitutive statements. In the following, we shall omit the explanations for
negative proof conditions, when trivial, reminding the reader that they are obtained through
the application of the strong negation principle to the positive counterparts.
Definition 7 (Constitutive Proof Conditions).</p>
      <p>
        +Cl: If P(n + 1) = +Cl then
(1) l ∈ F, or
(2) (1) ∼l ∉ F, and
    (2) ∃r ∈ R⇒C[l] s.t. r is appl., and
    (3) ∀s ∈ RC[∼l] either
        (1) s is disc., or
        (2) ∃t ∈ RC[l] s.t.
            (1) t is appl. and
            (2) t &gt; s.
      </p>
      <p>
        −Cl: If P(n + 1) = −Cl then
(1) l ∉ F and either
(2) (1) ∼l ∈ F, or
    (2) ∀r ∈ R⇒C[l], either r is disc., or
    (3) ∃s ∈ RC[∼l] such that
        (1) s is appl., and
        (2) ∀t ∈ RC[l], either
            (1) t is disc., or
            (2) t ̸&gt; s.
      </p>
      <p>5 As discussed above, we are allowed to move to the next element of an ⊗-expression when the current element
is violated. To have a violation, we need (i) the obligation to be in force, and (ii) that its content does not hold.
+Oc_k indicates that the obligation is in force. For the second part we have two options. The former, +C∼c_k,
means that we have “evidence” that the opposite of the content of the obligation holds. The latter would be to have
−Cc_k ∈ P(1..n), corresponding to the intuition that we failed to provide evidence that the obligation has been
satisfied. The former option implies the latter one. For a deeper discussion on the issue, see [18].</p>
      <p>A literal is defeasibly proved if: it is a fact, or there exists an applicable defeasible rule supporting
it (such a rule cannot be a defeater) and all opposite rules are either discarded or defeated. To
prove a conclusion, not all the work has to be done by a stand-alone (applicable) rule (the rule
witnessing Condition (2.2)): all the applicable rules for the same conclusion (may) contribute to
defeating applicable rules for the opposite conclusion. Both s and t may be defeaters.
Below we present the proof conditions for obligations.</p>
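The positive constitutive condition just discussed can be sketched as a procedure. The sketch below assumes applicability has been precomputed (a flag per rule) and elides the defeasible/defeater distinction; all names are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the +C proof condition (Definition 7): l is proved if it is a
# fact, or its complement is not a fact, some applicable rule supports l,
# and every applicable rule for ~l is beaten by some applicable rule for l
# (team defeat: the beater need not be the supporting rule).

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def provable_C(lit, facts, rules, superior):
    """rules: (name, head, applicable) triples; superior: (winner, loser) pairs."""
    if lit in facts:                                   # Clause (1)
        return True
    if neg(lit) in facts:                              # Clause (2.1)
        return False
    supporting = [n for n, head, appl in rules if head == lit and appl]
    if not supporting:                                 # Clause (2.2)
        return False
    for s, head, appl in rules:                        # Clause (2.3)
        if head == neg(lit) and appl:
            if not any((t, s) in superior for t in supporting):
                return False
    return True

# r1 supports p, s1 supports ~p, and r1 is superior to s1, so p is proved.
rules = [('r1', 'p', True), ('s1', '~p', True)]
print(provable_C('p', set(), rules, {('r1', 's1')}))  # True
print(provable_C('p', set(), rules, set()))           # False: s1 is unbeaten
```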
      <p>Definition 8 (Obligation Proof Conditions).</p>
      <p>
        +Ol: If P(n + 1) = +Ol then
∃r ∈ R⇒O[l, i] s.t.
(1) r is applicable at index i and
(2) ∀s ∈ RO[∼l, j] ∪ RP[∼l] either
    (1) s is discarded (at index j), or
    (2) ∃t ∈ RO[l, k] s.t.
        (1) t is applicable at index k and
        (2) t &gt; s.
      </p>
      <p>
        −Ol: If P(n + 1) = −Ol then
∀r ∈ R⇒O[l, i] either
(1) r is discarded at index i, or
(2) ∃s ∈ RO[∼l, j] ∪ RP[∼l] s.t.
    (1) s is applicable (at index j), and
    (2) ∀t ∈ RO[l, k] either
        (1) t is discarded at index k, or
        (2) t ̸&gt; s.
      </p>
      <p>
        Note that (i) in Condition (2), s can be a permission rule, because explicit, opposite permissions represent
exceptions to obligations, whereas t (Condition 2.2) must be an obligation rule, since a permission
rule cannot reinstate an obligation; and that (ii) l may appear at different positions (indices i, j,
and k) within the three ⊗-chains. Below, we introduce the proof conditions for permissions.
      </p>
      <p>Definition 9 (Permission Proof Conditions).</p>
      <p>
        +Pl: If P(n + 1) = +Pl then
(1) +Ol ∈ P(1..n), or
(2) ∃r ∈ R⇒P[l] s.t.
    (1) r is appl. and
    (2) ∀s ∈ RO[∼l, j] either
        (1) s is disc. at index j, or
        (2) ∃t ∈ RP[l] ∪ RO[l, k] s.t.
            (1) t is appl. (at index k) and
            (2) t &gt; s.
      </p>
      <p>
        −Pl: If P(n + 1) = −Pl then
(1) −Ol ∈ P(1..n), and
(2) ∀r ∈ R⇒P[l] either
    (1) r is disc., or
    (2) ∃s ∈ RO[∼l, j] s.t.
        (1) s is appl. at index j and
        (2) ∀t ∈ RP[l] ∪ RO[l, k] either
            (1) t is disc. (at index k), or
            (2) t ̸&gt; s.
      </p>
      <p>
        Condition (1) allows us to derive a permission from the corresponding obligation; thus it
corresponds to the O → P axiom of Deontic Logic. Condition (2.2) considers as possible
counter-arguments only obligation rules, since situations where both Pl and P¬l hold are allowed.
We refer readers interested in a deeper discussion of how to model permissions and
obligations in DDL to [15].
      </p>
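The two ways a permission can be established, from the corresponding obligation or from a surviving permissive rule, can be sketched as follows. The data layout (rule names and kinds, applicability precomputed, discarded opposers already filtered out) is our assumption:

```python
# Sketch of the +P proof condition (Definition 9). rules_for: applicable
# rules for l as (name, kind), with kind 'P' (permissive) or 'O'
# (prescriptive); against: names of applicable obligation rules for ~l;
# superior: (winner, loser) pairs of the superiority relation.

def provable_P(lit, derived, rules_for, against, superior):
    if ('+', 'O', lit) in derived:                    # Clause (1): Ol yields Pl
        return True
    if not any(kind == 'P' for _, kind in rules_for):
        return False                                  # Clause (2): a permissive rule is needed
    # Clause (2.2): every applicable opposing obligation rule is beaten by
    # some applicable rule for l, permissive or prescriptive.
    return all(any((t, s) in superior for t, _ in rules_for) for s in against)

print(provable_P('p', {('+', 'O', 'p')}, [], [], set()))          # True via Clause (1)
print(provable_P('p', set(), [('r', 'P')], ['s'], {('r', 's')}))  # True via Clause (2)
print(provable_P('p', set(), [('r', 'P')], ['s'], set()))         # False: s is unbeaten
```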
      <p>The set of positive and negative conclusions of a theory is called its extension. The extension of
a theory is computed based on the literals that appear in it; more precisely, the literals in the
Herbrand Base of the theory HB(D) = {l, ∼l ∈ PLit | l appears in D}.</p>
      <p>Definition 10 (Extension). Given a DDT D, we define the extension of D as E(D) =
(+C, −C, +O, −O, +P, −P), where ±□ = {l ∈ HB(D) | D ⊢ ±□l}, with □ ∈ {C, O, P}.</p>
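An extension can be computed by iterating the proof conditions to a fixpoint. The sketch below covers only the +C component, under the simplifying assumption of a conflict-free theory (no competing rules, no superiority); it is purely our own illustration:

```python
# Minimal fixpoint for the +C part of an extension (Definition 10):
# conclusions are added until nothing new is derivable.

def extension_plus_C(facts, rules):
    """rules: (antecedent_set, head) pairs; returns the set of +C conclusions."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # A rule fires once its whole body has been derived.
            if body.issubset(derived) and head not in derived:
                derived.add(head)
                changed = True
    return derived

rules = [({'a'}, 'b'), ({'b'}, 'c'), ({'z'}, 'd')]
print(sorted(extension_plus_C({'a'}, rules)))  # ['a', 'b', 'c']
```

A full implementation would interleave all six components, since obligation and permission conclusions depend on constitutive ones (and, through ⊗-chains, on each other).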
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and further extensions</title>
      <p>While expressing some concerns about, and a plan to correct, macroscopic limitations of current simulation
platforms, in particular GAMA, we maintain that defining a credible system to actually simulate the effects
on a society of the introduction of a new norm is a worthwhile enterprise. The discussion we provided on
the notion of the class to which a particular individual of the MAS belongs (while classes do not constitute
a partition) is devised as a support for the usage of the rules in the deontic counterpart of the simulator.
Once the system is completely built, we shall have three specific strengths that differentiate our approach
from the other approaches already discussed in the simulation literature for agents’ modelling that are
inspired by the GAMA framework:
• We consider the class hierarchy described in terms of transitions, the results of
computing the extension of the constitutive rules as devised above. There is no such
thing as an a priori class, but only classes built as the result of a set of rules;
• Probabilistic and utility-driven models pertain to the agents’ definition, not to the normative
background, which is solely prescriptive;
• There is a system of negative feedback, able to re-devise the properties of the class
hierarchy, the probabilities, and the consequent behaviors of the agents.</p>
      <p>We followed here the concepts expressed in the research by Riveret et al. [20, 21], which was
discussed further by Governatori et al. [9].</p>
      <p>Finally, the flexibility of defeasible deontic logic would allow a further extension to include
in the model not only compliance with the law, but also ethical and personal behaviors.</p>
      <p>[6] D. Nute, Defeasible logic, in: Handbook of Logic in Artificial Intelligence and Logic
Programming, Oxford University Press, 1987.
[7] G. Antoniou, D. Billington, G. Governatori, M. J. Maher, Representation results for
defeasible logic, ACM Trans. Comput. Log. (2001) 255–287. doi:10.1145/371316.371517.
[8] K. Kravari, N. Bassiliades, A survey of agent platforms, Journal of Artificial Societies and
Social Simulation (2015) 11. doi:10.18564/jasss.2661.
[9] G. Governatori, F. Olivieri, S. Scannapieco, A. Rotolo, M. Cristani, The rationale behind
the concept of goal, Theory Pract. Log. Program. (2016) 296–324. doi:10.1017/S1471068416000053.
[10] M. Dastani, G. Governatori, A. Rotolo, L. van der Torre, Programming cognitive agents in
defeasible logic, in: LPAR 2005, Montego Bay, Jamaica, LNAI, Springer, 2005, pp. 621–636.
[11] G. Governatori, A. Rotolo, Changing legal systems: Legal abrogations and annulments in
defeasible logic, Logic Journal of the IGPL (2009) 157–194. doi:10.1093/jigpal/jzp075.
[12] M. Cristani, F. Olivieri, A. Rotolo, Changes to temporary norms, in: ICAIL 2017, 2017,
pp. 39–48. doi:10.1145/3086512.3086517.
[13] G. Governatori, F. Olivieri, S. Scannapieco, M. Cristani, Designing for compliance:
Norms and goals, in: RuleML 2011, LNCS, Springer, 2011, pp. 282–297. doi:10.1007/978-3-642-24908-2_29.
[14] F. Olivieri, M. Cristani, G. Governatori, Compliant business processes with exclusive
choices from agent specification, LNCS (2015) 603–612.
[15] G. Governatori, F. Olivieri, A. Rotolo, S. Scannapieco, Computing strong and weak
permissions in defeasible logic, J. Philos. Log. (2013) 799–829. doi:10.1007/s10992-013-9295-1.
[16] G. Governatori, A. Rotolo, G. Sartor, Logic and the law: Philosophical foundations, deontics,
and defeasible reasoning, in: D. Gabbay, J. Horty, X. Parent, R. van der Meyden, L. van der
Torre (Eds.), Handbook of Deontic Logic and Normative Systems, College Publications,
London, 2021, pp. 657–764.
[17] G. Governatori, A. Rotolo, Logic of violations: A Gentzen system for reasoning with
contrary-to-duty obligations, Australasian Journal of Logic (2006) 193–215. URL: http://ojs.victoria.ac.nz/ajl/article/view/1780.
[18] G. Governatori, Burden of compliance and burden of violations, in: A. Rotolo (Ed.), 28th
Annual Conference on Legal Knowledge and Information Systems, Frontiers in AI and
Applications, IOS Press, Amsterdam, 2015, pp. 31–40.
[19] G. Governatori, V. Padmanabhan, A. Rotolo, A. Sattar, A defeasible logic for modelling
policy-based intentions and motivational attitudes, Log. J. IGPL (2009) 227–265. doi:10.1093/jigpal/jzp006.
[20] R. Riveret, A. Rotolo, G. Sartor, Probabilistic rule-based argumentation for
norm-governed learning agents, Artificial Intelligence and Law (2012) 383–420. doi:10.1007/s10506-012-9134-7.
[21] R. Riveret, G. Contissa, A. Rotolo, J. Pitt, Law enforcement in norm-governed learning
agents, in: AAMAS 2013, 2013, pp. 1151–1152.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pasetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tubini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Veronese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Villa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zorzi</surname>
          </string-name>
          ,
          <article-title>The architecture of a reasoning system for defeasible deontic logic</article-title>
          , in: Procedia Computer Science,
          <year>2023</year>
          , pp.
          <fpage>4214</fpage>
          -
          <lpage>4224</lpage>
          . doi:10.1016/j.procs.2023.10.418.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cristani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Olivieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pasetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tubini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Veronese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Villa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zorzi</surname>
          </string-name>
          ,
          <article-title>Houdini (unchained): an effective reasoner for defeasible logic</article-title>
          ,
          <source>in: CEUR Workshop Proceedings</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Taillandier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-A.</given-names>
            <surname>Vo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Amouroux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drogoul</surname>
          </string-name>
          ,
          <article-title>Gama: A simulation platform that integrates geographical information data, agent-based modeling and multi-scale control</article-title>
          ,
          <source>in: LNCS</source>
          ,
          <year>2012</year>
          , p.
          <fpage>242</fpage>
          -
          <lpage>258</lpage>
          . doi:10.1007/978-3-642-25920-3_17.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Drogoul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Amouroux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Caillou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gaudou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Grignard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Marilleau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Taillandier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vavasseur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.-A.</given-names>
            <surname>Vo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-D.</given-names>
            <surname>Zucker</surname>
          </string-name>
          ,
          <article-title>Gama: Multi-level and complex environment for agent-based models and simulations</article-title>
          ,
          <source>in: AAMAS</source>
          <year>2013</year>
          ,
          <year>2013</year>
          , p.
          <fpage>1361</fpage>
          -
          <lpage>1362</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Amouroux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-Q.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Boucher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drogoul</surname>
          </string-name>
          ,
          <article-title>Gama: An environment for implementing and running spatially explicit multi-agent simulations</article-title>
          ,
          <source>in: LNCS</source>
          ,
          <year>2009</year>
          , p.
          <fpage>359</fpage>
          -
          <lpage>371</lpage>
          . doi:10.1007/978-3-642-01639-4_32.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>