=Paper=
{{Paper
|id=Vol-3735/paper_18
|storemode=property
|title=Simulating the Law in a Multi-Agent System
|pdfUrl=https://ceur-ws.org/Vol-3735/paper_18.pdf
|volume=Vol-3735
|authors=Matteo Cristani,Francesco Olivieri,Guido Governatori,Gabriele Buriola
|dblpUrl=https://dblp.org/rec/conf/woa/CristaniOGB24
}}
==Simulating the Law in a Multi-Agent System==
Matteo Cristani1,*,†, Francesco Olivieri2, Guido Governatori3 and Gabriele Buriola1,†
1 Dept. of Computer Science, University of Verona, Verona, Italy
2 Independent researcher, Brisbane, Australia
3 Artificial Intelligence and Cyber Futures Institute, Charles Sturt University, Bathurst, Australia
Abstract
In this paper we define a Multi-Agent System (MAS) able to simulate an artificial society paired with a normative background. The purpose of this architecture is the simulation of a law in order to assess its impact. We analyse an existing architecture (GAMA) that has already been used for simulating MAS with BDI agents. GAMA technology alone, in fact, is insufficient to guarantee certain validity properties and actual computational effectiveness, which could instead be provided if we let the rule system interpret Defeasible Deontic Logic, a logical framework that satisfies the aforementioned properties. As a first step of an experimental endeavour aiming at law simulation by design, we provide here a theoretical model of the MAS which simulates the society.
Keywords
Defeasible Deontic Logic, Multiple Agent Systems, Law Simulation
1. Introduction
Simulating collective behaviour has become a challenging topic in recent years. A rising demand emerges from a variety of contexts, including the production of legislation at large, since the simulation paradigm can be useful for determining the actual effects of introducing a new norm in a given society. Therefore, the drafters, namely those in charge of producing a norm, whether designing a new norm for the general population, providing a normative background for the actual domain of a restricted society (a company, an association), or governing direct multiparty relationships (as in contracts), find it helpful to apply the norms to a simulated society in order to evaluate the impact the new norm has.
We then need a method to describe a society in a way that generates a cycle of multi-agent system evolution steps able to support the impact evaluation mentioned above.
This concept is part of a series of investigations conducted with the purpose of developing a complex system for law evaluation at diverse points of the production process: by design (as in the current investigation), after enforcement, and in the application phase. The
WOA 2024: 25th Workshop "From Objects to Agents", July 8-10, 2024, Forte di Bard (AO), Italy
* Corresponding author.
† These authors contributed equally.
matteo.cristani@univr.it (M. Cristani); francesco.olivieri.phd@gmail.com (F. Olivieri); ggovernatori@csu.edu.au (G. Governatori); gabriele.buriola@univr.it (G. Buriola)
ORCID: 0000-0001-5680-0080 (M. Cristani); 0000-0003-0838-9850 (F. Olivieri); 0000-0002-9878-2762 (G. Governatori); 0000-0002-1612-0985 (G. Buriola)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
[Figure 1 diagram: the components GAML++, etGAMA, LegalRuleML and Houdini/Houdini 2.0 connect the phases of the pipeline (MAS Simulation - construction, Normative background - specification, MAS Simulation - execution, Consequence generation - state of affairs, Consequence generation - MAS configuration), with norm changes feeding back into the cycle.]
Figure 1: A general schema of the architecture we devise for the long-term aim investigation on law
simulation.
schema of the application system is illustrated in Figure 1. There are essentially five phases of
the application:
1. MAS Simulation - construction: a society is generated, possibly based on real-world
data such as sociological evidence.
2. Normative background - specification: the existing normative background of the
society simulated in Phase 1 is defined within an engine for Legal Reasoning, in particular,
for the implementation of the aforementioned projects, we use the DDL reasoner Houdini
[1, 2].
3. MAS Simulation - execution: the MAS system defined in Phase 1 is run and, technically, generates sets of facts for a Deontic Defeasible Theory implemented to capture the normative background, as described in Phase 2.
4. Consequence generation - state of affairs: the DDL system generates the simulated
effects of the applications of the Normative Background towards the MAS system. This
changes the current state of affairs of the MAS.
5. Consequence generation - MAS configuration: the MAS system employs a negative feedback à la Rosenblueth, Wiener and Bigelow to resettle the MAS while reconfiguring the parameters at two levels of feedback:
• As a consequence of the normative changes, by modifying the behavioural parameters
of the agents (for instance when an action which is permitted becomes forbidden
the probability of individuals willing to do that action may decrease);
• As a consequence of the application of the law, because some agents could have been punished, and therefore may have been limited in their permissions, or subjected to obligations and prohibitions they did not have before.
The above-mentioned sequence is repeated in cycles, with occasional events of actions performed in the MAS and rules changed in the normative background. We leave the evaluation part for further investigation.
A key aspect in modeling human behavior with respect to law compliance is the nondeterministic component of actions. The stochastic nature of reality shows up in at least two moments: the actual violation of a norm by a person, and the chance that this violation is discovered and prosecuted by the authorities. The former phenomenon is formalized in the model discussed here through an equation (1) pairing the expected utility of violating a norm, compared with that of respecting it, with the tendency to be compliant with the law, encoded in this model by a non-negative real number. The latter would call for a probabilistic application of DDL rules and, given its non-trivial formalization, is postponed to a further and more specific paper.
The research plan is therefore as follows:
• Build the theoretical model that is documented in the current paper.
• Implement it within an actual simulation system. In this case we have chosen to employ GAMA [3, 4, 5], which has also been shown to adapt to the context of simulating ethically relevant behaviours [3].
• Develop an extension to the markup language GAML++ that expresses the aspects we
discuss in this paper, and correspondingly extend GAMA to etGAMA in order to allow
simulation of the law.
• Extend the Houdini technology to allow the specific management of law changes, as well as the utility functions we shall discuss in this paper.
• Build the whole system, where the communication components are managed via JSON/jQuery components in order to align with the current Java implementations of both GAMA and Houdini.
With this research plan in agenda, we now deal with the theoretical model, in the rest of this
paper.
As for the structure: Sec. 2 presents the population model, in particular how it changes over time; Sec. 3 presents the salient characteristics we focus on, such as age, gender and job; Sec. 4 is dedicated to modeling law compliance in this in vitro society; finally, Sec. 5 summarizes the whole paper.
2. Population model
In this section we provide the population model. First of all, as for time, we adopt a discrete time 𝑇 starting with 𝑡 = 0 and indexed by natural numbers, which may be interpreted as years. Let P be the countably infinite set of all the people who may, at some point, be part of the population model. At any moment, which in our discrete time means a specific year, the population of the model is given by a finite subset of P; the function 𝒫 : 𝑇 → 𝒫(P)¹ associates to every year 𝑡 a finite subset of P, the current population 𝒫(𝑡) in that year, denoted by 𝒫𝑡. The starting population 𝒫0, as well as its characteristics (see Sec. 3), is given; whereas the population in 𝑡 + 1, i.e. 𝒫𝑡+1, depends on four factors: births, deaths, immigration and emigration. Births
are given by a birth function 𝐵 : 𝑇 ∖ {0} → 𝒫(P) which, for every year 𝑡 except the first one,
selects the finite subset of the new born denoted by 𝒫𝑡0 ; the function 𝐵 satisfies reasonable
constraints related to births, in particular the following two:
• if 𝑡 ̸= 𝑡′, then 𝐵(𝑡) ∩ 𝐵(𝑡′) = ∅, i.e. every person is born at most once;
• |𝒫^0_{𝑡+1}| = 𝐵𝑟_𝑡 · |𝒫^{𝑎𝑑𝑢𝑙𝑡}_𝑡|, where 𝒫^{𝑎𝑑𝑢𝑙𝑡}_𝑡 is the adult population at time 𝑡 (see Sec. 3) and 𝐵𝑟_𝑡 is the birth rate in the year 𝑡; namely, the number of newborns depends on the number of adults via a coefficient.
Moreover, the birth function establishes the gender of the newborns; namely, there is a function 𝐵𝑔𝑒𝑛 : 𝒫^0_𝑡 → {𝑚𝑎𝑙𝑒, 𝑓𝑒𝑚𝑎𝑙𝑒} assigning to each newborn in 𝒫^0_𝑡 its gender; see below for more details.
Deaths are modeled simply by assuming that every person lives exactly 80 years. Thus, if 𝒫^{80}_𝑡 denotes the subset of 𝒫𝑡 of people of age 80, then moving from 𝒫𝑡 to 𝒫𝑡+1 we simply remove 𝒫^{80}_𝑡; see later for how age is encoded in the model.
Immigration and emigration are treated similarly, namely we have two functions 𝐼𝑚 : 𝑇 →
𝒫(P) and 𝐸𝑚 : 𝑇 → 𝒫(P) selecting for every year 𝑡 who is immigrating, 𝐼𝑚𝑡 , and who is
emigrating, 𝐸𝑚𝑡 . As before we have some constraints for these functions, in particular:
• 𝐸𝑚𝑡+1 ⊆ 𝒫𝑡 , only people in the current population can emigrate the next year;
• 𝐼𝑚𝑡 ∩ 𝐸𝑚𝑡 = ∅, a person cannot immigrate and emigrate in the same year.
Moreover, the immigration function 𝐼𝑚 also establishes the age and the gender of the immigrants; namely, there are two functions 𝐼𝑚𝐴𝑔𝑒 : 𝐼𝑚𝑡 → {1, . . . , 80} and 𝐼𝑚𝐺𝑒𝑛 : 𝐼𝑚𝑡 → {𝑚𝑎𝑙𝑒, 𝑓𝑒𝑚𝑎𝑙𝑒} assigning to each person in 𝐼𝑚𝑡 their age and gender; see below for more details.
All in all, with respect to 𝑇, the population satisfies the following equation:

𝒫_{𝑡+1} = (𝒫_𝑡 ∖ 𝒫^{80}_𝑡 ∖ 𝐸𝑚_{𝑡+1}) ∪ 𝒫^0_{𝑡+1} ∪ 𝐼𝑚_{𝑡+1}.
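The population update equation can be sketched with plain set operations. The following minimal Python fragment is our illustration, assuming each person is represented by an identifier and each set in the equation by a Python set (all names are hypothetical, not part of the paper's implementation):

```python
def next_population(current, in_eighties, emigrants, newborns, immigrants):
    """P_{t+1} = (P_t \\ P_t^80 \\ Em_{t+1}) ∪ P_{t+1}^0 ∪ Im_{t+1}."""
    # remove the deceased and the emigrants, then add newborns and immigrants
    return (current - in_eighties - emigrants) | newborns | immigrants

# e.g. person 4 dies, person 3 emigrates, 5 is born and 6 immigrates
print(next_population({1, 2, 3, 4}, {4}, {3}, {5}, {6}))  # {1, 2, 5, 6}
```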
3. Individual Characteristics
Every person 𝑝 ∈ P has some individual characteristics, such as age, gender and so on. In this
paper the following are considered:
¹ 𝒫(P) denotes the powerset of P.
1. Age: this is a natural number between 1 and 80.
2. Age status: there are three age statuses, young, adult and elderly, depending on the age.
3. Gender: this preliminary model has two genders, male and female, which are fixed from birth.
4. Marital status: there are three marital statuses, bachelor, married and divorced.2
5. Job status: there are four job statuses, unemployed, public job, private job and retired.
6. Ethical tendency (also called, in places, inclination): there are three tendencies, legalist, neutral and opportunist; related to the tendency there is also a parameter 𝛼, see later for more details.
We see these characteristics as sets, e.g. Age = {1, 2, . . . , 80} and Age status = {𝑦𝑜𝑢𝑛𝑔, 𝑎𝑑𝑢𝑙𝑡, 𝑒𝑙𝑑𝑒𝑟𝑙𝑦}, presenting them one by one after some general considerations.
Excluding the ethical tendency, which is devoted to modeling law compliance, the other statuses have been chosen since they represent three of the main social categories producing common and widespread rights and duties (e.g. age for penal responsibility), as well as three of the main characteristics used in demographic and social studies.
Except for gender, all the statuses may change over time; thus each status depends on both the person 𝑝 ∈ 𝒫𝑡 and the current year 𝑡 ∈ 𝑇. Let ℒ = 𝐴𝑔𝑒 × 𝐴𝑔𝑒 𝑠𝑡𝑎𝑡𝑢𝑠 × 𝐺𝑒𝑛𝑑𝑒𝑟 × 𝑀𝑎𝑟𝑖𝑡𝑎𝑙 𝑠𝑡𝑎𝑡𝑢𝑠 × 𝐽𝑜𝑏 𝑠𝑡𝑎𝑡𝑢𝑠 × 𝐸𝑡ℎ𝑖𝑐𝑎𝑙 𝑡𝑒𝑛𝑑𝑒𝑛𝑐𝑦 be the set of all possible status arrays; then for every year 𝑡 ∈ 𝑇 there is a function ℓ𝑡 : 𝒫𝑡 → ℒ which associates to each person their status. Moreover, ℓ has, as a function, different components, one for each status, and, for the sake of readability, the function determining each status is denoted by an abbreviation of the status itself; thus, for a person 𝑝 and a given year 𝑡, ℓ𝑡(𝑝) = (𝐴𝑔𝑒𝑡(𝑝), 𝐴𝑔𝑒𝑆𝑡𝑎𝑡𝑡(𝑝), 𝐺𝑒𝑛𝑑𝑒𝑟(𝑝), 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡(𝑝), 𝐽𝑜𝑏𝑡(𝑝), 𝐿𝑒𝑔𝑎𝑙𝑡(𝑝)). We now consider each status, presenting how it may change over time.
Age
The age function 𝐴𝑔𝑒𝑡 : 𝒫𝑡 → {1, . . . , 80} assigns to every person 𝑝 in the current population 𝒫𝑡 their age 𝐴𝑔𝑒𝑡(𝑝). Obviously there is a strict connection between 𝐴𝑔𝑒𝑡 and 𝐴𝑔𝑒𝑡+1; more precisely, the latter has the following definition:
𝐴𝑔𝑒_{𝑡+1}(𝑝) :=
  1                   if 𝑝 ∈ 𝒫^0_{𝑡+1},
  𝐴𝑔𝑒_𝑡(𝑝) + 1        if 𝑝 ∈ 𝒫_𝑡,
  𝐼𝑚𝐴𝑔𝑒_{𝑡+1}(𝑝)      if 𝑝 ∈ 𝐼𝑚_{𝑡+1}.
The age function 𝐴𝑔𝑒0 : 𝒫0 → {1, . . . , 80} for the starting population 𝒫0 is given. As for notation, given 1 ⩽ 𝑛 ⩽ 80, we denote 𝒫^𝑛_𝑡 := {𝑝 ∈ 𝒫𝑡 | 𝐴𝑔𝑒𝑡(𝑝) = 𝑛}.
² For the sake of simplicity we join together divorced and widowed, and use bachelor for both males and females.
Age status
The age status of a person 𝑝 ∈ 𝒫𝑡 depends only on the current age of 𝑝; more precisely:

𝐴𝑔𝑒𝑆𝑡𝑎𝑡_𝑡(𝑝) :=
  𝑦𝑜𝑢𝑛𝑔      if 1 ⩽ 𝐴𝑔𝑒_𝑡(𝑝) ⩽ 20,
  𝑎𝑑𝑢𝑙𝑡      if 21 ⩽ 𝐴𝑔𝑒_𝑡(𝑝) ⩽ 60,
  𝑒𝑙𝑑𝑒𝑟𝑙𝑦    if 61 ⩽ 𝐴𝑔𝑒_𝑡(𝑝) ⩽ 80.
We adopt the following notation 𝒫𝑡𝑎𝑑𝑢𝑙𝑡 := {𝑝 ∈ 𝒫𝑡 | 𝐴𝑔𝑒𝑆𝑡𝑎𝑡𝑡 (𝑝) = 𝑎𝑑𝑢𝑙𝑡} and similarly
for 𝒫𝑡𝑦𝑜𝑢𝑛𝑔 and 𝒫𝑡𝑒𝑙𝑑𝑒𝑟𝑙𝑦 . Other characteristics or properties may depend on the age status, in
particular:
• the transition functions (see below) from one year to the next one for Marital status and
Job status;
• the number of newborns in the next year, |𝒫^0_{𝑡+1}|, depends on the current number of adult people, |𝒫^{𝑎𝑑𝑢𝑙𝑡}_𝑡|.
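As a concrete reading of the age-status definition, the following short Python sketch (ours, not the paper's implementation) maps an age to its status:

```python
def age_status(age: int) -> str:
    """Age status as defined above: young 1-20, adult 21-60, elderly 61-80."""
    if not 1 <= age <= 80:
        raise ValueError("age outside the model range 1..80")
    if age <= 20:
        return "young"
    if age <= 60:
        return "adult"
    return "elderly"
```

The set 𝒫^{𝑎𝑑𝑢𝑙𝑡}_𝑡 can then be computed by filtering the current population on `age_status`.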
Gender
Since gender is fixed, it depends only on the initial conditions for the people in 𝒫0, on the birth gender function 𝐵𝑔𝑒𝑛 for newborns, and on the immigrant gender function 𝐼𝑚𝐺𝑒𝑛 for those who immigrate.
Marital status
There are three marital statuses: bachelor, married, divorced. Young people, i.e. people whose current age is less than 21, have only one marital status available, namely bachelor; thus
𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡(𝑝) = 𝑏𝑎𝑐ℎ𝑒𝑙𝑜𝑟
for all 𝑝 ∈ 𝒫^{𝑦𝑜𝑢𝑛𝑔}_𝑡.
In order to model the marital status over the years, as well as the job status, for adult and elderly people (who may have statuses other than bachelor), we use Markov chains; more precisely, different Markov chains depending on the age status. Except for people who have just become adults, i.e. 21-year-old people, who are automatically bachelors, and people who have just become elderly, i.e. 61-year-old people, who preserve their previous status,3 the transitions between these statuses are given by a stochastic process with different probabilities for adult and elderly people. Denoting by 𝐵, 𝑀, 𝐷 respectively bachelor, married and divorced, and setting to 99%, 95%, 5%, 1% the probabilities involved (which have been arbitrarily chosen for this paper but, being editable parameters, could be instantiated with real values coming from statistical investigations), the transition schemes for adults and elderly people are the following:
This means that if at time 𝑡 a 35-year-old adult is a bachelor, there is a 95% chance that he is still a bachelor at 𝑡 + 1 and a 5% chance that he gets married. If the current population is sufficiently
³ Formally, this means that if 𝐴𝑔𝑒𝑡(𝑝) = 21 then 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡(𝑝) = 𝑏𝑎𝑐ℎ𝑒𝑙𝑜𝑟, and if 𝐴𝑔𝑒𝑡+1(𝑝) = 61, then 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡+1(𝑝) = 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡(𝑝).
[Diagram: for adults, the states 𝐵, 𝑀, 𝐷 each have a 95% self-loop, with 5% transitions between states; for elderly people, the self-loops are 99% and the transitions 1%.]
Figure 2: Adults and elderly people marital status transition diagrams.
large, these probabilities can be seen as the fractions of the current population changing their status; namely, every year 95% of married adults remain married whereas 5% divorce.
Moreover, this setting allows for an easy introduction of further constraints. For example, if there is a waiting period of at least 𝑛 years between the declaration of divorce and a subsequent marriage, this can be encoded in the marital status in the following way: if 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡(𝑝) = 𝑚𝑎𝑟𝑟𝑖𝑒𝑑 and 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡+1(𝑝) = 𝑑𝑖𝑣𝑜𝑟𝑐𝑒𝑑, then 𝑀𝑎𝑟𝑆𝑡𝑎𝑡𝑡+1+𝑖(𝑝) = 𝑑𝑖𝑣𝑜𝑟𝑐𝑒𝑑 for every 𝑖 ∈ {1, . . . , 𝑛}.
As before, the marital function 𝑀 𝑎𝑟𝑆𝑡𝑎𝑡0 for the initial population 𝒫0 as well as the marital
status of immigrants are given.
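The adult marital-status chain can be sketched in a few lines of Python. This is our illustration: the transition probabilities are the paper's editable placeholder values, and the target of the 5% arrow leaving the divorced state is assumed to be married (remarriage):

```python
import random

# Adult chain: 95% self-loop on each state; 5% B→M, M→D, and (assumed) D→M.
ADULT_TRANSITIONS = {
    "bachelor": [("bachelor", 0.95), ("married", 0.05)],
    "married":  [("married", 0.95), ("divorced", 0.05)],
    "divorced": [("divorced", 0.95), ("married", 0.05)],
}

def next_marital_status(status, transitions, rng=random):
    """Sample next year's status from the chain's transition row for `status`."""
    states, weights = zip(*transitions[status])
    return rng.choices(states, weights=weights)[0]

# One simulated year for a married adult:
print(next_marital_status("married", ADULT_TRANSITIONS, random.Random(42)))
```

The elderly chain would be identical with 99%/1% weights; the waiting-period constraint above can be added by forcing the divorced state for 𝑛 further steps after a divorce.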
Job status
There are four job statuses: unemployed, public job, private job and retired. Overall, the treatment of the job status is similar to that of the marital status; for example, young people and 21-year-old people have only one status available, namely unemployed. The only difference, except for the probabilities of the transition functions, is that adults and elderly people have two different sets of available statuses. More precisely, adults can have one among unemployed, public job and private job, whereas elderly people one among retired, public job and private job. During the transition between adulthood and old age the status remains unchanged, except for unemployed people, who become retired; formally, if 𝐴𝑔𝑒𝑡+1(𝑝) = 61 then:
𝐽𝑜𝑏_{𝑡+1}(𝑝) :=
  𝑟𝑒𝑡𝑖𝑟𝑒𝑑    if 𝐽𝑜𝑏_𝑡(𝑝) = 𝑢𝑛𝑒𝑚𝑝𝑙𝑜𝑦𝑒𝑑,
  𝑝𝑢𝑏𝑙𝑖𝑐     if 𝐽𝑜𝑏_𝑡(𝑝) = 𝑝𝑢𝑏𝑙𝑖𝑐,
  𝑝𝑟𝑖𝑣𝑎𝑡𝑒    if 𝐽𝑜𝑏_𝑡(𝑝) = 𝑝𝑟𝑖𝑣𝑎𝑡𝑒.
Moreover, as can be seen from the transition function for elderly people, retirement is irreversible; namely, if a person retires, then from that year on their status will always be retired. Abbreviating by 𝑈𝑛, 𝑅𝑒𝑡, 𝑃𝑢𝑏, 𝑃𝑟𝑣 respectively unemployed, retired, public job and private job, the transition functions are as follows:
As before, the job function 𝐽𝑜𝑏0 for the initial population 𝒫0 as well as the job status of
immigrants are given.
Ethical tendency
Ethical tendency aims to formalize the behavior of people with respect to the violation of law.
The main idea is that the violation of a norm by a person depends on the expected utility of that
[Diagram: job-status transition probabilities for adults (states 𝑈𝑛, 𝑃𝑢𝑏, 𝑃𝑟𝑣) and for elderly people (states 𝑅𝑒𝑡, 𝑃𝑢𝑏, 𝑃𝑟𝑣), where 𝑅𝑒𝑡 has a 100% self-loop.]
Figure 3: Adults and elderly people job transition diagram.
person violating the norm, compared with the expected utility of respecting the norm, together with the ethical tendency of that person, i.e. their general propensity to respect the law. Let 𝜂 be a norm and 𝑝 a person; if we denote by 𝜇𝑝(𝜂+) the expected utility of 𝑝 in complying with 𝜂 and by 𝜇𝑝(𝜂−) the expected utility of 𝑝 in violating 𝜂, then the probability of 𝑝 violating 𝜂 is given by:

𝑃(𝑝, 𝜂) :=
  0                 if 𝜇𝑝(𝜂+) ⩾ 𝜇𝑝(𝜂−),            (1)
  1 − 𝑒^{−𝛼𝑝 𝑘}     if 𝑘 = 𝜇𝑝(𝜂−) − 𝜇𝑝(𝜂+) > 0.
𝛼𝑝 is a parameter representing the tendency of 𝑝 to violate the law; there are three cases, corresponding to the three Ethical tendency statuses:
• 𝛼𝑝 = 0: in this case 𝑝 does not violate the law, no matter how high the expected utility of the violation is; in this case 𝑝 is a legalist.
• 0 < 𝛼𝑝 < +∞, in this case 𝑝 may violate 𝜂 if 𝜇𝑝 (𝜂 − ) > 𝜇𝑝 (𝜂 + ) and the probability
increases as 𝑘 = 𝜇𝑝 (𝜂 − ) − 𝜇𝑝 (𝜂 + ) increases; in this case 𝑝 is legally neutral.
• 𝛼𝑝 = +∞, in this case 𝑝 violates 𝜂 as soon as 𝜇𝑝 (𝜂 − ) > 𝜇𝑝 (𝜂 + ); in this case 𝑝 is an
opportunist.
For a quantitative estimation of the role of 𝛼𝑝: if 𝛼𝑝 = 1 and 𝑘 = 𝜇𝑝(𝜂−) − 𝜇𝑝(𝜂+) = 1, then there is a probability of 1 − 𝑒^{−1} ≃ 63% that 𝑝 violates 𝜂.
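Equation (1) and the three tendency cases can be checked with a short Python sketch (ours; `mu_plus` and `mu_minus` stand for 𝜇𝑝(𝜂+) and 𝜇𝑝(𝜂−)):

```python
import math

def violation_probability(mu_plus: float, mu_minus: float, alpha: float) -> float:
    """Equation (1): probability that p violates the norm, with alpha_p >= 0."""
    if mu_plus >= mu_minus:        # complying is at least as good: no violation
        return 0.0
    if math.isinf(alpha):          # opportunist: violates as soon as k > 0
        return 1.0
    k = mu_minus - mu_plus         # utility surplus of the violation
    return 1.0 - math.exp(-alpha * k)

print(violation_probability(0.0, 1.0, 1.0))           # 1 - e^-1 ≈ 0.632
print(violation_probability(0.0, 1.0, 0.0))           # legalist: 0.0
print(violation_probability(0.0, 1.0, float("inf")))  # opportunist: 1.0
```

Note that 𝛼𝑝 = 0 makes the second branch also return 0, so the legalist case is consistent with the formula itself.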
The Ethical tendency of a person 𝑝 may change over time according to a transition function; in this case we assign the same transition process to all three age statuses. Denoting by 𝐿, 𝑁, 𝑂 respectively legalist, neutral and opportunist, we adopt the following status transitions (again, the probabilities are editable parameters):
When the ethical tendency of a person 𝑝 becomes neutral, the transition function also assigns to 𝛼𝑝 a strictly positive real value. As before, the legal function 𝐿𝑒𝑔𝑎𝑙0 of the starting population 𝒫0, as well as the ones for newborns and immigrants, are given.4
4
Another possibility would be to assign to newborns legal tendencies that reflect the proportion of legal tendencies in the adult population or in the whole population.
[Diagram: the states 𝐿, 𝑁, 𝑂 each have a 90% self-loop, with 10% and 5% transitions between adjacent states.]
Figure 4: Young, adults and elderly people ethical tendencies transition diagram.
4. Law Compliance
The conceptualisation of law compliance is taken from the current literature on DDL and derived models. In particular, we presuppose that an agent freely brings about literals that are taken as feasible actions. On the other hand, when an agent performs a particular action for which, as we specified before, she has a specific tendency (a probability of doing that action), the legal system takes care of it. In particular, we assume that the normative background is actually constructed as a DDT, and feasible actions are literals of that theory.
Defeasible Logic [6, 7] is a simple, flexible, and efficient rule-based non-monotonic formalism. Its strength lies in its constructive proof theory, which allows it to draw meaningful conclusions from (potentially) conflicting and incomplete knowledge bases. In non-monotonic systems, more accurate conclusions can be obtained when more pieces of information become available.
Many variants of Defeasible Logic have been proposed for the logical modelling of different
application areas, specifically agents [8, 9, 10], legal reasoning [11, 12] and workflows from a
business process compliance perspective [13, 14].
In this research we focus on the Defeasible Deontic Logic (henceforth DDL) framework [15]
that allows us to determine what prescriptive behaviours are in force in a given situation. For
detailed descriptions of how to adopt DDL for legal reasoning we refer the reader to [16].
We start by defining the language of a Defeasible Deontic Theory (henceforth a DDT).
Let PROP be a set of propositional atoms, and Lab be a set of arbitrary labels (the names of
the rules). We use lower-case Roman letters to denote literals and lower-case Greek letters to
denote rules.
Accordingly, PLit = PROP ∪ {¬𝑙 | 𝑙 ∈ PROP} is the set of plain literals; the set of deontic
literals is ModLit = {□𝑙, ¬□𝑙 | 𝑙 ∈ PLit ∧ □ ∈ {O, P}} and, finally, the set of literals is
Lit = PLit ∪ ModLit. The complement of a literal 𝑙 is denoted by ∼𝑙: if 𝑙 is a positive literal
𝑝 then ∼𝑙 is ¬𝑝, and if 𝑙 is a negative literal ¬𝑝 then ∼𝑙 is 𝑝. We will not have specific rules
nor modality for prohibitions, as we will treat them according to the standard duality that
something is forbidden iff the opposite is obligatory (i.e., O¬𝑝).
Definition 1 (Defeasible Deontic Theory). A DDT 𝐷 is a triple (𝐹, 𝑅, >), where 𝐹 is the set
of facts, 𝑅 is the set of rules, and > is a binary relation over 𝑅 (called superiority relation).
Specifically, the set of facts 𝐹 ⊆ PLit denotes simple pieces of information that are always
considered to be true, like “Sylvester is a cat”, formally 𝑐𝑎𝑡(𝑆𝑦𝑙𝑣𝑒𝑠𝑡𝑒𝑟). In this paper, we
subscribe to the distinction between the notions of obligations and permissions, and that of
norms, where the norms in the system determine the obligations and permissions in force in a
normative system. A DDT is meant to represent a normative system, where the rules encode
the norms of the system, and the set of facts corresponds to a case. As we will see below, the
rules are used to conclude the institutional facts, obligations and permissions that hold in a case.
Accordingly, we do not admit obligations and permissions as facts of the theory.
The set of rules 𝑅 contains three types of rules: strict rules, defeasible rules, and defeaters.
Rules are also of two kinds:
• Constitutive rules (non-deontic rules) 𝑅C model constitutive statements (count-as rules);
• Deontic rules to model prescriptive behaviours, which are either obligation rules 𝑅O that
determine when and which obligations are in force, or permission rules which represent
strong (or explicit) permissions 𝑅P .
Lastly, > ⊆ 𝑅 × 𝑅 is the superiority (or preference) relation, which is used to solve conflicts in
case of potentially conflicting information.
A theory is finite if the sets of facts and rules are so. We only focus on finite theories.
A strict (constitutive) rule is a rule in the classical sense: whenever the premises are indis-
putable, so is the conclusion.
On the other hand, defeasible rules are to conclude statements that can be defeated by contrary
evidence. In contrast, defeaters are special rules whose only purpose is to prevent the derivation
of the opposite conclusion.
A prescriptive behaviour like "passing on a zebra crossing is not permitted when the traffic light for pedestrians is red" can be formalised via the general permissive rule
AtZebraCross ⇒P Pass
and the exception through the obligation rule
Pedestrian_traffic_light_red ⇒O ¬Pass.
Following the ideas of [17], obligation rules gain more expressiveness with the compensation
operator ⊗ for obligation rules, which is to model reparative chains of obligations. Intuitively,
𝑎 ⊗ 𝑏 means that 𝑎 is the primary obligation, but if for some reason we fail to obtain, to comply
with, 𝑎 (by either not being able to prove 𝑎, or by proving ∼𝑎) then 𝑏 becomes the new obligation
in force. This operator is used to build chains of preferences, called ⊗-expressions.
The formation rules for ⊗-expressions are as follows (i) every plain literal is an ⊗-expression
and (ii) if 𝐴 is an ⊗-expression and 𝑏 is a plain literal then 𝐴 ⊗ 𝑏 is an ⊗-expression [15].
In general, an ⊗-expression has the form '𝑐1 ⊗ 𝑐2 ⊗ · · · ⊗ 𝑐𝑚', and it appears as the consequent of a rule '𝐴(𝛼) ˓→O 𝐶(𝛼)' where 𝐶(𝛼) = 𝑐1 ⊗ 𝑐2 ⊗ · · · ⊗ 𝑐𝑚. The meaning of the ⊗-expression is: if the rule is allowed to draw its conclusion, then 𝑐1 is the obligation in force, and only when 𝑐1 is violated does 𝑐2 become the new obligation in force, and so on for the rest of the elements in the chain. In this setting, 𝑐𝑚 stands for the last chance to comply with the prescriptive behaviour enforced by 𝛼, and in case 𝑐𝑚 is violated as well, we end up in a non-compliant situation.
For instance, the previous prohibition to cross at a pedestrian crossing in case of a red light can foresee a compensatory fine, like
Pedestrian_traffic_light_red ⇒O ¬Pass ⊗ PayFine
which has to be paid in case someone crosses the pedestrian crossing when the light is red.
It is worth noticing that we admit ⊗-expressions with only one element. The intuition, in
this case, is that the obligatory condition does not admit compensatory measures or, in other
words, that it is impossible to recover from its violation.
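The reading of a reparative ⊗-chain can be sketched operationally: walk the chain and return the first non-violated element as the obligation in force, or report non-compliance if every element is violated. This small Python fragment is our illustration, not the logic's formal proof machinery:

```python
def obligation_in_force(chain, violated):
    """chain: the elements c_1, ..., c_m of an ⊗-expression, in order.
    violated: set of literals already violated.
    Returns the obligation currently in force, or None if non-compliant."""
    for c in chain:
        if c not in violated:
            return c
    return None  # c_m violated as well: non-compliant situation

# The pedestrian example ¬Pass ⊗ PayFine:
print(obligation_in_force(["neg_Pass", "PayFine"], set()))         # neg_Pass
print(obligation_in_force(["neg_Pass", "PayFine"], {"neg_Pass"}))  # PayFine
```

A single-element chain, as noted above, yields `None` as soon as its only element is violated: there is no compensatory measure to recover with.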
In this paper, we focus exclusively on the defeasible part of the logic ignoring the monotonic
component given by the strict rules; consequently, we limit the language to the cases where the
rules are either defeasible or defeaters. From a practical point of view, the restriction does not
effectively limit the expressive power of the logic: a defeasible rule where there are no rules
for the opposite conclusion, or where all rules for the opposite conclusion are weaker than the
given defeasible rules, effectively behaves like a strict rule. Formally a rule is defined as below.
Definition 2 (Rule). A rule is an expression of the form 𝛼 : 𝐴(𝛼) ˓→□ 𝐶(𝛼), where
1. 𝛼 ∈ Lab is the unique name of the rule;
2. 𝐴(𝛼) ⊆ Lit is the set of antecedents;
3. An arrow ˓→ ∈ {⇒, ↝} denoting, respectively, defeasible rules, and defeaters;
4. □ ∈ {C, O, P};
5. its consequent 𝐶(𝛼), which is either
a) a single plain literal 𝑙 ∈ PLit, if either (i) ˓→ ≡ ↝ or (ii) □ ∈ {C, P}, or
b) an ⊗-expression, if □ ≡ O.
If □ = C then the rule is used to derive non-deontic literals (constitutive statements), whilst if □ is O or P then the rule is used to derive deontic conclusions (prescriptive statements). The conclusion 𝐶(𝛼) is, as before, a single literal in case □ ∈ {C, P}; in case □ = O, the conclusion is an ⊗-expression. ⊗-expressions can only occur in prescriptive rules, though we do not admit them in defeaters (Condition 5.(a).i); see [15] for a detailed explanation.
We use some abbreviations on sets of rules. The set of defeasible rules in 𝑅 is 𝑅⇒ , the set of
defeaters is 𝑅dft . 𝑅□ [𝑙] is the rule set appearing in 𝑅 with head 𝑙 and modality □, while 𝑅O [𝑙, 𝑖]
denotes the set of obligation rules where 𝑙 is the 𝑖-th element in the ⊗-expression. Given that
the consequent of a rule is either a single literal or an ⊗-expression (that can be understood as
a sequence of elements, and then as an ordered set), in what follows we are going to abuse the
notation and use 𝑙 ∈ 𝐶(𝛼). 𝑅□ is the set of rules 𝛼 : 𝐴(𝛼) ˓→□ 𝐶(𝛼) such that 𝛼 appears in 𝑅.
For a theory as determined by Definitions 1 and 2, 𝛼 appears in 𝑅 means that 𝛼 ∈ 𝑅; thus 𝑅P
is the set of permissive rules appearing in 𝑅. We use 𝑅◇ and 𝑅◇ [𝑙] as shorthands for 𝑅O ∪ 𝑅P
and 𝑅O [𝑙] ∪ 𝑅P [𝑙], respectively. The abbreviations can be combined. Finally, a literal 𝑙 appears
in a theory 𝐷, if there is a rule 𝛼 ∈ 𝑅 such that 𝑙 ∈ 𝐴(𝛼) ∪ 𝐶(𝛼).
Definition 3 (Tagged modal formula). A tagged modal formula is an expression of the form
±𝜕□ 𝑙, with the following meanings
• +𝜕□ 𝑙: 𝑙 is defeasibly provable (or simply provable) with mode □;
• −𝜕□ 𝑙: 𝑙 is defeasibly refuted (or simply refuted) with mode □.
Accordingly, the meaning of +𝜕O 𝑝 is that 𝑝 is provable as an obligation, and −𝜕P ¬𝑝 is that we
have a refutation for the permission of ¬𝑝. Similarly, for the other combinations.
As we will shortly see (Definitions 5 and 6), one of the key ideas of DDL is that we use
tagged modal formulas to determine which formulas are (defeasibly) provable or rejected given
a theory and a set of facts (used as input for the theory). Therefore, when we have asserted
the tagged modal formula +𝜕O 𝑙 in a derivation (see Definition 4 below), we can conclude that
the obligation of 𝑙 (O𝑙) follows from the rules and the facts and that we used a prescriptive
rule to derive 𝑙; similarly for permission (using a permissive rule). However, the C modality
is silent, meaning that we do not put the literal in the scope of the C modal operator, thus for
+𝜕C 𝑙, the derivation simply asserts that 𝑙 holds (and not that C𝑙 holds, even if the two have the
same meaning). For the negative cases (i.e., −𝜕□ 𝑙), the interpretation is that it is not possible
to derive 𝑙 with a given mode. Accordingly, we read −𝜕O 𝑙 as it is impossible to derive 𝑙 as an
obligation. For □ ∈ {O, P} we are allowed to infer ¬□𝑙, giving a constructive interpretation of
the deontic modal operators. Notice that this is not the case for C, where we cannot assert that
∼𝑙 holds (this would require +𝜕C ∼𝑙); in the logic, failing to prove 𝑙 does not equate to proving
¬𝑙. We will use the term conclusions and tagged modal formulas interchangeably.
Definition 4 (Proof). Given a DDT 𝐷, a proof 𝑃 of length 𝑚 in 𝐷 is a finite sequence 𝑃 (1), 𝑃 (2),
. . . , 𝑃 (𝑚) of tagged modal formulas, where the proof conditions hold.
𝑃 (1..𝑛) denotes the first 𝑛 steps of 𝑃 , and we also use the notational convention 𝐷 ⊢ ±𝜕□ 𝑙,
meaning that there is a proof 𝑃 for ±𝜕□ 𝑙 in 𝐷.
Core notions in DL are those of applicability and discardability. As knowledge in a defeasible theory is circumstantial, given a defeasible rule like '𝛼 : 𝑎, 𝑏 ⇒□ 𝑐', there are four possible scenarios: the theory defeasibly proves both 𝑎 and 𝑏, the theory proves neither, or the theory proves one but not the other. Naturally, only in the first case, where both 𝑎 and 𝑏 are proved, can we use 𝛼 to support, or try to conclude, □𝑐. Briefly, we say that a rule is applicable when every literal in its antecedent has been proved at a previous derivation step. Symmetrically, a rule is discarded when one of such literals has been previously refuted. Formally:
Definition 5 (Applicability). Assume a deontic defeasible theory 𝐷 = (𝐹, 𝑅, >). We say that
rule 𝛼 ∈ 𝑅C ∪ 𝑅P is applicable at 𝑃 (𝑛 + 1), iff for all 𝑎 ∈ 𝐴(𝛼)
1. if 𝑎 ∈ PLit, then +𝜕C 𝑎 ∈ 𝑃 (1..𝑛),
2. if 𝑎 = □𝑞, then +𝜕□ 𝑞 ∈ 𝑃 (1..𝑛), with □ ∈ {O, P},
3. if 𝑎 = ¬□𝑞, then −𝜕□ 𝑞 ∈ 𝑃 (1..𝑛), with □ ∈ {O, P}.
We say that rule 𝛼 ∈ 𝑅O is applicable at index 𝑖 and 𝑃 (𝑛 + 1) iff Conditions 1–3 above hold and
4. ∀𝑐𝑗 ∈ 𝐶(𝛼), 𝑗 < 𝑖, then +𝜕O 𝑐𝑗 ∈ 𝑃 (1..𝑛) and +𝜕C ∼𝑐𝑗 ∈ 𝑃 (1..𝑛).5
Definition 6 (Discardability). Assume a deontic defeasible theory 𝐷, with 𝐷 = (𝐹, 𝑅, >). We
say that rule 𝛼 ∈ 𝑅C ∪ 𝑅P is discarded at 𝑃 (𝑛 + 1), iff there exists 𝑎 ∈ 𝐴(𝛼) such that
1. if 𝑎 ∈ PLit, then −𝜕C 𝑎 ∈ 𝑃 (1..𝑛), or
2. if 𝑎 = □𝑞, then −𝜕□ 𝑞 ∈ 𝑃 (1..𝑛), with □ ∈ {O, P}, or
3. if 𝑎 = ¬□𝑞, then +𝜕□ 𝑞 ∈ 𝑃 (1..𝑛), with □ ∈ {O, P}.
We say that rule 𝛼 ∈ 𝑅O is discarded at index 𝑖 and 𝑃 (𝑛 + 1) iff either at least one of the
Conditions 1–3 above hold, or
4. ∃𝑐𝑗 ∈ 𝐶(𝛼), 𝑗 < 𝑖 such that −𝜕O 𝑐𝑗 ∈ 𝑃 (1..𝑛), or −𝜕C ∼𝑐𝑗 ∈ 𝑃 (1..𝑛).
Discardability is obtained by applying the principle of strong negation to the definition of
applicability. The strong negation principle applies the function that simplifies a formula by
moving all negations to an innermost position in the resulting formula, replacing the positive
tags with the respective negative tags, and the other way around; see [19]. Positive proof
tags ensure that there are effective decidable procedures to build proofs; the strong negation
principle guarantees that the negative conditions provide a constructive and exhaustive method
to verify that a derivation of the given conclusion is not possible. Accordingly, Condition 3 of
Definition 5 allows us to state that ¬□𝑝 holds when we have a (constructive) failure to prove 𝑝
with mode □ (for obligation or permission), thus it corresponds to a constructive version of
negation as failure.
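The applicability and discardability checks of Definitions 5 and 6 (for rules in 𝑅C ∪ 𝑅P) can be sketched as follows; this is an illustrative rendering under our own encoding (plain literals as strings, modal antecedents □𝑞 as `(mode, q)` tuples, ¬□𝑞 as `('not', mode, q)`, and tagged conclusions as sign/mode/literal triples), not the paper's implementation:

```python
def applicable(rule_body, proved):
    """Def. 5: the rule is applicable iff every antecedent is supported."""
    for a in rule_body:
        if isinstance(a, str):            # plain literal a: need +𝜕C a
            if ('+', 'C', a) not in proved:
                return False
        elif a[0] == 'not':               # ¬□q: need −𝜕□ q
            _, box, q = a
            if ('-', box, q) not in proved:
                return False
        else:                             # □q: need +𝜕□ q
            box, q = a
            if ('+', box, q) not in proved:
                return False
    return True

def discarded(rule_body, proved):
    """Def. 6 (strong negation of Def. 5): some antecedent is refuted."""
    for a in rule_body:
        if isinstance(a, str):            # plain literal a: −𝜕C a suffices
            if ('-', 'C', a) in proved:
                return True
        elif a[0] == 'not':               # ¬□q: +𝜕□ q suffices
            _, box, q = a
            if ('+', box, q) in proved:
                return True
        else:                             # □q: −𝜕□ q suffices
            box, q = a
            if ('-', box, q) in proved:
                return True
    return False
```

Note that a rule can be neither applicable nor discarded at a given step: some antecedent may simply not have been decided yet, which mirrors the constructive reading of the proof tags.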
We are ready to formalise the proof conditions, as in [15]. We start with positive proof
conditions for constitutive statements. In the following, we shall omit the explanations for
negative proof conditions, when trivial, reminding the reader that they are obtained through
the application of the strong negation principle to the positive counterparts.
Definition 7 (Constitutive Proof Conditions).

+𝜕C 𝑙: If 𝑃 (𝑛 + 1) = +𝜕C 𝑙 then
(1) 𝑙 ∈ 𝐹 , or
(2) (1) ∼𝑙 ̸∈ 𝐹 , and
    (2) ∃𝛽 ∈ 𝑅⇒C [𝑙] s.t. 𝛽 is appl., and
    (3) ∀𝛾 ∈ 𝑅C [∼𝑙] either
        (1) 𝛾 is disc., or
        (2) ∃𝜁 ∈ 𝑅C [𝑙] s.t.
            (1) 𝜁 is appl. and
            (2) 𝜁 > 𝛾.

−𝜕C 𝑙: If 𝑃 (𝑛 + 1) = −𝜕C 𝑙 then
(1) 𝑙 ̸∈ 𝐹 and either
(2) (1) ∼𝑙 ∈ 𝐹 , or
    (2) ∀𝛽 ∈ 𝑅⇒C [𝑙], either 𝛽 is disc., or
    (3) ∃𝛾 ∈ 𝑅C [∼𝑙] such that
        (1) 𝛾 is appl., and
        (2) ∀𝜁 ∈ 𝑅C [𝑙], either
            (1) 𝜁 is disc., or
            (2) 𝜁 ̸> 𝛾.
5 As discussed above, we are allowed to move to the next element of an ⊗-expression when the current element
is violated. To have a violation, we need (i) the obligation to be in force, and (ii) its content not to hold.
+𝜕O 𝑐𝑗 indicates that the obligation is in force. For the second part we have two options. The former, +𝜕C ∼𝑐𝑗 ,
means that we have “evidence” that the opposite of the content of the obligation holds. The latter would be to have
−𝜕C 𝑐𝑗 ∈ 𝑃 (1..𝑛), corresponding to the intuition that we failed to provide evidence that the obligation has been
satisfied. The former option implies the latter one. For a deeper discussion of the issue, see [18].
A literal is defeasibly proved if: it is a fact, or there exists an applicable defeasible rule supporting
it (such a rule cannot be a defeater) and all opposite rules are either discarded or defeated. To
prove a conclusion, not all the work has to be done by a stand-alone (applicable) rule (the rule
witnessing Condition (2.2)): all the applicable rules for the same conclusion may contribute to
defeating the applicable rules for the opposite conclusion. Both 𝛾 and 𝜁 may be defeaters.
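This team-defeat reading of the +𝜕C condition can be sketched as a single check, assuming the applicability and discardability tests are supplied as functions; the rule encoding and all names are our own simplifications, not the paper's implementation:

```python
def neg(l):
    """Complement ∼l of a literal, written here with a '~' prefix."""
    return l[1:] if l.startswith('~') else '~' + l

def provable_C(l, facts, rules, superior, applicable, discarded):
    """+𝜕C l (Def. 7): l is a fact, or clause (2) holds.
    A rule is (name, body, head, kind), kind in {'defeasible', 'defeater'};
    `superior` is a set of pairs (stronger, weaker) encoding >."""
    if l in facts:                                        # (1)
        return True
    if neg(l) in facts:                                   # (2.1)
        return False
    supporters = [r for r in rules if r[2] == l]
    attackers  = [r for r in rules if r[2] == neg(l)]
    # (2.2): some applicable *defeasible* rule for l (a defeater cannot
    # be the supporting witness).
    if not any(r[3] == 'defeasible' and applicable(r) for r in supporters):
        return False
    # (2.3): every attacker is discarded, or beaten by some applicable
    # rule for l -- team defeat: the winner need not be the witness of
    # (2.2), and both attacker and winner may be defeaters.
    for g in attackers:
        if discarded(g):
            continue
        if not any(applicable(z) and (z[0], g[0]) in superior
                   for z in supporters):
            return False
    return True
```

For instance, with fact 𝑎, rules 𝑟1 : 𝑎 ⇒C 𝑐 and 𝑟2 : ⇒C ∼𝑐, and 𝑟1 > 𝑟2 , the check succeeds for 𝑐 and fails for ∼𝑐.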
Below we present the proof conditions for obligations.
Definition 8 (Obligation Proof Conditions).

+𝜕O 𝑙: If 𝑃 (𝑛 + 1) = +𝜕O 𝑙 then
∃𝛽 ∈ 𝑅⇒O [𝑙, 𝑖] s.t.
(1) 𝛽 is applicable at index 𝑖 and
(2) ∀𝛾 ∈ 𝑅O [∼𝑙, 𝑗] ∪ 𝑅P [∼𝑙] either
    (1) 𝛾 is discarded (at index 𝑗), or
    (2) ∃𝜁 ∈ 𝑅O [𝑙, 𝑘] s.t.
        (1) 𝜁 is applicable at index 𝑘 and
        (2) 𝜁 > 𝛾.

−𝜕O 𝑙: If 𝑃 (𝑛 + 1) = −𝜕O 𝑙 then
∀𝛽 ∈ 𝑅⇒O [𝑙, 𝑖] either
(1) 𝛽 is discarded at index 𝑖, or
(2) ∃𝛾 ∈ 𝑅O [∼𝑙, 𝑗] ∪ 𝑅P [∼𝑙] s.t.
    (1) 𝛾 is applicable (at index 𝑗), and
    (2) ∀𝜁 ∈ 𝑅O [𝑙, 𝑘] either
        (1) 𝜁 is discarded at index 𝑘, or
        (2) 𝜁 ̸> 𝛾.
Note that (i) in Condition (2), 𝛾 can be a permission rule since explicit, opposite permissions
represent exceptions to obligations, whereas 𝜁 (Condition 2.2) must be an obligation rule, as a
permission rule cannot reinstate an obligation; and (ii) 𝑙 may appear at different positions
(indices 𝑖, 𝑗, and 𝑘) within the three ⊗-chains. Below, we introduce the proof conditions for
permissions.
Definition 9 (Permission Proof Conditions).

+𝜕P 𝑙: If 𝑃 (𝑛 + 1) = +𝜕P 𝑙 then
(1) +𝜕O 𝑙 ∈ 𝑃 (1..𝑛), or
(2) ∃𝛽 ∈ 𝑅⇒P [𝑙] s.t.
    (1) 𝛽 is appl. and
    (2) ∀𝛾 ∈ 𝑅O [∼𝑙, 𝑗] either
        (1) 𝛾 is disc. at index 𝑗, or
        (2) ∃𝜁 ∈ 𝑅P [𝑙] ∪ 𝑅O [𝑙, 𝑘] s.t.
            (1) 𝜁 is appl. (at index 𝑘) and
            (2) 𝜁 > 𝛾.

−𝜕P 𝑙: If 𝑃 (𝑛 + 1) = −𝜕P 𝑙 then
(1) −𝜕O 𝑙 ∈ 𝑃 (1..𝑛), and
(2) ∀𝛽 ∈ 𝑅⇒P [𝑙] either
    (1) 𝛽 is disc., or
    (2) ∃𝛾 ∈ 𝑅O [∼𝑙, 𝑗] s.t.
        (1) 𝛾 is appl. at index 𝑗 and
        (2) ∀𝜁 ∈ 𝑅P [𝑙] ∪ 𝑅O [𝑙, 𝑘] either
            (1) 𝜁 is disc. (at index 𝑘), or
            (2) 𝜁 ̸> 𝛾.
Condition (1) allows us to derive a permission from the corresponding obligation; it thus
corresponds to the O𝑎 → P𝑎 axiom of Deontic Logic. Condition (2.2) considers only obligation
rules as possible counter-arguments, since situations where both P𝑙 and P¬𝑙 hold are allowed.
We refer readers interested in a deeper discussion of how to model permissions and
obligations in DDL to [15].
The set of positive and negative conclusions of a theory is called extension. The extension of
a theory is computed based on the literals that appear in it; more precisely, the literals in the
Herbrand Base of the theory HB (𝐷) = {𝑙, ∼𝑙 ∈ PLit| 𝑙 appears in 𝐷}.
Definition 10 (Extension). Given a DDT 𝐷, we define the extension of 𝐷 as 𝐸(𝐷) =
(+𝜕C , −𝜕C , +𝜕O , −𝜕O , +𝜕P , −𝜕P ), where ±𝜕□ = {𝑙 ∈ HB (𝐷)| 𝐷 ⊢ ±𝜕□ 𝑙}, with □ ∈ {C, O, P}.
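A naive bottom-up way to compute the constitutive slice of the extension is to iterate the proof conditions over HB(𝐷) until a fixpoint is reached. The sketch below is our own simplification, restricted to ±𝜕C with facts and defeasible rules only (no defeaters, no modal antecedents), and is not the system described in [1, 2]:

```python
def neg(l):
    return l[1:] if l.startswith('~') else '~' + l

def extension_C(facts, rules, superior):
    """Compute (+𝜕C, −𝜕C) over the Herbrand base by fixpoint iteration.
    rules: dict name -> (body, head); superior: set of (stronger, weaker)."""
    hb = set(facts) | {neg(f) for f in facts}
    for body, head in rules.values():
        hb |= {head, neg(head)} | set(body) | {neg(b) for b in body}
    plus, minus = set(), set()

    def applicable(body):           # every antecedent already in +𝜕C
        return all(b in plus for b in body)

    def discarded(body):            # some antecedent already in −𝜕C
        return any(b in minus for b in body)

    changed = True
    while changed:
        changed = False
        for l in hb:
            if l in plus or l in minus:
                continue
            supp = [(n, b) for n, (b, h) in rules.items() if h == l]
            atk  = [(n, b) for n, (b, h) in rules.items() if h == neg(l)]
            # +𝜕C l (Definition 7, positive condition)
            if l in facts or (
                neg(l) not in facts
                and any(applicable(b) for _, b in supp)
                and all(discarded(b) or
                        any(applicable(b2) and (n2, n) in superior
                            for n2, b2 in supp)
                        for n, b in atk)):
                plus.add(l); changed = True
            # −𝜕C l (Definition 7, negative condition)
            elif l not in facts and (
                neg(l) in facts
                or all(discarded(b) for _, b in supp)
                or any(applicable(b) and
                       all(discarded(b2) or (n2, n) not in superior
                           for n2, b2 in supp)
                       for n, b in atk)):
                minus.add(l); changed = True
    return plus, minus
```

For the theory with fact 𝑎, rules 𝑟1 : 𝑎 ⇒C 𝑐 and 𝑟2 : ⇒C ∼𝑐, and 𝑟1 > 𝑟2 , the fixpoint yields +𝜕C = {𝑎, 𝑐} and −𝜕C = {∼𝑎, ∼𝑐}.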
5. Conclusions and further extensions
While we have expressed some concerns about, and a plan to correct, the macroscopic limits of
current simulation platforms, GAMA in particular, we maintain that defining a credible system
to actually simulate the effects on a society of the introduction of a new norm is a relevant
undertaking. The discussion we provided on the notion of the class to which a particular
individual of the MAS belongs (while classes do not constitute a partition) is devised as a support
for the usage of the rules in the deontic counterpart of the simulator. However, once the system
is completely built, it will have three specific strengths that differentiate our approach from the
others already discussed in the simulation literature on agents’ modelling inspired by the GAMA
framework:
• The class hierarchy is described in terms of transitions, namely the results of computing
the extension of the constitutive rules as devised above. There is no such thing as an a
priori class, only classes built as the result of a set of rules;
• Probabilistic and utility-driven models pertain to the agents’ definition, not to the
normative background, which is solely prescriptive;
• There is a system of negative feedback, able to revise the properties of the class
hierarchy, the probabilities, and the consequent behaviours of the agents.
We followed here the concepts expressed in the research by Riveret et al. [20, 21] and further
discussed by Governatori et al. [9].
Finally, the flexibility of Defeasible Deontic Logic would allow a further extension of the model
to include not only compliance with the law, but also ethical and personal behaviours.
References
[1] M. Cristani, G. Governatori, F. Olivieri, L. Pasetto, F. Tubini, C. Veronese, A. Villa, E. Zorzi,
The architecture of a reasoning system for defeasible deontic logic, in: Procedia Computer
Science, 2023, pp. 4214–4224. doi:10.1016/j.procs.2023.10.418.
[2] M. Cristani, G. Governatori, F. Olivieri, L. Pasetto, F. Tubini, C. Veronese, A. Villa, E. Zorzi,
Houdini (unchained): an effective reasoner for defeasible logic, in: CEUR Workshop
Proceedings, 2022, pp. 1–16.
[3] P. Taillandier, D.-A. Vo, E. Amouroux, A. Drogoul, Gama: A simulation platform that
integrates geographical information data, agent-based modeling and multi-scale control,
in: LNCS, 2012, pp. 242–258. doi:10.1007/978-3-642-25920-3_17.
[4] A. Drogoul, E. Amouroux, P. Caillou, B. Gaudou, A. Grignard, N. Marilleau, P. Taillandier,
M. Vavasseur, D.-A. Vo, J.-D. Zupker, Gama: Multi-level and complex environment for
agent-based models and simulations, in: AAMAS 2013, 2013, pp. 1361–1362.
[5] E. Amouroux, T.-Q. Chu, A. Boucher, A. Drogoul, Gama: An environment for implementing
and running spatially explicit multi-agent simulations, in: LNCS, 2009, pp. 359–371.
doi:10.1007/978-3-642-01639-4_32.
[6] D. Nute, Defeasible logic, in: Handbook of Logic in Artificial Intelligence and Logic
Programming, Oxford University Press, 1987.
[7] G. Antoniou, D. Billington, G. Governatori, M. J. Maher, Representation results for defeasi-
ble logic, ACM Trans. Comput. Log. (2001) 255–287. doi:10.1145/371316.371517.
[8] K. Kravari, N. Bassiliades, A survey of agent platforms, Journal of Artificial Societies and
Social Simulation (2015) 11. doi:10.18564/jasss.2661.
[9] G. Governatori, F. Olivieri, S. Scannapieco, A. Rotolo, M. Cristani, The rationale behind
the concept of goal, Theory Pract. Log. Program. (2016) 296–324. URL: https://doi.org/10.
1017/S1471068416000053. doi:10.1017/S1471068416000053.
[10] M. Dastani, G. Governatori, A. Rotolo, L. van der Torre, Programming cognitive agents in
defeasible logic, in: LPAR 2005 Conference, Montego Bay, Jamaica, LNAI, Springer, 2005,
pp. 621–636.
[11] G. Governatori, A. Rotolo, Changing legal systems: Legal abrogations and annulments in
defeasible logic, Logic Journal of the IGPL (2009) 157–194. doi:10.1093/jigpal/jzp075.
[12] M. Cristani, F. Olivieri, A. Rotolo, Changes to temporary norms, in: ICAIL 2017, 2017, pp.
39–48. doi:10.1145/3086512.3086517.
[13] G. Governatori, F. Olivieri, S. Scannapieco, M. Cristani, Designing for compliance:
Norms and goals, in: RuleML 2011, LNCS, Springer, 2011, pp. 282–297. doi:10.1007/
978-3-642-24908-2\_29.
[14] F. Olivieri, M. Cristani, G. Governatori, Compliant business processes with exclusive
choices from agent specification, LNCS (2015) 603–612.
[15] G. Governatori, F. Olivieri, A. Rotolo, S. Scannapieco, Computing strong and weak
permissions in defeasible logic, J. Philos. Log. (2013) 799–829. URL: https://doi.org/10.1007/
s10992-013-9295-1. doi:10.1007/s10992-013-9295-1.
[16] G. Governatori, A. Rotolo, G. Sartor, Logic and the law: Philosophical foundations, deontics,
and defeasible reasoning, in: D. Gabbay, J. Horty, X. Parent, R. van der Meyden, L. van der
Torre (Eds.), Handbook of Deontic Logic and Normative Systems, College Publications,
London, 2021, pp. 657–764.
[17] G. Governatori, A. Rotolo, Logic of violations: A gentzen system for reasoning with
contrary-to-duty obligations, Australasian Journal of Logic (2006) 193–215. URL: http:
//ojs.victoria.ac.nz/ajl/article/view/1780.
[18] G. Governatori, Burden of compliance and burden of violations, in: A. Rotolo (Ed.), 28th
Annual Conference on Legal Knowledge and Information Systems, Frontiers in AI and
Applications, IOS Press, Amsterdam, 2015, pp. 31–40.
[19] G. Governatori, V. Padmanabhan, A. Rotolo, A. Sattar, A defeasible logic for modelling
policy-based intentions and motivational attitudes, Log. J. IGPL (2009) 227–265. doi:10.
1093/jigpal/jzp006.
[20] R. Riveret, A. Rotolo, G. Sartor, Probabilistic rule-based argumentation for norm-
governed learning agents, Artificial Intelligence and Law (2012) 383–420. doi:10.1007/
s10506-012-9134-7.
[21] R. Riveret, G. Contissa, A. Rotolo, J. Pitt, Law enforcement in norm-governed learning
agents, in: AAMAS 2013, 2013, pp. 1151–1152.