=Paper=
{{Paper
|id=Vol-1668/paper5
|storemode=property
|title=Formal Verification of Ethical Properties in Multiagent Systems
|pdfUrl=https://ceur-ws.org/Vol-1668/paper5.pdf
|volume=Vol-1668
|authors=Bruno Mermet,Gaële Simon
|dblpUrl=https://dblp.org/rec/conf/ecai/MermetS16
}}
==Formal Verification of Ethical Properties in Multiagent Systems==
Bruno Mermet and Gaële Simon¹

Abstract. The increasing use of autonomous artificial agents in hospitals or in transport control systems leads us to consider whether moral rules shared by many of us are followed by these agents. This is a particularly hard problem because most of these moral rules are often not compatible with one another. In such cases, humans usually follow ethical rules to promote one moral rule or another. Using formal verification to ensure that an agent follows a given ethical rule could help to increase the confidence in artificial agents. In this article, we show how a set of formal properties can be obtained from an ethical rule ordering conflicting moral rules. If the behaviour of an agent entails these properties (which can be proven using our existing proof framework), it means that this agent follows this ethical rule.

1 Introduction

The introduction of autonomous artificial agents in domains like health or high-frequency trading could lead to numerous problems if these agents are not able to understand and to take into account moral rules. For example, agents able to understand and to use the code of medical ethics could base their decision on ethical motivations in order to choose which piece of information must be provided, according to medical confidentiality. This explains why the research community [13, 23] has recently focused on ethics for autonomous agents, which has led to numerous articles [4, 18, 22] and conferences².

This article takes place in the ETHICAA project³, which aims at dealing with the management of moral and ethical conflicts between autonomous agents. While existing works mainly focus on ethical decision and reasoning questions [5, 27, 30, 2, 8, 16], there are very few proposals dedicated to the formal verification of such behaviours. The main specificity of moral and ethical codes is that, according to the context, they may not be entailed by agents or by people, and this must be considered as a normal situation. For example, in a human context, even if stealing is not considered as a moral action, somebody stealing because of hunger is not considered as immoral.

As a consequence, this article presents a work which aims at proposing a framework for the formal specification and the formal verification of the behaviour of an autonomous agent from an ethical point of view. As stated in the work of Abramson and Pike, a moral rule is represented by a formal property that must be entailed by an agent [1]. As a consequence, the behaviour of an agent is an ethical one if it entails all the expected moral rules in a given context.

Considering that a moral rule can be represented by a first-order logical formula F with enough expressiveness for most practical cases, our goal is to establish that the behaviour of an agent is an ethical one if it entails F. If not, then the behaviour is not an ethical one. However, such a logical system is only semi-decidable: it is not always possible to prove that a system does not entail a formula F. Indeed, if an automatic prover does not manage to prove that the behaviour of an agent entails a formula F, it is not possible to automatically determine whether the behaviour does not actually entail F or whether the prover simply cannot prove it.

As a consequence, we propose to use a formal framework that reduces as far as possible the number of correct formulae that cannot automatically be proven. In section 2, such formal frameworks are described, especially those dedicated to multiagent systems. Then what we call moral and ethical rules are defined. In section 3, our formal system is described and its use in an ethical context is presented in section 4.

¹ Laboratoire GREYC - UMR 6072, Université du Havre, France, email: Bruno.Mermet@unicaen.fr
² Symposium on Roboethics, International Conference on Computer Ethics and Philosophical Enquiry, Workshop on AI and Ethics, International Conference on AI and Ethics.
³ http://ethicaa.org/

2 State of the art

Since the early days of computing, the need to ensure the correctness of software has been a major issue for software developers. This need has become crucial with critical systems, that is to say applications dedicated to domains where safety is vital (such as transport). However, formally proving a piece of software is a long and difficult process which can conflict with the profitability and efficiency criteria of some companies. There are two main kinds of validation processes: test and proof. In this article, we only focus on the second one. Proofs can be performed either by model checkers or by theorem provers. Model checkers are basically based on an exhaustive test principle, whereas theorem provers often use sequent calculus and heuristics in order to generate proofs.

Even if the proof process can be a long and difficult one, it allows to prove specifications very early; these specifications can then be refined progressively until an executable code is obtained, with proofs at each step. So errors are detected early in the process, which reduces their cost. Refinement also simplifies the formulae to prove at each step, enabling their automatic proof. These proofs are based on a formal specification expressed in a formal language.
2.1 Models and methods dedicated to MAS

The main goal of models dedicated to MAS is to help developers to design multiagent systems. A lot of models have been proposed, but the most well-known of them is surely the BDI model [26], which has become a standard with several extensions.

MetateM [15] and Desire [7] are among the first formal methods proposed for MAS. However, they do not allow to specify properties that are expected to be verified by the system.

In Gaia [32], a MAS is specified twice: as a first step, its behaviour is specified, especially thanks to safety properties, and then invariant properties are introduced. Thus, this method proposes foundations for proving properties about agent behaviour. However, such proofs are not really possible with Gaia because properties are associated not to agents but to roles, and there is no formal semantics specifying how the different roles must be combined.

There is another kind of method: goal-oriented methods. Most of them, however, are dedicated to agent specification and seldom provide tools for the system level, which implies that the agentification phase must have been achieved before. Two exceptions can be mentioned: Moise [17] and PASSI [10]. For example, as far as PASSI is concerned, agent types are built by gathering use cases identified during the analysis phase. However, there are no guidelines for the gathering process.

Finally, more recently, Dastani et al. have proposed the 2APL language [11]. Unfortunately, this formal language does not include any proof system. Moreover, 2APL is not compositional, which leads to a system that is too monolithic in a proof context.

2.2 Models and methods dedicated to proof

As stated before, there are mainly two approaches to check whether a specification is correct: model-checking and theorem-proving. Most works in this area dedicated to agents use model-checking [6, 25]. However, all these proposals share the same limit: the combinatorial explosion of the possible system executions makes the proof of complex MAS very difficult. As a matter of fact, these approaches often reduce the proof to propositional formulae and not predicates.

Works dealing with the use of theorem proving for MAS proof are quite unusual. This is certainly because, first-order logic being only semi-decidable, proof attempts must be achieved using heuristics and the proof of a true property can fail. However, numerous theorem provers, like PVS [24], are now able to prove complex properties automatically.

There are other models based on logic programming, such as CaseLP and DCaseLP [3], which are better suited to theorem proving than the previous ones. But it seems that only proofs on interaction protocols can be performed using these models.

Congolog [12] and CASL [28] are also two interesting languages, based on situation calculus. Moreover, they allow to perform proofs. But these proofs are focused only on action sequencing; it is not possible to reason about their semantics.

2.3 Ethical and moral rules

Both in philosophy and in the latest research in neurology and cognitive sciences, concepts like moral and ethics have been discussed. Although these words initially mean the same thing, a distinction between them has been introduced by some authors [9, 29]. Indeed, moral establishes rules allowing to evaluate situations or actions as good or bad. Ethics allows to reconcile moral rules when a conflict occurs or when there are difficulties in their application. In the work presented in this paper, our definitions are based on this distinction.

2.3.1 Moral rules

In our point of view, a moral rule is a rule describing, in a given context, which states of the system must be considered as good or bad. Thus, moral rules can be specified by a formula like context → P_var, with P_var being a predicate defined on variables known by agents. So, a moral rule can be seen as a specific conditional invariant property in that it is not necessary to check it in order to ensure a correct execution of the system. But it must be established if the agent is to be used in a system in which the rule is expected to be entailed. For example, in the context of an autonomous car, the property lane = highway → speed ≤ 130 can be considered as a safety property. As a consequence, this property must be fulfilled by the agent behaviour. Nevertheless, in order to avoid life-threatening situations, a caution moral rule r_p states that, when there is ice on the road, the car cannot have a speed greater than 30 km/h. Formally, this rule is specified as: weather = ice → speed ≤ 30. This property need not be entailed in order for the car to have a valid behaviour in general. But it must be taken into account in systems in which preservation of life is considered.

2.3.2 Ethical rules

When an individual or an agent follows several moral rules, it sometimes happens that two rules, or more, enter into conflict with one another. In such a situation, an ethical rule specifies what should be done. While some rules, like the doctrine of double effect [19], can be complex ones, we consider in our work that an ethical rule is a rule stating, in a conflict situation, the sequence in which the moral rules should be addressed by the agent. We also consider that an ethical rule is contextual: it may lead to different decisions according to the circumstances. Considering the autonomous car example, in order to respect other drivers, a moral rule r_r can be introduced. This new rule states that, when driving on a highway, the speed cannot be lower than 80 km/h, which can be formally specified as lane = highway → speed ≥ 80. This rule may conflict with the r_p rule described before: if there is ice on the road and if the car uses a highway, according to r_p its speed must be lower than 30 km/h, but it must also be greater than 80 km/h according to r_r. An ethical rule can, for example, state that, in any case, caution (specified by r_p) must be preferred to respect (specified by r_r). Another ethical rule could state that this preference is to be considered only in case of surgery and that, in other situations, the preference must be inverted.
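Purely as an illustration (the paper itself uses no code; the state dictionary and the names r_p, r_r and caution_first below are our own), the car example can be read as two context-guarded constraints plus a preference that resolves their conflict:

```python
# Hypothetical encoding of the autonomous-car example of sections 2.3.1 and 2.3.2.
# Each moral rule is a predicate over the car state; the ethical rule says which one
# must still hold when they cannot both be satisfied.

def r_p(state):
    """Caution rule: on ice, speed must not exceed 30 km/h (vacuously true otherwise)."""
    return state["weather"] != "ice" or state["speed"] <= 30

def r_r(state):
    """Respect rule: on a highway, speed must be at least 80 km/h (vacuously true otherwise)."""
    return state["lane"] != "highway" or state["speed"] >= 80

def caution_first(state):
    """Ethical rule: when r_p and r_r cannot both hold, r_p (caution) takes priority."""
    if r_p(state) and r_r(state):        # no conflict: both moral rules are entailed
        return True
    return r_p(state)                     # conflict: only the preferred rule must hold

# Icy highway: r_p and r_r are incompatible, so the ethical rule keeps only r_p.
print(caution_first({"weather": "ice", "lane": "highway", "speed": 25}))   # True
print(caution_first({"weather": "ice", "lane": "highway", "speed": 90}))   # False
```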
2.4 Very little work about verification of ethics

Dealing with ethical problems with formal approaches is studied especially in [1]. In this article, the authors explain why using formal approaches could be interesting to ensure that agents fulfill ethical rules. However, it is only a position paper: no concrete method to implement these principles is proposed.

In [14], the authors propose to specify and formally prove the ethical decision process described in [31]: when a choice between different actions must be made, a value is associated to each possible action according to the safety level provided by the action. As a consequence, if an action A is considered to be safer than another one, then A is executed. There is yet a major drawback to this approach: the ethical dimension is taken into account only during a choice between actions, which must be managed using the decision procedure described before. Thus, this work is not general enough to provide an effective handling of ethics.
3 GDT4MAS

To take ethical problems into account, we have decided to use the GDT4MAS approach [20, 21]. Indeed, this method, which also includes a model, exhibits several characteristics that are interesting to deal with ethical problems:

• This method proposes a formal language to specify not only properties an agent or a MAS must entail but also the behaviour of agents;
• Properties are specified using first-order logic, a well-known and expressive formal notation;
• The proof process can be managed automatically.

In the next sections, the GDT4MAS method is summarized. More details can be found in the previously cited articles.

3.1 Main concepts

Using GDT4MAS requires to specify three concepts: the environment, agent types and agents themselves, which are considered as instances of agent types. In the remainder of this section, each of these parts is briefly described.

Environment The environment is specified by a set of typed variables and by an invariant property i_E.

Agent types Agent types are each specified by a set of typed variables, an invariant property and a behaviour. An agent behaviour is mainly defined by a Goal Decomposition Tree (GDT). A GDT is a tree where each node is a goal. Its root node is associated to the main goal of the agent. A plan, specified by a sub-tree, is associated to each goal: when this plan is successfully executed, it means that the goal associated to its root node is achieved. A plan can be made of either a simple action or a set of goals linked by a decomposition operator. The reader is invited to read [20, 21] to know how goals are formally described.

Agents Agents are specified as instances of agent types, with effective values associated to agent type parameters.

3.2 GDT example

Figure 1 shows an example of GDT. The goal of the behaviour specified by this GDT is to turn on the light in a room n (n is a GDT parameter). To achieve this goal, the agent tries to enter the room. Indeed, a photoelectric cell is expected to detect when someone tries to enter the room and, then, to switch on the light. So this seems to be a relevant plan. However, the photoelectric cell does not always work properly (thus, the resolution of the goal Entering the room may fail) and the agent can have to use the switch. More details can be found in [21].

Figure 1. Example of a GDT
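As a reading aid only (GDT4MAS has its own formal goal notation, defined in [20, 21]; the Goal class and the "else" operator below are hypothetical), the room-lighting GDT of Figure 1 can be pictured as a small goal tree:

```python
# A loose, illustrative rendering of a Goal Decomposition Tree: each node is a goal,
# and an internal node's children form the plan that achieves it. This is not the
# GDT4MAS notation, only a reading aid for the example of Figure 1.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    operator: str = "leaf"          # decomposition operator linking the subgoals
    subgoals: List["Goal"] = field(default_factory=list)

# Root goal: turn on the light in room n. First try to enter the room (the photoelectric
# cell should switch the light on); if that fails, fall back to using the switch.
light_on = Goal(
    name="Turn on the light in room n",
    operator="else",                # hypothetical operator: try first subgoal, else second
    subgoals=[Goal("Enter the room"), Goal("Use the switch")],
)

def leaves(goal: Goal) -> List[str]:
    """List the elementary goals (simple actions) of a GDT."""
    if not goal.subgoals:
        return [goal.name]
    return [name for sub in goal.subgoals for name in leaves(sub)]

print(leaves(light_on))   # ['Enter the room', 'Use the switch']
```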
3.3 Proof principles

The goal of the proof mechanism proposed in GDT4MAS is to prove the following properties:

• During their execution, agents maintain their invariant property. This kind of property states that the agent must stay in valid states;
• The behaviour of agents is sound (i.e. plans associated to goals are correct);
• Agents fulfill their liveness properties. These properties specify dynamic characteristics which must be exhibited by the agent behaviour.

Moreover, the proof mechanism is based on some key principles. In particular, proof obligations (i.e. properties that must be proven to ensure the system correctness) can be generated automatically from a GDT4MAS specification. They are expressed in first-order logic and can be proven by any suited theorem prover. Last but not least, the proof system is a compositional one: proving the correctness of an agent consists in proving several small independent proof obligations.

4 Proving an ethics

4.1 Problem characterisation

Let us consider an agent ag whose behaviour has been formally specified and whose correctness has been proven with respect to the previously described properties. Let us suppose that this agent must be used in a world with an ethical rule er based on a set of moral rules. The question we are interested in is the following: does the behaviour of ag entail the ethical rule er?

As GDT4MAS especially allows to prove invariant properties, we propose that moral rules and ethical rules be expressed as such properties. Indeed, most moral rules can easily be specified by invariant properties. As a consequence, we propose to structure each moral rule as:

{(when_i, {(var_i, set_i)})}

This means that each rule constrains, in different contexts (when_i), the set of values (set_i) which can be assigned to different variables (var_i). So, the caution rule r_p described in section 2.3 could be formalized as follows:

{(weather = ice, {(speed, {0..30})})}

However, specifying ethical rules as invariant properties is not as obvious as it is for moral rules. Indeed, they do not characterize system states but provide prioritisations on moral rules with respect to different contexts.
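To fix ideas before the formal definition of er below, here is a minimal sketch of the {(when_i, {(var_i, set_i)})} structure, instantiated on the caution rule r_p (the dictionary-and-lambda encoding and the helper allowed_values are ours, not GDT4MAS notation):

```python
# Hypothetical Python mirror of the moral-rule structure {(when_i, {(var_i, set_i)})}:
# a rule is a list of cases, each pairing a context predicate with, per variable,
# the set of values allowed in that context.
r_p = [
    (lambda s: s["weather"] == "ice",            # when: there is ice on the road
     {"speed": set(range(0, 31))}),              # then: speed must lie in {0..30}
]

def allowed_values(rule, state, var, domain):
    """Values of `var` allowed by `rule` in `state` (the whole domain if no case applies)."""
    for when, constraints in rule:
        if when(state) and var in constraints:
            return constraints[var]
    return domain

speed_domain = set(range(0, 131))
print(allowed_values(r_p, {"weather": "ice"}, "speed", speed_domain))                   # {0, ..., 30}
print(allowed_values(r_p, {"weather": "dry"}, "speed", speed_domain) == speed_domain)   # True
```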
Let MR be the set of moral rules and let P be the set of predicates on variables which can be perceived by a given agent. An ethical rule er is defined as:

er ∈ P +-> (1..card(MR) >->> MR)

Here, X +-> Y is the set of partial functions from X to Y and X >->> Y is the set of bijections from X to Y. Therefore, informally, this definition means that, in some cases characterized by a given predicate p ∈ P, the moral rules of MR are prioritized. For example, if p ∈ P is a predicate, er(p)(1) defines the moral rule with the highest priority when p is true, er(p)(2) defines the one with the second highest priority, and so on.

To exemplify this principle, here is an example: an agent A1 must choose the color of a traffic light tl1 which stands on road r1, at a crossroad with road r2. In the system in which this agent acts, two moral rules stand. The first one, mr1, states that, to avoid accidents, when the traffic light tl2 on road r2 is green or orange, then tl1 cannot be green. This rule can be formalized as:

{(tl2 ∈ {green, orange}, {(tl1, {orange, red})})}

The second moral rule mr2 states that the road r1 is a very high priority road and thus the traffic light tl1 must always be green. This rule can be formalized as:

{(true, {(tl1, {green})})}

Obviously, these two rules cannot always be satisfied at the same time, especially when the second traffic light is green. In this situation, according to mr1, tl1 must be orange or red but, according to mr2, tl1 must be green.

Let us now suppose that, in the considered system, an ethical rule er provides priorities on moral rules. For example, er states that r1 is a priority road unless tl2 is green or orange. In other words, this means that mr1 always has a higher priority than mr2. Formally, it can be expressed by:

{(true, {(1, mr1), (2, mr2)})}
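A possible reading of this definition in code, for the traffic-light example (again a sketch under our own encoding; the priority ordering, a bijection from 1..card(MR) to MR, is simply represented here by a Python list, and the helper priorities is hypothetical):

```python
# Hypothetical encoding of the traffic-light example: an ethical rule maps a context
# predicate to a priority ordering over the moral rules.
GREEN, ORANGE, RED = "green", "orange", "red"

mr1 = [(lambda s: s["tl2"] in {GREEN, ORANGE}, {"tl1": {ORANGE, RED}})]  # avoid accidents
mr2 = [(lambda s: True,                        {"tl1": {GREEN}})]        # r1 has priority

er = [(lambda s: True, [mr1, mr2])]   # in every context, mr1 is preferred to mr2

def priorities(er_rule, state):
    """Return the moral rules ordered by decreasing priority in the given state."""
    for context, ordering in er_rule:
        if context(state):
            return ordering
    return []                          # er is a partial function: no priority defined here

print(priorities(er, {"tl2": GREEN}) == [mr1, mr2])   # True
```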
4.2 Proposed solution

As part of our work, we wish to prove that the behaviour of an agent is correct with respect to an ethical rule defined on the basis of several moral rules. The behaviour of an agent cannot fulfill the set of all moral rules that are relevant to it since, as explained previously, these rules may be conflicting. As a consequence, in order to ensure that the behaviour of an agent is correct with respect to a given ethical rule, we propose a predicate transformation system that turns the predicates associated to moral rules into other predicates which can be proven, according to the priorities introduced by the ethical rule. In the work presented here, situations with only two moral rules involved are considered. But the proposed principle could be used for a system with more moral rules. The main principle is that moral rules and ethical rules are turned into a set of invariant properties, properties which can be proven with our proof system.

In the remainder, the transformation is shown in a case where only one variable is affected by moral rules. In the general case, the same formulae must be generated for each variable appearing in the set of moral rules. If a variable appears only in a subset of moral rules, it is added in the other moral rules with a unique constraint: its value must be in the variable definition domain.

Let us now consider a variable V. Let us also suppose that the moral rule mr1 provides the following constraints on V:

mr1 = {(when_mr11, (V, set_mr11)), (when_mr12, (V, set_mr12))}

Let us suppose that a second moral rule mr2 provides the following constraints on V:

mr2 = {(when_mr21, (V, set_mr21)), (when_mr22, (V, set_mr22)), (when_mr23, (V, set_mr23))}

Last but not least, it is also supposed that an ethical rule specifies that, if the condition cond1 is true, mr1 has the highest priority over mr2, and that it is the opposite if the condition cond2 is true. This ethical rule er is defined as follows:

er = {(cond1, {(1, mr1), (2, mr2)}), (cond2, {(1, mr2), (2, mr1)})}

We can then generate a set of provable invariant properties associated to the ethical rule and to the moral rules. First of all, according to er, when cond1 is true, mr1 takes precedence:

cond1 → (when_mr11 → V ∈ set_mr11)
cond1 → (when_mr12 → V ∈ set_mr12)

Secondly, when cond1 is true and when mr1 does not apply, mr2 must be fulfilled:

cond1 → ((¬when_mr11 ∧ ¬when_mr12) → (when_mr21 → V ∈ set_mr21))
cond1 → ((¬when_mr11 ∧ ¬when_mr12) → (when_mr22 → V ∈ set_mr22))
cond1 → ((¬when_mr11 ∧ ¬when_mr12) → (when_mr23 → V ∈ set_mr23))

Finally, when cond1 is true and when mr1 and mr2 both apply, if possible, a value entailing the two moral rules must be chosen:

(cond1 ∧ when_mr11 ∧ when_mr21) → (set_mr11 ∩ set_mr21 ≠ ∅ → V ∈ set_mr11 ∩ set_mr21)
(cond1 ∧ when_mr11 ∧ when_mr22) → (set_mr11 ∩ set_mr22 ≠ ∅ → V ∈ set_mr11 ∩ set_mr22)
(cond1 ∧ when_mr11 ∧ when_mr23) → (set_mr11 ∩ set_mr23 ≠ ∅ → V ∈ set_mr11 ∩ set_mr23)
(cond1 ∧ when_mr12 ∧ when_mr21) → (set_mr12 ∩ set_mr21 ≠ ∅ → V ∈ set_mr12 ∩ set_mr21)
(cond1 ∧ when_mr12 ∧ when_mr22) → (set_mr12 ∩ set_mr22 ≠ ∅ → V ∈ set_mr12 ∩ set_mr22)
(cond1 ∧ when_mr12 ∧ when_mr23) → (set_mr12 ∩ set_mr23 ≠ ∅ → V ∈ set_mr12 ∩ set_mr23)

Similar invariant properties must also be generated when cond2 is true, but this time with mr2 being the moral rule with the highest priority.
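The generation scheme above is mechanical, so it can be mirrored in a few lines. The sketch below is our own illustration, not the GDT4MAS proof-obligation generator; the function generated_invariants produces, for two moral rules over a single variable V and a context cond, the three families of formulas as Python predicates over the current state and the current value of V:

```python
# Illustrative generator for the three families of invariants described above, for two
# moral rules over a single variable V. A moral rule is a list of (when, allowed_set)
# pairs; each generated invariant is a predicate taking the state s and the value v of V.
from itertools import product

def generated_invariants(cond, high, low):
    """Invariants to prove when `cond` holds and `high` has priority over `low`."""
    invs = []
    # 1) The preferred rule must be respected.
    for when_h, set_h in high:
        invs.append(lambda s, v, w=when_h, a=set_h: not (cond(s) and w(s)) or v in a)
    # 2) When no case of the preferred rule applies, the other rule must be respected.
    none_high = lambda s: not any(w(s) for w, _ in high)
    for when_l, set_l in low:
        invs.append(lambda s, v, w=when_l, a=set_l:
                    not (cond(s) and none_high(s) and w(s)) or v in a)
    # 3) When both rules apply and are compatible, a value satisfying both must be chosen.
    for (when_h, set_h), (when_l, set_l) in product(high, low):
        invs.append(lambda s, v, wh=when_h, ah=set_h, wl=when_l, al=set_l:
                    not (cond(s) and wh(s) and wl(s) and ah & al) or v in (ah & al))
    return invs
```

Under this encoding, the obligations associated to er would be the union of generated_invariants(cond1, mr1, mr2) and generated_invariants(cond2, mr2, mr1).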
Let us now use this mechanism for the previously presented traffic-light example. As cond1 is simply true in this example, the formulae can be simplified a first time. Moreover, as there is only one case (a single when) for mr1 and mr2, the previous formulae can be simplified a second time, as described in the following. When cond1 is true, mr1 is the rule with the highest priority:

when_mr11 → V ∈ set_mr11

When cond1 is true and when mr1 does not apply, mr2 must be taken into account:

(¬when_mr11) → (when_mr21 → V ∈ set_mr21)

When cond1 is true and when mr1 and mr2 both apply, if possible, a value entailing the two moral rules must be chosen:

(when_mr11 ∧ when_mr21) → (set_mr11 ∩ set_mr21 ≠ ∅ → V ∈ set_mr11 ∩ set_mr21)

Moreover, the following equivalences stand:

V ≡ tl1
when_mr11 ≡ (tl2 ∈ {green, orange})
set_mr11 ≡ {orange, red}
when_mr21 ≡ true
set_mr21 ≡ {green}

As a consequence, the following invariant properties can be obtained. They must be proven in order to ensure that the behaviour of agent A1 is executed with respect to the ethical rule which specifies that the road r1 is a priority road unless the traffic light tl2 is green or orange:

tl2 ∈ {green, orange} → tl1 ∈ {orange, red}
tl2 ∉ {green, orange} → tl1 ∈ {green}
(tl2 ∈ {green, orange}) → ({orange, red} ∩ {green} ≠ ∅ → tl1 ∈ {orange, red} ∩ {green})

As {orange, red} ∩ {green} = ∅, this set of invariant properties can be simplified:

tl2 ∈ {green, orange} → tl1 ∈ {orange, red}
tl2 ∉ {green, orange} → tl1 ∈ {green}

Therefore, thanks to the proposed predicate transformation system, a new invariant property is generated which will be maintained by any agent whose behaviour fulfills the different moral rules as specified by the ethical rule defined in the system. The proof system associated to GDT4MAS allows to prove that the formal specification of such an agent leads to a behaviour that maintains the invariant property.
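As a quick sanity check (ours, not part of the GDT4MAS proof process; simplified_invariants and ethical_behaviour are illustrative names), one can enumerate all traffic-light states and verify that the two simplified invariants accept exactly the states allowed by applying mr1 first and mr2 only when it is compatible with mr1:

```python
# Exhaustive check of the simplified invariants of the traffic-light example.
from itertools import product

COLORS = {"green", "orange", "red"}

def simplified_invariants(tl1, tl2):
    inv1 = (tl2 not in {"green", "orange"}) or tl1 in {"orange", "red"}
    inv2 = (tl2 in {"green", "orange"}) or tl1 in {"green"}
    return inv1 and inv2

def ethical_behaviour(tl1, tl2):
    """mr1 first; mr2 only when it is compatible with mr1."""
    if tl2 in {"green", "orange"}:                 # mr1 applies and excludes green
        return tl1 in {"orange", "red"}
    return tl1 == "green"                          # mr1 silent: mr2 imposes green

assert all(simplified_invariants(a, b) == ethical_behaviour(a, b)
           for a, b in product(COLORS, COLORS))
print("simplified invariants match the expected ethical behaviour")
```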
4.3 Case study

In this section, an application of the principles presented in the previous section to a new case study is shown. This case study is based on a more usual ethical question and has not been designed especially as a use case for our framework. It involves three agents A, B and C which have to find a meeting date. An agent can propose a date and the two other agents must inform all agents whether the date suits them or not. For example, A can propose a date to B and C. If B or C does not accept the date, it must give a reason for its denial to the other agents. Let us suppose that d is a date proposed by A. C has to act with respect to the following moral rules:

• mr1: C does not want to hurt anybody;
• mr2: C must inform A and B about the reason why the date d does not suit him.

However, if the true reason that explains why C does not accept the date d can hurt A (for example a date with A's wife), the two rules mr1 and mr2 are conflicting. To manage this conflict, C is supposed to use an ethical rule er which states that, in any case, it is better not to hurt than to tell the truth.

In order to formalise this problem, some notations must be introduced. The set of answers that can be given by C to A or B is called E_RP and is defined as E_RP = {r1, r2, r3, r4}. The variable that contains the true reason which explains the denial of C is called V_RC. To clarify the issue, here is an example of what the different reasons used by C could be:

• r1: I have a date with A's wife;
• r2: I am sick;
• r3: A is not good at organising meetings;
• r4: I had an accident with the car that B lent to me.

Moreover, the set of hurting answers for each agent is specified by a function F_RD ∈ agents → P(E_RP). In the example, F_RD = {(A, {r1, r3}), (B, {r4})}, which means that r1 and r3 are hurting answers for agent A and r4 is a hurting answer for agent B. The variable containing the answer to agent A is called V_RFA and the variable containing the answer to agent B is called V_RFB.

In this example, two moral rules are identified:

• mr1: C does not want to hurt A or B, that is why its answers must be chosen among non-hurting ones;
• mr2: C does not want to lie, that is why its answers must be true reasons.

These rules can be formalised as:

mr1 = {(true, {(V_RFA, E_RP − F_RD(A)), (V_RFB, E_RP − F_RD(B))})}
mr2 = {(true, {(V_RFA, {V_RC}), (V_RFB, {V_RC})})}

Finally, an ethical rule er states that, in any case, mr1 has a higher priority than mr2, which can be formalised by:

er = {(true, {(1, mr1), (2, mr2)})}

Applying the principles described in the previous section, we have to add the formulae given below to the invariant property associated to C (only the formulae generated for V_RFA are shown here; similar formulae for V_RFB must also be added). For each formula, we summarize informally what it specifies.

When cond1 is true, mr1 has the highest priority:

true → (true → V_RFA ∈ E_RP − F_RD(A))

When cond1 is true and when mr1 does not apply, mr2 must be used:

true → ((¬true ∧ ¬true) → (true → V_RFA ∈ {V_RC}))

When cond1 is true and when mr1 and mr2 both apply, if possible, a value entailing the two moral rules must be chosen:

(true ∧ true ∧ true) → ((E_RP − F_RD(A)) ∩ {V_RC} ≠ ∅ → V_RFA ∈ (E_RP − F_RD(A)) ∩ {V_RC})

This can then be simplified as follows:

V_RFA ∈ E_RP − F_RD(A)
(E_RP − F_RD(A)) ∩ {V_RC} ≠ ∅ → V_RFA ∈ (E_RP − F_RD(A)) ∩ {V_RC}

If the final invariant property, obtained by adding this set of properties to the initial invariant property of agent C, is maintained by C, it ensures that the behaviour of C entails the ethical rule introduced before. And the proof system associated to GDT4MAS allows to prove that the behaviour of an agent maintains an invariant property.

As a consequence, according to these properties, in the presented case study, and in a context where the true reason for C to deny the date is r1, an agent whose behaviour is executed with respect to the ethical rule er only has to ensure:

V_RFA ∈ {r1, r2, r3, r4} − {r1, r3}

This can be simplified as:

V_RFA ∈ {r2, r4}

On the other hand, if the true reason is r2, the behaviour of C should entail the two following properties:

V_RFA ∈ {r1, r2, r3, r4} − {r1, r3}
V_RFA ∈ ({r1, r2, r3, r4} − {r1, r3}) ∩ {r2}

This implies that the only solution is V_RFA = r2. Proceeding like that for each possible reason that can be given by C, the following table can be obtained:
V_RC  | r1     | r2 | r3     | r4
V_RFA | r2, r4 | r2 | r2, r4 | r4
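The table above can be recomputed mechanically from the two simplified properties; here is a small enumeration sketch (ours, with E_RP and F_RD(A) hard-coded from the example):

```python
# Recompute the answers allowed for V_RFA for each possible true reason V_RC,
# using the two simplified properties of the case study.
E_RP = {"r1", "r2", "r3", "r4"}          # possible answers of C
F_RD_A = {"r1", "r3"}                    # answers that would hurt A

for v_rc in sorted(E_RP):
    non_hurting = E_RP - F_RD_A                      # first property (mr1)
    truthful = non_hurting & {v_rc}                  # second property (mr2), when possible
    allowed = truthful if truthful else non_hurting
    print(f"V_RC = {v_rc}: V_RFA in {sorted(allowed)}")
# V_RC = r1: V_RFA in ['r2', 'r4']
# V_RC = r2: V_RFA in ['r2']
# V_RC = r3: V_RFA in ['r2', 'r4']
# V_RC = r4: V_RFA in ['r4']
```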
This little analysis allows to generate simple properties which must be entailed by an agent that prefers to lie rather than to hurt but that, when possible, tells the truth. Indeed, when the true reason does not hurt A (r2 or r4), an agent whose behaviour is proven to be correct has to give this reason. However, when the true reason hurts A (r1 or r3), an agent with a correct behaviour has to lie by giving to A another reason (here, r2 or r4).

5 Conclusion and future works

In this article, we have shown that it is possible to formally prove that an agent acts with respect to potentially conflicting moral rules if there exists an ethical rule allowing to manage conflicts. Indeed, this rule must specify, when at least two moral rules are conflicting and in different contexts, priorities between the different moral rules. In order to achieve this, we have introduced predicate transformers which enable us to generate a set of consistent predicates from nonetheless conflicting moral rules. After a first simple example used to introduce the concepts, we have shown with a more concrete case study that the proposed framework may be used for more real-world cases.

Other case studies are however required to really validate the scope of the proposed framework. In particular, moral rules have been restricted to rules that can be specified as disjoint assignment constraints on variable values. It seems important to evaluate the consequences of this restriction. For cases where this restriction would invalidate the proposed approach, we have to study how this framework could be extended to linked variable assignments. For example, one could imagine that the caution rule, associated to the case of driving on ice, may establish a link between the maximum speed and the angle of the car to the straight direction as follows: weather = ice → speed + angle/2 ≤ 40. Indeed, the sharper the turn taken by the car, the lower its speed must be to avoid skidding.

Last but not least, from a philosophical point of view, our approach must be extended in order to capture moral and ethics more precisely, especially by integrating the notion of value. Indeed, moral rules are generally based on values such as generosity, equality, love of the truth... and, in a specific context, ethical judgement uses a hierarchy between these values. Formally specifying the value notion is then the next step of our work.
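As an illustration of why this extension matters, such a linked constraint no longer fits the per-variable value-set structure used in this paper; under the hypothetical encodings sketched earlier, it would have to be kept as a joint predicate over both variables (linked_caution is an illustrative name):

```python
# The linked caution constraint from the conclusion, kept as a joint predicate over
# speed and steering angle rather than as per-variable value sets (illustration only).
def linked_caution(state):
    """weather = ice → speed + angle/2 ≤ 40"""
    return state["weather"] != "ice" or state["speed"] + state["angle"] / 2 <= 40

print(linked_caution({"weather": "ice", "speed": 30, "angle": 10}))   # True  (30 + 5 ≤ 40)
print(linked_caution({"weather": "ice", "speed": 30, "angle": 30}))   # False (30 + 15 > 40)
```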
REFERENCES

[1] D. Abramson and L. Pike, 'When Formal Systems Kill: Computer Ethics and Formal Methods', APA Newsletter on Philosophy and Computers, 11(1), (2011).
[2] R. Arkin, Governing Lethal Behavior in Autonomous Robots, Chapman and Hall, 2009.
[3] M. Baldoni, C. Baroglio, I. Gungui, A. Martelli, M. Martelli, V. Mascardi, V. Patti, and C. Schifanella, 'Reasoning About Agents' Interaction Protocols Inside DCaseLP', in LNCS, volume 3476, pp. 112-131, (2005).
[4] A.F. Beavers, 'Moral machines and the threat of ethical nihilism', in Robot ethics: the ethical and social implications of robotics, 333-386, MIT Press, (2011).
[5] F. Berreby, G. Bourgne, and J.-G. Ganascia, 'Modelling moral reasoning and ethical responsibility with logic programming', in 20th LPAR, pp. 532-548, (2015).
[6] R.H. Bordini, M. Fisher, W. Visser, and M. Wooldridge, 'Verifiable multi-agent programs', in 1st ProMAS, (2003).
[7] F.M.T. Brazier, P.A.T. van Eck, and J. Treur, Simulating Social Phenomena, volume 456, chapter Modelling a Society of Simple Agents: from Conceptual Specification to Experimentation, pp. 103-109, LNEMS, 1997.
[8] H. Coelho and A.C. da Rocha Costa, On the intelligence of moral agency, 2009.
[9] A. Comte-Sponville, La philosophie, PUF, 2012.
[10] M. Cossentino and C. Potts, 'A CASE tool supported methodology for the design of multi-agent systems', in SERP, (2002).
[11] M. Dastani, '2APL: a practical agent programming language', JAAMAS, 16, 214-248, (2008).
[12] G. de Giacomo, Y. Lesperance, and H. J. Levesque, 'Congolog, a concurrent programming language based on the situation calculus', Artificial Intelligence, 121(1-2), 109-169, (2000).
[13] Commission de réflexion sur l'Éthique de la Recherche en sciences et technologies du Numérique d'Allistene, 'Éthique de la recherche en robotique', Technical report, CERNA, (2014).
[14] L.A. Dennis, M. Fisher, and A.F.T. Winfield, 'Towards Verifiably Ethical Robot Behaviour', in Artificial Intelligence and Ethics AAAI Workshop, (2015).
[15] M. Fisher, 'A survey of Concurrent METATEM: the language and its applications', in 1st ICTL, pp. 480-505, (1994).
[16] J.G. Ganascia, 'Modeling ethical rules of lying with answer set programming', Ethics and Information Technology, 9, 39-47, (2007).
[17] J.F. Hubner, J.S. Sichman, and O. Boissier, 'Spécification structurelle, fonctionnelle et déontique d'organisations dans les SMA', in JFIADSM, Hermes, (2002).
[18] D. McDermott, 'Why ethics is a high hurdle for AI', in North American Conference on Computers and Philosophy, (2008).
[19] A. McIntyre, 'Doctrine of double effect', in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, winter edn., (2014).
[20] B. Mermet and G. Simon, 'Specifying recursive agents with GDTs', JAAMAS, 23(2), 273-301, (2011).
[21] B. Mermet and G. Simon, 'A new proof system to verify GDT agents', in IDC, volume 511 of Studies in Computational Intelligence, pp. 181-187, Springer, (2013).
[22] J.H. Moor, 'The nature, importance, and difficulty of machine ethics', IEEE Intelligent Systems, 21(4), 29-37, (2006).
[23] Future of Life Institute, Research priorities for robust and beneficial artificial intelligence, 2015.
[24] S. Owre, N. Shankar, and J. Rushby, 'PVS: A prototype verification system', in 11th CADE, (1992).
[25] F. Raimondi and A. Lomuscio, 'Verification of multiagent systems via ordered binary decision diagrams: an algorithm and its implementation', in 3rd AAMAS, (2004).
[26] A. Rao and M. Georgeff, 'BDI agents: from theory to practice', Technical note 56, AAII, (1995).
[27] A. Saptawijaya and L.M. Pereira, 'Towards modeling morality computationally with logic programming', in 16th ISPADL, pp. 104-119, (2014).
[28] S. Shapiro, Y. Lespérance, and H. J. Levesque, 'The Cognitive Agents Specification Language and Verification Environment for Multiagent Systems', in 2nd AAMAS, pp. 19-26, (2002).
[29] M. Timmons, Moral theory: An introduction, Rowman and Littlefield, 2012.
[30] M. Tufis and J.-G. Ganascia, 'Normative rational agents: A BDI approach', in 1st Workshop on Rights and Duties of Autonomous Agents, pp. 37-43, CEUR Proceedings Vol. 885, (2012).
[31] A.F.T. Winfield, C. Blum, and W. Liu, 'Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection', in LNCS, volume 8717, pp. 85-96, (2014).
[32] M. Wooldridge, N. R. Jennings, and D. Kinny, 'The Gaia methodology for agent-oriented analysis and design', JAAMAS, 3(3), 285-312, (2000).