<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Formal Verification of Ethical Properties in Multiagent Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bruno Mermet</string-name>
          <email>Bruno.Mermet@unicaen.fr</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gaële Simon</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Laboratoire GREYC - UMR 6072, Université du Havre</institution>,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The increasing use of autonomous artificial agents in hospitals or in transport control systems leads us to consider whether moral rules shared by many of us are followed by these agents. This is a particularly hard problem because these moral rules are often not compatible with one another. In such cases, humans usually follow ethical rules to promote one moral rule over another. Using formal verification to ensure that an agent follows a given ethical rule could help increase confidence in artificial agents. In this article, we show how a set of formal properties can be obtained from an ethical rule ordering conflicting moral rules. If the behaviour of an agent entails these properties (which can be proven using our existing proof framework), then this agent follows this ethical rule.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        The introduction of autonomous artificial agents in domains
like health or high-frequency trading could lead to
numerous problems if these agents are not able to understand and
take into account moral rules. For example, agents able
to understand and use the code of medical ethics could
base their decisions on ethical motivations in order to choose
which pieces of information must be provided, in accordance with
medical confidentiality. This explains why the research
community [
        <xref ref-type="bibr" rid="ref13 ref23">13, 23</xref>
        ] has recently focused on ethics for
autonomous agents, which has led to numerous articles [
        <xref ref-type="bibr" rid="ref18 ref22 ref4">4, 18, 22</xref>
        ]
and conferences.
      </p>
      <p>
        This article takes place within the ETHICAA project, which aims
at managing moral and ethical conflicts
between autonomous agents. While existing works mainly
focus on ethical decision and reasoning questions [
        <xref ref-type="bibr" rid="ref16 ref2 ref27 ref30 ref5 ref8">5, 27, 30,
2, 8, 16</xref>
        ], there are very few proposals dedicated to the formal
verification of such behaviours. Yet the main specificity of
moral and ethical codes is that, according to the context,
they may not be entailed by agents or by people, and this must
be considered a normal situation. For example, in a human
context, even if stealing is not considered a moral action,
somebody stealing because of hunger is not considered immoral.
      </p>
      <p>
        As a consequence, this article presents a work that aims
at proposing a framework for the formal specification and
formal verification of the behaviour of an autonomous agent
from an ethical point of view. As stated in the work of
Abramson and Pike, a moral rule is represented by a formal property
that must be entailed by an agent [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As a consequence, the
behaviour of an agent is ethical if it entails all the
expected moral rules in a given context.
      </p>
      <p>Considering that a moral rule can be represented by a first-order
logical formula F with enough expressiveness for most
practical cases, our goal is to establish that the behaviour
of an agent is ethical if it entails F, and unethical otherwise.
However, such a logical system
is only semi-decidable: it is not always possible to prove that
a system does not entail a formula F. Indeed, if an automatic
prover does not manage to prove that the behaviour of an
agent entails a formula F, it is not possible to automatically
determine whether the behaviour actually fails to entail F or
whether the prover simply cannot find a proof.</p>
      <p>As a consequence, we propose to use a formal framework
that reduces as far as possible the number of correct
formulae that cannot be proven automatically. In section 2, such
formal frameworks are described, especially those dedicated
to multiagent systems. Then what we call moral and ethical
rules are defined. In section 3, our formal system is described
and its use in an ethical context is presented in section 4.</p>
      <p>2 State of the art
Since the early days of computing, the need to ensure the
correctness of software has been a major issue for software developers.
This need has become crucial with critical systems, that is
to say applications dedicated to domains where safety is
vital (such as transport). However, formally proving
software is a long and difficult process which can conflict with the
profitability and efficiency criteria of some companies. There
are two main kinds of validation processes: test and proof. In
this article, we only focus on the second one. Proofs can be
performed either by model checkers or by theorem provers.
Model checkers are basically based on an exhaustive test
principle, whereas theorem provers often use sequent calculus and
heuristics in order to generate proofs.</p>
      <p>
        Even if the proof process can be long and difficult,
it allows specifications to be proven very early and then
refined progressively until executable code is obtained, with
proofs at each step. Errors are thus detected early in the process,
which reduces their cost. Refinement also simplifies the
formulae to prove at each step, enabling their automatic proof.
These proofs are based on a formal specification expressed
in a formal language.
2.1 Models and methods dedicated to MAS
The main goal of models dedicated to MAS is to help
developers design multiagent systems. A lot of models have
been proposed, but the most well-known of them is surely the
BDI model [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], which has become a standard with several
extensions.
      </p>
      <p>
        MetateM [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and Desire [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] are among the first formal
methods proposed for MAS. However, they do not allow
the specification of properties that the
system is expected to verify.
      </p>
      <p>
        In Gaia [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], a MAS is specified twice: as a first step, its
behaviour is specified, especially through safety properties, and
then invariant properties are introduced. Thus, this method
proposes foundations for proving properties about agent
behaviour. However, such proofs are not really possible with
Gaia because properties are associated not to agents but to
roles, and there is no formal semantics specifying how the
different roles must be combined.
      </p>
      <p>
        There is another kind of method: goal-oriented methods.
Most of them, however, are dedicated to agent specification
and seldom provide tools for the system level, which implies
that the agentification phase must have been achieved beforehand.
Two exceptions can be mentioned: Moise [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and PASSI [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
For example, as far as PASSI is concerned, agent types are
built by gathering use cases identified during the analysis phase.
However, there are no guidelines for the gathering process.
      </p>
      <p>
        Finally, more recently, Dastani et al. have proposed the
2APL language [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Unfortunately, this formal language does
not include any proof system. Moreover, 2APL is not
compositional, which leads to an overly monolithic system in a
proof context.
2.2 Models and methods dedicated to proof
As stated before, there are mainly two approaches to check whether a
specification is correct: model-checking and theorem-proving.
      </p>
      <p>
        Most works in this area dedicated to agents use
model-checking [
        <xref ref-type="bibr" rid="ref25 ref6">6, 25</xref>
        ]. However, all these proposals share the same
limit: the combinatorial explosion of possible system
executions makes the proof of complex MAS very difficult. As
a matter of fact, these approaches often reduce the proof to
propositional formulae rather than predicates.
      </p>
      <p>
        Works dealing with the use of theorem proving for MAS
proof are quite rare. This is certainly because, first-order
logic being only semi-decidable, proof attempts must be
carried out using heuristics, and the proof of a true property can
fail. However, numerous theorem provers, like PVS [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ],
are now able to prove complex properties automatically.
      </p>
      <p>
        There are other models based on logic programming, such
as CaseLP and DCaseLP [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which are better suited to theorem
proving than the previous ones. But it seems that only proofs
about interaction protocols can be performed using these models.
      </p>
      <p>
        Congolog [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and CASL [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] are also two interesting
languages, based on situation calculus. Moreover, they allow
proofs to be performed. But these proofs focus only on action
sequencing; it is not possible to reason about their semantics.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2.3 Ethical and moral rules</title>
      <p>
        Both in philosophy and in the latest research in neurology and
cognitive sciences, concepts like morals and ethics have
been discussed. Although these words initially meant the same
thing, a distinction between them has been introduced by
some authors [
        <xref ref-type="bibr" rid="ref29 ref9">9, 29</xref>
        ]. Indeed, morals establish rules allowing
situations or actions to be evaluated as good or bad. Ethics allows
moral rules to be reconciled when a conflict occurs or when there
are difficulties in their application. In the work presented in
this paper, our definitions are based on this distinction.
      </p>
      <sec id="sec-2-1">
        <title>2.3.1 Moral rules</title>
        <p>In our view, a moral rule is a rule describing, in a
given context, which states of the system must be
considered as good or bad. Thus, a moral rule can be specified by
a formula of the form context → P_var, with P_var being a predicate
defined on variables known by the agents. So, a moral rule can
be seen as a specific conditional invariant property, in that it
is not necessary to check it in order to ensure a correct
execution of the system; but it must be established if the agent
is to be used in a system in which the rule is expected to be
entailed. For example, in the context of an autonomous car,
the property lane = highway → speed ≤ 130 can be
considered as a safety property. As a consequence, this property
must be fulfilled by the agent behaviour. Nevertheless, in
order to avoid threats to life, a caution moral rule rp states
that, when there is ice on the road, the car cannot have a
speed greater than 30 km/h. Formally, this rule is specified
as: weather = ice → speed ≤ 30. This property need not
be entailed in order for the car to have a valid behaviour
in general, but it must be taken into account in systems in
which the preservation of life is considered.</p>
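        <p>As an illustration, such a conditional invariant can be checked against concrete agent states. The following sketch is ours (function and state names are assumptions; GDT4MAS itself works on first-order formulae discharged by a theorem prover, not on executable checks):</p>

```python
# A moral rule as a conditional invariant "context -> property", checked
# against a concrete agent state (a hypothetical sketch, not GDT4MAS code).

def holds(rule, state):
    """The implication context -> prop is violated only when the
    context is active and the property is false."""
    context, prop = rule
    return (not context(state)) or prop(state)

# Caution rule rp: weather = ice implies speed at most 30 km/h.
rp = (lambda s: s["weather"] == "ice",
      lambda s: not s["speed"] > 30)

print(holds(rp, {"weather": "ice", "speed": 25}))  # True: rule respected
print(holds(rp, {"weather": "ice", "speed": 90}))  # False: rule violated
print(holds(rp, {"weather": "dry", "speed": 90}))  # True: context inactive
```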
      </sec>
      <sec id="sec-2-2">
        <title>2.3.2 Ethical rules</title>
        <p>
          When an individual or an agent follows several moral rules, it
sometimes happens that two rules, or more, enter into conflict
with one another. In such a situation, an ethical rule specifies
what should be done. While some rules, like the doctrine of
double effect [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], can be complex ones, we consider in our work that
an ethical rule is a rule stating, in a conflict situation, the
sequence in which the moral rules should be addressed by the
agent. We also consider that an ethical rule is contextual: it
may lead to different decisions according to the circumstances.
Considering the autonomous car example, in order to respect
other drivers, a moral rule rr can be introduced. This new
rule states that, when driving on a highway, the speed cannot
be lower than 80 km/h, which can be formally specified as
lane = highway → speed ≥ 80. This rule may conflict with
the rp rule described before: if there is ice on the road and
if the car uses a highway, according to rp, its speed must
be lower than 30 km/h, but it must also be greater than 80
km/h according to rr. An ethical rule can, for example, state
that, in any case, caution (specified by rp) must be preferred
to respect (specified by rr). Another ethical rule could state
that this preference is to be considered only in case of surgery
and, in other situations, that the preference must be inverted.
2.4 Very few works about the verification of ethics
Dealing with ethical problems with formal approaches is
studied especially in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. In this article, the authors explain why using
formal approaches could be interesting to ensure that agents
fulfil ethical rules. However, it is only a position paper:
no concrete method is proposed to implement these principles.
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], the authors propose to specify and formally prove
the ethical decision process described in [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ]: when a choice
between different actions must be made, a value is associated
with each possible action according to the safety level provided
by the action. As a consequence, if an action A is considered
to be safer than another one, then A is executed. There is, however,
a major drawback to this approach: the ethical dimension is
taken into account only during a choice between actions, which
must be managed using the decision procedure described
above. Thus, this work is not general enough to provide an
effective handling of ethics.
3 GDT4MAS
To take ethical problems into account, we have decided to use
the GDT4MAS approach [
          <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
          ]. Indeed, this method, which
also includes a model, exhibits several characteristics that
are interesting for dealing with ethical problems:
        </p>
        <p>This method proposes a formal language to specify not
only the properties an agent or a MAS must entail but also the
behaviour of agents;
Properties are specified using first-order logic, a well-known
and expressive formal notation;
The proof process can be managed automatically.</p>
        <p>In the next sections, the GDT4MAS method is summarized.
More details can be found in the previously cited articles.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3.1 Main concepts</title>
      <p>Using GDT4MAS requires specifying three concepts: the
environment, agent types, and agents themselves, which are considered
as instances of agent types. In the remainder of this section,
each of these parts is briefly described.</p>
      <p>Environment The environment is specified by a set of
typed variables and by an invariant property iE.</p>
      <p>
        Agent types Agent types are each specified by a set of
typed variables, an invariant property and a behaviour. An
agent behaviour is mainly defined by a Goal Decomposition
Tree (GDT). A GDT is a tree where each node is a goal. Its
root node is associated with the main goal of the agent. A plan,
specified by a sub-tree, is associated with each goal: when this
plan is successfully executed, it means that the goal associated
with its root node is achieved. A plan can be made of either a
simple action or a set of goals linked by a decomposition
operator. The reader is invited to read [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ] to see how
goals are formally described.
      </p>
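      <p>The tree structure described above can be sketched as a small data type. This is a rough illustration under our own naming; the operator label and field names are assumptions, and the actual formal definitions of goals and decomposition operators are those of [20, 21]:</p>

```python
# A minimal sketch of a Goal Decomposition Tree node: names and the
# "SeqOr" operator label are our own illustration, not GDT4MAS syntax.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    operator: Optional[str] = None       # decomposition operator for inner nodes
    subgoals: List["Goal"] = field(default_factory=list)
    action: Optional[str] = None         # leaf plans reduce to a simple action

# The light example of section 3.2: try to enter the room (the photoelectric
# cell may switch the light on); if that fails, use the switch.
root = Goal("TurnLightOn", operator="SeqOr",
            subgoals=[Goal("EnterRoom", action="enter"),
                      Goal("UseSwitch", action="press_switch")])
print(root.name, [g.name for g in root.subgoals])
```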
      <p>Agents Agents are specified as instances of agent types,
with effective values assigned to the agent type parameters.</p>
    </sec>
    <sec id="sec-4">
      <title>3.2 GDT example</title>
      <p>
        Figure 1 shows an example of a GDT. The goal of the behaviour
specified by this GDT is to turn on the light in a room n (n
is a GDT parameter). To achieve this goal, the agent tries
to enter the room. Indeed, a photoelectric cell is expected
to detect when someone tries to enter the room and then
to switch on the light, so this seems to be a relevant plan.
However, the photoelectric cell does not always work properly
(thus, the resolution of the goal Entering the room may fail)
and the agent may have to use the switch. More details can be
found in [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>3.3 Proof principles</title>
      <p>The goal of the proof mechanism proposed in GDT4MAS is
to prove the following properties:</p>
      <p>During their execution, agents maintain their invariant
property. This kind of property states that the agent must
stay in valid states;
The behaviour of agents is sound (i.e. plans associated with
goals are correct);
Agents fulfil their liveness properties. These properties
specify dynamic characteristics which must be exhibited
by the agent behaviour.</p>
      <p>Moreover, the proof mechanism is based on some key
principles. In particular, proof obligations (i.e. properties that must
be proven to ensure the system's correctness) can be generated
automatically from a GDT4MAS specification. They are
expressed in first-order logic and can be proven by any suitable
theorem prover. Last but not least, the proof system is
compositional: proving the correctness of an agent consists in
proving several small independent proof obligations.</p>
    </sec>
    <sec id="sec-6">
      <title>4 Proving an ethics</title>
      <p>4.1 Problem characterisation
Let us consider an agent ag whose behaviour has been formally
specified and whose correctness has been proven with respect
to the previously described properties. Let us suppose that this agent
must be used in a world with an ethical rule er based on a set of
moral rules. The question we are interested in is the following:
does the behaviour of ag entail the ethical rule er?</p>
      <p>As GDT4MAS notably allows invariant properties
to be proven, we propose that moral rules and ethical rules be
expressed as such properties. Indeed, most moral rules can
easily be specified by invariant properties. As a consequence, we
propose to structure each moral rule as:</p>
      <p>{(when_i, {(var_i, set_i)})}</p>
      <p>This means that each rule constrains, in different contexts,
the set of values (set_i) which can be assigned to different
variables (var_i). So, the caution rule rp described in section 2.3
could be formalized as follows:</p>
      <p>{(weather = ice, {(speed, {0 … 30})})}</p>
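      <p>In code, this nested structure can be mirrored directly. The encoding below is a hypothetical one of ours (a rule as a list of (context predicate, constraint list) pairs), not a data structure prescribed by GDT4MAS:</p>

```python
# Encoding {(when_i, {(var_i, set_i)})}: a rule is a list of
# (context predicate, constraint list) pairs (illustrative names only).

def allowed_values(rule, state, var):
    """Sets of values the rule allows for var in this state, one per
    active context that constrains var."""
    return [vals for when, constraints in rule
            for v, vals in constraints
            if when(state) and v == var]

# Caution rule rp: {(weather = ice, {(speed, {0 .. 30})})}.
rp = [(lambda s: s["weather"] == "ice", [("speed", set(range(31)))])]

print(allowed_values(rp, {"weather": "ice"}, "speed") == [set(range(31))])  # True
print(allowed_values(rp, {"weather": "dry"}, "speed"))                      # []
```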
      <p>However, specifying ethical rules as invariant properties is
not as obvious as it is for moral rules. Indeed, they do not
characterize system states but provide prioritisations of moral
rules with respect to different contexts.</p>
      <p>Let MR be the set of moral rules and let P be the set
of predicates on the variables which can be perceived by a given
agent. An ethical rule er is defined as:</p>
      <p>er ∈ P ⇸ (1 … card(MR) ⤖ MR)</p>
      <p>Here, X ⇸ Y is the set of partial functions from X to Y
and X ⤖ Y is the set of bijections from X to Y. Therefore,
informally, this definition means that, in some cases
characterized by a given predicate p ∈ P, the moral rules in MR are
prioritized. For example, if p ∈ P is a predicate, er(p)(1) defines the
moral rule with the highest priority when p is true, er(p)(2)
defines the one with the second highest priority, and so on.</p>
      <p>To exemplify this principle, here is an example: an agent
A1 must choose the colour of a traffic light tl1 which stands on
road r1, at a crossroads with road r2. In the system in which
this agent acts, two moral rules stand. The first one, mr1, states
that, to avoid accidents, when the traffic light tl2 on road r2 is
green or orange, then tl1 cannot be green. This rule can be
formalized as:</p>
      <p>{(tl2 ∈ {green, orange}, {(tl1, {orange, red})})}</p>
      <p>The second moral rule mr2 states that road r1 is a very
high-priority road and thus the traffic light tl1 must
always be green. This rule can be formalized as:</p>
      <p>{(true, {(tl1, {green})})}</p>
      <p>Obviously, these two rules cannot always be satisfied at the
same time, especially when the second traffic light is green:
in this situation, according to mr1, tl1 must be orange or red
but, according to mr2, tl1 must be green.</p>
      <p>Let us now suppose that, in the considered system, an
ethical rule er provides priorities over moral rules. For example, er
states that r1 is a priority road unless tl2 is green or orange.
In other words, this means that mr1 always has a higher
priority than mr2. Formally, it can be expressed by:
er = {(true, {(1, mr1), (2, mr2)})}</p>
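      <p>A minimal executable reading of this prioritisation follows. The encoding and the "first applicable rule wins" shortcut are our own simplification; the full transformation of section 4.2 also handles intersections of allowed sets when both rules apply:</p>

```python
# Moral rules as (context, (variable, allowed set)) cases, and the ethical
# rule er as (context, priority-ordered rules). Encoding is illustrative.
mr1 = [(lambda s: s["tl2"] in {"green", "orange"}, ("tl1", {"orange", "red"}))]
mr2 = [(lambda s: True, ("tl1", {"green"}))]
er = [(lambda s: True, [mr1, mr2])]  # mr1 always outranks mr2

def choose(er, state, var):
    """Allowed values for var under the highest-priority applicable rule."""
    for cond, ranking in er:
        if cond(state):
            for rule in ranking:                 # rules in priority order
                for when, (v, vals) in rule:
                    if when(state) and v == var:
                        return vals
    return None

print(choose(er, {"tl2": "green"}, "tl1"))  # mr1 applies: orange or red
print(choose(er, {"tl2": "red"}, "tl1"))    # mr1 silent, mr2 applies: green
```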
    </sec>
    <sec id="sec-7">
      <title>4.2 Proposed solution</title>
      <p>As part of our work, we wish to prove that the behaviour of
an agent is correct with respect to an ethical rule defined on
the basis of several moral rules. The behaviour of an agent
cannot fulfil the set of all moral rules that are relevant to it
since, as explained previously, these rules may be conflicting.
As a consequence, in order to ensure that the behaviour of
an agent is correct with respect to a given ethical rule, we
propose a predicate transformation system that turns the
predicates associated with moral rules into other predicates which
can be proven, according to the priorities introduced by the
ethical rule. In the work presented here, situations with only
two moral rules involved are considered, but the proposed
principle could be used for a system with more moral rules.
The main principle is that moral rules and ethical rules are
turned into a set of invariant properties, which can
be proven with our proof system.</p>
      <p>In the remainder, the transformation is shown in a case
where only one variable is affected by the moral rules. In the
general case, the same formulae must be generated for each
variable appearing in the set of moral rules. If a variable appears
only in a subset of the moral rules, it is added to the other moral
rules with a unique constraint: its value must be in the
variable's definition domain.</p>
      <p>Let us now consider a variable V. Let us also suppose that a
moral rule mr1 provides the following constraints on V:</p>
      <p>mr1 = {(when_mr11, (V, set_mr11)), (when_mr12, (V, set_mr12))}</p>
      <p>Let us suppose that a second moral rule mr2 provides the
following constraints on V:</p>
      <p>mr2 = {(when_mr21, (V, set_mr21)), (when_mr22, (V, set_mr22)), (when_mr23, (V, set_mr23))}</p>
      <p>Last but not least, it is also supposed that an ethical rule er
specifies that if condition cond1 is true, mr1 has the highest
priority over mr2, and the opposite holds if condition
cond2 is true. This ethical rule er is defined as follows:
er = {(cond1, {(1, mr1), (2, mr2)}), (cond2, {(1, mr2), (2, mr1)})}</p>
      <p>We can then generate a set of provable invariant properties
associated with the ethical rule and the moral rules. First of all,
according to er, when cond1 is true, mr1 takes precedence:
cond1 → (when_mr11 → V ∈ set_mr11)
cond1 → (when_mr12 → V ∈ set_mr12)</p>
      <p>Secondly, when cond1 is true and mr1 does not apply,
mr2 must be fulfilled:</p>
      <p>cond1 → ((¬when_mr11 ∧ ¬when_mr12) → (when_mr21 → V ∈ set_mr21))
cond1 → ((¬when_mr11 ∧ ¬when_mr12) → (when_mr22 → V ∈ set_mr22))
cond1 → ((¬when_mr11 ∧ ¬when_mr12) → (when_mr23 → V ∈ set_mr23))</p>
      <p>Finally, when cond1 is true and both mr1 and mr2 apply, if
possible, a value entailing the two moral rules must be chosen:
(cond1 ∧ when_mr11 ∧ when_mr21) → (set_mr11 ∩ set_mr21 ≠ ∅ → V ∈ set_mr11 ∩ set_mr21)
(cond1 ∧ when_mr11 ∧ when_mr22) → (set_mr11 ∩ set_mr22 ≠ ∅ → V ∈ set_mr11 ∩ set_mr22)
(cond1 ∧ when_mr11 ∧ when_mr23) → (set_mr11 ∩ set_mr23 ≠ ∅ → V ∈ set_mr11 ∩ set_mr23)
(cond1 ∧ when_mr12 ∧ when_mr21) → (set_mr12 ∩ set_mr21 ≠ ∅ → V ∈ set_mr12 ∩ set_mr21)
(cond1 ∧ when_mr12 ∧ when_mr22) → (set_mr12 ∩ set_mr22 ≠ ∅ → V ∈ set_mr12 ∩ set_mr22)
(cond1 ∧ when_mr12 ∧ when_mr23) → (set_mr12 ∩ set_mr23 ≠ ∅ → V ∈ set_mr12 ∩ set_mr23)
Similar invariant properties must also be generated when
cond2 is true, but this time with mr2 being the moral rule
with the highest priority.</p>
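      <p>These generated formulae can be prototyped as executable predicates. The sketch below uses our own encoding and is only a finite-state illustration; it is not the GDT4MAS proof-obligation generator, which discharges such formulae with a theorem prover:</p>

```python
# Generate the invariant formulas for one variable from two moral rules
# (lists of (when, allowed_set) cases) and a condition cond1 under which
# mr1 outranks mr2. Each formula is a predicate over a state dictionary.

def implies(a, b):
    return (not a) or b

def generate_invariants(cond1, mr1, mr2, var):
    fs = []
    # 1) Under cond1, mr1 takes precedence.
    for w1, s1 in mr1:
        fs.append(lambda st, w=w1, s=s1:
                  implies(cond1(st), implies(w(st), st[var] in s)))
    # 2) Under cond1, if no case of mr1 applies, mr2 must be fulfilled.
    for w2, s2 in mr2:
        fs.append(lambda st, w=w2, s=s2:
                  implies(cond1(st) and not any(wa(st) for wa, _ in mr1),
                          implies(w(st), st[var] in s)))
    # 3) If cases of both rules apply, choose in the intersection when nonempty.
    for w1, s1 in mr1:
        for w2, s2 in mr2:
            fs.append(lambda st, wa=w1, sa=s1, wb=w2, sb=s2:
                      implies(cond1(st) and wa(st) and wb(st),
                              implies(len(sa.intersection(sb)) > 0,
                                      st[var] in sa.intersection(sb))))
    return fs

# Traffic light example: mr1 outranks mr2 in every context (cond1 = true).
mr1 = [(lambda st: st["tl2"] in {"green", "orange"}, {"orange", "red"})]
mr2 = [(lambda st: True, {"green"})]
inv = generate_invariants(lambda st: True, mr1, mr2, "tl1")

ok = lambda st: all(f(st) for f in inv)
print(ok({"tl2": "green", "tl1": "red"}))    # True: mr1 respected
print(ok({"tl2": "green", "tl1": "green"}))  # False: mr2 wrongly preferred
print(ok({"tl2": "red", "tl1": "green"}))    # True: mr2 applies
```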
      <p>Let us now use this mechanism for the previously presented
example. As cond1 is true, the formulae can be simplified a first
time. Moreover, as there is only one case (a single when) for
mr1 and mr2, the previous formulae can be simplified a second
time, as described in the following. When cond1 is true, mr1
is the rule with the highest priority:</p>
      <p>when_mr11 → V ∈ set_mr11</p>
      <p>When cond1 is true and when mr1 does not apply, mr2
must be taken into account:</p>
      <p>(¬when_mr11) → (when_mr21 → V ∈ set_mr21)</p>
      <p>When cond1 is true and both mr1 and mr2 apply, if
possible, a value entailing the two moral rules must be chosen:
(when_mr11 ∧ when_mr21) → (set_mr11 ∩ set_mr21 ≠ ∅ → V ∈ set_mr11 ∩ set_mr21)</p>
      <p>Moreover, with V = TL1, the following equivalences stand:
when_mr11 ≡ (TL2 ∈ {green, orange})
set_mr11 ≡ {orange, red}
when_mr21 ≡ true
set_mr21 ≡ {green}</p>
      <p>As a consequence, the following invariant property can be
obtained. It must be proven in order to ensure that the
behaviour of agent A1 is executed with respect to the ethical
rule which specifies that road r1 is a priority road unless
the traffic light TL2 is green or orange:</p>
      <p>TL2 ∈ {green, orange} → TL1 ∈ {orange, red}
TL2 ∉ {green, orange} → TL1 ∈ {green}
TL2 ∈ {green, orange} → ({orange, red} ∩ {green} ≠ ∅ → TL1 ∈ {orange, red} ∩ {green})</p>
      <p>As {orange, red} ∩ {green} = ∅, this invariant property can
be simplified:</p>
      <p>TL2 ∈ {green, orange} → TL1 ∈ {orange, red}</p>
      <p>TL2 ∉ {green, orange} → TL1 ∈ {green}</p>
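      <p>The simplified invariant can be checked exhaustively over the three light colours. This is a small standalone script of ours, not part of the GDT4MAS tool chain:</p>

```python
# Enumerate which colours of TL1 satisfy the simplified invariant for
# each colour of TL2 (encoding ours; mirrors the two formulae above).
colours = ["green", "orange", "red"]

def invariant(tl1, tl2):
    a = (tl2 not in {"green", "orange"}) or (tl1 in {"orange", "red"})
    b = (tl2 in {"green", "orange"}) or (tl1 in {"green"})
    return a and b

for tl2 in colours:
    allowed = [t for t in colours if invariant(t, tl2)]
    print(tl2, "->", allowed)
# green  -> ['orange', 'red']
# orange -> ['orange', 'red']
# red    -> ['green']
```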
      <p>Therefore, thanks to the proposed predicate
transformation system, a new invariant property is generated which will
be maintained by any agent whose behaviour fulfils the
different moral rules as specified by the ethical rule defined in
the system. The proof system associated with GDT4MAS makes it possible
to prove that the formal specification of such an agent leads
to a behaviour that maintains the invariant property.</p>
    </sec>
    <sec id="sec-8">
      <title>4.3 Case study</title>
      <p>In this section, an application of the principles presented in the
previous section to a new case study is shown. This case study
is based on a more usual ethical question and has not been
designed especially as a use case for our framework. It involves
three agents A, B and C which have to find a meeting date.
An agent can propose a date, and the two other agents must
inform all agents whether the date suits them or not. For example,
A can propose a date to B and C. If B or C does not accept
the date, it must give the reason for its denial to the other agents.
Let us suppose that d is a date proposed by A. C has to act with
respect to the following moral rules:
mr1: C does not want to hurt anybody;
mr2: C must inform A and B about the reason why the
date d does not suit it.</p>
      <p>However, if the true reason why C does not
accept the date d can hurt A (for example, a date with A's
wife), the two rules mr1 and mr2 are conflicting. To manage
this conflict, C is supposed to use an ethical rule er which
states that, in any case, it is better not to hurt than to tell
the truth.</p>
      <p>In order to formalise this problem, some notations must be
introduced. The set of answers that can be given by C to A
or B is called ERP and is defined as ERP = {r1, r2, r3, r4}.
The variable that contains the true reason explaining the
denial of C is called VRC. To clarify the issue, here is an example
of what the different reasons used by C could be:
r1: I have a date with A's wife;
r2: I am sick;
r3: A is not good at organising meetings;
r4: I had an accident with the car that B lent to me.</p>
      <p>Moreover, the set of hurting answers for each agent is
specified by a function FRD ∈ agents → P(ERP). In the example,
FRD = {(A, {r1, r3}), (B, {r4})}, which means that r1 and r3
are hurting answers for agent A and r4 is a hurting answer
for agent B. The variable containing the answer to agent A is
called VRFA and the variable containing the answer to agent
B is called VRFB.</p>
      <p>In this example, two moral rules are identified:
mr1: C does not want to hurt A or B, which is why its answers
must be chosen among the non-hurting ones;
mr2: C does not want to lie, which is why its answers must
be true reasons.</p>
      <sec id="sec-8-1">
        <title>These rules can be formalised as:</title>
        <p>mr1 : {(true, {(VRFA, ERP − FRD(A)), (VRFB, ERP − FRD(B))})}</p>
        <p>mr2 : {(true, {(VRFA, {VRC}), (VRFB, {VRC})})}
Finally, an ethical rule er states that, in any case, mr1 has
a higher priority than mr2, which can be formalised by:
er = {(true, {(1, mr1), (2, mr2)})}</p>
        <p>Applying the principles described in the previous section, we
have to add the formulae given below to the invariant property
associated with C (only the formulae generated for
VRFA are shown here; similar formulae for VRFB must also be added). For
each formula, we summarize informally what it specifies.</p>
        <p>When cond1 is true, mr1 has the highest priority:
true → (true → VRFA ∈ ERP − FRD(A))
When cond1 is true and mr1 does not apply, mr2 must be
used:</p>
        <p>true → ((¬true) → (true → VRFA ∈ {VRC}))
When cond1 is true and mr1 and mr2 apply, if possible, a
value entailing the two moral rules must be chosen:
(true ∧ true ∧ true) →
((ERP − FRD(A)) ∩ {VRC} ≠ ∅ →
VRFA ∈ (ERP − FRD(A)) ∩ {VRC})</p>
      </sec>
      <sec id="sec-8-2">
        <title>This can then be simplied as follows:</title>
        <p>VRFA ∈ ERP − FRD(A)
(ERP − FRD(A)) ∩ {VRC} ≠ ∅ → VRFA ∈ (ERP − FRD(A)) ∩ {VRC}</p>
        <p>If the final invariant property, obtained by adding this set
of properties to the initial invariant property of agent C, is
maintained by C, it ensures that the behaviour of C entails
the ethical rule introduced before. And the proof system
associated with GDT4MAS makes it possible to prove that the behaviour of
an agent maintains an invariant property.</p>
        <p>As a consequence, according to these properties, in the
presented case study, and in a context where the true reason for
C to deny the date is r1, an agent whose behaviour is
executed with respect to the ethical rule er only has to
ensure:</p>
        <p>VRF_A ∈ {r1, r2, r3, r4} \ {r1, r3}
This can be simplified as:</p>
        <p>VRF_A ∈ {r2, r4}
On the other hand, if the true reason is r2, the behaviour of
C should entail the two following properties:</p>
        <p>VRF_A ∈ {r1, r2, r3, r4} \ {r1, r3}
VRF_A ∈ ({r1, r2, r3, r4} \ {r1, r3}) ∩ {r2}
This implies that the only solution is VRF_A = r2. Proceeding
like that for each possible reason that can be given by C, the
following table can be obtained:
VRC = r1: VRF_A ∈ {r2, r4}; VRC = r2: VRF_A = r2;
VRC = r3: VRF_A ∈ {r2, r4}; VRC = r4: VRF_A = r4.
This little analysis allows generating simple properties which
must be entailed by an agent that prefers to lie rather than to hurt
but that, when possible, tells the truth. Indeed, when the true
reason does not hurt A (r2 or r4), an agent whose behaviour is
proven to be correct must give this reason. However,
when the true reason hurts A (r1 or r3), an agent with a
correct behaviour must lie by giving A another
reason (here, r2 or r4).</p>
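        <p>This case analysis can be reproduced mechanically. The following sketch (our own helper, not part of GDT4MAS) computes, for each possible true reason, the answers allowed by the two simplified properties:</p>

```python
ERP = {"r1", "r2", "r3", "r4"}   # all reasons
FRD_A = {"r1", "r3"}             # reasons hurting A

def allowed_answers(vrc):
    """Answers VRF_A permitted by the two simplified properties."""
    non_hurting = ERP - FRD_A          # mr1: never hurt A
    both = non_hurting & {vrc}         # also satisfy mr2 when compatible
    return both if both else non_hurting

for vrc in sorted(ERP):
    print(vrc, "->", sorted(allowed_answers(vrc)))
# r1 -> ['r2', 'r4']   (must lie)
# r2 -> ['r2']         (tells the truth)
# r3 -> ['r2', 'r4']   (must lie)
# r4 -> ['r4']         (tells the truth)
```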
      </sec>
    </sec>
    <sec id="sec-9">
        <title>5 Conclusion and future work</title>
      <p>In this article, we have shown that it is possible to formally prove
that an agent acts with respect to potentially conflicting moral
rules if there exists an ethical rule allowing conflicts to be
managed. Indeed, this rule must specify, when at least two moral
rules are conflicting and in different contexts, priorities
between the different moral rules. In order to achieve this, we
have introduced predicate transformers which enable us to
generate a set of consistent predicates from nonetheless
conflicting moral rules. After a first simple example used to
introduce the concepts, we have shown with a more concrete case
study that the proposed framework may be used for more
real-world cases.</p>
      <p>Other case studies are however required to really validate
the scope of the proposed framework. In particular, moral
rules have been restricted to rules that can be specified as
disjoint assignment constraints on variable values. It seems
important to evaluate the consequences of this restriction.
For cases where this restriction would invalidate the proposed
approach, we have to study how this framework could be
extended to linked variable assignments. For example, one
could imagine that the caution rule, associated to the case
of driving on ice, may establish a link between the maximum
speed and the angle of the car to the straight direction as
follows: weather = ice → speed + angle/2 ≤ 40. Indeed, the
sharper the turn taken by the car, the lower the speed
must be to avoid the car skidding.</p>
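      <p>Such a linked-variables constraint is easy to evaluate at runtime, even though proving it over an agent's whole behaviour is harder. A hypothetical check for the icy-road caution rule sketched above:</p>

```python
# Hypothetical runtime check of the linked-variables caution rule:
# on ice, speed and steering angle are jointly bounded.
def caution_rule_holds(weather, speed, angle):
    if weather != "ice":
        return True                    # the rule only constrains icy weather
    return speed + angle / 2 <= 40     # sharper turn => lower allowed speed

print(caution_rule_holds("ice", 30, 20))   # True: 30 + 10 <= 40
print(caution_rule_holds("ice", 38, 10))   # False: 38 + 5 > 40
```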
      <p>Last but not least, from a philosophical point of view, our
approach must be extended in order to capture morality
and ethics more precisely, especially by integrating the notion of
value. Indeed, moral rules are generally based on values such as
generosity, equality, love of the truth... and, in a specific context,
ethical judgement uses a hierarchy between these values.
Formally specifying the notion of value is then the next step of our
work.</p>
    </sec>
  </body>
  <back>
  </back>
</article>