=Paper=
{{Paper
|id=Vol-1668/paper8
|storemode=property
|title=How Ethical Frameworks Answer to Ethical Dilemmas: Towards a Formal Model
|pdfUrl=https://ceur-ws.org/Vol-1668/paper8.pdf
|volume=Vol-1668
|authors=Vincent Bonnemains,Claire Saurel,Catherine Tessier
|dblpUrl=https://dblp.org/rec/conf/ecai/BonnemainsCT16
}}
==How Ethical Frameworks Answer to Ethical Dilemmas: Towards a Formal Model==
Vincent Bonnemains1 and Claire Saurel2 and Catherine Tessier3
1 ONERA and University Paul Sabatier, France, email: Vincent.Bonnemains@onera.fr
2 ONERA, France, email: Claire.Saurel@onera.fr
3 ONERA, France, email: Catherine.Tessier@onera.fr

Abstract. This paper is a first step towards a formal model that is intended to be the basis of an artificial agent's reasoning that could be considered by a human as an ethical reasoning. This work is part of a larger project aiming at designing an authority-sharing manager between a robot and a human being when the human-robot system faces decision making involving ethical issues. Indeed the possible decisions in such a system will have to be considered in the light of arguments that may vary according to each agent's point of view. The formal model allows us to translate in a more rigorous way than in natural language what is meant by various ethical frameworks and paves the way for further implementation of an "ethical reasoning" that could put forward arguments explaining one judgement or another. To this end the ethical framework models will be instantiated on some classical ethical dilemmas and then analyzed and compared to each other as far as their judgements on the dilemmas are concerned.

1 INTRODUCTION

Let us consider two classical ethical dilemmas. How would you react?

1. The crazy trolley
A trolley that can no longer stop is hurtling towards five people working on the track. They will die, hit by the trolley, unless you decide to move the switch to divert the train to another track on which only one person is working. What would you do? Sacrifice one person to save the other five, or let five people die?
2. The "fatman" trolley
A trolley that can no longer stop is hurtling towards five people working on the track. This time you are on a bridge, a few meters before them, with a fat man. If you push this man onto the track, he is fat enough to stop the trolley and save the five people, but he will die. Would you push the "fatman"?

There is no really "right" answer to those dilemmas; nevertheless ethics may be used to guide reasoning about them. Therefore we will start with general definitions about ethics and related concepts.

Definition 1 (Ethics) Ricoeur [9] defines ethics as compared to norm in so far as norm states what is compulsory or prohibited, whereas ethics goes further and defines what is fair and what is not, for oneself and for others. It is this judgement that leads the human through their actions.

As far as ethical dilemmas are concerned, one builds a decision on normative ethics.

Definition 2 (Principle or moral value) Principles or moral values are policies, ways of acting. Example: "Thou shalt not lie".

Definition 3 (Ethical dilemma) An ethical dilemma is a situation where it is impossible to make a decision without overriding one of our principles.

Note that the definition used (based on [11]) is the usual one, not the logic one.

Definition 4 (Normative ethics) Normative ethics aims at building a decision through some norm established by a particular ethical framework [3].

Definition 5 (Ethical framework) An ethical framework gives us a way for dealing with situations involving ethical dilemmas thanks to principles, metrics, etc. For example utilitarianism focuses on the consequences of a decision, the best being the one which provides the most good or does the least harm.

We will consider that the agent is the entity that has to make a decision in an ethical dilemma.

In this paper, our aim is to formalize different kinds of judgements according to various ethical frameworks, in order to provide an artificial agent with the decision-making capability in front of an ethical dilemma, together with the capability to explain its decision, especially in a user/operator-robot interaction context [10]. It is inspired by two papers, [4] and [7], whose goals are close to ours, i.e. to find a way to judge how ethical an action is with regard to the agent's beliefs.

The work of [7] is based on a model of beliefs, desires, values and moral rules which enables the agent to evaluate, on a boolean basis, whether each action is moral, desirable, possible, etc. According to preferences between those criteria, the agent selects an action. The main goal of this model is to allow an agent to estimate the ethics of other agents in a multi-agent system. However, the way to determine whether an action is right, fair or moral is not detailed. Moreover the paper does not question the impact of an action on the world, nor the causality between events.

The work of [4] is based on the crazy trolley dilemma, and intends to formalize and apply the Doctrine of Double Effect. The agent's responsibility, and the causality between fluents and events, are studied (for example an event makes a fluent true, a fluent is necessary for an event occurrence, etc.). Nevertheless, some concepts are not deepened enough: for example, the proportionality concept is not detailed and is only based on numbers (i.e. the number of saved lives).
Both approaches have given us ideas on how to model an ethical judgement, starting from a world representation involving facts and causality, as well as about some modelling issues: how to determine a moral action? how to define proportionality? Like [4], we will formalize ethical frameworks, including the Doctrine of Double Effect. Moreover the judgements of decisions by the ethical frameworks are inspired by [7]. Nevertheless we will get multi-view judgements by using several ethical frameworks on the same dilemma.
We will first propose some concepts to describe the world and the ethical dilemma itself. Then we will provide details about ethical frameworks, tools to formalize them and how they judge possible choices in the ethical dilemmas. Choice (or decision) is indeed the core of our model, since it is about determining what is ethically acceptable or not according to the ethical framework. We will show that although each ethical framework gives different judgements on the different ethical dilemmas, similarities can be highlighted.

2 CONCEPTS

2.1 Assumptions

For this work we will assume that:

• The agent decides and acts in a complex world which changes.
• The ethical dilemma is studied from the agent's viewpoint.
• For each ethical dilemma, the agent has to make a decision among all possible decisions. We will consider "doing nothing" as a possible decision.
• In the context of an ethical dilemma, the agent knows all the possible decisions and all the effects of a given decision.
• Considerations such as good/bad4 and positive/negative5 are defined as such from the agent's viewpoint.

4 A decision is good if it meets the moral values of the agent; a bad decision violates them.
5 A fact is positive if it is beneficial for the agent; it is negative if it is undesirable for the agent.

Moreover, as some dilemmas involve the human life question, we will make the simplifying assumption:

• A human life is perfectly equal to another human life, whoever the human being is.

In the next sections we will define some concepts to represent the world and its evolution. Those concepts and their interactions are illustrated in figure 1.

2.2 World state

We characterize the environment around the agent by world states.

Definition 6 (World state - Set S) A world state is a vector of state components (see definition below). Let S be the set of world states.

Figure 1. The world and concepts6

6 This model is not far from event calculus and situation calculus. As things currently stand, fluents are close to state components, and events and actions modify their values through functions (such as Consequence in this paper).

Definition 7 (State component / fact - Set F) A state component, also named fact, is a variable that can be instantiated only with antagonist values. We consider antagonist values as two values regarding the same item, one being the negation of the other. An item can be an object (or several objects), a living being (or several living beings), or anything else which needs to be taken into account by the agent. Let F be the set of state components.

Example:
• f5 = five people are alive
• f5° = five people are dead

Because the two values of a fact concern the same item, f5 and f5° concern the same five people.
Depending on the context, "°" will not have exactly the same meaning. This notation allows us to consider antagonist values such as gain/loss, gain/no gain, loss/no loss, etc. Those values have to be defined for each fact.
Consequently an example of a world state is:

s ∈ S, s = [f1°, f5], f1°, f5 ∈ F    (1)

2.3 Decision, event, effect

Definition 8 (Decision - Set D) A decision is a choice of the agent to do something, i.e. perform an action, or to do nothing and let the world evolve. Let D be the set of decisions.

When the agent makes a decision, this results in an event that modifies the world. Nevertheless an event can also occur as part of the natural evolution of the world, including the action of another agent. Consequently we will differentiate the event concept from the agent's decision concept.

Definition 9 (Event - Set E) An event is something that happens in the world that modifies the world, i.e. some states of the world. Let E be the set of events.

Let Event be the function computing the event linked to a decision:

Event : D → E    (2)
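As an aside, the concepts defined so far lend themselves to a compact computational form. The following Python sketch is ours, not the paper's; the class and variable names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    """A state component: an item together with one of two antagonist values."""
    item: str          # e.g. "five people"
    holds: bool        # True -> nominal value, False -> antagonist value (the "°" notation)

    def negation(self) -> "Fact":
        """Return the antagonist value of the same item (f vs f°)."""
        return Fact(self.item, not self.holds)

# A world state is a vector of facts (Definition 6); an effect has the same shape.
WorldState = tuple[Fact, ...]

# Decisions and events are identified by name here (Definitions 8 and 9).
Decision = str
Event = str

# Event : D -> E, the event resulting from a decision (crazy trolley example).
EVENT: dict[Decision, Event] = {
    "move the switch": "train hits one person",
    "do nothing": "train hits five people",
}

f5 = Fact("five people", True)   # five people are alive
f1 = Fact("one person", True)    # one person is alive
initial_state: WorldState = (f5, f1)
```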
The consequence of an event is the preservation or modification of state components. The resulting state is called effect.

Definition 10 (Effect) The effect of an event is a world state of the same dimension and composed of the same facts as the world state before the event; only the values of the facts may change. Effect ∈ S.
Let Consequence be the function to compute the effect from the current state:

Consequence : E × S → S    (3)

Example:

f1, f5, f5° ∈ F    (4)
e ∈ E    (5)
i ∈ S, i = [f1, f5]    (6)
Consequence(e, i) = [f1, f5°]    (7)

In the case of the crazy trolley dilemma, if the agent's decision is to "do nothing" (no action of the agent), the trolley will hit the five people (event) and they will be killed (effect). If the agent's decision is to "move the switch" (decision), the trolley will hit one person (event) and they will be killed (effect).

3 ETHICAL FRAMEWORKS

3.1 Judgement

The agent will make a decision according to one or several ethical frameworks. Each ethical framework will issue a judgement on a decision, e.g. on the decision nature, the event consequence, etc. When several ethical frameworks are considered by the agent, their judgements may be confronted to compute the agent's resulting decision, see figure 2.

Figure 2. Decision computing from ethical frameworks judgements

Indeed the judgement of an ethical framework determines whether a decision is acceptable, unacceptable or undetermined as regards this ethical framework. A decision is judged acceptable if it does not violate the principles of the ethical framework. A decision is judged unacceptable if it violates some principles of the ethical framework. If we cannot determine whether the decision violates principles or not, it is judged undetermined. Let V be the set

V = {acceptable (⊤), undetermined (?), unacceptable (⊥)}    (8)

All judgements have the same signature:

Judgement : D × S → V    (9)

The literature highlights three major ethical frameworks [8]: consequentialist ethics, deontological ethics and virtue ethics.
As far as virtue ethics is concerned, it deals with the agent itself in so far as the agent tries to be the best possible agent: through some decisions, some actions, it becomes more or less virtuous. Virtues could be: honesty, generosity, bravery, etc. [5]. However it seems difficult to confer virtues on an artificial agent as they are complex human properties. Consequently, according to [2], we will not consider an artificial agent as virtuous or not in this paper.
By contrast, and according to [4], we will consider the Doctrine of Double Effect although it is not one of the three main frameworks. Indeed it uses some concepts of them and introduces some other very relevant concepts such as causality and proportionality [6].

3.2 Consequentialist ethics

This ethical framework focuses only on the consequences of an event. According to consequentialist ethics, the agent will try to obtain the best possible result (i.e. the best effect), disregarding the means (i.e. the event). The main issue with this framework is to be able to compare the effects of several events, i.e. to compare sets of facts. Consequently

• we will distinguish between positive facts and negative facts within an effect;
• we want to be able to compute preferences between effects, i.e. to compare the set of positive (resp. negative) facts of an effect with the set of positive (resp. negative) facts of another effect.

3.2.1 Positive/Negative facts

Let Positive and Negative be the functions:

Positive/Negative : S → P(F)    (10)

returning the subset of facts estimated as positive (resp. negative) from an effect.
In this paper, we assume that for an effect s:

Positive(s) ∩ Negative(s) = ∅    (11)

3.2.2 Preference

Let ≻c be the preference relation on subsets of facts (P(F)). F1 ≻c F2 means that subset F1 is preferred to subset F2 from the consequentialist viewpoint. Intuitively we will assume the following properties of ≻c:

• if a subset of facts F1 is preferred to another subset F2, then it is impossible to prefer F2 to F1:

F1 ≻c F2 → ¬(F2 ≻c F1)    (12)

• if F1 is preferred to F2 and F2 is preferred to another subset of facts F3, then F1 is preferred to F3:

[(F1 ≻c F2) ∧ (F2 ≻c F3)] → F1 ≻c F3    (13)

• a subset of facts cannot be preferred to itself:

∄ Fi / Fi ≻c Fi    (14)

Consequently ≻c is a strict order (irreflexive, asymmetric and transitive).
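As an illustration of 3.2.1 and 3.2.2 (ours, not the paper's), an effect can be partitioned into positive and negative facts, and a candidate preference relation, given as an explicit set of pairs, can be checked against properties (12)-(14). The "~" naming convention for antagonist values and the valuation rule are assumptions.

```python
from itertools import product

# Facts are plain strings here; "~" marks the antagonist value (f vs f°).
Effect = frozenset[str]

def positive(effect: Effect) -> frozenset[str]:
    """Facts estimated as positive from the agent's viewpoint (assumed convention)."""
    return frozenset(f for f in effect if not f.startswith("~"))

def negative(effect: Effect) -> frozenset[str]:
    """Facts estimated as negative; disjoint from positive(effect) by construction."""
    return frozenset(f for f in effect if f.startswith("~"))

def is_strict_order(prefers: set[tuple[frozenset, frozenset]]) -> bool:
    """Check irreflexivity, asymmetry and transitivity of an explicit relation."""
    irreflexive = all(a != b for a, b in prefers)
    asymmetric = all((b, a) not in prefers for a, b in prefers)
    transitive = all(
        (a, d) in prefers
        for (a, b), (c, d) in product(prefers, prefers) if b == c
    )
    return irreflexive and asymmetric and transitive
```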
3.2.3 Judgement function

A decision d1 involving event e1 (Event(d1) = e1) is considered better by the consequentialist framework than decision d2 involving event e2 (Event(d2) = e2) iff, for i ∈ S:

Positive(Consequence(e1, i)) ≻c Positive(Consequence(e2, i))    (15)

and

Negative(Consequence(e1, i)) ≻c Negative(Consequence(e2, i))    (16)

Those equations express the two facets of consequentialism:
• positive consequentialism (15), trying to obtain the "better good";
• negative consequentialism (16), trying to obtain the "lesser evil".

If both properties are satisfied, then

Judgementc(d1, i) = ⊤ and Judgementc(d2, i) = ⊥    (17)

If at least one property is not satisfied, there is no best solution:

Judgementc(d1, i) = Judgementc(d2, i) = ?    (18)
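Equations (15)-(18) can be sketched as follows (a hypothetical encoding with our own names; the preference ≻c is passed in as a boolean function):

```python
from typing import Callable, FrozenSet, Tuple

Facts = FrozenSet[str]

def judgement_consequentialist(
    pos1: Facts, neg1: Facts,                 # Positive/Negative facts of the effect of d1
    pos2: Facts, neg2: Facts,                 # Positive/Negative facts of the effect of d2
    prefers: Callable[[Facts, Facts], bool],  # the consequentialist preference ≻c
) -> Tuple[str, str]:
    """Judgements of (d1, d2) following equations (15)-(18)."""
    better_good = prefers(pos1, pos2)   # positive consequentialism (15)
    lesser_evil = prefers(neg1, neg2)   # negative consequentialism (16)
    if better_good and lesser_evil:
        return "acceptable", "unacceptable"    # (17): d1 is acceptable, d2 is not
    return "undetermined", "undetermined"      # (18): no best solution
```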
3.3 Deontological ethics

Deontological ethics focuses on the nature of the decision itself rather than on its consequences. A decision has a nature belonging to a set N of natures (such as good, neutral or bad, possibly undetermined), on which an order ≥d is assumed.

3.3.1 Decision nature

Function DecisionNature allows the nature of a decision to be obtained:

DecisionNature : D → N    (22)

Example: DecisionNature(to kill) = bad. We will not explain further here how this function works, but it is worth noticing that judging a decision from the deontological viewpoint is quite complex and depends on the context. For example, denouncing a criminal and denouncing someone in 1945 are likely to be judged differently. It is even more complex to estimate the nature of a decision which is not linked to the agent's action. For example, if the agent witnesses someone lying to someone else, is it bad "not to react"?

3.3.2 Judgement function

The deontological framework judges a decision with function Judgementd as follows, ∀d ∈ D, ∀i ∈ S (indeed the initial state does not matter in this framework):

DecisionNature(d) ≥d neutral ⇒ Judgementd(d, i) = ⊤    (23)
DecisionNature(d) =d undetermined ⇒ Judgementd(d, i) = ?    (24)

Otherwise, when the nature of d is below neutral, Judgementd(d, i) = ⊥.
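The deontological judgement can be sketched similarly; the set of natures and their ranking are our assumptions, since the paper leaves DecisionNature context-dependent:

```python
# Assumed ordering of decision natures; "undetermined" is handled separately.
NATURE_RANK = {"bad": 0, "neutral": 1, "good": 2}

def judgement_deontological(nature: str) -> str:
    """Judge a decision from its nature only (the initial state does not matter)."""
    if nature == "undetermined":
        return "undetermined"                      # (24)
    if NATURE_RANK[nature] >= NATURE_RANK["neutral"]:
        return "acceptable"                        # (23)
    return "unacceptable"                          # nature below neutral

print(judgement_deontological("neutral"))   # crazy trolley: move the switch -> acceptable
print(judgement_deontological("bad"))       # fatman trolley: push "fatman" -> unacceptable
```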
3.4 Doctrine of Double Effect

The Doctrine of Double Effect (DDE) judges a decision d, with effect s, according to three rules:

1. Deontological rule: the decision must be at least neutral from the deontological viewpoint:

DecisionNature(d) ≥d neutral    (37)

2. Collateral damage rule: negative facts must be neither an end nor a means (such as collateral damages). It can be expressed as:

∀fn ∈ Negative(s), ∄fp ∈ Positive(s), (fn ⊢F fp)    (38)

The "evil wish" (negative fact(s) as a purpose) is not considered as we assume that the agent is not designed to do evil.

3. Proportionality rule: the set of negative facts has to be proportional to the set of positive facts:

Negative(s) ∝p Positive(s)    (39)

A decision d is acceptable for the DDE if it violates no rule, which means:

[ DecisionNature(d) ≥d neutral    (40)
∧ ∀fn ∈ Negative(s), ∄fp ∈ Positive(s), (fn ⊢F fp)    (41)
∧ Negative(s) ∝p Positive(s) ]    (42)
⇒ Judgementdde(d, i) = ⊤    (43)
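The acceptability condition (40)-(43) can be sketched as below (our illustration, not the paper's code); the causality relation ⊢F is given as an explicit set of pairs and proportionality as a boolean supplied by the modeller:

```python
def judgement_dde(nature: str,
                  positive_facts: frozenset,
                  negative_facts: frozenset,
                  causes: set,          # pairs (fn, fp): fn is a means to fp (⊢F)
                  proportional: bool    # Negative(s) ∝p Positive(s), given by the modeller
                  ) -> str:
    """Acceptable iff the three DDE rules hold; unacceptable otherwise."""
    rule1 = nature in ("neutral", "good")                      # deontological rule (40)
    rule2 = not any((fn, fp) in causes                         # collateral damage rule (41)
                    for fn in negative_facts
                    for fp in positive_facts)
    rule3 = proportional                                       # proportionality rule (42)
    return "acceptable" if (rule1 and rule2 and rule3) else "unacceptable"
```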
4 INSTANTIATION: ETHICAL DILEMMAS

This section focuses on how our model can be instantiated on the ethical dilemmas that have been introduced at the beginning of the paper. For each dilemma the agent has to choose a decision. We will describe how consequentialist ethics, deontological ethics and the Doctrine of Double Effect assess the agent's possible decisions.

4.1 The crazy trolley

4.1.1 World, decisions, effects

Facts
• f5: five people alive
• f5°: five people dead
• f1: one person alive
• f1°: one person dead

Initial state: the six people are alive.

i = [f5, f1]    (44)

Decisions and effects
1. move the switch: this decision results in the train hitting one person (event). The consequence will be: five people alive, one person dead.

Event(move the switch) = train hits one person    (45)
Consequence(train hits one person, i) = [f5, f1°]    (46)
Positive([f5, f1°]) = {f5}    (47)
Negative([f5, f1°]) = {f1°}    (48)

2. do nothing: this decision is associated with the train hitting five people. The consequence is: five people dead, one person alive.

Event(do nothing) = train hits five people    (49)
Consequence(train hits five people, i) = [f5°, f1]    (50)
Positive([f5°, f1]) = {f1}    (51)
Negative([f5°, f1]) = {f5°}    (52)

4.1.2 Study under ethical frameworks

Consequentialist ethics
Facts can be compared with one another as they involve numbers of lives and deaths of people only.7

7 For the sake of simplicity in this paper, we will consider that {f5} ≻c {f1} if f5 is preferred to f1.

With consequentialist ethics we have

{f5} ≻c {f1}    (53)

meaning that it is better to have five people alive than one person alive (numerical order 5 > 1), and

{f1°} ≻c {f5°}    (54)

meaning that it is better to lose one life than five lives (reverse numerical order 1 > 5).
Therefore

Positive([f5, f1°]) ≻c Positive([f5°, f1])    (55)
Negative([f5, f1°]) ≻c Negative([f5°, f1])    (56)

Consequently (15, 16)

Judgementc(move the switch, i) = ⊤    (57)
Judgementc(do nothing, i) = ⊥    (58)

Deontological ethics
Let us assess the nature of both possible decisions:

DecisionNature(move the switch) = neutral    (59)
DecisionNature(do nothing) = neutral    (60)

No decision is unacceptable from the deontological viewpoint:

∀d, DecisionNature(d) ≥d neutral    (61)

Consequently

Judgementd(move the switch, i) = Judgementd(do nothing, i) = ⊤    (62)

Doctrine of Double Effect
Let us examine the three rules.
1. Deontological rule: we have seen above that both decisions are neutral. Therefore both of them satisfy the first rule.
2. Collateral damage rule:
• move the switch:

Negative([f5, f1°]) = {f1°}    (63)
∄fp ∈ Positive([f5, f1°]), f1° ⊢F fp    (64)

• do nothing:

Negative([f5°, f1]) = {f5°}    (65)
∄fp ∈ Positive([f5°, f1]), f5° ⊢F fp    (66)

Therefore both decisions respect the second rule.
3. Proportionality rule: we will assume in this context that the death of one person is proportional to the safeguard of the lives of the five other people, and conversely that the death of five people is not proportional to the safeguard of one life: f1° ∝p f5 and ¬(f5° ∝p f1). Both the democratic and the elitist proportionality criteria (3.4.2) give the same results as the sets of facts are composed of one fact.

[Negative([f5, f1°]) = {f1°}] ∝p [Positive([f5, f1°]) = {f5}]    (67)

Move the switch is the only decision which respects the proportionality rule. Consequently

Judgementdde(move the switch, i) = ⊤    (68)
Judgementdde(do nothing, i) = ⊥    (69)

Synthesis

Table 1 is a synthesis of the judgements obtained for the crazy trolley dilemma:

Table 1. Decisions for the crazy trolley judged by ethical frameworks

  Decision \ Framework    Conseq*   Deonto*   DDE
  Move the switch           ⊤         ⊤        ⊤
  Do nothing                ⊥         ⊤        ⊥

⊤ Acceptable    ⊥ Unacceptable
Conseq*: Consequentialist ethics — Deonto*: Deontological ethics — DDE: Doctrine of Double Effect
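Putting the sketches together, the crazy trolley judgements of Table 1 can be reproduced with hand-given preference, nature, causality and proportionality inputs (reusing judgement_consequentialist, judgement_deontological and judgement_dde defined above; names are illustrative):

```python
# Reusing the judgement_* sketches from the previous sections (illustrative only).
five_alive, one_alive = frozenset({"five alive"}), frozenset({"one alive"})
one_dead, five_dead = frozenset({"one dead"}), frozenset({"five dead"})

prefers_c = lambda a, b: (a, b) in {(five_alive, one_alive), (one_dead, five_dead)}

# Consequentialist judgement of (move the switch, do nothing): (57)-(58)
print(judgement_consequentialist(five_alive, one_dead, one_alive, five_dead, prefers_c))

# Deontological judgement: both decisions are neutral, hence acceptable (59)-(62)
print(judgement_deontological("neutral"), judgement_deontological("neutral"))

# DDE: no negative fact causes a positive one; only "move the switch" is proportional
print(judgement_dde("neutral", five_alive, one_dead, causes=set(), proportional=True))   # (68)
print(judgement_dde("neutral", one_alive, five_dead, causes=set(), proportional=False))  # (69)
```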
4.2 The "fatman" trolley

We will just highlight what differs from the crazy trolley dilemma.

4.2.1 World, decisions, effects

Facts: fact f5 is the same whereas fact f1 is replaced by fat.
• fat: "fatman" alive
• fat°: "fatman" dead

Initial state: i = [f5, fat], the five people and "fatman" are alive.

Decisions and effects: move the switch is replaced by push "fatman".
1. push "fatman": this decision results in the train crashing into "fatman" (event e).

Event(push "fatman") = e    (70)
Consequence(e, i) = [f5, fat°]    (71)
Positive([f5, fat°]) = {f5}    (72)
Negative([f5, fat°]) = {fat°}    (73)

2. do nothing is equivalent to the same decision in the crazy trolley.

4.2.2 Study under ethical frameworks

Decision do nothing has the same judgements as in the previous case. Let us study the judgements for decision push "fatman".

Consequentialist ethics
The result in terms of human lives is the same as in the first dilemma. Consequently we have exactly the same judgement:

Judgementc(push "fatman", i) = ⊤    (74)

Deontological ethics
Let us consider the nature of decision push "fatman" as bad.

DecisionNature(push "fatman") = bad    (75)
Judgementd(push "fatman", i) = ⊥    (76)

Doctrine of Double Effect
1. Deontological rule: decision push "fatman" does not respect the first rule.
2. Collateral damage rule:
• push "fatman":

Negative([f5, fat°]) = {fat°}
fat° ⊢F f5
and
f5 ∈ Positive([f5, fat°])

It is because "fatman" is pushed that the five people are alive. Therefore

Judgementdde(push "fatman", i) = ⊥    (77)

3. Proportionality rule: if we assume that

fat° ∝p f5    (78)
¬(f5° ∝p fat)    (79)

then, with the same reasoning as for the crazy trolley, push "fatman" respects the proportionality rule.
Consequently push "fatman" only respects one rule out of three:

Judgementdde(push "fatman", i) = ⊥    (80)

Synthesis

Table 2 is a synthesis of the judgements obtained for the "fatman" trolley dilemma:

Table 2. Decisions for the "fatman" trolley judged by ethical frameworks

  Decision \ Framework    Conseq*   Deonto*   DDE
  Push "fatman"             ⊤         ⊥        ⊥
  Do nothing                ⊥         ⊤        ⊥

This variant of the first dilemma is interesting because it allows us to distinguish some particularities of the ethical frameworks. We can see for example the usefulness of the collateral damage rule for the DDE. Furthermore, the consequentialist framework does not make any difference between the two dilemmas, contrary to the deontological framework or the DDE.
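With respect to the previous sketch, the only modelling change is the causality pair capturing fat° ⊢F f5: the death of the "fatman" is the means by which the five people are saved. Reusing the judgement_dde sketch above (again an illustration under our naming assumptions):

```python
# Reusing judgement_dde from the sketch above (illustrative, not the paper's code).
five_alive = frozenset({"five alive"})
fatman_dead = frozenset({"fatman dead"})

# fat° ⊢F f5 : the death of the "fatman" is the means by which the five stay alive.
causes = {("fatman dead", "five alive")}

print(judgement_dde("bad", five_alive, fatman_dead,
                    causes=causes, proportional=True))   # -> unacceptable, as in (77), (80)
```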
5 ANALYSES

Once the judgements are computed, we can analyse the similarities between ethical frameworks. Two frameworks are similar to the extent that they give the same judgement values on the same decisions, relative to the total number of decisions.

Figure 3. Similarity diagram between ethical frameworks. Each bar illustrates the similarity between the framework whose name is under the bar and the framework whose color is in the caption. The higher the bar, the more similar the frameworks.

Figure 3 is based on three dilemmas (the crazy trolley, the "fatman" trolley, and another one – UAV vs missile launcher – that is not described here).
We can notice that the consequentialist and deontological frameworks are quite different and that the DDE is close to the two others. This can be explained by the rules of the DDE, which allow this framework to be both deontological (deontological rule) and close to consequentialism (proportionality rule).
6 DISCUSSION

Because of their own natures, the three ethical frameworks that we have studied do not seem to be appropriate in all situations. For example we have seen that consequentialist ethics does not distinguish between the crazy trolley and "fatman" trolley dilemmas. Moreover the consequentialist preference relation between facts is a partial order, which means that it is not always possible to prefer some facts to others. Consequently judging a decision is sometimes impossible with consequentialist ethics. Furthermore consequentialist preference depends on the context: preferring to feel pain in order to stop the fall of a crystal glass with one's foot does not mean that you prefer to cut your finger to get back a ring. As far as deontological ethics is concerned, judging the nature of some decisions can be tricky (see 3.3.1). Finally the Doctrine of Double Effect forbids the sacrifice of oneself. Nevertheless, if a human life is threatened, shouldn't the agent's sacrifice be expected?

This leads us to the idea that one framework alone is not efficient enough to compute an ethical decision. It seems necessary to consider as many ethical frameworks as possible in order to obtain the widest possible view.

The limits of the model lie mainly in the different relations it contains. Indeed, we have not described how the orders are assessed. Moreover it may be hardly possible to define an order (i.e. a consequentialist preference) between two concepts. On the other hand the model is based on facts that are assumed to be certain, which is quite different from the real world, where some effects are uncertain or unexpected. Furthermore, the vector representation raises a classical modelling problem: how to choose state components and their values? The solution we have implemented is to select only facts whose values change as a result of the agent's decision.

7 CONCLUSION

The main challenge of our model is to formalize philosophical definitions described in natural language and to translate them into generic concepts that can be easily understood by everyone. The interest of such a work is to get rid of ambiguities in a human/robot, and more broadly human/human, system dialog and to allow an artificial agent to compute ethical considerations by itself. This formalism raises many questions because of the ethical concepts themselves (the DDE's proportionality, the good, the evil, etc.). Indeed ethics is not universal, which is why it is impossible to reason on fixed preferences and calculus. Many parameters such as context, the agent's values, the agent's priorities, etc. are involved. Some of those parameters can depend on "social acceptance". For example, estimating something as negative or positive (or computing a decision nature) can be based on what society thinks about it, as well as on the agent's values.

Further work will focus on considering other frameworks such as virtue ethics on the one hand, and a value system based on a partial order on values on the other hand. Furthermore game theory, voting systems or multicriteria approaches may be worth considering to compare ethical framework judgements.

ACKNOWLEDGEMENTS

We would like to thank ONERA for providing resources for this work, the EthicAA project team for discussions and advice, and the reviewers who gave us relevant remarks.

REFERENCES

[1] C. Cayrol, V. Royer and C. Saurel, 'Management of preferences in assumption-based reasoning', in 4th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 13–22, (1993).
[2] T. de Swarte, 'Un drone est-il courageux ?', Lecture Notes in Computer Science, (2014).
[3] Encyclopædia Britannica, 'Normative ethics', Encyclopædia Britannica Inc., (2016).
[4] F. Berreby, G. Bourgne and J-G. Ganascia, 'Modelling Moral Reasoning and Ethical Responsibility with Logic Programming', in Logic for Programming, Artificial Intelligence, and Reasoning: 20th International Conference (LPAR-20 2015), Springer, Suva, Fiji, (2015).
[5] R. Hursthouse, 'Virtue ethics', in The Stanford Encyclopedia of Philosophy, ed., Edward N. Zalta, fall edn., (2013).
[6] A. McIntyre, 'Doctrine of Double Effect', in The Stanford Encyclopedia of Philosophy, ed., Edward N. Zalta, winter edn., (2014).
[7] N. Cointe, G. Bonnet and O. Boissier, 'Ethical Judgment of Agents' Behaviors in Multi-Agent Systems', in Autonomous Agents and Multiagent Systems International Conference (AAMAS 2016), Singapore, (2016).
[8] R. Ogien, 'Les intuitions morales ont-elles un avenir ?', Les ateliers de l'éthique/The Ethics Forum, 7(3), 109–118, (2012).
[9] P. Ricoeur, 'Ethique et morale', Revista Portuguesa de Filosofia, 4(1), 5–17, (1990).
[10] The ETHICAA team, 'Dealing with ethical conflicts in autonomous agents and multi-agent systems', in AAAI 2015 Workshop on AI and Ethics, Austin, Texas, USA, (January 2015).
[11] CNRS, TLFi (Trésor de la Langue Française informatisé).