=Paper=
{{Paper
|id=None
|storemode=property
|title=Ethics and authority sharing for autonomous armed robots
|pdfUrl=https://ceur-ws.org/Vol-885/paper1.pdf
|volume=Vol-885
}}
==Ethics and authority sharing for autonomous armed robots==
Ethics and Authority Sharing for Autonomous Armed Robots

Florian Gros¹ and Catherine Tessier¹ and Thierry Pichevin²

¹ Onera, the French Aerospace Lab, Toulouse, France, email: name.surname@onera.fr
² CREC, Ecoles de Saint-Cyr Coetquidan, France, email: thierry.pichevin@st-cyr.terre-net.defense.gouv.fr
Abstract. The goal of this paper is to review several ethical questions that are relevant to the use of autonomous armed robots and to authority sharing between such robots and the human operator. First, we distinguish the commonly confused meanings of morality and ethics. We then propose leads for answering some of the most common ethical questions raised in the literature, namely the autonomy, responsibility and moral status of autonomous robots, as well as their ability to reason ethically. Finally, we present the possible advantages that authority sharing with the operator could provide with respect to these questions.

1 INTRODUCTION

There are many questions and controversies commonly raised by the use of increasingly autonomous robots, especially in military contexts [51]. In this domain, autonomy can be explored because of the need to reduce the atrocities of war, e.g. the loss of human lives and violations of human rights, and to increase battle performance so as to avoid unnecessary violence [3]. Since full autonomy is far from achieved, robots are usually supervised by human operators. This coupling between a human and a robotic agent involves shared authority over the robot's resources [30], allowing the system to adapt to complex and dynamic battle contexts. Even with humans in the process, the deployment of autonomous armed robots raises ethical questions such as the responsibility of robots using lethal force incorrectly [47], the extent of their autonomous abilities and the related dangers, their ability to comply with a set of moral rules and to reason ethically [44], and the status of robots with regard to law, due to the ever-increasing autonomy and human resemblance that robots display [28].

In this paper we will highlight the distinction between morality and ethics (section 2). Then several ethical issues raised by the deployment of autonomous armed robots, such as autonomy, responsibility, consciousness and moral status, will be discussed (section 3). As another kind of ethical question, a review of the frameworks used to implement ethical reasoning into autonomous armed robots will be presented afterwards (section 4). Finally, we will consider the ethical issues and implementations mentioned earlier in the framework of authority sharing between a robot and a human operator (section 5).

2 MORALITY AND ETHICS

The concepts of morality and ethics are often used interchangeably. If we want to talk about ethics for autonomous robots, we have to distinguish those terms and define them.

2.1 Morality

If we set aside meta-ethical debates that aim at precisely defining morality and its theoretical grounds, we can conceive of morality as principles of good or bad behaviour, an evaluation of an action in terms of right and wrong [52]. This evaluation can be considered either as absolute or as stemming from a particular conception of life, a typical moral rule being "Killing is wrong". It is important to note that in this work we focus on moral action, whether it results from rules or from the intentions of the subject performing the action.

2.2 Deontology and teleology

One of the bases for morality is the constant human need to believe in a meaning of one's actions. In most philosophical debates, this sense pertains to two often opposed categories: teleology and deontology.

For teleology, the moral action has to be good, the goal being to maximize the good and to minimize the evil produced by the action [33]. In this case, morality is commonly viewed as external to the agent, because it comes within the scope of a finalized world defining the rules, the possible actions and their goals, and therefore the evaluation of actions.

For deontology, the moral action is done by duty, and must comply with rules regardless of the consequences of the action, whether they are foreseen or not, good or bad [34]. A case-by-case evaluation is not necessarily relevant here, because it is the humans' responsibility to dictate the rational and universal principles they want to live by.

2.3 Ethics

Ethics appears as soon as a conflict between existing legal or moral rules emerges, or when there is no rule to guide one's actions [36]. For example, if a soldier has received an order not to hurt any civilian, but to neutralize any armed person, what should he do if he encounters an armed civilian? We can thus consider ethics as the commitment to resolving moral controversies [13], where the agent, with good will, has to solve the conflicts he is faced with.

Those conflicts often oppose deontological and teleological principles, namely: what has to be privileged between the right and the good? The goal of ethics is not to pick one side and stand by it forever, but to keep a balance between right and good when solving complex problems. Solving an ethical conflict then requires, apart from weighing good and evil, a sense of creativity when facing a complex situation and the ability to provide alternative solutions to the imperatives of moral rules [31].
To provide an illustration of the distinction between morality and ethics, we will consider that any moral conflict needs ethical reasoning abilities to be solved. Speaking of ethical rules would not make sense, since ethics applies when rules are absent or in conflict.

3 AUTONOMY, RESPONSIBILITY, MORAL STATUS: PROSPECTS FOR ROBOTS

Technology currently leaves us in an intermediate position where robots can perceive their environment, act and make decisions by themselves, but lack a more complete kind of autonomy or the technological skill to analyze their environment precisely and understand what happens in a given situation. Still, research advances urge us to think about how to consider autonomous robots in a moral, legal and intellectual frame, both for the time being and for when robots are actually skilled enough to be considered similar to humans. In this section, we will review important questions for autonomous robots, i.e. autonomy, responsibility and moral status, and see which answers are plausible. Then we will relate these questions to authority sharing.

3.1 Autonomy

3.1.1 Kant and the autonomy of will

When considering autonomy, one of the most influential views in Western culture is Kant's. For him, human beings bend reality to themselves with their perception and reason; they escape natural or divine laws. Only reason enables humans to create laws that will determine humankind. Laws then cannot depend on external circumstances, as reason alone can provide indications in order to determine what is right or wrong. Consequently, laws have to be created by a good will, i.e. a will imposing rules on itself not to satisfy an interest, but by duty towards other humans. Therefore no purpose can be external to humankind, and laws are meaningful to humans only if they are universal. This leads to the well-known moral "categorical" imperative³, which immediately determines what it orders because it enunciates only the idea of a universal law and the necessity for the will to follow it [39].

Humans being the authors of the law they obey, it is possible to consider them as an end, and the will as autonomous. Thus, to be universal, a law has to respect humans as ends in themselves, inducing a change in the categorical imperative. If the law were external to humans, they would not be ends in themselves, but mere instruments used by another entity. Such a statement would deny the human ability to escape divine or natural laws, which is not acceptable for the Kantian theory. We can only conceive law as completely universal, respecting humans as ends in themselves. To sum up, Kantian autonomy is the ability for an agent to define his own laws as ways to fulfill his goals and to govern his own actions.

³ "act only according to that maxim by which you can at the same time will that it be a universal law"

3.1.2 Autonomy and robots

In the case of an unmanned system, autonomy usually stands for decisional autonomy. It can be defined as the ability for an agent to minimize the need for supervision and to evolve alone in its environment [43], or more precisely, its "own ability of sensing, perceiving, analyzing, communicating, planning, decision making, and acting/executing, to achieve its goals as assigned by its human operators" [21].

We can see a difference between those definitions and Kant's. Autonomy is perceived differently for robots than for humans: robot autonomy is an autonomy of means, not of ends. The reason for this is that robots are not sophisticated enough to define their own goals and to achieve them. Robots are therefore viewed as mere tools whose autonomy is only intended to alleviate the operators' workload.

Consequently, to be envisioned as really autonomous, robots should be able to determine their own goals once deployed, and thus to have will and be ends in themselves. The real question to ask here is whether it is really desirable to build such fully autonomous robots, especially if they are to be used on a battlefield. If the objective is solely to display better performance than human soldiers, full autonomy is probably inappropriate, since being able to control robots and their goals from the beginning to the end of their deployment is one of the main reasons for actually using them.

3.2 Responsibility

If we want to use autonomous robots, we have to know to what extent a subject is considered responsible for his actions. This question is especially important for armed robots, since they can be involved in accidents where lives are at stake.

3.2.1 Philosophical approaches to responsibility

Classically, responsibility has been considered from a broad variety of angles: a relationship to every other human being in order to achieve a goal of salvation given by a divine entity (Augustine of Hippo), a logical consequence of the application of the categorical imperative (Kant), a duty towards the whole of humanity as the only way to give a sense, a determination, to one's actions and to define oneself in the common human condition (Sartre, [42]), or an obligation to maintain human life on Earth as long as possible through one's actions (Jonas, [22]).

The problem with those approaches is that they are designed for humans and consequently they require, more or less, an autonomy of ends. As discussed above, this is not a direct possibility for robots. We then need to envision robot responsibility in their own "area" of autonomy, namely an autonomy of means, where the actions are not performed by humans. To discuss this problem, it is necessary to distinguish two types of responsibility: causal responsibility and moral responsibility.

3.2.2 Causal responsibility vs. moral responsibility

By moral responsibility, we mean the ability, for a conscious and willing agent, to make a decision without referring to a higher authority, to give the purposes of his actions, and to be judged by these purposes. To sum up, the agent has to possess a high-level intentionality [12]. This moral responsibility is not to be confused with causal responsibility, which establishes the share of a subject (or an object) in a causal chain of events. The former is the responsibility of a soldier who willingly shot an innocent person; the latter is the responsibility of a malfunctioning toaster that started a fire in a house.

Every robot has some kind of causal responsibility. Still, trying to determine the causal responsibility of a robot (or of any agent) for a given event is far too complex, because it requires analyzing every action the robot performed that could have led to this event. What we are really interested in is to define what would endow robots with moral responsibility for their actions.
3.2.3 Reduced responsibility, a solution?

Some approaches that are currently considered for the responsibility of autonomous robots are based on their status of "tools", not of autonomous agents. Thus, their share of responsibility is reduced or transferred to another agent.

The first approach is to consider robots as any product manufactured and designed by an industry. In case of a failure, the responsibility of the industry (as a moral person) is substituted for the responsibility of the robot. The relevant legal term here is negligence [24]. It implies that manufacturers and designers have failed to do what was legally or morally required, and thus can be held accountable for the damage caused by their product. The downside of this approach is that it can lean towards a causal responsibility which, as said earlier, is more difficult to assess than a moral responsibility. Besides, developing a robot that is safe enough to be used on a battlefield would demand too much time for it to represent a good business, and it would not even suffice for safe use, since a margin of error remains no matter how sophisticated a robot is.

Another approach would be to apply slave morality to autonomous robots [24] [28]. A slave, by himself, is not considered responsible for his actions, but his master is. At a legal level, this is known as vicarious liability, illustrated by the well-known maxim Qui facit per alium facit per se⁴. If we want to apply this to autonomous armed robots, their responsibility would be substituted to their nearest master, namely the closest person in the chain of command who decided and authorized the deployment of the robots. This way, a specific person takes responsibility for the robots' actions, which spares investigations through the chain of command to assess causal responsibilities.

⁴ "He who acts through another does the act himself."

Finally, if we consider an autonomous robot to be able to comply with some moral rules, and to reason as well as to act, it is possible to envision the robot as possessing, not moral responsibility, but moral intelligence [5]. The robotic agent is then considered able to adhere to an ethical system. There is therefore a particular morality within the robot that is specific to the task it is designed for.

3.2.4 Other leads for a moral responsibility

No robot has yet met the necessary requirements for moral responsibility, and no law has been specifically written for robots. The question is then to determine what is necessary for robots to achieve moral responsibility and what to do when they break laws.

For [19] and [1], the key to moral responsibility is access to a moral status. Besides an emotional system, this requires the ability of rational deliberation, allowing oneself to know what one is doing and to be conscious of one's actions, in addition to making decisions. Several leads for robots to access a moral status are detailed in the next section.

As far as responsibility is concerned, a commonly used argument is that robots cannot achieve moral responsibility because they cannot suffer, and therefore cannot be punished [47]. Still, if we consider punishment for what it is, i.e. a convenient way to change (or to compensate for) a behaviour deemed undesirable or unlawful, we can agree that it is not the sine qua non requirement for responsibility. There are other ways to change one's behaviour, one of the best known being treatment, i.e. spotting the "component" that produces the unwanted behaviour and tweaking or replacing it to correct the problem [28]. Beating one's own car because of a malfunction would be absurd; in this case it is more fitting to replace the malfunctioning component. The same applies to certain types of law infringement (leading to psychological treatment or therapy), so it could apply to robots as well, e.g. by changing the program of the defective vehicle. Waiting for technology to progress until robots can finally be punished, and thereby have moral responsibility, is not a desirable solution, but using vicarious liability, treatment and moral status appears to be a sound basis.

3.3 Consciousness and moral status for autonomous robots

We have said earlier that for a robot to be considered responsible for its actions, it must be attributed a moral status, so it needs consciousness [19]. The purpose of this section is to see how this can be achieved and how moral status can be applicable to robots in order to help them have moral responsibility.

3.3.1 Consciousness

Since there is an abundant literature on the topic of consciousness, and still no real consensus among the scientific community on how to define it, the purpose of this section is not to give an exhaustive or accurate definition of consciousness, but merely to see what seems relevant to robots. However, if we want to use consciousness, we can consider it as described by [32], namely the ability to know what it is like to have such or such mental state from one's own perspective, to subjectively experience one's own environment and internal states.

The first approach to robot consciousness is the theory of mind [38] [6]. It is based on the assumption that humans tend to grant intentionality to any being displaying enough similarities of action with them (emotions or functional use of language). It is then possible for humans, by analogy with the experience of their own consciousness, to assume that those beings have a consciousness as well. This approach is already developing with conversational agents or robots mimicking emotions, even if it can be viewed as a trick of human reasoning more than an "absolutely true" model of consciousness.

The second approach considers consciousness as a purely biological phenomenon, and has gained influence with the numerous discoveries of neuroscience. Even if we do not know what really explains consciousness (see the Hard Problem of consciousness [9]), considering it as a property of the brain may allow conscious robots to be developed, as [55] [54] did by recreating a brain from collected brain cells. There is still a lot of work to do here, as well as many ethical questions to answer, but it definitely looks promising. Indeed, if a being, even with a robotic body, has a brain that is similar to a human's, then from a materialist perspective this being is conscious.

The last approach is the one proposed by [25] [26] to build self-aware robots that can explore their own physical capacities to find their own model and to determine their own way to move accordingly. Those robots are probably the closest ones to consciousness as defined by [32]. They are still far from being usable on a battlefield, but this method of self-modelling could be applied to more "evolved" robots for ethical decision-making. This way a robot could explore its own capacities for action and could build an ethical model of itself.
3.3.2 Moral status

An individual is granted moral status if it has to be treated never merely as a means, but always also as an end, as prescribed by Kant's categorical imperative. To define this moral status, two criteria are commonly used [7], namely sentience (or qualia, the ability to experience reality as a subject) and sapience (a set of abilities associated with high-level intelligence). Still, none of those attributes has been successfully implemented in robots. Even though it could be counter-productive to integrate qualia into robots in some situations (e.g. coding fear into an armed robot), it can be interesting to model some of them in robots, as [4] did for moral emotions like guilt. This could provide a solid ground for robots' access to moral status. [7] have proposed two principles stating that two different agents can have the same moral status if they possess enough similarities: if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation (Principle of Substrate Non-Discrimination) or in how they came into existence (Principle of Ontogeny Non-Discrimination), then they have the same moral status.

Put simply, those principles are quite similar to what the theory of mind proposes: if robots can exhibit the same functions as humans, then they can be considered as having a moral status, no matter what their body is made of (silicon, flesh, etc.) or how they matured (through gestation or coding). Still, proving that robots can have the same conscious experience as humans is currently impossible, so we can consider a more applicable version of those principles: [49] proposes that robots have moral agency if they are responsible with respect to another moral agent, if they possess a relative level of autonomy and if they can show intentional behaviour. This definition is vague, but it is grounded on the fact that moral status is attributed. What matters is that the robot is advanced enough to be similar to humans; it does not have to be identical.

Another solution for autonomous robots to gain a moral status is to create a sort of Turing Test comparing the respective "value" of a human life with the existence of a robot. This is called by [46] the Triage Turing Test, and it implies that robots will have the same moral status as humans when it is at least as wrong to "kill" a robot as to kill a human. Advanced reflections on this topic can be found in [48].
4 IMPLEMENTING ETHICAL REASONING INTO AUTONOMOUS ARMED ROBOTS

Another question related to autonomous armed robots is how those robots can solve ethical problems on the battlefield and make the most ethically satisfying decision. In this section, we will briefly review several frameworks for integrating ethical reasoning into robots. Three kinds of approaches are considered:

• Top-down: these approaches take a particular ethical theory and create algorithms for the robot, allowing it to follow the aforesaid theory. This is convenient to implement, e.g., a deontological morality in a robot.
• Bottom-up: the goal is to create an environment wherein the robot can explore different courses of action, with rewards to make it lean towards morally satisfying actions. Those approaches focus on the autonomous robot learning its own ethical reasoning abilities.
• Hybrid: these approaches look for a merge between top-down and bottom-up frameworks, combining their advantages without their downsides.

4.1 Top-down approaches

Top-down frameworks are the most studied in the field of ethics for robots, and the number of ethical theories involved is high. The literature identifies theories such as utilitarianism [10], divine-command ethics [8] and other logic-based frameworks [27] [15]. Still, the most famous theory among top-down approaches is the Just-War Theory [35], which underlies the instructions and principles issued in the Laws of War and the Rules of Engagement (for more on these documents, see [3]). Those approaches have in common that they take a set of rules and program them into the robot's code so that its behaviour cannot violate them. The upside of those approaches is that the rules are general, well-defined and easily understandable. The downside is that no set of rules will ever handle every possible situation, mostly because they do not take into account the context of the particular mission the robot is deployed for. Thus top-down approaches are usually too rigid and not precise enough to be applicable. Also, since they rely on specific rules, more morality-like than ethics-like, they are not fit to capture ethical reasoning abilities; they are usually used to justify one's own actions. In order to implement ethical reasoning abilities in robots, it seems more desirable to use top-down approaches as moral heuristics guiding ethical reasoning [53].
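As a concrete illustration, here is a minimal sketch of how such a top-down rule set could be encoded as hard constraints filtering candidate actions. This is our own toy example, not the implementation of any cited system; the Action fields, the two rules and the permitted helper are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action as seen by the rule filter (illustrative fields)."""
    name: str
    target_is_combatant: bool   # discrimination-relevant attribute
    expected_collateral: float  # 0..1, proportionality-relevant attribute
    military_advantage: float   # 0..1

# Each rule returns True when the action is acceptable; rules are
# hard constraints, so a single failure forbids the action.
RULES = [
    ("discrimination", lambda a: a.target_is_combatant),
    ("proportionality", lambda a: a.expected_collateral <= a.military_advantage),
]

def permitted(action: Action) -> tuple[bool, list[str]]:
    """Return whether the action is allowed and which rules it violates."""
    violations = [name for name, rule in RULES if not rule(action)]
    return (not violations, violations)

if __name__ == "__main__":
    strike = Action("engage", target_is_combatant=False,
                    expected_collateral=0.4, military_advantage=0.2)
    ok, why = permitted(strike)
    print(ok, why)  # False ['discrimination', 'proportionality']
```

Note how such a filter can only permit or forbid: it cannot weigh context or propose alternatives, which is precisely the rigidity discussed above and the reason for using such rules as heuristics guiding ethical reasoning rather than as the whole reasoner [53].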
4.2 Bottom-up approaches

Bottom-up frameworks are far less developed than top-down approaches. Still, some research, like [26], offers interesting options using self-modeling. Most bottom-up approaches rely on machine learning [17] or on artificial evolution using genetic algorithms based on cooperation [45] to allow agents to reason ethically with respect to a specific parameter. The strength of these frameworks is that learning allows flexibility and adaptability in complex and dynamic environments, which is a real advantage in the field of ethics, wherein there are no predefined answers. Nevertheless, the learning process takes a lot of time and never completely removes the risk of unwanted behaviour. Moreover, the reasoning behind an action produced by the robot cannot be traced, making it barely possible to fix undesirable behaviours.
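To illustrate the bottom-up idea, here is a generic reward-shaping sketch of our own; it is not the actual method of [17], [26] or [45]. An agent learns by trial and error to prefer the actions that the environment rewards as morally satisfying; the action names and reward values are invented for the example.

```python
import random

# Toy action set; the "moral reward" of each action is hidden from the
# agent and only observed through noisy feedback (an invented setup).
MORAL_REWARD = {"warn": 1.0, "withdraw": 0.6, "fire": -1.0}

def train(episodes: int = 2000, epsilon: float = 0.1, alpha: float = 0.1):
    """Epsilon-greedy value estimation: the agent learns which actions
    the environment rewards as morally satisfying."""
    values = {a: 0.0 for a in MORAL_REWARD}
    for _ in range(episodes):
        if random.random() < epsilon:             # explore
            action = random.choice(list(values))
        else:                                     # exploit current estimate
            action = max(values, key=values.get)
        reward = MORAL_REWARD[action] + random.gauss(0, 0.3)  # noisy feedback
        values[action] += alpha * (reward - values[action])   # incremental update
    return values

if __name__ == "__main__":
    learned = train()
    print(max(learned, key=learned.get))  # expected: 'warn'
```

The result also illustrates the traceability problem mentioned above: the agent ends up preferring "warn", but the learned value table carries no explanation of why.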
4.3 Hybrid approaches

Three different frameworks can be distinguished among hybrid approaches: the case-based approach [29] [2], virtue ethics [24] [53], and the hybrid reactive/deliberative architecture proposed by [3], which uses the Laws of War and the Rules of Engagement as the set of rules to follow. These are probably the most readily applicable lines of research for autonomous robots, and they combine aspects of both top-down (producing algorithms derived from ethical theories) and bottom-up (using agents able to learn, evolve and explore possible ethical decisions) specifications. The main problem with these approaches is their computing time, since learning is often involved in the process. Nevertheless, they appear theoretically satisfying and their applicability looks promising.
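The following sketch shows the hybrid idea in its simplest form: a deliberative component (which could itself be learned bottom-up) ranks candidate actions, and a rule-based governor vetoes those that violate top-down constraints. It is loosely inspired by the reactive/deliberative architecture of [3] but is in no way its actual implementation; all names and thresholds are invented.

```python
# A hybrid sketch (our own simplification): a "deliberative" component
# proposes actions and a rule-based governor vetoes non-compliant ones.

def deliberative_proposals(situation):
    """Stand-in for a planner or learned policy: rank candidate actions."""
    return sorted(situation["options"], key=lambda a: a["utility"], reverse=True)

def governor_allows(action):
    """Top-down constraint check (same spirit as the rule filter above)."""
    return action["target_is_combatant"] and action["collateral"] <= 0.2

def decide(situation):
    """Pick the best-ranked proposal that the governor does not veto."""
    for action in deliberative_proposals(situation):
        if governor_allows(action):
            return action["name"]
    return "hold_fire"  # conservative default when every proposal is vetoed

if __name__ == "__main__":
    situation = {"options": [
        {"name": "strike", "utility": 0.9, "target_is_combatant": True, "collateral": 0.5},
        {"name": "warn",   "utility": 0.4, "target_is_combatant": True, "collateral": 0.0},
    ]}
    print(decide(situation))  # 'warn': the higher-utility strike is vetoed
```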
5 ETHICS AND AUTHORITY SHARING

In this section we will focus on the previously mentioned ethical issues in the framework of authority sharing between a robot and a human operator.

Joining human and machine abilities aims at increasing the range of actions of "autonomous" systems [23]. However, the relationship between the two agents is dissymmetric, since the human operator's "failures" are often neglected when designing the system. Moreover, simultaneous decisions and actions of the artificial and the human agents are likely to create conflicts [11]: unexpected or misunderstood authority changes may lead to inefficient, dangerous or catastrophic situations. Therefore, in order to consider the human agent and the artificial agent in the same way [20], and the human-machine system as a whole [56], it seems more relevant to work on authority and authority control [30] than on autonomy, which concerns the artificial agent exclusively.

Authority sharing between a robot and its operator can thus be viewed as an "upgraded" autonomy. As far as ethical issues are concerned, authority sharing considered as a relation between two agents [18] may provide better compliance with sets of laws and moral rules, thereby enabling ethical decision-making within a pair of agents instead of leaving this ability to a single individual.

5.1 Autonomy

As previously mentioned, the autonomy of an armed robot can be conceived as an autonomy of means only; robots are almost always used as tools. Authority sharing can bring a change to this organization. As a robot cannot (yet) determine its own goals, it is the human operator's role to provide the goals as well as some methods or partial plans to achieve them [14]. Still, authority sharing allows the robot to be granted decision-making power, enabling it to take authority from the operator to accomplish tasks neglected by him (e.g., going back to base because of a fuel shortage), or even when the operator's actions do not follow the mission plan and may be dangerous. For example, some undesirable psychological and physiological "states" of the operator, e.g. tiredness, stress or attentional blindness [37], can be detected by the robot, allowing it to take authority if the operator is no longer considered able to fulfill the mission.
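A schematic sketch of such an authority switch follows. The state variables and the fitness criterion are invented for illustration; [37] and [50] describe actual operator state assessment and authority management.

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    """Illustrative operator-state estimates (cf. [37] for real assessment)."""
    fatigue: float              # 0 (rested) .. 1 (exhausted)
    stress: float               # 0 .. 1
    attending_to_alerts: bool   # False may indicate attentional blindness

def operator_fit_for_mission(state: OperatorState) -> bool:
    """Invented fitness criterion: any strong degradation disqualifies."""
    return state.fatigue < 0.8 and state.stress < 0.8 and state.attending_to_alerts

def authority_holder(state: OperatorState, current: str) -> str:
    """Transfer authority to the robot while the operator is deemed unfit."""
    return current if operator_fit_for_mission(state) else "robot"

if __name__ == "__main__":
    drowsy = OperatorState(fatigue=0.9, stress=0.3, attending_to_alerts=True)
    print(authority_holder(drowsy, current="operator"))  # 'robot'
```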
5.2 Moral responsibility

Concerning moral responsibility, authority sharing forces us to distinguish between two situations: the one where the operator has authority over the robot, and the reverse one. The former is simple: since the robot is a tool, we use vicarious liability, and the operator therefore engages his responsibility for any accident caused by the use of the robot during the mission. The latter is more complex, and we do not claim to give absolute answers, but mere propositions.

What we propose is that, in order to assess moral responsibility when the robotic agent has authority over the system, it is necessary to define a mission-relevant set of rules, e.g. the Laws of War and Rules of Engagement [35] [3], and a contract, as proposed by [41] or [40], between the robotic and human agents, providing specific clauses for them to respect during the mission. These clauses must be based on the set of rules previously mentioned, and an agent who violates them would be morally responsible for any accident that could happen as a consequence of his actions.

This kind of contract would provide clear conditions for authority sharing (i.e., an agent loses authority if he violates the contract) and could open the way to applying work on trust [4] or persuasion [16] to robotic agents. During a mission, such contracts would engage both agents to monitor each other's actions and, if possible, to take authority if this can prevent any infringement of the contract. If one agent detects a possibly incoming accident due to the other agent's actions, e.g. aiming at a civilian, and does nothing to prevent it, then this agent is as responsible for the accident as the one causing it. Given the current state of law, i.e. dealing only with human behaviours, if a robot is considered responsible for "evil" or unlawful actions, then it should be treated by replacing the parts of its program or the pieces of hardware that caused the unwanted behaviour; human operators displaying the same kind of unlawful behaviour should be judged by the appropriate laws. To integrate contracts in a concrete way, we can lean towards the perspective presented by [3], who proposes some recommendations to warn the operator of his responsibility when using potentially lethal force.
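To make the proposal concrete, here is a sketch of what such a contract could look like as a data structure: clauses derived from the mission rule set are checked against observed actions, and violations are recorded so that authority transfer and responsibility can be traced. The clause predicates and action records are illustrative assumptions of ours, not an implementation of [40] or [41].

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Clause:
    """One mission clause derived from the rule set (wording invented)."""
    description: str
    satisfied_by: Callable[[dict], bool]  # checks an observed action record

@dataclass
class Contract:
    clauses: list[Clause]
    violations: dict = field(default_factory=lambda: {"operator": [], "robot": []})

    def check(self, agent: str, action: dict) -> bool:
        """Record any clause the agent's action violates; per the proposal,
        a violating agent loses authority and bears moral responsibility."""
        broken = [c.description for c in self.clauses if not c.satisfied_by(action)]
        self.violations[agent].extend(broken)
        return not broken

if __name__ == "__main__":
    contract = Contract(clauses=[
        Clause("never aim at civilians", lambda a: a.get("target") != "civilian"),
        Clause("stay inside the mission area", lambda a: a.get("in_area", True)),
    ])
    if not contract.check("operator", {"target": "civilian", "in_area": True}):
        print("authority -> robot; operator responsible:",
              contract.violations["operator"])
```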
5.3 Consciousness and moral status

Authority sharing is not of great help for implementing consciousness in robots. Still, [37] and [50] provide leads to allow robots to assess the "state" of the operator and to take authority from him if he is not considered able to achieve the mission. This approach would help robots improve their situational awareness and would help design systems that are better at interacting with humans, whether operators or civilians. Enhancing the responsibility and autonomy of robots could also be a way to push them towards the "same functionality" proposed by [7], i.e. acting with enough caution to be considered equal to humans in a specific domain, thus helping to grant robots a moral status.
5.4 Ethical reasoning

Given the current state of law and the common deployment of robots on battlefields, granting robots ethical reasoning abilities has to be rooted in a legally relevant framework, that is, Just-War Theory [35]. The Laws of War and Rules of Engagement have to be the basic set of rules for robots. Still, battlefields being complex environments, ethics needs to be integrated into robots through a hybrid approach combining learning capabilities and experience with ethical theories. In the case of authority sharing, two frameworks seem relevant at the moment: case-based reasoning [2] and Arkin's reactive/deliberative architecture [3]. What seems applicable in case of an ethical conflict is to give the authority to the operator and to use the robotic agent both to assist him during the reasoning, i.e. by displaying relevant information on an appropriate interface, and to act as an ethical handrail, making sure that the principles of the Laws of War, e.g. discrimination or proportionality, are respected.
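The following sketch illustrates this "assist and handrail" role: on an ethical conflict the operator keeps authority and is shown relevant information, but a choice that outright violates the discrimination or proportionality principles is blocked. The predicates, thresholds and messages are our own illustrative assumptions, not the behaviour of any cited system.

```python
# A control-flow sketch of the "assist and handrail" role (our own
# illustration; the principles come from the Laws of War as discussed).

def violates_laws_of_war(action: dict) -> list[str]:
    """Check the two principles considered in our scenarios."""
    issues = []
    if action["target"] == "civilian":
        issues.append("discrimination")
    if action["expected_collateral"] > action["military_advantage"]:
        issues.append("proportionality")
    return issues

def handle_ethical_conflict(operator_choice: dict, context: str) -> str:
    """On an ethical conflict, authority goes to the operator; the robot
    assists (displays context) and blocks outright LoW violations."""
    print(f"[interface] relevant information: {context}")
    issues = violates_laws_of_war(operator_choice)
    if issues:
        return f"blocked ({', '.join(issues)}); authority taken by robot"
    return f"executed {operator_choice['name']} under operator authority"

if __name__ == "__main__":
    choice = {"name": "engage", "target": "civilian",
              "expected_collateral": 0.6, "military_advantage": 0.1}
    print(handle_ethical_conflict(choice, "armed person near protected site"))
```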
6 CONCLUSION AND FURTHER WORK

The main drawback of implementing ethics into autonomous armed robots is that, even as the technology, autonomy and lethal power of robots increase, the legal and philosophical frameworks do not take them into account, or consider them only from an anthropocentric point of view. Authority sharing allows a coupling between a robot and a human operator, hence better compliance with the ethical and legal requirements for the use of autonomous robots on battlefields. It can be achieved with vicarious liability, with good situational awareness produced by tracking both the robot's and the operator's "states", and with a hybrid model of ethical reasoning allowing adaptability in complex battlefield environments.

We are currently building an experimental protocol in order to test some of our proposals, namely autonomous armed robots that embed ethical reasoning while sharing authority with a human operator. We have constructed two fully simulated battlefield scenarios in which we will test the compliance of the system with specific principles of the Laws of War (proportionality and discrimination). These scenarios feature hostile actions directed towards the robot or its allies, e.g. throwing rocks or planting explosives, that need to be handled while complying with a set of rules of engagement. During the simulation, the operator is induced to produce an immoral behaviour, provoking an authority conflict in which we expect the robot to detect the said behaviour and to take authority from the operator: the authority conflict thereby generated has to be solved by the robot via the production of a morally correct behaviour. Since the current state of our software does not yet allow the robotic agent to actually observe the operator, we are working on pre-defined evaluations of actions in order for the robot to be able to detect unwanted behaviours and to act accordingly.
REFERENCES

[1] K. Abney, Robotics, Ethical Theory, and Metaethics: A Guide for the Perplexed, 35–52, Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012.
[2] M. Anderson, S. Anderson, and C. Armen, 'An approach to computing ethics', in IEEE Intelligent Systems, pp. 56–63, (July/August 2006).
[3] R.C. Arkin, 'Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture', Technical report, Georgia Institute of Technology, (2007).
[4] R.C. Arkin, P. Ulam, and A.R. Wagner, 'Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception', in Proceedings of the IEEE, volume 100, pp. 571–589, (2011).
[5] P. Asaro, 'What should we want from a robot ethic?', International Review of Information Ethics, Vol. 6, 9–16, (Dec. 2006).
[6] S. Baron-Cohen, 'The development of a theory of mind in autism: deviance and delay?', Psychiatric Clinics of North America, 14, 33–51, (1991).
[7] N. Bostrom and E. Yudkowsky, The Ethics of Artificial Intelligence, draft for the Cambridge Handbook of Artificial Intelligence, 2011.
[8] S. Bringsjord and J. Taylor, The Divine-Command Approach to Robot Ethics, 85–108, Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012.
[9] D.J. Chalmers, 'Facing up to the problem of consciousness', Journal of Consciousness Studies, 2(3), 200–219, (1995).
[10] C. Cloos, 'The utilibot project: An autonomous mobile robot based on utilitarianism', in 2005 AAAI Fall Symposium on Machine Ethics, (2005).
[11] F. Dehais, C. Tessier, and L. Chaudron, 'Ghost: Experimenting conflicts countermeasures in the pilot's activity', in IJCAI'03, Acapulco, Mexico, (2003).
[12] D. Dennett, When HAL Kills, Who's to Blame?, chapter 16, MIT Press, 1996.
[13] H.T. Engelhardt, The Foundations of Bioethics, Oxford University Press, Oxford, 1986.
[14] K. Erol, J. Hendler, and D. Nau, 'HTN planning: complexity and expressivity', in AAAI'94, Seattle, WA, USA, (1994).
[15] J.G. Ganascia, 'Modeling ethical rules of lying with answer set programming', Ethics and Information Technology, 9, 39–47, (2007).
[16] M. Guerini and O. Stock, 'Towards ethical persuasive agents', in IJCAI Workshop on Computational Models of Natural Argument, (2005).
[17] G. Harman and S. Kulkarni, Reliable Reasoning: Induction and Statistical Learning Theory, MIT Press, 2007.
[18] H. Hexmoor, C. Castelfranchi, and R. Falcone, Agent Autonomy, Kluwer Academic Publishers, 2003.
[19] K. Himma, 'Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent?', in 7th International Computer Ethics Conference, San Diego, CA, USA, (July 2007).
[20] Handbook of Cognitive Task Design, ed., E. Hollnagel, Erlbaum, Mahwah, NJ, 2003.
[21] H. Huang, K. Pavek, B. Novak, J. Albus, and E. Messina, 'A framework for autonomy levels for unmanned systems (ALFUS)', in AUVSI's Unmanned Systems North America 2005, Baltimore, MD, USA, (2005).
[22] H. Jonas, Das Prinzip Verantwortung. Versuch einer Ethik für die technologische Zivilisation, Insel Verlag, Frankfurt, 1979.
[23] D. Kortenkamp, P. Bonasso, D. Ryan, and D. Schreckenghost, 'Adjustable autonomy for human-centered autonomous systems', in Proceedings of the AAAI 1997 Spring Symposium on Mixed Initiative Interaction, (1997).
[24] P. Lin, G. Bekey, and K. Abney, 'Autonomous military robotics: Risk, ethics, and design', Technical report, California Polytechnic State University, (2008).
[25] H. Lipson, J. Bongard, and V. Zykov, 'Resilient machines through continuous self-modeling', Science, 314(5802), 1118–1121, (2006).
[26] H. Lipson and J.C. Zagal, 'Self-reflection in evolutionary robotics: Resilient adaptation with a minimum of physical exploration', in Proceedings of the Genetic and Evolutionary Computation Conference, pp. 2179–2188, (2009).
[27] G.J. Lokhorst, 'Computational meta-ethics: Towards the meta-ethical robot', Minds and Machines, 6, 261–274, (2011).
[28] G.J. Lokhorst and J. van den Hoven, Responsibility for Military Robots, 145–156, Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012.
[29] B. McLaren, 'Computational models of ethical reasoning: Challenges, initial steps, and future directions', in IEEE Intelligent Systems, pp. 29–37, (July/August 2006).
[30] S. Mercier, C. Tessier, and F. Dehais, 'Détection et résolution de conflits d'autorité dans un système homme-robot', Revue d'Intelligence Artificielle, numéro spécial 'Droits et Devoirs d'Agents Autonomes', 24, 325–356, (2010).
[31] S. Miller and M. Selgelid, Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences, Springer, New York, 2009.
[32] T. Nagel, 'What is it like to be a bat?', The Philosophical Review, 83(4), 435–450, (1974).
[33] Teleological Language in the Life Sciences, ed., L. Nissen, Rowman and Littlefield, 1997.
[34] R.G. Olson, Deontological Ethics, The Encyclopedia of Philosophy, Collier Macmillan, London, 1967.
[35] B. Orend, The Morality of War, Broadview Press, Peterborough, Ontario, 2006.
[36] T. Pichevin, 'Drones armés et éthique', in Penser la robotisation du champ de bataille, ed., D. Danet, Saint-Cyr, (November 2011). Economica.
[37] S. Pizziol, F. Dehais, and C. Tessier, 'Towards human operator state assessment', in 1st ATACCS (Automation in Command and Control Systems), Barcelona, Spain, (May 2011).
[38] D. Premack and G. Woodruff, 'Does the chimpanzee have a theory of mind?', The Behavioral and Brain Sciences, 4, 515–526, (1978).
[39] S. Rameix, Fondements philosophiques de l'éthique médicale, Ellipses, Paris, 1998.
[40] J. Rawls, A Theory of Justice, Belknap Harvard University Press, Harvard, 1971.
[41] J.-J. Rousseau, Du contrat social, 1762.
[42] J.-P. Sartre, L'existentialisme est un humanisme, Gallimard, Paris, 1946.
[43] D. Schreckenghost, D. Ryan, C. Thronesbery, P. Bonasso, and D. Poirot, 'Intelligent control of life support systems for space habitat', in Proceedings of the AAAI-IAAI Conference, Madison, WI, USA, (1998).
[44] N. Sharkey, 'Death strikes from the sky: the calculus of proportionality', IEEE Technology and Society Magazine, 28(1), 16–19, (2009).
[45] B. Skyrms, Evolution of the Social Contract, Cambridge University Press, Cambridge, UK, 1996.
[46] R. Sparrow, 'The Turing triage test', Ethics and Information Technology, 6(4), 203–213, (2004).
[47] R. Sparrow, 'Killer robots', Journal of Applied Philosophy, 24(1), 62–77, (2007).
[48] R. Sparrow, Can Machines Be People?, 301–315, Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012.
[49] J. Sullins, 'When is a robot a moral agent?', International Review of Information Ethics, 6(12), (2006).
[50] C. Tessier and F. Dehais, 'Authority management and conflict solving in human-machine systems', AerospaceLab, The Onera Journal, Vol. 4, (2012).
[51] G. Veruggio, 'Roboethics roadmap', in EURON Roboethics Atelier, Genoa, (2011).
[52] L. Vikaros and D. Degand, Moral Development through Social Narratives and Game Design, 197–216, Ethics and Game Design: Teaching Values through Play, IGI Global, Hershey, 2010.
[53] W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, New York, 2009.
[54] K. Warwick, Robots with Biological Brains, 317–332, Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2012.
[55] K. Warwick, D. Xydas, S. Nasuto, V. Becerra, M. Hammond, J. Downes, S. Marshall, and B. Whalley, 'Controlling a mobile robot with a biological brain', Defence Science Journal, 60(1), 5–14, (2010).
[56] D.D. Woods, E.M. Roth, and K.B. Bennett, 'Explorations in joint human-machine cognitive systems', in Cognition, Computing, and Cooperation, eds., S.P. Robertson, W. Zachary, and J.B. Black, 123–158, Ablex Publishing Corp., Norwood, NJ, USA, (1990).