<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ethics and Authority Sharing for Autonomous Armed Robots</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Florian Gros</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Catherine Tessier</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thierry Pichevin</string-name>
          <email>thierry.pichevin@st-cyr.terre-net.defense.gouv.fr</email>
        </contrib>
      </contrib-group>
      <abstract>
        <p>The goal of this paper is to review several ethical questions that are relevant to the use of autonomous armed robots and to authority sharing between such robots and the human operator. First, we discern the commonly confused meanings of morality and ethics. We continue by proposing leads to answer some of the most common ethical questions raised by literature, namely the autonomy, responsibility and moral status of autonomous robots, as well as their ability to reason ethically. We then present the possible advantages that authority sharing with the operator could provide with respect to these questions.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        There are many questions and controversies commonly raised by the
use of increasingly autonomous robots, especially in military
contexts [
        <xref ref-type="bibr" rid="ref51">51</xref>
        ]. In this domain, autonomy is explored because of the
need to reduce the atrocities of war, e.g. loss of human lives and
violation of human rights, and to increase battle performance so as to avoid
unnecessary violence [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Since full autonomy is far from achieved,
robots are usually supervised by human operators. This coupling
between a human and a robotic agent involves a shared authority on
the robot’s resources [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], allowing for adaptability of the system
in complex and dynamic battle contexts. Even with humans in the
process, the deployment of autonomous armed robots raises ethical
questions such as the responsibility of robots using lethal force
incorrectly [
        <xref ref-type="bibr" rid="ref47">47</xref>
        ], the extent of their autonomous abilities and the
related dangers, their ability to comply with a set of moral rules and to
reason ethically [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ], and the status of robots with regard to law due
to the ever-increasing autonomy and human resemblance that robots
display [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <p>In this paper we will highlight the distinction between morality
and ethics (section 2). Then several ethical issues raised by the
deployment of autonomous armed robots, such as autonomy,
responsibility, consciousness and moral status will be discussed (section 3).
As another kind of ethical question, a review of the frameworks used
to implement ethical reasoning into autonomous armed robots will be
presented afterwards (section 4). Finally, we will consider the ethical
issues and implementations mentioned earlier in the framework of
authority sharing between a robot and a human operator (section 5).</p>
    </sec>
    <sec id="sec-2">
      <title>MORALITY AND ETHICS</title>
      <p>The concepts of morality and ethics are often used interchangeably.
If we want to talk about ethics for autonomous robots, we
have to distinguish these terms and define them.</p>
    </sec>
    <sec id="sec-3">
      <title>Morality</title>
      <p>
        If we ignore meta-ethical debates that aim at defining morality and its
theoretical grounds precisely, we can conceive morality as principles
of good or bad behaviour, an evaluation of an action in terms of right
and wrong [
        <xref ref-type="bibr" rid="ref52">52</xref>
        ]. This evaluation can be considered either absolute or
coming from a particular conception of life, a typical moral rule
being ”Killing is wrong”. It is important to note that in this work, we
focus on moral action, whether it results from rules, or from
intentions of the subject doing the action.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Deontology and teleology</title>
      <p>One of the bases for morality is the constant human need to believe in
a meaning of one’s actions. In most philosophical debates, this meaning
pertains to two often opposed categories: teleology and deontology.</p>
      <p>
        For teleology, the moral action has to be good, the goal being to
maximize the good and to minimize the evil produced by the action
[
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. In this case, morality is commonly viewed as external to the
agent, because it comes within the scope of a finalized world defining
the rules and the possible actions and their goals, therefore defining
the evaluation of actions.
      </p>
      <p>
        For deontology, the moral action is done by duty, and must comply
with rules regardless of the consequences of the action, whether they
are foreseen or not, good or bad [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ]. A case-by-case evaluation is
not necessarily relevant here, because it is the humans’ responsibility
to dictate the rational and universal principles they want to live by.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Ethics</title>
      <p>
        Ethics appears as soon as a conflict between existing legal or moral
rules emerges, or when there is no rule to guide one’s actions [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ].
For example, if a soldier has received an order not to hurt any
civilian, but to neutralize any armed person, what should he do if he
encounters an armed civilian? We can thus consider ethics as the
commitment to resolving moral controversies [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] where the agent, with
good will, has to solve the conflicts he is faced with.
      </p>
      <p>
          Those conflicts often oppose deontological and teleological
principles: what should be privileged, the right or the good?
The goal of ethics is not to pick one side and stand by it forever, but
to keep a balance between the right and the good when solving
complex problems. Solving an ethical conflict then requires, apart
from weighing good and evil, creativity in the face of a
complex situation and the ability to provide alternative solutions to the imperatives of moral
rules [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ].
      </p>
      <p>To illustrate the distinction between morality and
ethics, we will consider that any moral conflict needs ethical
reasoning abilities to be solved. Speaking of ethical rules would not make
sense, since ethics applies when rules are absent or in conflict.</p>
    </sec>
    <sec id="sec-6">
      <title>AUTONOMY, RESPONSIBILITY, MORAL STATUS: PROSPECTS FOR ROBOTS</title>
      <p>Technology presently leaves us in an intermediate position where
robots can perceive their environment, act and make decisions by
themselves, but lack a more complete kind of autonomy or the
technological skill to analyze their environment precisely and
understand what happens in a given situation. Still, research advances
urge us to think about how to consider autonomous robots in a moral,
legal and intellectual frame, both for the time being and for when robots
are actually skilled enough to be considered similar to humans. In
this section, we will review important questions for autonomous
robots, i.e. autonomy, responsibility and moral status, and see which
answers are plausible. Then we will relate these questions to authority
sharing.</p>
    </sec>
    <sec id="sec-8">
      <title>Autonomy</title>
      <sec id="sec-8-1">
        <title>Kant and the autonomy of will</title>
        <p>
          When considering autonomy, one of the most influential views in
Western culture is Kant’s. For him, human beings bend reality to
themselves with their perception and reason; they escape natural or
divine laws. Only reason enables humans to create laws that will
determine humankind. Laws therefore cannot depend on external
circumstances, as reason alone can provide indications to determine
what is right or wrong. Consequently, laws have to be created by a
good will, i.e. a will imposing rules on itself not to satisfy an
interest, but by duty towards other humans. Therefore no purpose can
be external to humankind, and laws are meaningful to humans only
if they are universal. This leads to the well-known “categorical”
moral imperative (“act only according to that maxim by which you can
at the same time will that it be a universal law”), which immediately
determines what it orders because it enounces only the idea of a
universal law and the necessity for the
will to follow it [
          <xref ref-type="bibr" rid="ref39">39</xref>
          ].
        </p>
        <p>Humans being the authors of the law they obey, it is possible to
consider them as an end, and the will as autonomous. Thus, to be
universal, a law has to respect humans as ends in themselves,
inducing a change in the categorical imperative. If the law was external to
humans, they would not be ends in themselves, but mere instruments
used by another entity. Such a statement would deny the human
ability to escape divine or natural laws, which is not acceptable for
Kantian theory. We can only conceive law as completely universal,
respecting humans as ends in themselves. To sum up, Kantian
autonomy is the ability for an agent to define his own laws as ways to
fulfill his goals and to govern his own actions.</p>
      </sec>
      <sec id="sec-8-2">
        <title>Autonomy and robots</title>
        <p>
          In the case of an Unmanned System, autonomy usually stands for
decisional autonomy. It can be defined as the ability for an agent
to minimize the need for supervision and to evolve alone in its
environment [
          <xref ref-type="bibr" rid="ref43">43</xref>
          ], or more precisely, its ”own ability of sensing,
perceiving, analyzing, communicating, planning, decision making, and
acting/executing, to achieve its goals as assigned by its human
operators” [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
        <p>We can see a difference between these definitions and Kant’s.
Autonomy is perceived differently for robots than for humans:
an autonomy of means, not of ends. The reason for this is that robots
are not sophisticated enough to be able to define their own goals and
to achieve them. Robots are therefore viewed as mere tools whose
autonomy is only intended to alleviate the operators’ workload.</p>
        <p>Consequently, to be envisioned as really autonomous, robots
should be able to determine their own goals once deployed, thus to
have will and be ends in themselves. The real question to ask here is
whether it is really desirable to build such fully autonomous robots,
especially if they are to be used on a battlefield. If the objective is solely
to display better performance than human soldiers, full autonomy is
probably inappropriate, since being able to control robots and their
goals from the beginning to the end of their deployment is one of the
main reasons for actually using them.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Responsibility</title>
      <p>If we want to use autonomous robots, we have to know to what extent
a subject is considered responsible for his actions. This is especially
important for armed robots, since they can be involved
in accidents where lives are at stake.</p>
      <sec id="sec-9-1">
        <title>Philosophical approaches to responsibility</title>
        <p>
Classically, responsibility has been considered from a broad variety
of angles: a relationship to every other human being
in order to achieve a goal of salvation given by a divine entity
(Augustine of Hippo), a logical consequence of the application of the
categorical imperative (Kant), a duty towards the whole of humanity as
the only way to give sense and direction to one’s actions and
to define oneself in the common human condition (Sartre, [
          <xref ref-type="bibr" rid="ref42">42</xref>
          ]), or
an obligation to maintain human life on Earth as long as possible by
one’s actions (Jonas, [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]).
        </p>
        <p>The problem with these approaches is that they are conceived for
humans and consequently they require, to a greater or lesser degree, an autonomy
of ends. As discussed above, this is not a direct possibility for robots.
We then need to envision robot responsibility in their own “area”
of autonomy, namely an autonomy of means, where the actions are
not performed by humans. To discuss this problem, it is necessary
to distinguish two types of responsibility: causal responsibility and
moral responsibility.</p>
      </sec>
      <sec id="sec-9-2">
        <title>Causal responsibility vs. moral responsibility</title>
        <p>
          By moral responsibility, we mean the ability, for a conscious and
willing agent, to make a decision without referring to a higher
authority, to give the purposes of his actions, and to be judged by these
purposes. To sum up, the agent has to possess a high-level
intentionality [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. This moral responsibility is not to be confused with
causal responsibility, which establishes the share of a subject (or an
object) in a causal chain of events. The former is the responsibility of
a soldier who willingly shot an innocent person, the latter is the
responsibility of a malfunctioning toaster that started a fire in a house.
        </p>
        <p>Every robot has some kind of causal responsibility. Still, trying
to determine the causal responsibility of a robot (or of any agent)
for a given event is far too complex, because it requires analyzing
every action the robot performed that could have led to this event. What we
are really interested in is defining what would endow robots with
moral responsibility for their actions.</p>
      </sec>
      <sec id="sec-9-3">
        <title>Reduced responsibility, a solution?</title>
        <p>Some approaches currently considered for the
responsibility of autonomous robots are based on their status as “tools”, not as
autonomous agents. Thus, their share of responsibility is reduced or
transferred to another agent.</p>
        <p>
          The first approach is to consider robots as any product
manufactured and designed by an industry. In case of a failure, the
responsibility of the industry (as a moral person) is substituted for the
responsibility of the robot. The relevant legal term here is negligence [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ].
It implies that manufacturers and designers have failed to do what
was legally or morally required, and thus can be held accountable for the
damage caused by their product. The downside of this approach is
that it can lean towards a causal responsibility which – as said
earlier – is more difficult to assess than a moral responsibility. Besides,
developing a robot safe enough to be used on a battlefield
would demand too much time to be a viable business, and
even that would not suffice for safe use, since a margin of error
remains no matter how sophisticated a robot is.
        </p>
        <p>
          Another approach would be to apply slave morality to
autonomous robots [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. A slave, by himself, is not considered
responsible for his actions, but his master is. At the legal level, this is
known as vicarious liability, illustrated by the well-known maxim
<italic>Qui facit per alium facit per se</italic> (“He who acts through
another does the act himself”). If we want to apply this to
autonomous armed robots, their responsibility would be transferred to
their nearest master, namely the closest person in the chain of
command who decided and authorized the deployment of the robots.
This way, a specific person takes responsibility for the robots’ actions,
which spares investigations through the chain of command to assess
causal responsibilities.</p>
        <p>
          Finally, if we consider an autonomous robot to be able to comply
with some moral rules, to reason as well as to act, it is possible to
envision the robot as possessing, not moral responsibility, but moral
intelligence [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The robotic agent is then considered to be able to
adhere to an ethical system. Therefore there is a particular morality
within the robot that is specific to the task it is designed for.
        </p>
      </sec>
      <sec id="sec-9-5">
        <title>Other leads for a moral responsibility</title>
        <p>No robot has yet met the necessary requirements for moral
responsibility, and no law has been specifically written for robots. The
question is then to determine what is necessary for robots to achieve
moral responsibility and what to do when they break laws.</p>
        <p>
          For [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] and [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], the key to moral responsibility is access to a
moral status. Besides an emotional system, this requires the ability of
rational deliberation, allowing one to know what one is doing, to
be conscious of one’s actions, and to make decisions. Several
leads for robots to access a moral status are detailed in the next
section.
        </p>
        <p>
          As far as responsibility is concerned, a commonly used argument
is that robots cannot achieve moral responsibility because they
cannot suffer, and therefore cannot be punished [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ]. Still, if we
consider punishment for what it is, i.e. a convenient way to change (or
to compensate for) a behaviour deemed undesirable or unlawful, we
can agree that it is not the sine qua non requirement for
responsibility. There are other ways to change one’s behaviour, one of the best
known being treatment, i.e. spotting the “component” that
produces the unwanted behaviour and tweaking or replacing it to correct
the problem [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. Beating one’s own car because of a malfunction
would be absurd; in this case it is more fitting to replace the
malfunctioning component. The same applies to certain types of law
infringement (leading to psychological treatment or therapy), so it
could apply to robots as well, e.g. by changing the program of the
defective vehicle. Waiting for technology to progress to finally be
able to punish robots so that they could have moral responsibility is
not a desirable solution, but using vicarious liability, treatment and
moral status appears to be a sound basis.
        </p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>Consciousness and moral status for autonomous robots</title>
      <p>
        We have said earlier that for a robot to be considered responsible
for its actions, it must be attributed a moral status, so it needs
consciousness [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The purpose of this section is to see how this can be
achieved and how moral status can be applicable to robots in order
to help them have moral responsibility.
      </p>
      <sec id="sec-10-1">
        <title>Consciousness</title>
        <p>
          Since there is an abundant literature on the topic of consciousness,
and still no real consensus among the scientific community on how to
define consciousness, the purpose of this section is not to give an
exhaustive or precise definition of consciousness, but merely to see
what seems relevant to robots. However, if we want to use
consciousness, we can consider it as described by [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ], namely the ability to
know what it is like to have such or such mental state from one’s own
perspective, to subjectively experience one’s own environment and
internal states.
        </p>
        <p>
          The first approach to robot consciousness is the theory of mind
[
          <xref ref-type="bibr" rid="ref38">38</xref>
          ] [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. It is based on the assumption that humans tend to grant
intentionality to any being displaying enough similarity of action
with them (emotions or functional use of language). It is then
possible for humans, by analogy with the experience of their own
consciousness, to assume that those beings have a consciousness as well.
This approach is already developing with conversational agents or
robots mimicking emotions, even if it can be viewed as a trick of
human reasoning more than an “absolutely true” model of
consciousness.
        </p>
        <p>
          The second approach considers consciousness as a purely
biological phenomenon, and has gained influence with the numerous
discoveries in neuroscience. Even if we do not know what really explains
consciousness (see the hard problem of consciousness [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]),
considering it as a property of the brain may allow conscious robots to be
developed, as did [
          <xref ref-type="bibr" rid="ref55">55</xref>
          ] [
          <xref ref-type="bibr" rid="ref54">54</xref>
          ] by recreating a brain from collected brain
cells. There is still a lot of work to do here, as well as many ethical
questions to answer, but it definitely looks promising. Indeed, if a
being, even with a robotic body, has a brain that is similar to a human’s,
in a materialist perspective, this being is conscious.
        </p>
        <p>
          The last approach is the one proposed by [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] to build
self-aware robots that can explore their own physical capacities to find
their own model and to determine their own way to move
accordingly. Those robots are probably the closest ones to consciousness
as defined by [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ]. They are still far from being used on a battlefield,
but this method of self-modelling could be applied to more “evolved”
robots for ethical decision-making. This way a robot could explore its
own capacities for action and could build an ethical model of itself.
        </p>
      </sec>
      <sec id="sec-10-2">
        <title>Moral status</title>
        <p>
          An individual is granted moral status if it has to be treated never
merely as a means, but always as an end, as prescribed by Kant’s categorical
imperative. To define this moral status, two criteria are commonly
used [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], namely sentience (or qualia, the ability to experience reality
as a subject) and sapience (a set of abilities associated with high-level
intelligence). Still, neither of those attributes has been successfully
implemented in robots. Even though it could be counter-productive
to integrate qualia into robots in some situations (e.g. coding fear into
an armed robot), it can be interesting to model some of them in
robots, as [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] did for moral emotions like guilt. This could provide
a solid ground for access of robots to moral status. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] have proposed
two principles stating that two different agents can have the same
moral status if they possess enough similarities : if two beings have
the same functionality and the same conscious experience, and differ
only in the substrate of their implementation (Principle of Substrate
Non-Discrimination) or on how they came to existence (Principle
of Ontogeny Non-Discrimination), then they have the same moral
status.
        </p>
        <p>
          Put simply, those principles are quite similar to what the theory
of mind proposes: if robots can exhibit the same functions as
humans, then they can be considered as having a moral status, no
matter what their body is made of (silicon, flesh, etc.) or how they
matured (through gestation or coding). Still, proving that robots can
have the same conscious experience as humans is currently
impossible, so we can consider a more applicable version of those principles:
[
          <xref ref-type="bibr" rid="ref49">49</xref>
          ] proposes that robots have moral agency if they are responsible
with respect to another moral agent, if they possess a relative level of
autonomy and if they can show intentional behaviour. This definition
is vague but is grounded on the fact that moral status is attributed.
What matters is that the robot is advanced enough to be similar to
humans, but it does not have to be identical.
        </p>
        <p>
          Another solution for autonomous robots with a moral status is to
create a sort of Turing Test comparing the respective ”value” of a
human life with the existence of a robot. This is called by [
          <xref ref-type="bibr" rid="ref46">46</xref>
          ] the
Triage Turing Test and shows that robots will have the same moral
status as humans when it is at least as wrong to ”kill” a robot as to
kill a human. Advanced reflections on this topic can be found in [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>IMPLEMENTING ETHICAL REASONING INTO AUTONOMOUS ARMED ROBOTS</title>
      <p>Another question related to autonomous armed robots is how those
robots can solve ethical problems on the battlefield and make the
most ethically satisfying decision. In this section, we will briefly
review several frameworks to integrate ethical reasoning into robots.</p>
      <p>Three kinds of approaches are considered:
Top-down: these approaches take a particular ethical theory and
create algorithms for the robot, allowing it to follow the
aforesaid theory. This is convenient for implementing, e.g., a deontological
morality into a robot.</p>
      <p>Bottom-up: the goal is to create an environment wherein the robot
can explore different courses of action, with rewards to make it
lean towards morally satisfying actions. These approaches focus
on the autonomous robot learning its own ethical reasoning
abilities.</p>
      <p>Hybrid: these approaches seek to merge top-down and
bottom-up frameworks, combining their advantages without their
downsides.</p>
    </sec>
    <sec id="sec-13">
      <title>Top-down approaches</title>
      <p>
        Top-down frameworks are the most studied in the field of ethics for
robots and the number of ethical theories involved is high.
Literature identifies theories such as utilitarianism [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], divine-command
ethics [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and other logic-based frameworks [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Still, the most
famous theory among top-down approaches is the Just-War Theory
[
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], which underlies the instructions and principles issued in the
Laws of War and the Rules of Engagement (for more on these
documents, see [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. These approaches have in common that they take a set of
rules and program them into the robot’s code so that its behaviour
cannot violate them. The upside of these approaches is that the
rules are general, well-defined and easily understandable. The
downside is that no set of rules will ever handle every possible situation,
mostly because they do not take into account the context of the
particular mission the robot is deployed for. Thus top-down approaches
are usually too rigid and not precise enough to be applicable. Also,
since they rely on specific rules – more morality-like than ethics-like
– they are not fit to capture ethical reasoning abilities, but are
usually used to justify one’s own actions. In order to implement
ethical reasoning abilities in robots, it seems more desirable to use
top-down approaches as moral heuristics guiding ethical reasoning
[
        <xref ref-type="bibr" rid="ref53">53</xref>
        ].
      </p>
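      <p>As a minimal sketch (not taken from the cited works; the rule names and action descriptions below are hypothetical), a top-down framework can be pictured as a fixed set of moral rules that veto any candidate action violating them:</p>

```python
# Illustrative sketch only: a top-down framework hard-codes a set of
# moral rules and vetoes any action violating them. All rule names and
# action properties here are hypothetical examples.

def forbids(prop):
    """Build a rule that rejects any action carrying a forbidden property."""
    return lambda action: prop not in action["properties"]

RULES = {
    "no_civilian_harm": forbids("harms_civilian"),
    "proportionality": forbids("disproportionate_force"),
}

def permissible(action):
    """An action is permissible only if it violates none of the rules."""
    return all(rule(action) for rule in RULES.values())

def choose_action(candidates):
    """Return the first permissible candidate, or None (abstain)."""
    for action in candidates:
        if permissible(action):
            return action
    return None  # the rigidity noted above: no rule set covers every case
```

      <p>The sketch makes the section’s downside concrete: the rules are clear and easy to audit, but an action whose context the rules never anticipated is simply vetoed or allowed blindly, with no mechanism for weighing a genuine conflict.</p>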
    </sec>
    <sec id="sec-14">
      <title>Bottom-up approaches</title>
      <p>
        Bottom-up frameworks are way less developed than top-down
approaches. Still, some research like [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] gives interesting options,
using self-modeling. Most of the bottom-up approaches insist on
machine learning [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] or artificial evolution using genetic algorithms
based on cooperation [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ] to allow agents to reason ethically given a
specific parameter. The strength of these frameworks is that learning
allows flexibility and adaptability in complex and dynamic
environments, which is a real advantage in the field of ethics wherein there
is no predefined answers. Nevertheless the learning process takes a
lot of time and never completely removes the risk of unwanted
behaviour. Plus, the reasoning behind the action produced by the robot
cannot be traced, making the fix of undesirable behaviours barely
possible.
4.3
      </p>
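      <p>The reward-driven learning described above can be sketched as follows (an illustrative toy, not from the cited works; the actions and reward values are stipulated, hypothetical examples):</p>

```python
# Illustrative sketch only: a bottom-up framework lets the agent learn
# which actions are preferred from scalar "moral" rewards, instead of
# pre-programmed rules. Actions and rewards are hypothetical toy values.
import random

def train(actions, moral_reward, episodes=2000, lr=0.1, seed=0):
    """Estimate a value per action from repeated reward feedback."""
    rng = random.Random(seed)
    values = {a: 0.0 for a in actions}
    for _ in range(episodes):
        a = rng.choice(actions)            # explore a course of action
        r = moral_reward(a)                # feedback from the environment
        values[a] += lr * (r - values[a])  # incremental value update
    return values

# Hypothetical toy environment: the rewards are stipulated, not derived.
REWARDS = {"warn": 1.0, "retreat": 0.5, "fire": -1.0}
values = train(list(REWARDS), REWARDS.get)
preferred = max(values, key=values.get)    # learned "morally preferred" action
```

      <p>Note how the sketch also exhibits the opacity problem mentioned above: the learned values rank the actions, but carry no trace of any reasoning behind the ranking.</p>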
    </sec>
    <sec id="sec-15">
      <title>Hybrid approaches</title>
      <p>
        Three different frameworks can be distinguished among hybrid
approaches: the case-based approach [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], virtue ethics [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] [
        <xref ref-type="bibr" rid="ref53">53</xref>
        ] and
the hybrid reactive/deliberative architecture proposed by [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], using
the Laws of War and the Rules of Engagement as a set of rules to
follow. They are probably the lines of research most applicable to autonomous
robots, and they combine aspects of both top-down (producing algorithms
derived from ethical theories) and bottom-up (using agents able to
learn, evolve and explore possible ethical decisions) specifications.
The main problem with these approaches is their computing time,
since learning is often involved in the process. Nevertheless, they
appear theoretically satisfying and their applicability looks promising.
      </p>
    </sec>
    <sec id="sec-16">
      <title>ETHICS AND AUTHORITY SHARING</title>
      <p>In this section we will focus on the previously mentioned ethical
issues in the framework of authority sharing between a robot and a
human operator.</p>
      <p>
        Joining human and machine abilities aims at increasing the range
of actions of “autonomous” systems [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. However, the relationship
between the two agents is asymmetric, since the human operator’s
“failures” are often neglected when the system is designed. Moreover,
simultaneous decisions and actions of the artificial and the human
agents are likely to create conflicts [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]: unexpected or
misunderstood authority changes may lead to inefficient, dangerous or
catastrophic situations. Therefore in order to consider the human agent
and the artificial agent in the same way [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and the human-machine
system as a whole [
        <xref ref-type="bibr" rid="ref56">56</xref>
        ], it seems more relevant to work on
authority and authority control [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] than on autonomy, which concerns the
artificial agent exclusively.
      </p>
      <p>
        Authority sharing between a robot and its operator can thus
be viewed as an “upgraded” autonomy. As far as ethical issues are
concerned, authority sharing considered as a relation between two
agents [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] may provide better compliance with sets of laws and
moral rules, thereby enabling ethical decision-making within a pair
of agents instead of leaving this ability to a single individual.
      </p>
    </sec>
    <sec id="sec-17">
      <title>Autonomy</title>
      <p>
        As previously mentioned, the autonomy of an armed robot can be
conceived as an autonomy of means only; robots are almost always
used as tools. Authority sharing can bring a change in this
organization. As a robot cannot (yet) determine its own goals, it is the human
operator’s role to provide the goals, as well as some methods or partial
plans to achieve them [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Still, authority sharing allows the robot to
be granted decision-making power, enabling it to take authority from
the operator to accomplish tasks the operator has neglected (e.g., going
back to base because of a fuel shortage), or even to intervene when the
operator’s actions deviate from the mission plan and may be dangerous. For
example, undesirable psychological and physiological “states”
of the operator, e.g. tiredness, stress, attentional blindness [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ], can
be detected by the robot, allowing it to take authority if the
operator is no longer considered able to fulfill the mission.
      </p>
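An arbitration rule of this kind can be sketched in a few lines. The state variables (tiredness, stress, attention) and the decision logic below are our own illustrative assumptions, standing in for the operator-state assessments of [37], not the actual mechanism:

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    tired: bool       # e.g. inferred from interaction tempo
    stressed: bool    # e.g. inferred from physiological sensors
    attentive: bool   # e.g. inferred from gaze tracking

def authority_holder(state: OperatorState, action_in_plan: bool) -> str:
    """Decide who holds authority for the next action."""
    operator_fit = state.attentive and not (state.tired or state.stressed)
    if operator_fit and action_in_plan:
        return "operator"
    # Degraded operator state, or an action outside the mission plan:
    # the robot takes authority to protect the mission.
    return "robot"
```

The rule is intentionally conservative: authority reverts to the robot whenever either the operator's state or his current action is judged unfit.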
    </sec>
    <sec id="sec-18">
      <title>Moral responsibility</title>
      <p>Concerning moral responsibility, authority sharing forces us to
distinguish between two cases: the one where the operator has
authority over the robot, and the reverse. The former is simple:
since the robot is a tool, vicarious liability applies, and the
operator is responsible for any accident caused by the use of the
robot during the mission. The latter is more complex, and we do not
claim to give definitive answers, only propositions.</p>
      <p>
        What we propose is that, in order to assess moral responsibility
when the robotic agent has authority over the system, it is necessary
to define a mission-relevant set of rules, e.g. Laws of War and Rules
of Engagement [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and a contract, as proposed by [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ] or [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ],
between robotic and human agents, providing specific clauses for
them to respect during the mission. These clauses must be based
on the set of rules previously mentioned, and an agent who violates
them would be morally responsible for any accident that happens
as a consequence of his actions.
      </p>
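Such a contract can be pictured as a set of named clauses, each a predicate over a proposed action. The clause contents below (no civilian targets, proportional force) are illustrative stand-ins for Laws-of-War and Rules-of-Engagement provisions, not an actual contract formalism:

```python
# Each clause maps a name to a predicate over a proposed action (a dict).
# The agent (human or robot) that breaches a clause loses authority and
# bears responsibility for the consequences.
CONTRACT = {
    "discrimination": lambda a: a["target"] != "civilian",
    "proportionality": lambda a: a["force"] <= a["threat"],
}

def violated_clauses(action):
    """Names of the contract clauses this action would breach."""
    return [name for name, ok in CONTRACT.items() if not ok(action)]

def authority_after(action, current_holder):
    """A breaching agent loses authority to the other agent."""
    if violated_clauses(action):
        return "robot" if current_holder == "operator" else "operator"
    return current_holder
```

Returning the violated clause names, rather than a bare refusal, is what would later let each agent justify a takeover to the other.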
      <p>
        This kind of contract would provide clear conditions for authority
sharing (i.e., an agent loses authority if he violates the contract) and
could open the way to applying work on trust [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] or persuasion [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] in
robotic agents. During a mission, such contracts would engage both
agents to monitor the actions of the other agent and, if possible, to
take authority if this can prevent any infringement of the contract.
If one agent detects a possible impending accident due to the other
agent’s actions, e.g. aiming at a civilian, and does nothing to
prevent it, then that agent is as responsible for the accident as
the one causing it. Given the current state of law, which deals
only with human behaviours, a robot considered responsible for
“evil” or unlawful actions should be treated by replacing the
parts of its program or the pieces of hardware that caused the
unwanted behaviour; human operators displaying the same kind of
unlawful behaviour should be judged under the appropriate laws. To
integrate contracts in a concrete way, we can lean towards the
perspective presented by [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], who proposes recommendations to
warn the operator of his responsibility when using potentially lethal
force.
      </p>
    </sec>
    <sec id="sec-19">
      <title>Consciousness and moral status</title>
      <p>
        Authority sharing is not of great help in implementing consciousness
in robots. Still, [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ] and [
        <xref ref-type="bibr" rid="ref50">50</xref>
        ] provide leads for allowing robots to assess
the “state” of the operator and to take authority from him if he is no
longer considered able to achieve the mission. This approach would help
robots improve their situational awareness and would help design systems
that interact better with humans, whether operators or civilians.
Enhancing the responsibility and autonomy of robots could also be a
way to push them towards the “same functionality” proposed by [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ],
i.e. acting with enough caution to be considered equal to humans in
a specific domain, thus helping to grant robots a moral status.
      </p>
    </sec>
    <sec id="sec-20">
      <title>Ethical reasoning</title>
      <p>
        Given the current state of law and the common deployment of robots
on battlefields, granting robots ethical reasoning has to be
rooted in a legally relevant framework, that is, Just-War Theory [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ].
Laws of War and Rules of Engagement have to be the basic set of
rules for robots. Still, since battlefields are complex environments, ethics
needs to be integrated into robots through a hybrid approach combining
learning capabilities and experience with ethical theories. In the case
of authority sharing, two frameworks seem relevant at the moment:
case-based reasoning [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and Arkin’s reactive/deliberative
architecture [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. What seems applicable in case of an ethical conflict is to give
authority to the operator and to use the robotic agent both to assist
him during the reasoning, i.e. by displaying relevant information on
an appropriate interface, and to act as an ethical handrail, ensuring
that the principles of the Laws of War, e.g. discrimination
and proportionality, are respected.
      </p>
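The ethical-handrail role can be sketched as a screening function that never takes the decision itself, but returns, for each operator-commanded action, whether a Laws-of-War principle would be violated and which one, so the interface can display the reason. The principle encodings below are deliberately simplified assumptions, not a real operationalisation of discrimination or proportionality:

```python
def handrail(action):
    """Screen an operator-commanded action against simplified
    Laws-of-War principles. Returns (allowed, violated_principles)."""
    violated = []
    # Discrimination: lethal force must not target non-combatants.
    if action.get("target_type") == "civilian":
        violated.append("discrimination")
    # Proportionality: expected harm must not exceed military value.
    if action.get("expected_harm", 0) > action.get("military_value", 0):
        violated.append("proportionality")
    return (not violated, violated)
```

The operator keeps authority; the handrail only blocks and explains, which matches the assisting role described above.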
    </sec>
    <sec id="sec-21">
      <title>CONCLUSION AND FURTHER WORK</title>
      <p>The main drawback of the implementation of ethics into autonomous
armed robots is that, even if the technology, the autonomy and the
lethal power of robots increase, the legal and philosophical
frameworks do not take them into account, or consider them only from
an anthropocentric point of view. Authority sharing allows a coupling
between a robot and a human operator, and hence better compliance
with ethical and legal requirements for the use of autonomous robots
on battlefields. It can be achieved with vicarious liability, good
situational awareness produced by tracking both the robot’s and the
operator’s “states”, and a hybrid model of ethical reasoning allowing
adaptability in complex battlefield environments.</p>
      <p>We are currently building an experimental protocol in order to test
some of our proposals, namely autonomous armed robots that embed
ethical reasoning while sharing authority with a human operator. We
have constructed two fully-simulated battlefield scenarios in which
we will test the compliance of the system with specific principles
of the Laws of War (proportionality and discrimination). These
scenarios feature hostile actions directed towards the robot or its allies, e.g.
throwing rocks or planting explosives, that need to be handled while
complying with a set of rules of engagement. During the simulation,
the operator is induced to produce an immoral behaviour; we expect
the robot to detect this behaviour and to take authority from the
operator, thereby solving the resulting authority conflict through the
production of a morally correct behaviour. Since the current state of our
software does not yet allow the robotic agent to actually observe the
operator, we are working on some pre-defined evaluations of actions
in order for the robot to be able to detect unwanted behaviours, and
to act accordingly.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Abney</surname>
          </string-name>
          , Robotics, Ethical Theory, and
          <article-title>Metaethics: A Guide for the Perplexed</article-title>
          ,
          <volume>35</volume>
          -
          <fpage>52</fpage>
          , Robot Ethics:
          <article-title>The Ethical and Social Implications of Robotics</article-title>
          , MIT Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Anderson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Armen</surname>
          </string-name>
          , '
          <article-title>An approach to computing ethics'</article-title>
          ,
          <source>in IEEE Intelligent Systems</source>
          , pp.
          <fpage>56</fpage>
          -
          <lpage>63</lpage>
          , (July/
          <year>August 2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.C.</given-names>
            <surname>Arkin</surname>
          </string-name>
          , '
          <article-title>Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture'</article-title>
          ,
          <source>Technical report</source>
          , Georgia Institute of Technology, (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.C.</given-names>
            <surname>Arkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ulam</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.R.</given-names>
            <surname>Wagner</surname>
          </string-name>
          , '
          <article-title>Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception'</article-title>
          ,
          <source>in Proceedings of the IEEE</source>
          , volume
          <volume>100</volume>
          , pp.
          <fpage>571</fpage>
          -
          <lpage>589</lpage>
          , (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Asaro</surname>
          </string-name>
          , '
          <article-title>What should we want from a robot ethic?'</article-title>
          ,
          <source>International Review of Information Ethics</source>
          , Vol.
          <volume>6</volume>
          ,
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          , (Dec.
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Baron-Cohen</surname>
          </string-name>
          , '
          <article-title>The development of a theory of mind in autism: deviance and delay?'</article-title>
          , Psychiatric Clinics of North America,
          <volume>14</volume>
          ,
          <fpage>33</fpage>
          -
          <lpage>51</lpage>
          , (
          <year>1991</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Bostrom</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Yudkowsky</surname>
          </string-name>
          .
          <source>The Ethics of Artificial Intelligence. Draft for Cambridge Handbook of Artificial Intelligence</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bringsjord</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , The Divine-Command Approach to Robot Ethics,
          <fpage>85</fpage>
          -
          <lpage>108</lpage>
          , Robot Ethics:
          <article-title>The Ethical and Social Implications of Robotics</article-title>
          , MIT Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.J.</given-names>
            <surname>Chalmers</surname>
          </string-name>
          , '
          <article-title>Facing up to the problem of consciousness'</article-title>
          ,
          <source>Journal of Consciousness Studies</source>
          ,
          <volume>2</volume>
          (
          <issue>3</issue>
          ),
          <fpage>200</fpage>
          -
          <lpage>219</lpage>
          , (
          <year>1995</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Cloos</surname>
          </string-name>
          , '
          <article-title>The utilibot project: An autonomous mobile robot based on utilitarianism'</article-title>
          ,
          <source>in 2005 AAAI Fall Symposium on Machine Ethics</source>
          , (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Dehais</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tessier</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Chaudron</surname>
          </string-name>
          , 'Ghost:
          <article-title>Experimenting conflicts countermeasures in the pilot's activity'</article-title>
          ,
          <source>in IJCAI'03</source>
          , Acapulco, Mexico, (
          <year>2003</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Dennett</surname>
          </string-name>
          , When HAL Kills, Who's to Blame?, chapter 16, MIT Press,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.T.</given-names>
            <surname>Engelhardt</surname>
          </string-name>
          ,
          <source>The Foundations of Bioethics</source>
          , Oxford University Press, Oxford,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>Erol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hendler</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Nau</surname>
          </string-name>
          , '
          <article-title>HTN planning: complexity and expressivity'</article-title>
          ,
          <source>in AAAI'94</source>
          , Seattle, WA, USA, (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.G.</given-names>
            <surname>Ganascia</surname>
          </string-name>
          , '
          <article-title>Modeling ethical rules of lying with answer set programming'</article-title>
          ,
          <source>Ethics and Information Technology</source>
          ,
          <volume>9</volume>
          ,
          <fpage>39</fpage>
          -
          <lpage>47</lpage>
          , (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M</given-names>
            <surname>Guerini</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Stock</surname>
          </string-name>
          , '
          <article-title>Towards ethical persuasive agents'</article-title>
          ,
          <source>in IJCAI Workshop on Computational Models of Natural</source>
          , (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>G.</given-names>
            <surname>Harman</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <article-title>Reliable Reasoning: Induction and Statistical Learning Theory</article-title>
          , MIT Press,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hexmoor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Falcone</surname>
          </string-name>
          , Agent Autonomy, Kluwer Academic Publishers,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K.</given-names>
            <surname>Himma</surname>
          </string-name>
          , '
          <article-title>Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent?'</article-title>
          , in 7th International Computer Ethics Conference, San Diego, CA, USA, (
          <year>July 2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <article-title>Handbook of cognitive task design</article-title>
          , ed., E. Hollnagel, Mahwah, NJ: Erlbaum,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pavek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Novak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Albus</surname>
          </string-name>
          , and E. Messina, '
          <article-title>A framework for autonomy levels for unmanned systems ALFUS'</article-title>
          ,
          <source>in AUVSIs Unmanned Systems North America</source>
          <year>2005</year>
          , Baltimore, MD, USA, (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>H.</given-names>
            <surname>Jonas</surname>
          </string-name>
          ,
          <article-title>Das Prinzip Verantwortung</article-title>
          .
          <article-title>Versuch einer Ethik für die technologische Zivilisation</article-title>
          , Insel Verlag, Frankfurt,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kortenkamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonasso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ryan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Schreckenghost</surname>
          </string-name>
          , '
          <article-title>Adjustable autonomy for human-centered autonomous systems'</article-title>
          ,
          <source>in Proceedings of the AAAI 1997 Spring Symposium on Mixed Initiative Interaction</source>
          , (
          <year>1997</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bekey</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Abney</surname>
          </string-name>
          , '
          <article-title>Autonomous military robotics: Risk, ethics, and design'</article-title>
          ,
          <source>Technical report</source>
          , California Polytechnic State University, (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lipson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bongard</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Zykov</surname>
          </string-name>
          , '
          <article-title>Resilient machines through continuous self-modeling'</article-title>
          ,
          <source>Science</source>
          ,
          <volume>314</volume>
          (
          <issue>5802</issue>
          ),
          <fpage>1118</fpage>
          -
          <lpage>1121</lpage>
          , (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lipson</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Zagal</surname>
          </string-name>
          , '
          <article-title>Self-reflection in evolutionary robotics: Resilient adaptation with a minimum of physical exploration'</article-title>
          ,
          <source>in Proceedings of the Genetic and Evolutionary Computation Conference</source>
          , pp.
          <fpage>2179</fpage>
          -
          <lpage>2188</lpage>
          , (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G.J.</given-names>
            <surname>Lokhorst</surname>
          </string-name>
          , '
          <article-title>Computational meta-ethics: Towards the meta-ethical robot'</article-title>
          ,
          <source>Minds and machines</source>
          ,
          <volume>6</volume>
          ,
          <fpage>261</fpage>
          -
          <lpage>274</lpage>
          , (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>G.J.</given-names>
            <surname>Lokhorst</surname>
          </string-name>
          and
          <string-name>
            <surname>J. van den Hoven</surname>
          </string-name>
          , Responsibility for Military Robots,
          <fpage>145</fpage>
          -
          <lpage>156</lpage>
          , Robot Ethics:
          <article-title>The Ethical and Social Implications of Robotics</article-title>
          , MIT Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>B.</given-names>
            <surname>McLaren</surname>
          </string-name>
          , '
          <article-title>Computational models of ethical reasoning: Challenges, initial steps, and future directions'</article-title>
          ,
          <source>in IEEE Intelligent Systems</source>
          , pp.
          <fpage>29</fpage>
          -
          <lpage>37</lpage>
          , (July/
          <year>August 2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mercier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tessier</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Dehais</surname>
          </string-name>
          , '
          <article-title>Détection et résolution de conflits d'autorité dans un système homme-robot'</article-title>
          ,
          <source>Revue d'Intelligence Artificielle, numéro spécial 'Droits et Devoirs d'Agents Autonomes'</source>
          ,
          <volume>24</volume>
          ,
          <fpage>325</fpage>
          -
          <lpage>356</lpage>
          , (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S.</given-names>
            <surname>Miller</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Selgelid</surname>
          </string-name>
          ,
          <article-title>Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences</article-title>
          , Springer, New York,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>T.</given-names>
            <surname>Nagel</surname>
          </string-name>
          , '
          <article-title>What is it like to be a bat?', The Philosophical Review</article-title>
          ,
          <volume>83</volume>
          (
          <issue>4</issue>
          ),
          <fpage>435</fpage>
          -
          <lpage>450</lpage>
          , (
          <year>1974</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <article-title>Teleological Language in the Life Sciences</article-title>
          , ed., L. Nissen, Rowman and Littlefield,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>R.G.</given-names>
            <surname>Olson</surname>
          </string-name>
          , Deontological Ethics,
          <source>The Encyclopedia of Philosophy</source>
          , Collier Macmillan, London,
          <year>1967</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>B.</given-names>
            <surname>Orend</surname>
          </string-name>
          , The Morality of War, Broadview Press, Peterborough, Ontario,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>T.</given-names>
            <surname>Pichevin</surname>
          </string-name>
          , '
          <article-title>Drones armés et éthique'</article-title>
          , in Penser la robotisation du champ de bataille, ed.,
          <string-name>
            <given-names>D.</given-names>
            <surname>Danet</surname>
          </string-name>
          , Economica, Saint-Cyr, (
          <year>November 2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pizziol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Dehais</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Tessier</surname>
          </string-name>
          , '
          <article-title>Towards human operator state assessment</article-title>
          ', in
          <source>1st ATACCS (Automation in Command and Control Systems)</source>
          , Barcelona, Spain, (May
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>D.</given-names>
            <surname>Premack</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Woodruff</surname>
          </string-name>
          , '
          <article-title>Does the chimpanzee have a theory of mind?</article-title>
          ',
          <source>The Behavioral and Brain Sciences</source>
          ,
          <volume>4</volume>
          ,
          <fpage>515</fpage>
          -
          <lpage>526</lpage>
          , (
          <year>1978</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rameix</surname>
          </string-name>
          ,
          <source>Fondements philosophiques de l'éthique médicale</source>
          , Ellipses, Paris,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rawls</surname>
          </string-name>
          ,
          <source>A Theory of Justice</source>
          , Belknap Harvard University Press, Harvard,
          <year>1971</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>J.-J.</given-names>
            <surname>Rousseau</surname>
          </string-name>
          ,
          <source>Du contrat social</source>
          ,
          <year>1762</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>J.-P.</given-names>
            <surname>Sartre</surname>
          </string-name>
          ,
          <source>L'existentialisme est un humanisme</source>
          , Gallimard, Paris,
          <year>1946</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schreckenghost</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ryan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Thronesbery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonasso</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Poirot</surname>
          </string-name>
          , '
          <article-title>Intelligent control of life support systems for space habitat'</article-title>
          , in
          <source>Proceedings of the AAAI-IAAI Conference</source>
          , Madison, WI, USA, (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sharkey</surname>
          </string-name>
          , '
          <article-title>Death strikes from the sky: the calculus of proportionality'</article-title>
          ,
          <source>IEEE Technology and Society Magazine</source>
          ,
          <volume>28</volume>
          (
          <issue>1</issue>
          ),
          <fpage>16</fpage>
          -
          <lpage>19</lpage>
          , (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>B.</given-names>
            <surname>Skyrms</surname>
          </string-name>
          ,
          <source>Evolution of the Social Contract</source>
          , Cambridge University Press, Cambridge, UK,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sparrow</surname>
          </string-name>
          , '
          <article-title>The Turing triage test'</article-title>
          ,
          <source>Ethics and Information Technology</source>
          ,
          <volume>6</volume>
          (
          <issue>4</issue>
          ),
          <fpage>203</fpage>
          -
          <lpage>213</lpage>
          , (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sparrow</surname>
          </string-name>
          , '
          <article-title>Killer robots</article-title>
          ',
          <source>Journal of Applied Philosophy</source>
          ,
          <volume>24</volume>
          (
          <issue>1</issue>
          ),
          <fpage>62</fpage>
          -
          <lpage>77</lpage>
          , (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sparrow</surname>
          </string-name>
          , '
          <article-title>Can Machines Be People?</article-title>
          ',
          <fpage>301</fpage>
          -
          <lpage>315</lpage>
          , in
          <source>Robot Ethics: The Ethical and Social Implications of Robotics</source>
          , MIT Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sullins</surname>
          </string-name>
          , '
          <article-title>When is a robot a moral agent?'</article-title>
          ,
          <source>International Journal of Information Ethics</source>
          ,
          <volume>6</volume>
          (
          <issue>12</issue>
          ), (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>C.</given-names>
            <surname>Tessier</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Dehais</surname>
          </string-name>
          , '
          <article-title>Authority management and conflict solving in human-machine systems'</article-title>
          ,
          <source>AerospaceLab, the Onera Journal</source>
          , Vol.
          <volume>4</volume>
          , (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>G.</given-names>
            <surname>Veruggio</surname>
          </string-name>
          , '
          <article-title>Roboethics roadmap</article-title>
          ', in
          <source>EURON Roboethics Atelier</source>
          , Genoa, (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>L.</given-names>
            <surname>Vikaros</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Degand</surname>
          </string-name>
          , '
          <article-title>Moral Development through Social Narratives and Game Design</article-title>
          ',
          <fpage>197</fpage>
          -
          <lpage>216</lpage>
          , in
          <source>Ethics and Game Design: Teaching Values through Play</source>
          , IGI Global, Hershey,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>W.</given-names>
            <surname>Wallach</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Allen</surname>
          </string-name>
          ,
          <source>Moral Machines: Teaching Robots Right from Wrong</source>
          , Oxford University Press, New York,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>K.</given-names>
            <surname>Warwick</surname>
          </string-name>
          , '
          <article-title>Robots with Biological Brains</article-title>
          ',
          <fpage>317</fpage>
          -
          <lpage>332</lpage>
          , in
          <source>Robot Ethics: The Ethical and Social Implications of Robotics</source>
          , MIT Press,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>K.</given-names>
            <surname>Warwick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xydas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nasuto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Becerra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hammond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Downes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marshall</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Whalley</surname>
          </string-name>
          , '
          <article-title>Controlling a mobile robot with a biological brain'</article-title>
          ,
          <source>Defence Science Journal</source>
          ,
          <volume>60</volume>
          (
          <issue>1</issue>
          ),
          <fpage>5</fpage>
          -
          <lpage>14</lpage>
          , (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          [56]
          <string-name>
            <given-names>D.D.</given-names>
            <surname>Woods</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.M.</given-names>
            <surname>Roth</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.B.</given-names>
            <surname>Bennett</surname>
          </string-name>
          , '
          <article-title>Explorations in joint human-machine cognitive systems'</article-title>
          , in
          <source>Cognition, Computing, and Cooperation</source>
          , eds.,
          <string-name>
            <given-names>S.P.</given-names>
            <surname>Robertson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zachary</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.B.</given-names>
            <surname>Black</surname>
          </string-name>
          ,
          <fpage>123</fpage>
          -
          <lpage>158</lpage>
          , Ablex Publishing Corp. Norwood, NJ, USA, (
          <year>1990</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>