<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Classifying the Autonomy and Morality of Artificial Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sjur Dyrkolbotn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Truls Pedersen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marija Slavkovik</string-name>
<email>marija.slavkovik@uib.no</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Høgskulen på Vestlandet</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Universitetet i Bergen</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>As we outsource more of our decisions and activities to machines with various degrees of autonomy, the question of clarifying the moral and legal status of their autonomous behaviour arises. There is also an ongoing discussion on whether artificial agents can ever be liable for their actions or become moral agents. Both in law and ethics, the concept of liability is tightly connected with the concept of ability. But as we work to develop moral machines, we also push the boundaries of existing categories of ethical competency and autonomy. This makes the question of responsibility particularly difficult. Although new classification schemes for ethical behaviour and autonomy have been discussed, these need to be worked out in far more detail. Here we address some issues with existing proposals, highlighting especially the link between ethical competency and autonomy, and the problem of anchoring classifications in an operational understanding of what we mean by a moral theory.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>We progressively outsource more and more of our decision-making problems
to artificial intelligent agents such as unmanned vehicles, intelligent assisted
living machines, news aggregation agents, dynamic pricing agents, and stock-trading
agents. With this transfer of decision-making also comes a transfer of power. The
decisions made by these artificial agents will impact us both as individuals and
as a society. With the power to impact lives comes the natural requirement
that artificial agents should respect and follow the norms of society. To this end,
the field of machine ethics is being developed.</p>
      <p>
        Machine ethics, also known as artificial morality, is concerned with the
problem of enabling artificial agents with ethical<sup>3</sup> behaviour [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. It remains an open
debate whether an artificial agent can ever be a moral agent [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. What is clear
is that as artificial agents become part of our society, we will need to
formulate new ethical and legal principles regarding their behaviour. This is already
witnessed by increased interest in developing regulations for the operation of
automated systems, e.g., [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. In practice, it is clear that some expectations of
ethical behaviour have to be met in order for artificial agents to be successfully
integrated in society.
      </p>
      <p>
        <sup>3</sup> An alternative terminology is to speak of moral agency, as in the term "moral
machines". However, since many philosophers regard morality as a reflection of moral
personhood, we prefer to speak of "ethical" agency here, to stress that we are
referring to a special kind of rule-guided behaviour, not the (distant) prospect of full
moral personhood for machines.
      </p>
      <p>In law, there is a long tradition of establishing different categories of legal
persons depending on various characteristics of such persons. Children have always
had a different legal status than adults; slaves belonged to a different category
than their owners; men did not have the same rights and duties as women (and
vice versa); and a barbarian was not the same kind of legal person as a Roman.</p>
      <p>Today, many legal distinctions traditionally made among adult humans have
disappeared, partly due to a new principle of non-discrimination, quite unheard
of historically speaking. However, some distinctions between humans are still
made, e.g., the distinction between adult and child, and the distinction between
someone of sound mind and someone with a severe mental illness.</p>
      <p>Artificial agents are not like humans; they are tools. Hence, the idea of
categorising them and discriminating between them for the purposes of ethical and
legal reasoning seems unproblematic. In fact, we believe it is necessary to
discriminate, in order to ensure congruence between the rules we put in place and
the technology that they are meant to regulate. We need to manage
expectations, to ensure that our approach to artificial agents in ethics and law reflects
the actual capabilities of these tools.</p>
      <p>This raises a classification problem: how do we group artificial agents together
based on their capabilities, so that we can differentiate between different kinds of
agents when reasoning about them in law and ethics? In the following, we address
this issue, focusing on how to relate the degree of autonomy to the ethical
competency of an artificial agent. These two metrics are both very important;
in order to formulate reasonable expectations, we should know how autonomous
an agent is, and how capable it is of acting ethically. Our core argument is
that current approaches to measuring autonomy and ethical competence need
to be refined in a way that acknowledges the link between autonomy and ethical
competency.</p>
      <p>When considering how ethical behaviour can be engineered, Wallach and
Allen [19, Chapter 2] sketch a path for using current technology to develop
artificial moral agents. They use the concept "sensitivity to values" to avoid the
philosophical challenge of defining precisely what counts as agency and what
counts as an ethical theory. Furthermore, they recognise a range of ethical
"abilities", starting with operational morality at one end of the spectrum, going via
functional morality to responsible moral agency at the other. They argue that
the development of artificial moral agents requires coordinated development
of autonomy and sensitivity to values. We take this idea further by proposing
that we should actively seek to classify agents in terms of how their autonomy
and their ethical competency are coordinated.</p>
      <p>There are three possible paths we could take when attempting to
implement the idea of relating autonomy to ethical competency. Firstly, we could
ask computer science to deliver more detailed classifications. This would lead to
technology-specific metrics, whereby computer scientists attempt to describe in
further detail how different kinds of artificial intelligence can be said to behave
autonomously and ethically. The challenge would be to make such explanations
intelligible to regulators, lawyers and other non-experts, in a way that bridges
the gap between computer science, law and ethics.</p>
      <p>Secondly, we could ask regulators to come up with more fine-grained
classifications. This would probably lead to a taxonomy that starts by categorising
artificial agents in terms of their purpose and intended use. The regulator would
then be able to deliver more fine-grained definitions of autonomy and ethical
behaviour for different classes of artificial agents, in terms of how they "normally"
operate. The challenge would be to ensure that the resulting classifications make
sense from a computer science perspective, so that our purpose-specific
definitions of autonomy and ethical capability reflect technological realities.</p>
      <p>Thirdly, it would be possible to make the classification an issue for
adjudication, so that the status of different kinds of artificial agents can be made
progressively more fine-grained through administrative practice and case law.
The challenge then is to come up with suitable reasoning principles that
adjudicators can use when assessing different types of artificial agents. Furthermore,
this requires us to work with a pluralistic concept of what counts as a moral
theory, allowing substantive moral and legal judgements about machine behaviour
to be made in concrete cases, not in advance by the computer scientists or the
philosophers. Specifically, there should be a difference between what counts as
ethical competency – the ability to "understand" ethics – and what counts as
objectively good behaviour in a given context.</p>
      <p>In the following, we argue that a combination of the second and third option
is the right way forward. While it is crucial that computer science continues
working on the challenge of developing machines that behave ethically, it is
equally important that the legal and ethical classifications we use to analyse such
machines are independently justifiable in legal and ethical terms. For this reason,
the input from computer science should be filtered through an adjudicatory
process, where the role of the computer scientist is to serve as an expert witness,
not to usurp the role of the regulator and the judge. To do this, we need reliable
categories for reasoning about the ability of machines, which keep separate the
question of ability from the question of goodness.</p>
      <p>
        This paper is structured as follows. We begin by discussing the possible
moral agency of autonomous systems. In Section 2, we introduce the best known,
and to our knowledge only, classification of moral agency for non-human agents,
proposed by Moor in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In Section 3, we then discuss the shortcomings of this
classification and the relevance of considering autonomy together with moral
ability when considering machine ethics. In Section 4, we discuss existing levels
of autonomy for agents and machines, before pointing out some shortcomings
and proposing an improved autonomy scale. In Section 5, we go back to Moor's
classification and outline ways in which it can be made more precise. Related
work is discussed continuously throughout the paper.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Levels of ethical agency</title>
      <p>
        The origin of the idea that different kinds of ethical behaviour can be expected
from different agents can be traced back to Moor [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Moor distinguishes four
different types of ethical agents: ethical impact agents, implicit ethical agents,
explicit ethical agents, and full ethical agents. We briefly give the definitions of
these categories.
      </p>
      <p>
        Ethical impact agents are agents that do not themselves have, within their
operational parameters, the ability to commit unethical actions. However, the
existence of the agents themselves in their environment has an ethical impact on
society. There are many examples of ethical impact agents. A search engine
can be seen as an ethical impact agent: by ranking the search results for a query,
it can promote one world view over another. The example that Moor gives in
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] is that of a robot camel jockey that replaced slave children in this task, thus
ameliorating, if not removing, the practice of slavery for this purpose in the
United Arab Emirates and Qatar.
      </p>
      <p>
        Implicit ethical agents are agents whose actions are constrained, by their
developer, in such a way that no unethical actions are possible. The agents
themselves have no "understanding", under any interpretation of the concept, of
what is "good" or "bad". An example of an implicit ethical agent is an unmanned
vehicle paired with Arkin's ethical governor [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Another example of an implicit
ethical agent can be found in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. These examples have constraints that remove
unethical or less ethical actions from the pool of actions the agents can take.
A much simpler example that can also be considered an implicit ethical agent
is a robotic floor cleaner or a lawn mower, which have their capability to hurt
living beings removed altogether by design – they do not have the power to
inadvertently harm humans, animals or property in a significant way.
      </p>
      <p>
        Explicit ethical agents are those that are explicitly programmed to discern
between "ethical" and "unethical" decisions. Both bottom-up and top-down
approaches [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] can be used to develop explicit ethical agents. Under a bottom-up
approach, the agents themselves would have "learned" to classify ethical decisions
using some heuristic, as in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Under a top-down approach, the agent would be
given a subroutine that calculates this decision property, as in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        Lastly, full ethical agents are agents that can make explicit ethical judgements
and can reasonably justify them. Humans are considered to be the only known
full ethical agents, and it has been argued that artificial agents can never be
full ethical agents [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. This is because full ethical agency requires not only
ethical reasoning, but also an array of abilities we do not fully understand, with
consciousness, intentionality and the ability to be held personally responsible (in
an ethical sense) being among those most frequently mentioned in this role.
      </p>
      <p>
        To the best of our knowledge, apart from the work of [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], no other effort at
categorising agents with respect to their ethical decision-making abilities exists.
However, Moor's classification is problematic, as we will now show.
      </p>
    </sec>
    <sec id="sec-3">
<title>Problems with Moor's classification</title>
      <p>First, it should be noted that the classification is based on looking at the internal
logic of the machine. The distinctions described above are all defined in terms of
how the machines reason, not in terms of how they behave. This is a
challenging approach to defining ethical competency, since it requires us to anchor our
judgements in an analysis of the software code and hardware that generates the
behaviour of the machine.</p>
      <p>While understanding the code of complex software systems can be difficult,
what makes this really tricky is that we need to relate our understanding of
the code to an understanding of ethical concepts. For instance, to determine
whether a search engine with a filter blocking unethical content is an implicit
ethical agent or an explicit ethical agent, we need to know how the content
filter is implemented. Is it accurate to say that the system has been "explicitly
programmed to discern" between ethical and unethical content? Or should the
filter be described as a constraint on behaviour imposed by the developer?</p>
      <p>If all we know is that the search engine has a feature that is supposed to
block unethical search results, we could not hope to answer this question by
simply testing the search engine. We would have to "peek inside" to see what
kind of logic is used to filter away unwanted content. Assuming we have access
to this logic, how do we interpret it for the purposes of ethical classification?</p>
      <p>If the search engine filters content by checking results against a database of
"forbidden" sites, we would probably be inclined to say that it is not explicitly
ethical. But what if the filter maintains a dynamic list of attributes that
characterise forbidden sites, blocking all sites that contain a sufficient number of the
same attributes? Would such a rudimentary mechanism constitute evidence that
the system has ethical reasoning capabilities? The computer scientist alone
cannot provide a conclusive answer, since the answer will depend on what exactly
we mean by explicit ethical reasoning in this particular context.</p>
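      <p>As a concrete, hypothetical illustration of the mechanism just described, the following Python sketch (with attribute names and a threshold of our own choosing) blocks a site when it shares at least a threshold number of attributes with sites previously judged forbidden. Whether such a mechanism "discerns" between ethical and unethical content is precisely the interpretive question at issue.</p>
      <preformat>
# Hypothetical attribute-based content filter (illustrative names only).
forbidden_attributes = set()

def learn_forbidden(site_attributes):
    """Record the attributes of a site that was judged forbidden."""
    forbidden_attributes.update(site_attributes)

def is_blocked(site_attributes, threshold=3):
    """Block a site that shares enough attributes with forbidden sites."""
    shared = site_attributes.intersection(forbidden_attributes)
    return len(shared) >= threshold

learn_forbidden({"gore", "shock-imagery", "no-age-check"})
print(is_blocked({"gore", "shock-imagery", "no-age-check", "forum"}))  # True
print(is_blocked({"forum", "news"}))                                   # False
      </preformat>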
      <p>A preliminary problem that must be resolved before we can say much more
about this issue arises from the inherent ambiguity of the term "ethical". There
are many different ethical theories, according to which ethical reasoning takes
different forms. If we are utilitarian, we might think that utility maximising is
a form of ethical reasoning. If so, many machines would be prima facie capable
of ethical reasoning, in so far as they can be said to maximise utility functions.
However, if we believe in a deontological theory of ethics, we are likely to protest
that ethical reasoning implies deontic logic, so that a machine cannot be an
explicit ethical agent unless it reasons according to (ethical) rules. So which
ethical theory should we choose? If by "ethical reasoning" we mean reasoning in
accordance with a specific ethical theory, we first need to answer this question.</p>
      <p>However, if we try to answer, deeper challenges begin to take shape: what
exactly does ethical reasoning mean under different ethical theories? As long as
ethical theories are not formalised, it will be exceedingly hard to answer this
question in such a way that it warrants the conclusion that an agent is explicitly
ethical. If we take ethical theories seriously, we are soon forced to acknowledge
that the machines we have today, and are likely to have in the near future, are
at most implicit ethical agents. On the other hand, if we decide that we need
to relax the definition of what counts as explicit ethical reasoning, it is unclear
how to do so in a way that maintains a meaningful distinction between explicit
and implicit ethical agents. This is a regulatory problem that should be decided
in a democratically legitimate way.</p>
      <p>
        To illustrate these claims, consider utilitarianism. Obviously, being able to
maximise utilities is neither necessary nor sufficient to qualify as an explicitly
ethical reasoner under philosophical theories of utilitarianism. A calculator can
be said to maximise the utility associated with correct arithmetic – with
wide-reaching practical and ethical consequences – but it hardly engages in ethical
reasoning in any meaningful sense. The same can be said of a machine that is
given a table of numbers associated with possible outcomes and asked to calculate
the course of action that will maximise the utility of the resulting outcome. Even
if the machine is able to do this, it is quite a stretch to say that it is an explicitly
ethical agent in the sense of utilitarianism. To a Mill or a Bentham [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], such a
machine would be regarded as an advanced calculator, not an ethical reasoner.
By contrast, a human agent that is very bad at calculating and always makes
the wrong decision might still be an explicit ethical reasoner, provided that the
agent attempts to apply utilitarian principles to reach conclusions.
      </p>
      <p>
        The important thing to note is that the artificial agent, unlike the human,
cannot be said to control or even understand its own utility function. For this
reason alone, one seems entitled to conclude that as far as actual utilitarianism
is concerned, the artificial agent fails to reason explicitly with ethical principles,
despite its calculating prowess. It is not sufficiently autonomous. From this, we
can already draw a rather pessimistic conclusion: ethical governor approaches,
such as that of Arkin et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], do not meet Moor's criteria for being an explicit
ethical agent with respect to utilitarianism (for other ethical theories, it is even
more obvious that the governor is not explicitly ethical). The ethical governor
is nothing more than a subroutine that makes predictions and maximises
functions. It has no understanding, in any sense of the word, that these functions
happen to encode a form of moral utility.
      </p>
      <p>The same conclusion must be drawn even if the utility function is inferred by
the agent, as long as this inference is not itself based on explicitly ethical
reasoning. Coming up with a utility function is not hard, but to do so in an explicitly
ethical manner is a real challenge. To illustrate the distinction, consider how
animals reason about their environment. Clearly, they are capable of
maximising utilities that they themselves have inferred, e.g., based on the availability of
different food types and potential sexual partners. Some animals can then also
be trained to behave ethically, by exploiting precisely their ability to infer and
maximise utilities. Even so, a Mill or a Bentham would no doubt deny that
animals are capable of explicit ethical reasoning based on utilitarianism.<sup>4</sup></p>
      <p>
        From this observation follows another pessimistic conclusion: the agents
designed by Anderson et al. [1-3], while capable of inferring ethical principles
inductively, are still not explicitly ethical according to utilitarianism (or any
other substantive ethical theory). In order for these agents to fulfil Moor's
criteria with respect to mainstream utilitarianism, inductive programming as such
would have to be explicitly ethical, which is absurd.
      </p>
      <p>
        <sup>4</sup> Someone like Kant [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] might perhaps have said it, but then as an insult, purporting
to show that utilitarianism is no theory of ethics at all.
      </p>
      <p>These two examples indicate what we believe to be the general picture: if
we look to actual ethical theories when trying to apply Moor's criteria, explicit
ethical agents will be in very short supply. Indeed, taking ethics as our point
of departure would force us to argue extensively about whether there can be a
distinction at all between explicit and full ethical agents. Most ethicists would
not be so easily convinced. But if the possibility of a distinction is not clear as a
matter of principle, how are we supposed to apply Moor's definition in practice?</p>
      <p>A possible answer is to stop looking for a specific theory of ethics that
"corresponds" in some way to the reasoning of the artificial agent we are analysing.
Instead, we may ask the much more general question: is the agent behaving in a
manner consistent with moral reasoning? Now the question is not to determine
whether a given agent is able to reason as a utilitarian or a virtue ethicist, but
whether the agent satisfies minimal properties we would expect any moral
reasoner to satisfy, irrespective of the moral theory they follow (if any). Something
like this is also what we mean by ability in law and ethics: after all, you do not
have to be utilitarian to be condemned by one.</p>
      <p>However, Moor's classification remains problematic under this interpretation,
since it is then too vague about what is required of the agent. If looking to
specific ethical theories is not a way of filling in the blanks, we need to be
more precise about what the different categories entail. We return to this in
Section 5, where we offer a preliminary formalisation of constraints associated
with Moor's categories. First, we consider the question of measuring autonomy
in some further depth.</p>
    </sec>
    <sec id="sec-4">
      <title>Levels of autonomy</title>
      <p>
        Before moving on to refining the Moor scale of ethical agency, we first discuss
the issue of autonomy. With respect to defining autonomy, there is somewhat
more work available. The UK Royal Academy of Engineering defines [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] four
categories of autonomous systems with respect to what kind of user input the
system needs to operate and how much control the user has over the system.
The following are their different grades of control:
– Controlled systems are systems in which the user has full or partial control
of the operations of the system. An example of such a system is an ordinary
car.
– Supervised systems are systems for which an operator specifies an operation
which is then executed by the system without the operator's perpetual
control. Examples of such systems are a programmed lathe, industrial
machinery, or a household washing machine.
– Automatic systems are those that are able to carry out fixed functions from
start to finish perpetually, without any intervention from the user or an
operator. An example of such a system is an elevator or an automated train.
– An autonomous system is one that is adaptive to its environment, can learn,
and can make `decisions'. An example of such a system is perhaps NASA's
Mars rover Curiosity.
      </p>
      <p>
        The report [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] treats all four categories as continuous, in the sense that autonomous
systems can fall in between the described categories. The precise relationship
between a category of autonomy and the distribution of liability, or the expectation
of ethical behaviour, is not established in the report.
      </p>
      <p>
        SAE International<sup>5</sup> focuses on autonomous
road vehicles in particular and outlines six levels of autonomy for this type
of system [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The purpose of this taxonomy is to serve as general guidelines
for identifying the level of technological advancement of a vehicle, which can
then be used to identify the correct insurance policy for the vehicle, or to settle
other legal issues, including liability in accidents.
      </p>
      <p>
        The six categories of land vehicles with respect to autonomy are:
– Level 0 is the category of vehicles in which the human driver perpetually
controls all the operations of the vehicle.
– Level 1 is the category of vehicles where some specific functions, like steering
or accelerating, can be done without the supervision of the driver.
– Level 2 is the category of vehicles where the "driver is disengaged from
physically operating the vehicle by having his or her hands off the steering
wheel and foot off pedal at the same time", according to [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The driver
has a responsibility to take control back from the vehicle if needed.
– Level 3 is the category of vehicles where drivers are still necessary to be in
a position to control the vehicle, but are able to completely shift "safety-critical
functions" to the vehicle, under certain traffic or environmental conditions.
– Level 4 is the category of vehicles that are what we mean when we say "fully
autonomous". These vehicles, within predetermined driving conditions, are
"designed to perform all safety-critical driving functions and monitor
roadway conditions for an entire trip" [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
– Level 5 is the category of vehicles that are fully autonomous systems
performing on par with a human driver, including being able to handle all
driving scenarios.
      </p>
      <p>
        There is a clear correlation between the categories outlined in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and those
outlined in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Level 0 corresponds to controlled systems. Level 1 corresponds to
supervised systems, and Levels 2 and 3 refine the category of automatic systems.
Level 4 corresponds to the category of autonomous systems. Level 5 does not have
a corresponding category in the scale of [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], since the latter does not consider the
possibility of systems whose autonomy is comparable to that of humans. An
additional reason is perhaps that it is not straightforward to define Level 5 for
systems that are not limited to vehicles.
<sup>5</sup> http://www.sae.org/
      </p>
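      <p>This correspondence can be summarised in a small lookup table; the following Python snippet is just our reading of the two scales, not an official mapping.</p>
      <preformat>
# Our reading of how the SAE levels [17] line up with the categories of [16].
sae_to_rae = {
    0: "controlled",
    1: "supervised",
    2: "automatic",
    3: "automatic",   # Levels 2 and 3 refine the automatic category
    4: "autonomous",
    5: None,          # no counterpart: human-comparable autonomy not considered
}
print(sae_to_rae[3])  # automatic
      </preformat>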
      <p>Interestingly, both scales define degrees of autonomy based on what functions
the autonomous system is able to perform and how the system is meant to be
used. There is hardly any reference to the intrinsic properties of the system and
its agency. All that matters is its behaviour. This is in marked contrast with
Moor's definition. It also means that we face different kinds of problems when
trying to be more precise about what the definitions actually say.</p>
      <p>For instance, it is beside the point to complain that the definition of a "fully
autonomous car" provided by SAE International is incorrect
because it does not match how philosophers define autonomy.
The definition makes no appeal to any ethical or philosophical concept; unlike
Moor's definition, it is clear from the very start that we are talking about a
notion of autonomy that is simply different from that found in philosophy and
social science.<sup>6</sup></p>
      <p>
        This does not mean that the definition is without (philosophical) problems.
For one, we notice that the notion of a Level 5 autonomous car is defined using a
special case of the Turing test [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]: if the car behaves "on par" with a human, it
is to be regarded as Level 5. So what about a car that behaves much better than
any human, and noticeably so? Is it Level 4 only? Consider, furthermore, a car
that is remote controlled by someone not in the car. Is it Level 5? Probably not.
But what about a car that works by copying the behaviour of (human) model
drivers that have driven on the same road, with the same intended destination,
under comparable driving conditions? Could it be Level 5 in principle? Would it
be Level 4 in practice, if we applied the definition to a specially built road where
only autonomous cars are allowed to drive?
      </p>
      <p>Or consider a car with a complex machine learning algorithm, capable of
passing the general Turing test when engaging the driver in conversation. Assume
that the car is still rather bad at driving, so the human has to be prepared to
take back control at any time. Is this car only Level 2? If it crashes, should we
treat it as any other Level 2 car?</p>
      <p>As these problems indicate, it is not obvious what ethical and legal
implications – if any – we can derive from the fact that a car possesses a certain level
of autonomy, according to the scale above. Even with a Level 5 car, the
philosophers would be entitled to complain that we still cannot draw any important
conclusions; it does not automatically follow, for instance, that the machine has
human-level understanding or intelligence. Would it really help us to know that
some artificial driver performs "on par" with a human? It certainly does not
follow that we would let this agent drive us around town.</p>
      <p>
        Indeed, imagine a situation where autonomous cars cause as many traffic
deaths every year as humans do today. The public would find it intolerable;
they would feel entitled to expect more from a driver manufactured by a large
corporation than from an imperfect human like themselves [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Moreover, the legal
framework is not ready for cars that kill people; before "fully autonomous" cars can
ever become commercially viable, new regulation needs to be put in place. To
do so, we need a definition of autonomy that is related to a corresponding notion
of ethical competency.
      </p>
      <p>
        <sup>6</sup> Philosophers and social scientists who still complain should be told not to hog the
English language!
      </p>
      <p>It seems that autonomy – as used in the classification systems above – is not
a property of machines as such, but of their behaviour. Hence, a scale based on
what we can or should be able to predict about machine behaviour could be a
good place to start when attempting to improve on the classifications provided
above. While the following categorisation might not be very useful on its own,
we believe it has considerable potential when combined with a better developed
scale of ethical competency. Specifically, it seems useful to pinpoint where a
morally salient decision belongs on the following scale.</p>
      <p>– Dependence or level 1 autonomy: The behaviour of the system was predicted
by someone with a capacity to intervene.
– Proxy or level 2 autonomy: The behaviour of the system should have been
predicted by someone with a capacity to intervene.
– Representation or level 3 autonomy: The behaviour of the system could have
been predicted by someone with a capacity to intervene.
– Legal personality or level 4 autonomy: The behaviour of the system cannot
be explained only in terms of the system's design and environment. These are
systems whose behaviour could not have been predicted by anyone with a
capacity to intervene.
– Legal immunity or level -1: The behaviour of the system counts as evidence of
a defect. Namely, the behaviour of the system could not have been predicted
by the system itself, or the machine did not have a capacity to intervene.</p>
      <p>To put this scale to use, imagine that we have determined the level of ethical
competency of the machine as such, namely its ability in principle to reason with
a moral theory. This alone is not a good guide when attempting to classify a
given behaviour B (represented, perhaps, as a choice sequence). As illustrated by
the conversational car discussed above, a machine with excellent moral reasoning
capabilities might still behave according to some hard-coded constraint in a given
situation. Hence, when judging B, we should also look at the degree of autonomy
displayed by B, which we would address using the scale above. The overall
classification of behaviour B could then take the form min{i, j}, where i and j
are the degree of ethical competence and the degree of autonomy, respectively. To
be sure, more subtle proposals might be needed, but as a first pass at a joint
classification scheme we believe this to be a good start; a minimal sketch follows
below. In the next section, we return to the problem of refining Moor's
classification, by providing a preliminary formalisation of some constraints that
could help clarify the meaning of the different levels.</p>
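      <p>The following is a minimal sketch of this first-pass scheme, assuming integer encodings of both scales (the enum and function names below are our own illustrative choices):</p>
      <preformat>
from enum import IntEnum

class Autonomy(IntEnum):
    """Degree of autonomy displayed by a behaviour, per the scale above."""
    IMMUNITY = -1        # behaviour counts as evidence of a defect
    DEPENDENCE = 1       # behaviour was predicted by someone able to intervene
    PROXY = 2            # behaviour should have been predicted
    REPRESENTATION = 3   # behaviour could have been predicted
    PERSONALITY = 4      # behaviour could not have been predicted by anyone

def classify_behaviour(ethical_competency, autonomy):
    """First pass at a joint classification: min of the two degrees."""
    return min(ethical_competency, autonomy)

# A machine with excellent moral reasoning (competency 3, say) acting under
# a hard-coded constraint (level 1 autonomy) is classified at level 1.
print(int(classify_behaviour(3, Autonomy.DEPENDENCE)))  # 1
      </preformat>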
    </sec>
    <sec id="sec-5">
      <title>Formalising Moor</title>
      <p>We will focus on formalising constraints that address the weakest aspect of
Moor's own informal description, namely the ambiguity of what counts as a
moral theory when we study machine behaviour. When is the autonomous
behaviour of the machine influenced by genuinely moral considerations? To
distinguish between implicitly and explicitly ethical machines, we need some way of
answering this question.</p>
      <p>The naive approach is to simply take the developer's word for it: if some
complex piece of software is labelled as an "ethical inference engine" or the
like, we conclude that the agent implementing this software is at least explicitly
ethical. For obvious reasons, this approach is too naive: we need some way of
independently verifying whether a given agent is able to behave autonomously in
accordance with a moral theory. At the same time, it seems prudent to remain
agnostic about which moral theory agents should apply in order to count as
ethical: we would not like a classification system that requires us to settle ancient
debates in ethics before we can put it to practical use. In fact, for many purposes,
we will not even need to know which moral theory guides the behaviour of the
machine: for the question of liability, for instance, it might well suffice to know
whether some agent engages autonomously in reasoning that should be classified
as moral reasoning.</p>
      <p>A machine behaving according to a highly flawed moral theory – or even an
immoral theory – should still count as explicitly ethical, provided we are justified
in saying that the machine engages in genuinely moral considerations. Moreover,
if the agent's reasoning can be so described, it might cast the liability question
in a new light: depending on the level of autonomy involved, the blame might
reside either with the company responsible for the ethical reasoning component
(as opposed to, say, the manufacturer) or – possibly – the agent itself. In practice,
both intellectual property protection and technological opacity might prevent us
from effectively determining exactly what moral theory the machine applies.
Still, we would like to know if the agent is behaving in a way consistent with the
assumption that it is explicitly ethical.</p>
      <p>Hence, what we need to define more precisely is not the content of any given
moral theory, but the behavioural signature of such theories. By this we mean
those distinguishing features of agent behaviour that we agree to regard as
evidence of the claim that the machine engages in moral reasoning (as opposed
to just having a moral impact, or being prevented by design from doing certain
(im)moral things).</p>
      <p>However, if we evaluate only the behaviour of the machine, without
asking how the machine came to behave in a certain way, it seems clear that our
decision-making in this regard will remain somewhat arbitrary. If a self-driving
car is programmed to avoid crashing into people whenever possible, without
exception, we should not conclude that the car engages in moral reasoning
according to which it is right to jeopardise the life of the passenger to save that of
a pedestrian. The car is simply responding in a deterministic fashion to a piece
of code that certainly has a moral impact, but without giving rise to any genuine
moral consideration or calculation on the part of the machine.</p>
      <p>In general, any finite number of behavioural observations can be consistent
with any number of distinct moral theories. Or, to put it differently, an agent
might well appear to behave according to some moral theory, without actually
implementing that moral theory (neither implicitly nor explicitly). Moral
imitation, one might call this, and it is likely to be predominant, especially in the early
phase of machine ethics. At present, most engineering work in this field arguably
tries to make machines appear ethical, without worrying too much about what
moral theory – if any – their programs correspond to (hand-waving references
to "utilitarianism" notwithstanding).</p>
      <p>From a theoretical point of view, it is worth noting that moral imitation could even occur
randomly: just as a bunch of monkeys randomly slamming at typewriters will
eventually compose Shakespearean sonnets, machines might well come to behave
in accordance with some moral theory just by behaving randomly. This is
important, because it highlights how moral imitation can occur even when it is not
intended by design, e.g., because some machine learning algorithm eventually
arrives at an optimisation that coincides with the provisions of virtue ethics. In
such a case, we might still want to deny that the machine is virtuous, but it
would not be obvious how to justify such a denial (the Turing test, in its original
formulation, illustrates the point).</p>
      <p>
        This brings us to the core idea behind our formalisation, which is also closely
connected to an observation made by Dietrich and List [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], according to whom
moral theories are under-determined by what they call "deontic content".
Specifically, several distinct moral theories can provide the same action
recommendations in the same setting, for different reasons. Conversely, therefore, the ability
to provide moral justifications for actions is not sufficient for explicit ethical
competence. Reason-giving, important as it is, should not be regarded as evidence
of genuinely moral decision-making.
      </p>
      <p>
        At this point we should mention the work of [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], where the effects of the
inability to verify the behaviour of an autonomous system whose choices are
determined using a machine learning approach are somewhat mitigated by
having the system provide reasons for its behaviour and eventually be evaluated
against human ethicists using a Moral Turing Test. Arnold and Scheutz [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], on
the other hand, argue against the usefulness of Moral Turing Tests in determining
moral competency in artificial agents.
      </p>
      <p>If the machine has an advanced (or deceptive) rationalisation engine, it might
be able to provide moral "reasons" for most or all of its actions, even though
the reason-giving fails to accurately describe or uniquely explain the behaviour
of the machine. Hence, examining the quality of moral reasons is not sufficient
to determine the ethical competency of a machine. In fact, it seems beside the
point to ask for moral reasons in the first place. What matters is the causal chain
that produces a certain behaviour, not the rationalisations provided afterwards.
If the latter is not a trustworthy guide to the former – which, by deontic
under-determination, it is not – then reasons are no guide to us at all.</p>
      <p>In its place, we propose to focus on two key elements: (1) properties that
action-recommendation functions have to satisfy in order to count as
implementations of moral theories, and (2) the degree of autonomy of the machine when
it makes a decision. The idea is that we need to use (1) and (2) in
combination to classify agents according to Moor's scale. For instance, while a search
engine might be blocking harmful content according to a moral theory, it is not
an explicitly ethical agent if it makes its blocking decisions with an insufficient
degree of autonomy. By contrast, an advanced machine learning algorithm that
is highly autonomous might be nothing more than an ethical impact agent, in
view of the fact that it fails to reason with any action-recommendation function
that qualifies as an implementation of a moral theory.</p>
      <p>In this paper, we will not attempt to formalise what we mean by "autonomy".
The task of doing this is important, but exceedingly difficult. For the time being,
we will make do with the informal classification schemes used by engineering
professionals, who focus on the operation of the machine in question: the more
independent the machine is when it operates normally, the more autonomous it
is said to be. For the purposes of legal (and ethical) reasoning, we believe our
categorisation at the end of the previous section captures the essence of such
an informal and behavioural understanding of autonomy. It might suffice for the
time being.</p>
      <p>When it comes to (1), on the other hand – describing what counts as a moral
theory – we believe a formalisation is in order. To this end, assume given a set A
of possible actions with relations ≈, ≈* ⊆ A × A such that if x ≈_X y then x and
y are regarded as ethically equivalent by X. The idea is that ≈ is the agent's own
perspective (or, in practice, that of its developer) while ≈* is the objective notion
of ethical identity. That is, we let ≈* be a parameter representing a background
theory of ethics. Importantly, we do not believe it is possible to classify agents
unless we assume such a background theory, which is only a parameter to the
computer scientists.</p>
      <p>Furthermore, we assume given predicates G, G* ⊆ A of actions that are
regarded as permissible actions by ≈ (subjective) and ≈* (objective background
theory) respectively. We also define the set C ⊆ A as the set of actions that
count as evidence of a malfunction – if the agent performs x ∈ C it means that
the agent does not work as the manufacturer has promised (the set might be
dynamic – C is whatever we can explain in terms of blaming the manufacturer,
in a given situation).</p>
      <p>We assume that G* satisfies the following properties:</p>
      <p>(a) ∀x ∈ G* : ∀y ∈ A : x ≈* y ⇒ y ∈ G*
(b) C ∩ G* = ∅
(1)</p>
      <p>These properties encode what we expect of an ethical theory at this level of
abstraction: all actions that are equally good as the permitted actions must also
be permitted, and no action that is permitted will count as a defective action
(i.e., the promise of the manufacturer gives rise to an objective moral obligation:
a defective action is by definition not permitted, objectively speaking).</p>
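      <p>For a finite action set, the two properties in Equation 1 can be checked mechanically. The following Python sketch (with illustrative action names of our own) represents the objective equivalence ≈* as a set of pairs:</p>
      <preformat>
# Check Equation 1 for a finite model: A is the action set, equiv_obj the
# objective equivalence (a set of pairs), G_obj the objectively permitted
# actions, and C the defect actions. All names are illustrative.
def satisfies_equation_1(A, equiv_obj, G_obj, C):
    # (a) anything objectively equivalent to a permitted action is permitted
    closed = all(y in G_obj
                 for x in G_obj for y in A if (x, y) in equiv_obj)
    # (b) no permitted action counts as evidence of a defect
    disjoint = not G_obj.intersection(C)
    return closed and disjoint

A = {"warn", "stop", "swerve"}
equiv_obj = {(x, x) for x in A}.union({("warn", "stop"), ("stop", "warn")})
print(satisfies_equation_1(A, equiv_obj, G_obj={"warn", "stop"}, C={"swerve"}))  # True
      </preformat>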
      <p>We can now formalise our expectations of machines at different levels of
Moor's scale, in terms of properties of their decision-making heuristic at a very
high level of abstraction. Instead of focusing on the content of moral theories, or
the term "ethical", we focus on the operative word "discern", which is also used
in the definition of an explicitly ethical agent. Acknowledging that what counts
as an ethical theory is not something we can define precisely, the requirements we
stipulate should instead focus on the ability of the agent to faithfully distinguish
between actions in a manner that reflects moral discernment.</p>
      <p>The expectations we formalise pertain to properties of a decision-making
heuristic over the entire space of possible actions (at a given state). We are
not asking why the machine did this or that, or what it would have done if
the scenario were so and so. Instead, we are asking about the manner in which it
categorises its space of possible options. If no such categorisation can be distilled
from the machine, we assume ≈ = A × A and G = ∅.</p>
      <p>Definition 1. Given any machine M, if M is classified at level Li in Moor's
scale, the following must hold:
– Level L1 (ethical impact agent):
(a) ∅ ⊊ G* ⊊ A
(b) ∀x, y ∈ A : x ≈ y
(c) A = G
– Level L2 (implicit ethical agent):
(a) ∀x, y ∈ G : x ≈* y
(b) A ∖ G = C
(c) G ⊆ G*
– Levels L3 and L4 (explicit and full ethical agents):
(a) ∀x ∈ G : ∀y ∈ A : x ≈* y ⇒ y ∈ G
(b) ∀x ∈ G* : ∀y ∈ A : x ≈ y ⇒ y ∈ G*
(c) (A ∖ G*) ∖ C ≠ ∅</p>
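      <p>Definition 1 can likewise be encoded over finite models. The sketch below is one possible Python reading of the constraints, with the subjective relation ≈ as eq_s, the objective relation ≈* as eq_o, and G, G*, C as G_s, G_o, C:</p>
      <preformat>
# One possible encoding of the constraints of Definition 1 (our reading).
def is_ethical_impact(A, eq_s, eq_o, G_s, G_o, C):
    return (bool(G_o) and G_o != A                            # (a)
            and all((x, y) in eq_s for x in A for y in A)     # (b)
            and G_s == A)                                     # (c)

def is_implicit(A, eq_s, eq_o, G_s, G_o, C):
    return (all((x, y) in eq_o for x in G_s for y in G_s)     # (a)
            and A.difference(G_s) == C                        # (b)
            and G_s.issubset(G_o))                            # (c)

def is_explicit(A, eq_s, eq_o, G_s, G_o, C):
    a = all(y in G_s for x in G_s for y in A if (x, y) in eq_o)
    b = all(y in G_o for x in G_o for y in A if (x, y) in eq_s)
    c = bool(A.difference(G_o).difference(C))
    return a and b and c
      </preformat>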
      <p>Intuitively, the definition says that if M is an ethical impact agent, then
not all of its available actions are permitted and not all of its actions are
forbidden, objectively speaking. The agent must have the potential to make an
ethical impact, not just indirectly by having been built, but also through its own
decision-making. At the same time, ethical impact agents must be completely
indifferent as to the moral properties of the choices they make: all actions must
be subjectively permitted as far as the machine is concerned.</p>
      <p>An implicit ethical agent, by contrast, must pick out as subjectively
permitted a subset of the actions that are objectively permitted. Moreover, it must be
unable to discern explicitly between actions based on their moral qualities: all
subjectively permitted actions must be morally equivalent, objectively speaking.
The agent must not be able to evaluate two morally distinguishable actions and
regard them both as permitted in view of an informative moral theory.
Furthermore, any action that is not permitted must be regarded as evidence of
a defect, i.e., an agent can be regarded as implicitly ethical only if the
manufacturer promises that no unethical action is possible, according to the parameter
theory ≈*.</p>
      <p>Finally, an explicit ethical agent is an agent that discerns between actions on
the basis of their objective moral qualities. By (a), if some action is permitted
then all actions morally equivalent to it are also permitted. Moreover, by (b), if
two actions are morally equivalent, subjectively speaking, then they are either
both permitted or both forbidden, objectively speaking. In addition, the machine
has the ability – physically speaking – to perform actions that are neither good,
objectively speaking, nor evidence of a defect. The machine itself might come to
regard such actions as permitted, e.g., if it starts behaving explicitly immorally.</p>
      <p>Admittedly, the classification above is quite preliminary. However, we believe
it focuses on a key aspect of moral competency, namely the ability to group
together actions based on their status according to some moral theory. If the
moral theory is a parameter and we acknowledge that engaging in immoral
behaviour is a way of discerning between good and bad, it seems we are left
with something like the definition above, which indicates that if a machine is
explicitly ethical with respect to theory ≈* then it must reason in accordance
with the notion of moral equivalence of ≈*.</p>
      <p>It is noteworthy that the distinction between explicit and full ethical agents
is not addressed. This distinction must be drawn by looking at the degree of
autonomy of the machine. More generally, the properties identified in Definition
1 can be used in conjunction with a measure of autonomy, to get a better metric
of moral competency. The point of the properties we give is that they allow us to
rule out that a machine has been able to attain a certain level of moral
competency. The conditions appear necessary, but not sufficient, for the corresponding
level of ethical competency they address. However, if a machine behaves in a
manner that is difficult to predict (indicating a high degree of autonomy), yet
still conforms to the conditions of explicit ethical reasoning detailed in Definition
1, it seems we have a much better basis for imputing moral and legal
responsibility to this agent than if either of the two characteristics were missing. We conclude
with the following simple proposition, which shows that our definition suffices
to determine an exclusive hierarchy of properties that a machine might satisfy.</p>
      <sec id="sec-5-1">
        <title>Proposition 1. Given any machine M, we have the following.</title>
      </sec>
      <sec id="sec-5-2">
        <title>1. If M is implicitly ethical then M is not an ethical impact agent. 2. If M is explicitly ethical then M is neither an implicit ethical agent nor an ethical impact agent.</title>
        <p>Proof. (1) Assume M is implicitly ethical and assume towards contradiction
that it is also an ethical impact agent. Then ∅ ⊊ G* ⊊ A and A = G. Pick
x ∈ G* and y ∈ A ∖ G*; since A = G, both x and y belong to G. Since M is
implicitly ethical, ∀x, y ∈ G : x ≈* y, so x ≈* y. But then, by Equation 1 (a),
y ∈ G*, a contradiction. (2) Assume that M is explicitly ethical. We first show
(I) that M is not an ethical impact agent. Assume that M satisfies conditions
(a) and (b) for ethical impact agents, i.e., ∅ ⊊ G* ⊊ A and ∀x, y ∈ A : x ≈ y.
This contradicts that M is explicitly ethical, since by point (b) for explicitly
ethical agents we now obtain G* ≠ ∅ ⇒ G* = A. (II) M is not an implicit
ethical agent. Assume towards contradiction that it is. Then A ∖ G = C, while
by point (c) for explicitly ethical agents C ≠ A ∖ G*, so we know G ≠ G*. Since
G ⊆ G*, there is some z ∈ G* ∖ G ⊆ C, so G* ∩ C ≠ ∅, contradicting
Equation 1.</p>
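        <p>Proposition 1 can also be checked empirically on small finite models. The following self-contained Python sketch (repeating our encodings of Equation 1 and Definition 1 from the sketches above) samples random models and asserts the two exclusions; it is a sanity check, not a proof:</p>
        <preformat>
# Randomised sanity check of Proposition 1 on small finite models.
import random

A = frozenset({"a", "b", "c"})

def rnd_subset():
    return frozenset(x for x in A if random.random() > 0.5)

def rnd_equiv():
    labels = {x: random.randint(0, 2) for x in A}   # random partition of A
    return {(x, y) for x in A for y in A if labels[x] == labels[y]}

def eq1(eq_o, G_o, C):
    return (all(y in G_o for x in G_o for y in A if (x, y) in eq_o)
            and not G_o.intersection(C))

def impact(eq_s, eq_o, G_s, G_o, C):
    return (bool(G_o) and G_o != A and G_s == A
            and all((x, y) in eq_s for x in A for y in A))

def implicit(eq_s, eq_o, G_s, G_o, C):
    return (all((x, y) in eq_o for x in G_s for y in G_s)
            and A.difference(G_s) == C and G_s.issubset(G_o))

def explicit(eq_s, eq_o, G_s, G_o, C):
    return (all(y in G_s for x in G_s for y in A if (x, y) in eq_o)
            and all(y in G_o for x in G_o for y in A if (x, y) in eq_s)
            and bool(A.difference(G_o).difference(C)))

for _ in range(20000):
    m = (rnd_equiv(), rnd_equiv(), rnd_subset(), rnd_subset(), rnd_subset())
    if not eq1(m[1], m[3], m[4]):
        continue   # Equation 1 is assumed throughout
    assert not (implicit(*m) and impact(*m))                      # part 1
    assert not (explicit(*m) and (impact(*m) or implicit(*m)))    # part 2
print("no counterexamples found")
        </preformat>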
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusions</title>
      <p>Autonomy, agency, and ethics are what Marvin Minsky called "suitcase words" – they
are loaded with meanings, both intuitive and formal. The argument over whether
an artificial system can ever be an agent, or autonomous, or moral is somewhat
overshadowed by the need to establish parameters of acceptable behaviour for
such systems as they are being integrated into our society. Hence, it also seems clear
that the question of moral agency and liability is not – in practice, at least – a
black and white issue. Specifically, we need new categories for reasoning about
machines that behave ethically, regardless of whether or not we are prepared to
regard them as moral or legal persons in their own right.</p>
      <p>In developing such categories, we can take inspiration from the law, where
liability depends on the ability and properties of the agent to "understand"
that liability. Working with machines, the meaning of "understanding" must
by necessity be a largely behavioural one. Hence, in this paper we have been
concerned with the question of measuring the ethical capability of an agent,
and how this relates to its degree of autonomy. To tackle this question we need
to refine our understanding of what ethical behaviour and autonomy are in a
gradient sense. This is what we focused on here.</p>
      <p>We discussed Moor's classification of the ethical behaviour and impact of
artificial agents. This classification is, as far as we are aware, the only attempt to
consider artificial agent morality as a gradient of behaviour rather than simply as a
comparison with human abilities. We further considered the issue of autonomy and
discussed existing classifications of artificial agent and system abilities for autonomous
behaviour. Here too we proposed a more specific classification of abilities.</p>
      <p>In our future work we intend to further refine the classification of autonomy
to include context dependence. Having accomplished these two tasks, we can
then focus on building a recommendation for determining the scope of ethical
behaviour that can and should be expected from a system with known autonomy.
Our recommendations can be used to establish the liability of artificial agents for
their activities, but can also help drive the certification process for such systems
towards their safe integration in society.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>M. Anderson</surname>
            and
            <given-names>S. Leigh</given-names>
          </string-name>
          <string-name>
            <surname>Anderson</surname>
          </string-name>
          .
          <article-title>Geneth: A general ethical dilemma analyzer</article-title>
          .
          <source>In Proceedings of the Twenty-Eighth AAAI Conference on Arti cial Intelligence, July 27 -31</source>
          ,
          <year>2014</year>
          ,
          <string-name>
            <given-names>Quebec</given-names>
            <surname>City</surname>
          </string-name>
          , Quebec, Canada., pages
          <volume>253</volume>
          {
          <fpage>261</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>M. Anderson</surname>
            and
            <given-names>S. Leigh</given-names>
          </string-name>
          <string-name>
            <surname>Anderson</surname>
          </string-name>
          .
          <article-title>Toward ensuring ethical behavior from autonomous systems: a case-supported principle-based paradigm</article-title>
          .
          <source>Industrial Robot</source>
          ,
          <volume>42</volume>
          (
          <issue>4</issue>
          ):
          <volume>324</volume>
          {
          <fpage>331</fpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>M. Anderson</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Leigh Anderson</surname>
            , and
            <given-names>Vincent</given-names>
          </string-name>
          <string-name>
            <surname>Berenz</surname>
          </string-name>
          .
          <article-title>Ensuring ethical behavior from autonomous systems</article-title>
          . In Arti cial Intelligence Applied to Assistive Technologies and
          <string-name>
            <given-names>Smart</given-names>
            <surname>Environments</surname>
          </string-name>
          ,
          <source>Papers from the 2016 AAAI Workshop</source>
          , Phoenix, Arizona, USA, February
          <volume>12</volume>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>R.C.</given-names>
            <surname>Arkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ulam</surname>
          </string-name>
          ,
          <article-title>and</article-title>
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Wagner</surname>
          </string-name>
          . Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and
          <string-name>
            <surname>Deception</surname>
          </string-name>
          .
          <source>Proceedings of the IEEE</source>
          ,
          <volume>100</volume>
          (
          <issue>3</issue>
          ):
          <volume>571</volume>
          {
          <fpage>589</fpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>T.</given-names>
            <surname>Arnold</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Scheutz</surname>
          </string-name>
          .
          <article-title>Against the moral turing test: Accountable design and the moral reasoning of autonomous systems</article-title>
          .
          <source>Ethics and Information Technology</source>
          ,
          <volume>18</volume>
          (
          <issue>2</issue>
          ):
          <volume>103</volume>
          {
          <fpage>115</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>National</given-names>
            <surname>Transport Commission Australia</surname>
          </string-name>
          .
          <article-title>Policy paper november 2016 regulatory reforms for automated road vehicles</article-title>
          . https://www.ntc.gov.au/Media/Reports/(32685218-7895
          <string-name>
            <surname>-</surname>
          </string-name>
          0E7C-
          <fpage>ECF6</fpage>
          - 551177684E27).
          <source>pdf.</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J.</given-names>
            <surname>Bryson</surname>
          </string-name>
          and
          <string-name>
            <surname>A. F. T.</surname>
          </string-name>
          <article-title>Win eld. Standardizing ethical design for arti cial intelligence and autonomous systems</article-title>
          .
          <source>Computer</source>
          ,
          <volume>50</volume>
          (
          <issue>5</issue>
          ):
          <volume>116</volume>
          {
          <fpage>119</fpage>
          , May
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Dennis</surname>
          </string-name>
          , M. Fisher,
          <string-name>
            <given-names>M.</given-names>
            <surname>Slavkovik</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Webster</surname>
          </string-name>
          .
          <source>Formal Veri cation of Ethical Choices in Autonomous Systems. Robotics and Autonomous Systems</source>
          ,
          <volume>77</volume>
          :1{
          <fpage>14</fpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>F.</given-names>
            <surname>Dietrich</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>List</surname>
          </string-name>
          .
          <article-title>What matters and how it matters: A choice-theoretic representation of moral theories</article-title>
          .
          <source>Philosophical Review</source>
          ,
          <volume>126</volume>
          (
          <issue>4</issue>
          ):
          <volume>421</volume>
          {
          <fpage>479</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>J.</given-names>
            <surname>Driver</surname>
          </string-name>
          .
          <article-title>The history of utilitarianism</article-title>
          . In Edward N. Z., editor,
          <source>The Stanford Encyclopedia of Philosophy</source>
          . Metaphysics Research Lab, Stanford University, winter
          <year>2014</year>
          edition,
          <year>2014</year>
          . https://plato.stanford.edu/archives/win2014/entries/utilitarianism-history/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>A.</given-names>
            <surname>Etzioni</surname>
          </string-name>
          and
          <string-name>
            <given-names>O.</given-names>
            <surname>Etzioni</surname>
          </string-name>
          .
          <article-title>Incorporating ethics into arti cial intelligence</article-title>
          .
          <source>The Journal of Ethics</source>
          , pages
          <volume>1</volume>
          {
          <fpage>16</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>R.</given-names>
            <surname>Johnson</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Cureton</surname>
          </string-name>
          .
          <article-title>Kant's moral philosophy</article-title>
          . In Edward N. Z., editor,
          <source>The Stanford Encyclopedia of Philosophy</source>
          . Metaphysics Research Lab, Stanford University, fall
          <year>2017</year>
          edition,
          <year>2017</year>
          . https://plato.stanford.edu/archives/fall2017/entries/kant-moral/.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>F.</given-names>
            <surname>Lindner and M. M.</surname>
          </string-name>
          <article-title>Bentzen. The HERA approach to morally competent robots</article-title>
          .
          <source>In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and System</source>
          ,
          <source>IROS '17</source>
          ,
          <string-name>
            <surname>page</surname>
            <given-names>forthcoming</given-names>
          </string-name>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Malle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Scheutz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Arnold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Voiklis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Cusimano</surname>
          </string-name>
          .
          <article-title>Sacri ce one for the good of many?: People apply di erent moral norms to human and robot agents</article-title>
          .
          <source>In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI '15</source>
          , pages
          <fpage>117</fpage>
          {
          <fpage>124</fpage>
          . ACM,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Moor</surname>
          </string-name>
          .
          <article-title>The nature, importance, and di culty of machine ethics</article-title>
          .
          <source>IEEE Intelligent Systems</source>
          ,
          <volume>21</volume>
          (
          <issue>4</issue>
          ):
          <volume>18</volume>
          {
          <fpage>21</fpage>
          ,
          <year>July 2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16. UK Royal Academy of Engineering.
          <source>September</source>
          <year>2016</year>
          ,
          <article-title>autonomous systems: social, legal and ethical issues</article-title>
          . http://www.raeng.org.uk/publications/reports/autonomous-systems
          <source>-report.</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17. Society of Automotive Engineers SAE International.
          <source>September</source>
          <year>2016</year>
          ,
          <article-title>taxonomy and de nitions for terms related to driving automation systems for on-road motor vehicles</article-title>
          . http://standards.sae.
          <source>org/j3016 201609/.</source>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>A. M. Turing</surname>
          </string-name>
          .
          <source>Computers &amp; thought. chapter Computing Machinery and Intelligence</source>
          , pages
          <fpage>11</fpage>
          {
          <fpage>35</fpage>
          . MIT Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>W.</given-names>
            <surname>Wallach</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Allen</surname>
          </string-name>
          .
          <source>Moral Machines: Teaching Robots Right from Wrong</source>
          . Oxford University Press,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20. W. Wallach,
          <string-name>
            <given-names>C.</given-names>
            <surname>Allen</surname>
          </string-name>
          ,
          <string-name>
            <surname>and I. Smit.</surname>
          </string-name>
          <article-title>Machine morality: Bottom-up and top-down approaches for modelling human moral faculties</article-title>
          .
          <source>AI and Society</source>
          ,
          <volume>22</volume>
          (
          <issue>4</issue>
          ):
          <volume>565</volume>
          {
          <fpage>582</fpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>