<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Ethics and Authority Sharing for Autonomous Armed Robots</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Florian</forename><surname>Gros</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">French Aerospace Lab</orgName>
								<address>
									<settlement>Onera, Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Catherine</forename><surname>Tessier</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">French Aerospace Lab</orgName>
								<address>
									<settlement>Onera, Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Thierry</forename><surname>Pichevin</surname></persName>
							<email>thierry.pichevin@st-cyr.terre-net.defense.gouv.fr</email>
							<affiliation key="aff1">
								<orgName type="institution" key="instit1">CREC</orgName>
								<orgName type="institution" key="instit2">Ecoles de Saint-Cyr Coetquidan</orgName>
								<address>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Ethics and Authority Sharing for Autonomous Armed Robots</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7C442F1FCB13BB4D513D04E3D7DEFBE1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:27+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The goal of this paper is to review several ethical questions that are relevant to the use of autonomous armed robots and to authority sharing between such robots and the human operator. First, we discern the commonly confused meanings of morality and ethics. We continue by proposing leads to answer some of the most common ethical questions raised by literature, namely the autonomy, responsibility and moral status of autonomous robots, as well as their ability to reason ethically. We then present the possible advantages that authority sharing with the operator could provide with respect to these questions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>There are many questions and controversies commonly raised by the use of increasingly autonomous robots, especially in military contexts <ref type="bibr" target="#b50">[51]</ref>. In this domain autonomy is can be explored because of the need for reducing the atrocities of war, e.g. loss of human lives, violation of human rights, and for increasing battle performance to avoid unnecessary violence <ref type="bibr" target="#b2">[3]</ref>. Since full autonomy is far from achieved, robots are usually supervised by human operators. This coupling between a human and a robotic agent involves a shared authority on the robot's resources <ref type="bibr" target="#b29">[30]</ref>, allowing for adaptability of the system in complex and dynamic battle contexts. Even with humans in the process, the deployment of autonomous armed robots raises ethical questions such as the responsibility of robots using lethal force incorrectly <ref type="bibr" target="#b46">[47]</ref>, the extent of their autonomous abilities and the related dangers, their ability to comply with a set of moral rules and to reason ethically <ref type="bibr" target="#b43">[44]</ref>, and the status of robots with regard to law due to the ever-increasing autonomy and human resemblance that robots display <ref type="bibr" target="#b27">[28]</ref>.</p><p>In this paper we will highlight the distinction between morality and ethics (section 2). Then several ethical issues raised by the deployment of autonomous armed robots, such as autonomy, responsibility, consciousness and moral status will be discussed (section 3). As another kind of ethical questions, a review of the frameworks used to implement ethical reasoning into autonomous armed robots will be presented afterwards (section 4). 
Finally, we will consider the ethical issues and implementations mentioned earlier in the framework of authority sharing between a robot and a human operator (section 5).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">MORALITY AND ETHICS</head><p>The concepts of morality and ethics are often used in an identical fashion. If we want to talk about ethics for autonomous robots, we</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Morality</head><p>If we ignore meta-ethical debates that aim at defining morality and its theoretical grounds precisely, we can conceive morality as principles of good or bad behaviour, an evaluation of an action in terms of right and wrong <ref type="bibr" target="#b51">[52]</ref>. This evaluation can be considered either absolute or coming from a particular conception of life, a typical moral rule being "Killing is wrong". It is important to note that in this work, we focus on moral action, whether it results from rules, or from intentions of the subject doing the action.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Deontology and teleology</head><p>One of the bases for morality is the human constant need to believe in a meaning of one's actions. In most philosophical debates, this sense pertains to two often opposed categories : teleology and deontology.</p><p>For teleology, the moral action has to be good, the goal being to maximize the good and to minimize the evil produced by the action <ref type="bibr" target="#b32">[33]</ref>. In this case, morality is commonly viewed as external to the agent, because it comes within the scope of a finalized world defining the rules and the possible actions and their goals, therefore defining the evaluation of actions.</p><p>For deontology, the moral action is done by duty, and must comply with rules regardless of the consequences of the action, whether they are foreseen or not, good or bad <ref type="bibr" target="#b33">[34]</ref>. A case by case evaluation is not necessarily relevant here, because it is the humans' responsibility to dictate the rational and universal principles they want to live by.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Ethics</head><p>Ethics appears as soon as a conflict between existing legal or moral rules emerges, or when there is no rule to guide one's actions <ref type="bibr" target="#b35">[36]</ref>. For example, if a soldier has received an order not to hurt any civilian, but to neutralize any armed person, what should he do if he encounters an armed civilian? We can thus consider ethics as the commitment to resolving moral controversies <ref type="bibr" target="#b12">[13]</ref> where the agent, with good will, has to solve the conflicts he is faced with.</p><p>Those conflicts often oppose deontological and teleological principles, namely what has to be privileged between right and good ? The goal of ethics is not to pick one side and stand by it forever, but to be able to keep a balance between right and good when solving complex problems. Solving an ethical conflict then requires, apart from weighing good and evil, a sense of creativity in front of a complex situation and to be able to provide alternative solutions to moral rules imperatives <ref type="bibr" target="#b30">[31]</ref>.</p><p>To provide an illustration of the distinction between morality and ethics, we will consider that any moral conflict needs ethical reasoning abilites to be solved. Speaking of ethical rules would not make sense since ethics apply when rules are absent or in conflict.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">AUTONOMY, RESPONSIBILITY, MORAL STATUS : PROSPECTS FOR ROBOTS</head><p>Technologies leave us presently in an intermediate position where robots can perceive their environment, act and make decisions by themselves, but lack a more complete kind of autonomy or the technological skill to be able to analyze their environment precisely and understand what happens in a given situation. Still research advances urge us to think about how to consider autonomous robots in a moral, legal and intellectual frame, both for the time being and when robots are actually skilled enough to be considered similar to humans. In this section, we will review important questions for autonomous robots i.e. autonomy, responsibility, moral status and see which answers are plausible. Then we will relate these questions to authority sharing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Autonomy</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.1">Kant and the autonomy of will</head><p>When considering autonomy, one of the most influential view in occidental culture is Kant's. For him, human beings bend reality to themselves with their perception and reason, they escape natural or divine laws. Only reason enables humans to create laws that will determine humankind. Then laws cannot depend on external circumstances as reason only can provide indications in order to determine what is right or wrong. Consequently laws have to be created by a good will, i.e. a will imposing rules on itself not to satisfy an interest, but by duty towards other humans. Therefore no purpose can be external to humankind, and laws are meaningful to humans only if they are universal. This leads to the well-known moral "categorical" imperative <ref type="foot" target="#foot_0">3</ref> , that immediatly determines what it orders because it enounces only the idea of an universal law and the necessity for the will to follow it <ref type="bibr" target="#b38">[39]</ref>.</p><p>Humans being the authors of the law they obey, it is possible to consider them as an end, and the will as autonomous. Thus, to be universal, a law has to respect humans as ends in themselves, inducing a change in the categorical imperative. If the law was external to humans, they would not be ends in themselves, but mere instruments used by another entity. Such a statement would deny the human ability to escape divine or natural laws, which is not acceptable for the kantian theory. We can only conceive law as completely universal, respecting humans as ends in themselves. To sum up, the kantian autonomy is the ability for an agent to define his own laws as ways to fulfill his goals and to govern his own actions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.2">Autonomy and robots</head><p>In the case of an Unmanned System, autonomy usually stands for decisional autonomy. It can be defined as the ability for an agent to minimize the need for supervision and to evolve alone in its environment <ref type="bibr" target="#b42">[43]</ref>, or more precisely, its "own ability of sensing, perceiving, analyzing, communicating, planning, decision making, and acting/executing, to achieve its goals as assigned by its human operators" <ref type="bibr" target="#b20">[21]</ref>.</p><p>We can see a difference between those definitions and Kant's. Robot autonomy is perceived differently for robots than for humans, as an autonomy of means, not of end. The reason for this is that robots are not sophisticated enough to be able to define their own goals and to achieve them. Robots are therefore viewed as mere tools whose autonomy is only intended to alleviate the operators' workload.</p><p>Consequently, to be envisioned as really autonomous, robots should be able to determine their own goals once deployed, thus to have will and be ends in themselves. The real question to ask here is if it is really desirable to build such fully autonomous robots, especially if they are to be used on a battlefield. If the objective is solely to display better performance than human soldiers, full autonomy is probably inappropriate, since being able to control robots and their goals from the beginning to the end of their deployment is one of the main reasons for actually using them.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Responsibility</head><p>If we want to use autonomous robots, we have to know to what extent a subject is considered responsible for his actions. It is especially important when applied to armed robots, since they can be involved in accidents where lives are at stake.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1">Philosophical approaches to responsibility</head><p>Classically responsibility has been considered from a broad variety of angles, whether being a relationship to every other human being in order to achieve a goal of salvation given by a divine entity (Augustine of Hippo), a logic consequence of the application of the categorical imperative (Kant), a duty towards the whole humanity as the only way to give a sense, a determination to one's actions and to define oneself in the common human condition (Sartre, <ref type="bibr" target="#b41">[42]</ref>), or an obligation to maintain human life on Earth as long as possible by one's actions <ref type="bibr">(Jonas,</ref><ref type="bibr" target="#b21">[22]</ref>).</p><p>The problem with those approaches is that they are thought for humans and consequently they require, more or less, an autonomy of end. As discussed above, this is not a direct possibility for robots. We then need to envision robot responsibility in their own "area" of autonomy, namely an autonomy of means, where the actions are not performed by humans. To discuss this problem, it is necessary to distinguish two types of responsibility : causal responsibility and moral responsibility.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2">Causal responsibility vs. moral responsibility</head><p>By moral responsibility, we mean the ability, for a conscious and willing agent, to make a decision without referring to a higher authority, to give the purposes of his actions, and to be judged by these purposes. To sum up, the agent has to possess a high-level intentionality <ref type="bibr" target="#b11">[12]</ref>. This moral responsibility is not to be confused with causal responsibility, which establishes the share of a subject (or an object) in a causal chain of events. The former is the responsibility of a soldier who willingly shot an innocent person, the latter is the responsibility of a malfunctioning toaster that started a fire in a house.</p><p>Every robot has some kind of causal responsibility. Still, trying to determine the causal responsibility of a robot (or of any agent) for a given event is way too complex because it requires to analyze every action the robot did that could have led to this event. What we are really interested in is to define what would endow robots with a moral responsibility for their actions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.3">Reduced responsibility, a solution ?</head><p>Some approaches that are currently considered for the responsibility of autonomous robots are based on their status of "tools", not of autonomous agents. Thus, their share of responsibility is reduced or transferred to another agent.</p><p>The first approach is to consider robots as any product manufactured and designed by an industry. In case of a failure, the responsibility of the industry (as a moral person) is substituted to the responsibility of the robot. The relevant legal term here is negligence <ref type="bibr" target="#b23">[24]</ref>. It implies that manufacturers and designers have failed to do what was legally or morally required, thus can be held accountable of the damage caused by their product. The downside of this approach is that it can lean towards a causal responsibility which -as said earlier -is more difficult to assess than a moral responsibility. Besides, developing a robot that is sure enough to be used on a battlefield would demand too much time for it to represent a good business, and it wouldn't even be enough to be safely used, a margin of error still existing no matter how sophisticated a robot is.</p><p>Another approach then would be to apply the slave morality to autonomous robots <ref type="bibr" target="#b23">[24]</ref>  <ref type="bibr" target="#b27">[28]</ref>. A slave, by itself, is not considered responsible for his actions, but his master is. At a legal level, it is considered as vicarious liability, illustrated by the well-known maxim Qui facit per alium facit per se 4 . If we want to apply this to autonomous armed robots, their responsibility would be substituted to their nearest master, namely the closest person in the chain of command who decided and authorized the deployment of the robots. 
This way, a precise person takes responsibility for the robots actions, which spares investigations through the chain of command to assess causal responsibilities.</p><p>Finally, if we consider an autonomous robot to be able to comply with some moral rules, to reason as well as to act, it is possible to envision the robot as possessing, not moral responsibility, but moral intelligence <ref type="bibr" target="#b4">[5]</ref>. The robotic agent is then considered to be able to adhere to an ethical system. Therefore there is a particular morality within the robot that is specific to the task it is designed for.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.4">Other leads for a moral responsibility</head><p>No robot has been meeting the necessary requirements for moral responsibility, and no law has been specifically written for robots. The question is then to determine what is necessary for robots to achieve moral responsibility and what to do when they break laws.</p><p>For <ref type="bibr" target="#b18">[19]</ref> and <ref type="bibr" target="#b0">[1]</ref>, the key to moral responsibility is the access to a moral status. Besides an emotional system, this requires the ability of rational deliberation, allowing oneself to know what one is doing, to be conscious of one's actions in addition to make decisions. Severals leads for robots to access to a moral status are detailed in the next section.</p><p>As far as responsibility is concerned, a commonly used argument is that robots cannot achieve moral responsibility because they cannot suffer, and therefore cannot be punished <ref type="bibr" target="#b46">[47]</ref>. Still, if we consider punishment for what it is, i.e. a convenient way to change (or to compensate for) a behaviour deemed undesirable or unlawful, we can agree that it is not the sine qua non requirement for responsibility. There are other ways to change one's behaviour, one of the most known examples being treatment, i.e. spotting the "component" that produces the unwanted behaviour and tweak it or replace it to correct the problem <ref type="bibr" target="#b27">[28]</ref>. Beating one's own car because of a malfunction 4 "He who acts through another does the act himself." would be absurd, in this case it is more fitting to replace the malfunctioning component. The same applies with certain types of law infringement (leading to psychological treatment or therapy), so it could apply to robots as well, e.g. by changing the program of the defective vehicle. 
Waiting for technology to progress to finally being able to punish robots so that they could have moral responsibility is not a desirable solution, but using vicarious liability, treatment and moral status appears to be a sound basis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Consciousness and moral status for autonomous robots</head><p>We have said earlier that for a robot to be considered responsible for its actions, it must be attributed a moral status, so it needs consciousness <ref type="bibr" target="#b18">[19]</ref>. The purpose of this section is to see how this can be achieved and how moral status can be applicable to robots in order to help them to have moral responsibility.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.1">Consciousness</head><p>Since there is an abundant literature on the topic of consciousness, and still no real consensus among the scientific community on how define consciousness, the purpose of this section is not to give an exhaustive nor accurate definition of consciousness, but merely to see what seems relevant to robots. However, if we want to use consciousness, we can consider it as described by <ref type="bibr" target="#b31">[32]</ref>, namely the ability to know what it is like to have such or such mental state from one's own perspective, to subjectively experience one's own environment and internal states.</p><p>The first approach for robots consciousness is the theory of mind <ref type="bibr" target="#b37">[38]</ref>  <ref type="bibr" target="#b5">[6]</ref>. It is based on the assumption that humans tend to grant intentionnality to any being displaying enough similarities of action with them (emotions ou functional use of language). It is then possible for humans, by analogy with their experience of their own consciousness, to assume that those beings have a consciousness as well. This approach is already developing with conversational agents or robots mimicking emotions, even if it can be viewed as a trick of human reasoning more than an "absolutely true" model of consciousness.</p><p>The second approach considers consciousness as a purely biological phenomenon, and has gained influence with the numerous discoveries of neurosciences. Even if we do not know what really explains consciousness (see the Hard problem of consciousness <ref type="bibr" target="#b8">[9]</ref>), considering it as a property of the brain may allow conscious robots to be developed, as did <ref type="bibr" target="#b54">[55]</ref>  <ref type="bibr" target="#b53">[54]</ref> by recreating a brain from collected brain cells. 
There is still a lot of work to do here, as well as many ethical questions to answer, but it definitely looks promising. Indeed, if a being, even with a robotic body, has a brain that is similar to a human's, in a materialist perspective, this being is conscious.</p><p>The last approach is the one proposed by <ref type="bibr" target="#b24">[25]</ref> [26] to build selfaware robots that can explore their own physical capacities to find their own model and to determine their own way to move accordingly. Those robots are probably the closest ones to consciousness as defined by <ref type="bibr" target="#b31">[32]</ref>. They are still far from being used on a battlefield, but this method of self-modelling could be applied to more "evolved" robots for ethical decision-making. This way a robot could explore its own capacities for action and could build an ethical model of itself.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.2">Moral status</head><p>An individual is granted moral status if it has to be treated never as a means, but only as an end, as prescribed by Kant's categorical imperative. To define this moral status, two criteria are commonly used <ref type="bibr" target="#b6">[7]</ref>, namely sentience (or qualia, the ability to experience reality as a subject) and sapience (a set of abilities associated with high-level intelligence). Still, none of those attributes have been successfully implemented in robots. Even though it could be counter-productive to integrate qualia to robots in some situations (e.g. coding fear into an armed robot), it can be interesting to model some of them into robots, like <ref type="bibr" target="#b3">[4]</ref> did for moral emotions like guilt. This could provide a solid ground for access of robots to moral status. <ref type="bibr" target="#b6">[7]</ref> have proposed two principles stating that two different agents can have the same moral status if they possess enough similarities : if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation (Principle of Substrate Non-Discrimination) or on how they came to existence (Principle of Ontogeny Non-Discrimination), then they have the same moral status.</p><p>Put simply, those principles are pretty similar to what the theory of mind proposes, that is if robots can exhibit the same functions as human's, then they can be considered as having a moral status, no matter what their body is made of (silicon, flesh, etc.) or how they matured (through gestation or coding). 
Still, proving that robots can have the same conscious experience as humans is currently impossible, so we can consider a more applicable version of those principles: <ref type="bibr" target="#b48">[49]</ref> proposes that robots have moral agency if they are responsible with respect to another moral agent, if they possess a relative level of autonomy and if they can show intentional behaviour. This definition is vague but is grounded on the fact that moral status is attributed. What matters is that the robot is advanced enough to be similar to humans, but it does not have to be identical.</p><p>Another solution for autonomous robots with a moral status is to create a sort of Turing Test comparing the respective "value" of a human life with the existence of a robot. This is called by <ref type="bibr" target="#b45">[46]</ref> the Triage Turing Test and shows that robots will have the same moral status as humans when it is at least as wrong to "kill" a robot as to kill a human. Advanced reflections on this topic can be found in <ref type="bibr" target="#b47">[48]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">IMPLEMENTING ETHICAL REASONING INTO AUTONOMOUS ARMED ROBOTS</head><p>Another question related to autonomous armed robots is how those robots can solve ethical problems on the battlefield and make the most ethically satisfying decision. In this section, we will briefly review several frameworks to integrate ethical reasoning into robots. Three kinds of approaches are considered:</p><p>• Top-down : these approaches take a particular ethical theory and create algorithms for the robot, allowing it to follow the aforesaid theory. This is convenient to implement, e.g. a deontological morality into a robot. • Bottom-up : the goal is to create an environment wherein the robot can explore different courses of action, with rewards to make it lean towards morally satisfying actions. Those approaches focus on the autonomous robot learning its own ethical reasoning abilities. • Hybrid : these approaches look for a merge between top-down and bottom-up frameworks, combining their advantages without their downsides.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Top-down approaches</head><p>Top-down frameworks are the most studied in the field of ethics for robots and the number of ethical theories involved is high. Literature identifies theories such as utilitarianism <ref type="bibr" target="#b9">[10]</ref>, divine-command ethics <ref type="bibr" target="#b7">[8]</ref> and other logic-based frameworks <ref type="bibr">[27] [15]</ref>. Still, the most famous theory among top-down approaches is the Just-War Theory <ref type="bibr" target="#b34">[35]</ref>, which underlies the instructions and principles issued in the Laws of War and the Rules of Engagement (for more on these documents, see <ref type="bibr" target="#b2">[3]</ref>). Those approaches have in common to take a set of rules and to program them into the robot code so that their behaviour could not violate them. The upside of those approaches is that the rules are general, well-defined and easily understandable. The downside is that no set of rules will ever handle every possible situation, mostly because they do not take into account the context of the particular mission the robot is deployed for. Thus top-down approaches are usually too rigid and not precise enough to be applicable. Also, since they rely on specific rules -more morality-like than ethics-like -they are not fit to capture ethical reasoning abilities but they are usually used to justify one's own actions. In order to implement ethical reasoning abilities into robots, it seems more desirable to use top-down approaches as moral heuristics guiding ethical reasoning <ref type="bibr" target="#b52">[53]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Bottom-up approaches</head><p>Bottom-up frameworks are way less developed than top-down approaches. Still, some research like <ref type="bibr" target="#b25">[26]</ref> gives interesting options, using self-modeling. Most of the bottom-up approaches insist on machine learning <ref type="bibr" target="#b16">[17]</ref> or artificial evolution using genetic algorithms based on cooperation <ref type="bibr" target="#b44">[45]</ref> to allow agents to reason ethically given a specific parameter. The strength of these frameworks is that learning allows flexibility and adaptability in complex and dynamic environments, which is a real advantage in the field of ethics wherein there is no predefined answers. Nevertheless the learning process takes a lot of time and never completely removes the risk of unwanted behaviour. Plus, the reasoning behind the action produced by the robot cannot be traced, making the fix of undesirable behaviours barely possible.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Hybrid approaches</head><p>Three different frameworks can be distinguished among hybrid approaches : case-based approach <ref type="bibr" target="#b28">[29]</ref> [2], virtue ethics <ref type="bibr" target="#b23">[24]</ref> [53] and the hybrid reactive/deliberative architecture proposed by <ref type="bibr" target="#b2">[3]</ref>, using the Laws of War and the Rules of Engagement as a set of rules to follow. They are probably the most applicable researches to autonomous robots and combine aspects of both top-down (producing algorithms derived from ethical theories) and bottom-up (using agents able to learn, evolve and explore possible ethical decisions) specifications.</p><p>The main problem with these approaches is their computing time, since learning is often involved in the process. Nevertheless, they appear theoretically satisfying and their applicability looks promising.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">ETHICS AND AUTHORITY SHARING</head><p>In this section we will focus on the previously mentioned ethical issues in the framework of authority sharing between a robot and a human operator. Joining human and machine abilities aims at increasing the range of actions of "autonomous" systems <ref type="bibr" target="#b22">[23]</ref>. However the relationship between both agents is dissymmetric since the human operator's "failures" are often neglected when designing the system. Moreover simultaneous decisions and actions of the artificial and the human agents are likely to create conflicts <ref type="bibr" target="#b10">[11]</ref>: unexpected or misunderstood authority changes may lead to inefficient, dangerous or catastrophic situations. Therefore in order to consider the human agent and the artificial agent in the same way <ref type="bibr" target="#b19">[20]</ref> and the human-machine system as a whole <ref type="bibr" target="#b55">[56]</ref>, it seems more relevant to work on authority and authority control <ref type="bibr" target="#b29">[30]</ref> than on autonomy, which concerns the artificial agent exclusively.</p><p>Therefore authority sharing between a robot and its operator can be viewed as an "upgraded" autonomy. As far as ethical issues are concerned, authority sharing considered as a relation between two agents <ref type="bibr" target="#b17">[18]</ref> may provide a better compliance with sets of laws and moral rules, this way enabling ethical decision-making within a pair of agents instead of leaving this ability to only one individual.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Autonomy</head><p>As previously mentioned, the autonomy of an armed robot can be conceived as an autonomy of means only; robots are almost always used as tools. Authority sharing can change this organization. As a robot cannot (yet) determine its own goals, it is the human operator's role to provide the goals, as well as some methods or partial plans to achieve them <ref type="bibr" target="#b13">[14]</ref>. Still, authority sharing grants the robot decision-making power, allowing it to take authority from the operator in order to accomplish tasks the operator has neglected (e.g., going back to base because of a fuel shortage) or when the operator's actions deviate from the mission plan and may be dangerous. For example, some undesirable psychological and physiological "states" of the operator, e.g. tiredness, stress or attentional blindness <ref type="bibr" target="#b36">[37]</ref>, can be detected by the robot, allowing it to take authority if the operator is no longer considered able to fulfill the mission.</p></div>
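The authority-taking logic sketched above can be illustrated with a minimal rule-based decision function. This is a hedged sketch under stated assumptions: the state variables (`fatigue`, `attentional_blindness`, `following_mission_plan`), the fatigue threshold, and the rule ordering are all illustrative and not the actual operator-monitoring model of [37].

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    fatigue: float               # 0.0 (rested) .. 1.0 (exhausted) -- illustrative scale
    attentional_blindness: bool  # e.g. detected via gaze tracking
    following_mission_plan: bool

def authority_holder(state: OperatorState, fuel_low: bool) -> str:
    """Decide which agent should hold authority under these illustrative rules."""
    if fuel_low:
        # A safety task neglected by the operator: the robot takes authority
        # to go back to base.
        return "robot"
    if state.attentional_blindness or state.fatigue > 0.8:
        # Operator judged unable to fulfill the mission.
        return "robot"
    if not state.following_mission_plan:
        # Dangerous deviation from the mission plan.
        return "robot"
    return "operator"

print(authority_holder(OperatorState(0.2, False, True), fuel_low=False))  # prints "operator"
```

In a real system the boolean inputs would of course come from a dedicated operator-state assessment module rather than being given directly.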
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Moral responsibility</head><p>Concerning moral responsibility, authority sharing forces us to distinguish two cases: the one where the operator has authority over the robot, and the reverse one. The former is simple: since the robot is a tool, vicarious liability applies, and the operator is responsible for any accident caused by the use of the robot during the mission. The latter is more complex, and we do not claim to give absolute answers, but mere propositions.</p><p>What we propose is that, in order to assess moral responsibility when the robotic agent has authority over the system, it is necessary to define a mission-relevant set of rules, e.g. the Laws of War and Rules of Engagement <ref type="bibr" target="#b34">[35]</ref> [3], and a contract, as proposed by <ref type="bibr" target="#b40">[41]</ref> or <ref type="bibr" target="#b39">[40]</ref>, between the robotic and human agents, providing specific clauses for them to respect during the mission. These clauses must be based on the set of rules previously mentioned, and an agent who violates them would be morally responsible for any accident that could happen as a consequence of his actions.</p><p>This kind of contract would provide clear conditions for authority sharing (i.e., an agent loses authority if he violates the contract) and could open the way to applying work on trust <ref type="bibr" target="#b3">[4]</ref> or persuasion <ref type="bibr" target="#b15">[16]</ref> to robotic agents. During a mission, such contracts would engage both agents to monitor the actions of the other agent and, if possible, to take authority if this can prevent any infringement of the contract. If one agent detects a possibly incoming accident due to the other agent's actions, e.g. aiming at a civilian, and does nothing to prevent it, then this agent is as responsible for the accident as the one causing it. 
Given that current law deals only with human behaviours, if a robot is held responsible for "evil" or unlawful actions, it should be treated by replacing the parts of its program or the pieces of hardware that caused the unwanted behaviour, whereas human operators displaying the same kind of unlawful behaviour should be judged under the appropriate laws. To integrate contracts in a concrete way, we can lean towards the perspective presented by <ref type="bibr" target="#b2">[3]</ref>, who proposes some recommendations to warn the operator of his responsibility when using potentially lethal force.</p></div>
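The contract mechanism proposed above can be sketched as a set of clause predicates plus a responsibility-attribution rule: an agent that violates a clause is responsible, and an agent that witnesses a violation without intervening shares that responsibility. The clause names (`no_civilian_targeting`, `proportional_force`) and the dictionary encoding of actions are assumptions introduced purely for illustration.

```python
# Illustrative contract clauses derived from a set of rules such as the
# Rules of Engagement; each clause maps an action description to pass/fail.
CLAUSES = {
    "no_civilian_targeting": lambda a: a.get("target") != "civilian",
    "proportional_force":    lambda a: a.get("force", 0) <= a.get("threat", 0),
}

def responsible_agents(action: dict, actor: str, observer_intervened: bool) -> list:
    """Return the agents morally responsible for a clause-violating action.

    actor is "operator" or "robot"; the other agent is the observer. If the
    observer saw the violation coming and did not take authority to prevent
    it, it shares responsibility with the actor.
    """
    violated = [name for name, ok in CLAUSES.items() if not ok(action)]
    if not violated:
        return []
    responsible = [actor]
    if not observer_intervened:
        responsible.append("robot" if actor == "operator" else "operator")
    return responsible
```

Under this sketch, `responsible_agents({"target": "civilian"}, "operator", False)` attributes responsibility to both agents, mirroring the shared-responsibility clause of the proposed contract.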
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Consciousness and moral status</head><p>Authority sharing is not of great help for implementing consciousness in robots. Still, <ref type="bibr" target="#b36">[37]</ref> and <ref type="bibr" target="#b49">[50]</ref> provide leads for allowing robots to assess the "state" of the operator and to take authority from him if he is no longer considered able to achieve the mission. This approach would help robots improve their situational awareness and would help design systems that interact better with humans, whether operators or civilians. Enhancing the responsibility and autonomy of robots could also be a way to push them towards the "same functionality" proposed by <ref type="bibr" target="#b6">[7]</ref>, i.e. acting with enough caution to be considered equal to humans in a specific domain, thus helping to grant robots a moral status.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4">Ethical reasoning</head><p>Given the current state of law and the common deployment of robots on battlefields, granting robots ethical reasoning has to be rooted in a legally relevant framework, namely Just-War Theory <ref type="bibr" target="#b34">[35]</ref>. The Laws of War and Rules of Engagement have to be the basic set of rules for robots. Still, battlefields being complex environments, ethics needs to be integrated into robots through a hybrid approach combining learning capabilities and experience with ethical theories. In the case of authority sharing, two frameworks seem relevant at the moment: case-based reasoning <ref type="bibr" target="#b1">[2]</ref> and Arkin's reactive/deliberative architecture <ref type="bibr" target="#b2">[3]</ref>. In case of an ethical conflict, what seems applicable is to give authority to the operator and to use the robotic agent both to assist him in his reasoning, i.e. by displaying relevant information on an appropriate interface, and to act as an ethical handrail, making sure that the principles of the Laws of War, e.g. discrimination and proportionality, are respected.</p></div>
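The "ethical handrail" role described above can be sketched as a pre-action filter: before a proposed action is executed, the robotic agent checks it against the discrimination and proportionality principles and, on failure, refuses it and reports the reasons to the operator. The predicates below are illustrative assumptions, not Arkin's actual architecture [3]; in particular, reducing proportionality to a numeric comparison is a deliberate simplification.

```python
def discrimination_ok(action: dict) -> bool:
    """Discrimination: only combatants may be targeted (illustrative test)."""
    return action["target_class"] == "combatant"

def proportionality_ok(action: dict) -> bool:
    """Proportionality: expected harm must not exceed the military advantage
    (a crude numeric stand-in for a genuinely qualitative judgment)."""
    return action["expected_harm"] <= action["military_advantage"]

def ethical_handrail(action: dict):
    """Return (permitted, reasons); reasons are shown to the operator on refusal."""
    reasons = []
    if not discrimination_ok(action):
        reasons.append("discrimination: target is not a combatant")
    if not proportionality_ok(action):
        reasons.append("proportionality: expected harm exceeds military advantage")
    return (len(reasons) == 0, reasons)
```

The returned reasons list is what the interface mentioned above would display to support the operator's own ethical reasoning, rather than silently blocking the action.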
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">CONCLUSION AND FURTHER WORK</head><p>The main obstacle to the implementation of ethics in autonomous armed robots is that, even as the technology, autonomy and lethal power of robots increase, the legal and philosophical frameworks do not take them into account, or consider them only from an anthropocentric point of view. Authority sharing allows a coupling between a robot and a human operator, hence better compliance with the ethical and legal requirements for the use of autonomous robots on battlefields. It can be achieved with vicarious liability, good situational awareness produced by tracking both the robot's and the operator's "states", and a hybrid model of ethical reasoning allowing adaptability in complex battlefield environments.</p><p>We are currently building an experimental protocol in order to test some of our proposals, namely autonomous armed robots that embed ethical reasoning while sharing authority with a human operator. We have constructed two fully-simulated battlefield scenarios in which we will test the compliance of the system with specific principles of the Laws of War (proportionality and discrimination). These scenarios feature hostile actions towards the robot or its allies, e.g. throwing rocks or planting explosives, that need to be handled while complying with a set of rules of engagement. During the simulation, the operator is induced to produce an immoral behaviour, provoking an authority conflict: we expect the robot to detect this behaviour, to take authority from the operator and to solve the conflict through the production of a morally correct behaviour. 
Since the current state of our software does not yet allow the robotic agent to actually observe the operator, we are working on pre-defined evaluations of actions so that the robot can detect unwanted behaviours and act accordingly.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">"act only according to that maxim by which you can at the same time will that it be a universal law"</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Abney</surname></persName>
		</author>
		<title level="m">Robotics, Ethical Theory, and Metaethics: A Guide for the Perplexed</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="35" to="52" />
		</imprint>
	</monogr>
	<note>Robot Ethics: The Ethical and Social Implications of Robotics</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">An approach to computing ethics</title>
		<author>
			<persName><forename type="first">M</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Armen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Intelligent Systems</title>
				<imprint>
			<date type="published" when="2006-08">July/August 2006</date>
			<biblScope unit="page" from="56" to="63" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Arkin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
		<respStmt>
			<orgName>Georgia Institute of Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Arkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ulam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Wagner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE</title>
				<meeting>the IEEE</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">100</biblScope>
			<biblScope unit="page" from="571" to="589" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">What should we want from a robot ethic?</title>
		<author>
			<persName><forename type="first">P</forename><surname>Asaro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Review of Information Ethics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="9" to="16" />
			<date type="published" when="2006-12">Dec. 2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The development of a theory of mind in autism: deviance and delay?</title>
		<author>
			<persName><forename type="first">S</forename><surname>Baron-Cohen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychiatrics Clinics of North America</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="33" to="51" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The Ethics of Artificial Intelligence</title>
		<author>
			<persName><forename type="first">N</forename><surname>Bostrom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yudkowsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Draft for Cambridge Handbook of Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Bringsjord</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Taylor</surname></persName>
		</author>
		<title level="m">Robot Ethics: The Ethical and Social Implications of Robotics</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="85" to="108" />
		</imprint>
	</monogr>
	<note>The Divine-Command Approach to Robot Ethics</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Facing up to the problem of consciousness</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Chalmers</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Consciousness Studies</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="200" to="219" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">The utilibot project: An autonomous mobile robot based on utilitarianism</title>
		<author>
			<persName><forename type="first">C</forename><surname>Cloos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Fall Symposium on Machine Ethics</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Ghost: Experimenting conflicts countermeasures in the pilot&apos;s activity</title>
		<author>
			<persName><forename type="first">F</forename><surname>Dehais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tessier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chaudron</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI&apos;03</title>
				<meeting><address><addrLine>Acapulco, Mexico</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">When HAL Kills, Who&apos;s to Blame?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Dennett</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1996">1996</date>
			<publisher>MIT Press</publisher>
			<biblScope unit="volume">16</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">The Foundations of Bioethics</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Engelhardt</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1986">1986</date>
			<publisher>Oxford University Press</publisher>
			<pubPlace>Oxford</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">HTN planning: complexity and expressivity</title>
		<author>
			<persName><forename type="first">K</forename><surname>Erol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hendler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nau</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI&apos;94</title>
				<meeting><address><addrLine>Seattle, WA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Modeling ethical rules of lying with answer set programming</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Ganascia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ethics and Information Technology</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="39" to="47" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Towards ethical persuasive agents</title>
		<author>
			<persName><forename type="first">M</forename><surname>Guerini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Stock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI Workshop on Computational Models of Natural</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Reliable Reasoning: Induction and Statistical Learning Theory</title>
		<author>
			<persName><forename type="first">G</forename><surname>Harman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kulkarni</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Agent Autonomy</title>
		<author>
			<persName><forename type="first">H</forename><surname>Hexmoor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Castelfranchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Falcone</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>Kluwer Academic Publishers</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent?</title>
		<author>
			<persName><forename type="first">K</forename><surname>Himma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">7th International Computer Ethics Conference</title>
				<meeting><address><addrLine>San Diego, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007-07">July 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Handbook of cognitive task design</title>
		<editor>E. Hollnagel</editor>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>Erlbaum</publisher>
			<pubPlace>Mahwah, NJ</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A framework for autonomy levels for unmanned systems ALFUS</title>
		<author>
			<persName><forename type="first">H</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Pavek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Novak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Albus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Messina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AUVSIs Unmanned Systems North America 2005</title>
				<meeting><address><addrLine>Baltimore, MD, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Jonas</surname></persName>
		</author>
		<title level="m">Das Prinzip Verantwortung. Versuch einer Ethik für die technologische Zivilisation</title>
				<meeting><address><addrLine>Frankfurt</addrLine></address></meeting>
		<imprint>
			<publisher>Insel Verlag</publisher>
			<date type="published" when="1979">1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Adjustable autonomy for human-centered autonomous systems</title>
		<author>
			<persName><forename type="first">D</forename><surname>Kortenkamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bonasso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ryan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schreckenghost</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI 1997 Spring Symposium on Mixed Initiative Interaction</title>
				<meeting>the AAAI 1997 Spring Symposium on Mixed Initiative Interaction</meeting>
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bekey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Abney</surname></persName>
		</author>
		<title level="m">Autonomous military robotics: Risk, ethics, and design</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
		<respStmt>
			<orgName>California Polytechnic State University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Resilient machines through continuous self-modeling</title>
		<author>
			<persName><forename type="first">H</forename><surname>Lipson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bongard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Zykov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">314</biblScope>
			<biblScope unit="issue">5802</biblScope>
			<biblScope unit="page" from="1118" to="1121" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Self-reflection in evolutionary robotics: Resilient adaptation with a minimum of physical exploration</title>
		<author>
			<persName><forename type="first">H</forename><surname>Lipson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Genetic and Evolutionary Computation Conference</title>
				<meeting>the Genetic and Evolutionary Computation Conference</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="2179" to="2188" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Computational meta-ethics: Towards the meta-ethical robot</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Lokhorst</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Minds and machines</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="261" to="274" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Lokhorst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Van Den Hoven</surname></persName>
		</author>
		<title level="m">Robot Ethics: The Ethical and Social Implications of Robotics</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="145" to="156" />
		</imprint>
	</monogr>
	<note>Responsibility for Military Robots</note>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Computational models of ethical reasoning: Challenges, initial steps, and future directions</title>
		<author>
			<persName><forename type="first">B</forename><surname>Mclaren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Intelligent Systems</title>
				<imprint>
			<date type="published" when="2006-08">July/August 2006</date>
			<biblScope unit="page" from="29" to="37" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Détection et résolution de conflits d&apos;autorité dans un système homme-robot, Revue d&apos;Intelligence Artificielle</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mercier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tessier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dehais</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">numéro spécial &apos;Droits et Devoirs d&apos;Agents Autonomes&apos;</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="325" to="356" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences</title>
		<author>
			<persName><forename type="first">S</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Selgelid</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Springer</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">What is it like to be a bat?</title>
		<author>
			<persName><forename type="first">T</forename><surname>Nagel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Philosophical Review</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="435" to="450" />
			<date type="published" when="1974">1974</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m">Teleological Language in the Life Sciences</title>
				<editor>
			<persName><forename type="first">L</forename><surname>Nissen</surname></persName>
		</editor>
		<imprint>
			<publisher>Rowman and Littlefield</publisher>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">Deontological Ethics, The Encyclopedia of Philosophy</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">G</forename><surname>Olson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1967">1967</date>
			<publisher>Collier Macmillan</publisher>
			<pubPlace>London</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<title level="m" type="main">The Morality of War</title>
		<author>
			<persName><forename type="first">B</forename><surname>Orend</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>Broadview Press</publisher>
			<pubPlace>Peterborough, Ontario</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Drones armés et éthique</title>
		<author>
			<persName><forename type="first">T</forename><surname>Pichevin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Penser la robotisation du champ de bataille</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Danet</surname></persName>
		</editor>
		<editor>
			<persName><surname>Saint-Cyr</surname></persName>
		</editor>
		<imprint>
			<publisher>Economica</publisher>
			<date type="published" when="2011-11">November 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Towards human operator state assessment</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pizziol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dehais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tessier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">1st ATACCS (Automation in Command and Control Systems)</title>
				<meeting><address><addrLine>Barcelona, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011-05">May 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Does the chimpanzee have a theory of mind?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Premack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Woodruff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Behavioral and Brain Sciences</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="515" to="526" />
			<date type="published" when="1978">1978</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Rameix</surname></persName>
		</author>
		<title level="m">Fondements philosophiques de l&apos;éthique médicale, Ellipses</title>
				<meeting><address><addrLine>Paris</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<title level="m" type="main">A Theory of Justice</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rawls</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1971">1971</date>
			<publisher>Harvard</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<monogr>
		<author>
			<persName><forename type="first">J.-J</forename><surname>Rousseau</surname></persName>
		</author>
		<title level="m">Du contrat social</title>
				<imprint>
			<date type="published" when="1762">1762</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<author>
			<persName><forename type="first">J.-P</forename><surname>Sartre</surname></persName>
		</author>
		<title level="m">L&apos;existentialisme est un humanisme</title>
				<meeting><address><addrLine>Paris</addrLine></address></meeting>
		<imprint>
			<publisher>Gallimard</publisher>
			<date type="published" when="1946">1946</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Intelligent control of life support systems for space habitat</title>
		<author>
			<persName><forename type="first">D</forename><surname>Schreckenghost</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ryan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Thronesbery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bonasso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Poirot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI-IAAI Conference</title>
				<meeting>the AAAI-IAAI Conference<address><addrLine>Madison, WI, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Death strikes from the sky: the calculus of proportionality</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sharkey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Technology and Society Magazine</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="16" to="19" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
	<note>IEEE</note>
</biblStruct>

<biblStruct xml:id="b44">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Skyrms</surname></persName>
		</author>
		<title level="m">Evolution of the Social Contract</title>
				<meeting><address><addrLine>Cambridge, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">The Turing triage test</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sparrow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ethics and Information Technology</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="203" to="213" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Killer robots</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sparrow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Philosophy</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="62" to="77" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Sparrow</surname></persName>
		</author>
		<title level="m">Robot Ethics: The Ethical and Social Implications of Robotics</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="301" to="315" />
		</imprint>
	</monogr>
	<note>Can Machines Be People?</note>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">When is a robot a moral agent?</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sullins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Ethics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">12</biblScope>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Authority management and conflict solving in human-machine systems</title>
		<author>
			<persName><forename type="first">C</forename><surname>Tessier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dehais</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AerospaceLab, The Onera Journal</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">Roboethics roadmap</title>
		<author>
			<persName><forename type="first">G</forename><surname>Veruggio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">EURON Roboethics Atelier</title>
				<meeting><address><addrLine>Genoa</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Vikaros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Degand</surname></persName>
		</author>
		<title level="m">Moral Development through Social Narratives and Game Design</title>
				<meeting><address><addrLine>Hershey</addrLine></address></meeting>
		<imprint>
			<publisher>IGI Global</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="197" to="216" />
		</imprint>
	</monogr>
	<note>Ethics and Game Design: Teaching Values through Play</note>
</biblStruct>

<biblStruct xml:id="b52">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Wallach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Allen</surname></persName>
		</author>
		<title level="m">Moral Machines: Teaching Robots Right from Wrong</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Warwick</surname></persName>
		</author>
		<title level="m">Robot Ethics: The Ethical and Social Implications of Robotics</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="317" to="332" />
		</imprint>
	</monogr>
	<note>Robots with Biological Brains</note>
</biblStruct>

<biblStruct xml:id="b54">
	<analytic>
		<title level="a" type="main">Controlling a mobile robot with a biological brain</title>
		<author>
			<persName><forename type="first">K</forename><surname>Warwick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xydas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nasuto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Becerra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hammond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Downes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marshall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Whalley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Defence Science Journal</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="5" to="14" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b55">
	<analytic>
		<title level="a" type="main">Explorations in joint human-machine cognitive systems</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">D</forename><surname>Woods</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Roth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">B</forename><surname>Bennett</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Cognition, Computing, and Cooperation</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Robertson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Zachary</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">B</forename><surname>Black</surname></persName>
		</editor>
		<meeting><address><addrLine>Norwood, NJ, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Ablex Publishing Corp</publisher>
			<date type="published" when="1990">1990</date>
			<biblScope unit="page" from="123" to="158" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
