<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Robotics and Law: A Survey</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Sandra</forename><surname>Passinhas</surname></persName>
							<email>sandrap@fd.uc.pt</email>
							<affiliation key="aff0">
								<orgName type="institution">Coimbra University</orgName>
								<address>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Robotics and Law: A Survey</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">22FB96A35E674D536CA01FC52D137662</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T22:50+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>robots</term>
					<term>legal status</term>
					<term>liability</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The legal agency of a robot depends on whether it is considered (i) a legal person (a proper person with rights and duties of its own); (ii) a strict agent in the field of business law (negotiations, contracts, etc.); or (iii) a source of responsibility for other agents in the system. The choice among these categories rests on an assessment of the legal autonomy of robotic systems, that is, of their level of consciousness, free will and intent. This paper considers the distinct options available and describes the effects of the prevailing view in the fields of contracts (where robots act as proper agents establishing rights and obligations in civil law), torts (where strict liability rules regulate the design, production and use of all robotic applications that may be deemed dangerous) and criminal law (robots are not criminally accountable, but they might, in the near future, affect fundamental tenets of criminal law). It also considers the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), proposing recommendations for a future legislative instrument from the European Commission.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Although law faces a major challenge from robotics technology, the real dimension of that challenge is not uncontroversial. The vast majority of scholars believe that it strongly affects traditional concepts and principles of legal systems or even creates new principles and concepts. The position one takes depends largely on one's understanding of the legal agency of robots. A robot might be considered: (i) a legal person (a proper person with rights and duties of its own); (ii) a strict agent in the field of business law (negotiations, contracts, etc.); or (iii) a source of responsibility for other agents in the system. Legal personhood is the most extreme option. Persons can be humans or artificial persons. The legal personhood of humans is grounded in Article 1 of the 1948 Universal Declaration of Human Rights: "All human beings are born free and equal in dignity and rights". The second variant of legal personhood concerns artificial persons such as governments, organizations, and corporations. It is a legal instrument to bind rights and duties to an autonomous center. Although the rights and duties of such legal persons are reducible to an aggregation of human beings as the only relevant source of their action, they are legally autonomous, in that artificial legal persons have rights and duties of their own (e.g. a corporation's freedom of speech).</p><p>The discussion about the legal personhood of robots has been (re)launched by the recent European Parliament resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), in which an electronic personality for robots is considered. I will start by questioning the legal autonomy of robotic systems. Secondly, I will scrutinize the normative challenges of this technology in the fields of contracts, tort law, and criminal law. Thirdly, I will present a short overview of the EP resolution. I will conclude that the choice regarding the legal agency of a robot will rest on an assessment of the legal autonomy of robotic systems, that is, on an assessment of their level of consciousness, free will and intent. Under European law, a legislative instrument is expected following the proposed recommendations: a comprehensive Union system of registration of advanced robots; a system of strict liability, requiring only proof that damage has occurred and the establishment of a causal link between the harmful functioning of the robot and the damage suffered by the injured party; and, in the long run, a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Autonomy versus responsibility</head><p>Robots are machines able to learn and adapt to changes in their environment. References to the concept of autonomy, however, are not fully consistent. Sometimes a robot is understood as a mere "system capable of understanding higher level intent and direction", but it can further be recognized as "improving the set of instructions through which their inner states change, and transform such properties without external stimuli". It is evident that robots can deal successfully with their tasks by exerting a certain control over their own actions without any direct intervention by humans (e.g. unmanned aerial vehicles or military robotic applications). In that sense, they can rule (nomos) over themselves (auto). Robots are autonomous in the technical meaning of the word, but are they also autonomous in the legal sense, with the required level of consciousness, free will and intent? This is the core question: will the advancement of robotics technology produce artificial agents capable of autonomous decisions that are similar in all relevant aspects to the ones humans make? If so, the correlated responsibility has to be allocated according to the basic tenets of social interaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Contracts</head><p>Besides the traditional agency of humans in the field of business law, a contract may be concluded by the interaction of the parties' electronic agents, even if no individual is aware of or has reviewed the electronic agents' actions or the resulting terms and agreements (cf. Section 14 of the US Uniform Electronic Transactions Act). Theoretically speaking, three legal notions of the agenthood of robotic systems might be considered: (i) as legal persons with rights and duties of their own; (ii) as proper agents establishing rights and obligations in civil law; (iii) as sources of responsibility for other agents in the system.</p><p>The ability of robots to produce, through their own acts, rights and obligations on behalf of humans brings up the parallel with owners and their slaves in Roman law, since slaves were conceived as things that nevertheless played a crucial role in trade. The praetor would, casuistically, protect third parties negotiating with a slave on behalf of its owner (considering the slave as a source of responsibility for its owner). But it is not only the counterparties of a contract that are at risk: the ability of robots to produce, through their own intentional acts, rights and obligations on behalf of humans entails the risk that individuals can be financially ruined by their robots' activities. Therefore, more modern regulatory options, namely the constitution of a peculium, should be considered <ref type="bibr">(Pagallo)</ref>. The ability of artificial agents to act as online traders, buying commodities and reselling them, might require the transfer to robots of an amount of money to be used in online transactions: if robots did not fulfil their obligations, their creditors would directly sue the artificial agent.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Torts</head><p>If someone buys a defective robot, legal accountability is framed as a matter of risk and predictability in contractual obligations. If someone owns a robot that causes damage to third parties, an issue of extra-contractual liability arises. For the first time ever, legal systems will hold humans responsible for what an artificial state-transition system decides to do. This is one of the most innovative aspects in the field of the laws of robots. It can be framed according to traditional forms of liability for the behavior of others, such as children, pets or employees. But it can also be complemented with new strict liability policies or, alternatively, mitigated through insurance models, authentication systems, and mechanisms for allocating the burden of proof <ref type="bibr">(Pagallo)</ref>. Human strict liability should be limited to the value of the robot's portfolio (peculium). Several scholars have endorsed this idea because the personal accountability of robots would simplify a number of contentious issues, such as whether robots are acting beyond certain legal powers, which party should be held liable for conferring such powers, or whether humans can evade responsibility for possible malfunctions of a machine. By recognizing the personal accountability of robots, the intricacies of adding a new hypothesis of extra-contractual liability for the behavior of others are avoided, in that some types of robots would be directly liable for provoking an injury and an actual loss or damage to third parties. In such a case, the peculium of the robot guarantees that extra-contractual obligations would be met, regardless of whether a human being should be held strictly liable or deemed negligent <ref type="bibr">(Pagallo)</ref>.</p><p>To date, strict liability regulates the design, production and use of all robotic applications that may be deemed dangerous. Designers and producers of robots should be held liable for damage caused to third parties by the defective manufacture of the product or by flaws in the design.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Criminal Law</head><p>Robots are not criminally accountable because they lack the preconditions for attributing liability to a party, e.g. free will and consciousness. They will, however, affect fundamental tenets of criminal law. On the battlefield, robots are already affecting when and how resort to war can be justified (ius ad bellum) and what can justly be done in war (ius in bello). We can also envisage robots choosing to commit and, ultimately, carrying out a crime. The notions of responsibility and moral accountability must be distinguished. Although robots lack moral understanding, they can represent a new meaningful target of human censure. As Floridi and Sanders address the question, "it would be ridiculous to praise or blame an artificial agent for its behavior or charge it with a moral accusation". The authors suggest the following criminal remedies: (a) monitoring and modification (i.e., 'maintenance') of the robot; (b) removal to a disconnected component of cyberspace; (c) annihilation from cyberspace (deletion without backup).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">European Parliament Resolution</head><p>On 16 February 2017 the European Parliament issued a resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), asking the Commission to submit, on the basis of Article 114 TFEU, a proposal for a legislative instrument on legal questions related to the development and use of robotics and AI foreseeable in the next 10 to 15 years, combined with non-legislative instruments such as guidelines and codes of conduct. The European Parliament called on the Commission to propose common Union definitions of cyber-physical systems, autonomous systems, smart autonomous robots and their subcategories, taking into consideration the following characteristics of a smart robot: the acquisition of autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the trading and analysing of those data; self-learning from experience and by interaction (optional criterion); at least a minor physical support; the adaptation of its behaviour and actions to the environment; and the absence of life in the biological sense. The EP considers that a comprehensive Union system of registration of advanced robots should be introduced within the Union's internal market, where relevant and necessary for specific categories of robots, and called on the Commission to establish criteria for the classification of robots that would need to be registered; in this context, it called on the Commission to investigate whether it would be desirable for the registration system and the register to be managed by a designated EU Agency for Robotics and Artificial Intelligence.</p><p>As regards liability, a system of strict liability is proposed, requiring only proof that damage has occurred and the establishment of a causal link between the harmful functioning of the robot and the damage suffered by the injured party. The risk management approach does not focus on the person "who acted negligently" as individually liable but on the person who is able, under certain circumstances, to minimise risks and deal with negative impacts. Once the parties bearing the ultimate responsibility have been identified, their liability should be proportional to the actual level of instructions given to the robot and to its degree of autonomy, so that the greater a robot's learning capability or autonomy, and the longer a robot's training, the greater the responsibility of its trainer should be. A possible solution to the complexity of allocating responsibility for damage caused by increasingly autonomous robots might be an obligatory insurance scheme, as is already the case, for instance, with cars; nevertheless, unlike the insurance system for road traffic, where the insurance covers human acts and failures, an insurance system for robotics should take into account all potential responsibilities in the chain. As with the insurance of motor vehicles, such an insurance system could be supplemented by a fund in order to ensure that reparation can be made for damage in cases where no insurance cover exists.</p><p>Specifically, the EP called on the Commission, when carrying out an impact assessment of its future legislative instrument, to explore, analyse and consider the implications of all possible legal solutions, such as: (a) establishing a compulsory insurance scheme, where relevant and necessary for specific categories of robots, whereby, similarly to what already happens with cars, producers or owners of robots would be required to take out insurance cover for the damage potentially caused by their robots; (b) ensuring that a compensation fund would not only serve the purpose of guaranteeing compensation if the damage caused by a robot was not covered by insurance; (c) allowing the manufacturer, the programmer, the owner or the user to benefit from limited liability if they contribute to a compensation fund, as well as if they jointly take out insurance to guarantee compensation where damage is caused by a robot; (d) deciding whether to create a general fund for all smart autonomous robots or an individual fund for each robot category, and whether a contribution should be paid as a one-off fee when placing the robot on the market or whether periodic contributions should be paid during the lifetime of the robot; (e) ensuring that the link between a robot and its fund would be made visible by an individual registration number appearing in a specific Union register, which would allow anyone interacting with the robot to be informed about the nature of the fund, the limits of its liability in case of damage to property, the names and the functions of the contributors and all other relevant details; (f) creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.</p><p>A Charter on Robotics is annexed to the resolution, drawn up with the assistance of the Scientific Foresight Unit (STOA) of the European Parliamentary Research Service, which proposes a code of ethical conduct for robotics engineers, a code for research ethics committees, a "licence" for designers and a "licence" for users. The proposed code of ethical conduct in the field of robotics will lay the groundwork for the identification of, oversight of and compliance with fundamental ethical principles from the design and development phase. The code will facilitate the ethical categorisation of robotics, strengthen responsible innovation efforts in this field and address public concerns. Special emphasis should be placed on the research and development phases of the relevant technological trajectory (design process, ethics review, audit controls, etc.). It should aim to address the need for compliance by researchers, practitioners, users and designers with ethical standards, but also to introduce a procedure for devising a way to resolve the relevant ethical dilemmas and to allow these systems to function in an ethically responsible manner.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion</head><p>The legal agency of a robot depends on whether it is considered (i) a legal person (a proper person with rights and duties of its own); (ii) a strict agent in the field of business law (negotiations, contracts, etc.); or (iii) a source of responsibility for other agents in the system. The choice will rest on an assessment of the legal autonomy of robotic systems, that is, of their level of consciousness, free will and intent. The effects of the prevailing view will spread to the fields of contracts, torts and criminal law. On 16 February 2017 the European Parliament issued a resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). A legislative instrument is thus expected following the proposed recommendations: a comprehensive Union system of registration of advanced robots; a system of strict liability, requiring only proof that damage has occurred and the establishment of a causal link between the harmful functioning of the robot and the damage suffered by the injured party; and, in the long run, a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently. A Charter on Robotics, annexed to the resolution, proposes a code of ethical conduct in the field of robotics that will lay the groundwork for the identification of, oversight of and compliance with fundamental ethical principles from the design and development phase.</p></div>		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The path of robotics law</title>
		<author>
			<persName><forename type="first">J</forename><surname>Balkin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Circuit</title>
		<imprint>
			<biblScope unit="volume">72</biblScope>
			<biblScope unit="page" from="45" to="60" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Robotics and the lessons of cyberlaw</title>
		<author>
			<persName><forename type="first">R</forename><surname>Calo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">California Law Review</title>
		<imprint>
			<biblScope unit="volume">103</biblScope>
			<biblScope unit="page" from="513" to="563" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Calo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Froomkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kerr</surname></persName>
		</author>
		<title level="m">Robot law</title>
				<meeting><address><addrLine>Cheltenham</addrLine></address></meeting>
		<imprint>
			<publisher>Edward Elgar</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On the morality of artificial agents</title>
		<author>
			<persName><forename type="first">L</forename><surname>Floridi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sanders</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Minds and Machines</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="349" to="379" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Robot ethics: the ethical and social implications of robotics</title>
		<author>
			<persName><forename type="first">P</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Abney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bekey</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, Massachusetts</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">U</forename><surname>Pagallo</surname></persName>
		</author>
		<title level="m">The laws of robots</title>
				<meeting><address><addrLine>Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Siciliano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Khatib</surname></persName>
		</author>
		<title level="m">Springer handbook of robotics</title>
				<meeting><address><addrLine>Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Teubner</surname></persName>
		</author>
		<ptr target="http://cadmus.eui.eu/bitstream/handle/1814/6960/MWP_LS_2007_04.pdf" />
		<title level="m">Rights of non-humans? Electronic agents and animals as new actors in politics and law</title>
				<meeting><address><addrLine>Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note>Max Weber Lecture at the European University Institute of Fiesole</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
