Homologation of Autonomous Machines from a Legal Perspective. CEUR Workshop Proceedings, Vol-2381. https://ceur-ws.org/Vol-2381/xaila2018_paper_4.pdf
   Homologation of Autonomous Machines
         from a Legal Perspective
                                    Veronika ŽOLNERČÍKOVÁa,
                                a
                                    Institute of Law and Technology
                                           Masaryk University
                                         Brno, Czech Republic
                                        zolnercv@mail.muni.cz


           Abstract. This article focuses on the challenges that artificial intelligence
           presents for the creation of technical standards and for the process of product
           homologation. It aims to describe what technical standards are, how they are
           adopted, and whether they can keep up with the technological development
           represented by autonomous machines, specifically autonomous vehicles. The goal
           of the article is to explain why technical standards are a necessity and cannot be
           replaced by legislation, and to identify the key barriers to the adoption of new
           technical standards for this technology.

           Keywords. Autonomous machines, Autonomous vehicles, Technical standards,
           Homologation



1. Introduction

Autonomous machines, commonly designated simply as “artificial intelligence” or
“robots”, are machines operating with a certain level of autonomy. The autonomy of
such a machine derives from the goal-oriented programming of its software. It allows
the machine to sense the surrounding environment through its sensors, collect data
about it, and act on that data independently, without requiring any other entity to be
involved in the process. [1, p. 4]
     The safety of autonomous machines presents a challenge mainly because all the
capabilities, actions and potential errors of the technology depend on how it was
programmed in the first place. Due to the absence of human supervision, AI offers
limited possibilities for intervention during operation. [2, p. 446] This article focuses on
a specific part of product safety at the borderline between liability for a faulty product
and general safety issues: technical standards.


2. Technical Standards as a Necessary Component for AI Product Safety

Product safety rules aim to establish when a product is safe for its intended use. Specific
product safety rules apply if the product is deemed to be more dangerous and/or its
target users more vulnerable. These rules are often complemented by technical standards.
     Product safety rules define the acceptable amount of risk when using a product.
Their goal is to establish that a certain product must be safe. Technical standards describe
how to achieve the desired level of safety. They are created based on a consensus of all
involved parties, e.g. manufacturers, distributors, users, law-makers, non-governmental
organizations, etc. This enables technical standards to include more detailed
specifications, such as a description of a product from a technical perspective, its
construction, materials, and other criteria. Technical standards resemble legislation, yet
they are not legally binding on their own. Most standards can be perceived as best
practice in the field or as a mere recommendation for manufacturers. For a technical
standard to be binding, it must be implemented in a legal statute as a sub-statutory legal
instrument. [3]
     The article Regulating Autonomous Systems: Beyond Standards describes the
following key issues in the standardization of autonomous machines: 1) standardization
plays a prime role in facilitating technological development, 2) for autonomous
systems, performance-based (functional) standards are much more important for safety
than construction-based (design) standards, 3) the current standards are suitable
for amending components and modules, but autonomous systems as such pose a
regulatory challenge. [4] Article 22 of the Report with Recommendations on Civil Law
Rules on Robotics [5] also indicates that standardization is a key element for AI.
     Some types of products also require prior testing by a relevant authority to ascertain
compliance with the relevant standards; this process is called homologation.


3. Barriers on the Road Ahead to Technical Standards for Autonomous Vehicles

Autonomous vehicles are an example of a technology that requires prior
homologation. An autonomous vehicle is one that can perform all critical safety
functions while monitoring the road. [6, p. 7] This is enabled by a combination of
functionalities such as perception, sensor fusion, localization, path planning and
actuation. [7] However, current standards are not prepared for technology of such
capabilities; they are adapted to machines operating under human supervision. Known
testing scenarios focus solely on parts of the vehicle, while the reliability of the driver is
assumed rather than being subject to testing. [2, p. 435] This chapter highlights the key
factors presenting a challenge for the creation of suitable standards.

3.1. Missing Role Model

One of the recurring ideas is to amend the standards applicable in aviation, because of
the presumed link between the autopilots used in aerial vehicles and autonomous
vehicles’ driving systems. The research paper Certification for Autonomous Vehicles
focuses on precisely this topic: is it possible to use existing standards from the field of
avionics for testing vehicle software? The answer is that they do not translate well into
the automotive industry. The reasons are that autonomous cars face challenges such as
pedestrian detection and changing traffic conditions, and that, unlike aircraft, they will
be operated by consumers. Finally, the only sensor needed by an aircraft is radar. [8, p. 32]
Autonomous vehicles require a multitude of sensors, such as ultrasound, lidar, radar,
cameras, and other similar means. [2, p. 355] As a result, standards for autonomous
vehicles are more difficult to decide upon, because the system itself is more complex
than the one used in aircraft.
     The important takeaway from this is that every autonomous machine faces different
challenges in different areas. The key criterion for creating or amending a standard for
an autonomous technology is the technology’s purpose. For example, when standards for
autonomous drones are created, they might bear more resemblance to those for
vehicles than to those for aircraft, because drones will also operate in inhabited areas.
The similarity is not surprising, since the idea of autonomous vehicles originated with
the development of military drones. [9]

3.2. The Unpredictability of the Future Road Traffic System

The biggest safety concern in today’s cars is whether their passengers can survive in case
of an accident. Therefore, cars need to comply with a requirement of sturdiness. If an
accident becomes less likely, which is one of the incentives for introducing autonomous
mobility, sturdiness may become unnecessary, giving way to other priorities such as
speed. However, the evolution of the road traffic system remains unclear. Today, we
prioritize preventing physical damage to the car, because it poses the biggest threat. In
the future our biggest concerns may differ; they might be, e.g., cyber-attacks. Relevant
standards must be amended accordingly.

3.3. Setting the Bar

The step preceding the creation of standards is the determination of the threshold the
driving software must pass before it can be deemed safe. The current bar is set to fit the
abilities of a human driver. The desirable attributes of a vehicle with a human driver are
crash avoidance, crashworthiness, and post-crash survivability. [8, p. 2] These are
without a doubt attributes desirable in an autonomous vehicle as well. Nevertheless, the
vehicle, and therefore its manufacturer, will be fully responsible for all of them, whereas
currently the responsibility, and subsequently the legal liability, is shared with the driver.

3.4. Driving School for Autonomous Vehicles

Given that the driving software learns through machine learning, it will be necessary to
establish criteria for its training data as well. Diversity is a key element. Vehicles trained
on data from a strictly rural environment might not be prepared for the increased level of
traffic in a city, possibly resulting in a failure to recognize an obstacle in the way.
One of the possible solutions is to create different standards for different environments.
This is an idea presented by the OICA (International Organization of Motor Vehicle
Manufacturers) during a session of the World Forum for Harmonization of Vehicle
Regulations (WP.29). [10] However, this concept would require a prior decision on which
data is needed for which type of operation. A more viable solution would be to create a
framework enabling the sharing of such data between manufacturers.
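     A diversity criterion of this kind could, in principle, be checked mechanically. The
following is a minimal sketch, not taken from any existing standard: the environment
categories and the 10% minimum share are purely illustrative assumptions.

```python
# Hypothetical sketch of a diversity criterion for training data: before a
# dataset is accepted for a given type of operation, verify that every
# required environment is sufficiently represented. The category names and
# the 10% threshold are illustrative assumptions, not values from a standard.

from collections import Counter

REQUIRED_ENVIRONMENTS = {"urban", "rural", "highway"}
MIN_SHARE = 0.10  # each required environment must make up at least 10%


def dataset_is_diverse(samples):
    """samples: a list of environment labels, one per training example."""
    counts = Counter(samples)
    total = len(samples)
    return all(counts[env] / total >= MIN_SHARE
               for env in REQUIRED_ENVIRONMENTS)


# A dataset recorded almost exclusively in a rural environment fails:
rural_heavy = ["rural"] * 95 + ["urban"] * 3 + ["highway"] * 2
print(dataset_is_diverse(rural_heavy))   # False

# A more balanced dataset passes:
balanced = ["rural"] * 40 + ["urban"] * 35 + ["highway"] * 25
print(dataset_is_diverse(balanced))      # True
```

A real criterion would of course involve far richer features than a single
environment label, but the sketch illustrates why a prior decision on which data
is needed for which type of operation would be required.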

3.5. Homologation of Autonomous Software

Nowadays, vehicles are tested and homologated before they reach the roads. A reliable
method for testing and homologating autonomous vehicles needs to be defined as well.
More than one possible method is being discussed. One way is real-time testing in
simulations using the hardware-in-the-loop method, which tests the capabilities of the
controller (in this case, the driving unit of the car) with virtual stimuli coming from a
computer integrated in the simulation environment. [11] The Report with
Recommendations to the Commission on Civil Law Rules on Robotics emphasizes the
role of testing in a real environment instead. A real-world test drive can serve to assess a
vehicle’s standard behavior in public road traffic and its compliance with traffic laws.
However, the assessment can be biased by subjective judgement and requires a highly
skilled certification authority. [10] Another method is black-box testing (also known as
behavioral testing), where the internal structure of the software is unknown and the tester
only assesses whether the output is desirable. [12, p. 1326]
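     The black-box approach can be illustrated with a minimal sketch. All the names
below (the stand-in driving function, the scenarios, the sets of acceptable outputs) are
hypothetical illustrations of the technique, not part of any real certification procedure.

```python
# A minimal sketch of black-box (behavioral) testing: the tester treats the
# driving software as an opaque function and only judges whether its output
# is acceptable for a given input scenario, never inspecting its internals.

def driving_software(scenario: dict) -> str:
    """Stand-in for the system under test; to the tester, a black box."""
    if scenario.get("obstacle_ahead"):
        return "brake"
    if scenario.get("traffic_light") == "red":
        return "stop"
    return "proceed"


# Each test case pairs an input scenario with the set of acceptable outputs.
test_cases = [
    ({"obstacle_ahead": True}, {"brake", "swerve"}),
    ({"traffic_light": "red"}, {"stop"}),
    ({"traffic_light": "green"}, {"proceed"}),
]


def run_black_box_tests(system, cases):
    """Return, per case, whether the system's output was acceptable."""
    results = []
    for scenario, acceptable in cases:
        output = system(scenario)
        results.append(output in acceptable)  # judge the output only
    return results


print(run_black_box_tests(driving_software, test_cases))  # [True, True, True]
```

The design choice is that the verdict depends solely on input-output pairs, which
is what distinguishes behavioral testing from methods requiring access to the
software’s internal structure.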


4. Conclusion

The article introduces the issue of creating standards for autonomous vehicles. The key
barriers are: 1) the technology differs so much from its predecessors that it is hard to find
a current technical standard that can serve as a role model, 2) the purpose of a technical
standard is to make a product safe, but we cannot be certain what safety will mean in the
future road traffic system, 3) today we do not test the capabilities of the driver at all, but
once software is behind the wheel, we must set the bar for its safe operation, 4) a decision
must be made regarding what data the software needs to learn from so that it can handle
the required tasks, 5) from a technical perspective it is unclear what method shall be used
for testing the capabilities of an autonomous driving system.
      Moreover, AI encompasses a wide range of technologies, and the solution to
standardization cannot be universal for all of them. Nevertheless, the creation of viable
technical standards is a key element for safe AI.


References


[1] Franklin, Stan; Graesser, Art. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents.
      Intelligent Agents III, Springer Berlin Verlag, Germany, 1997. 21-35.
[2] Maurer, Markus, et al. Autonomous driving. Springer Berlin Heidelberg, Germany, 2016.
[3] Osina, Petr. Pejzl, Jaroslav. Problematika technických norem z pohledu právního řádu. Právní rozhledy.
      2015, 23-24.
[4] Danks, David; London, Alex John. Regulating autonomous systems: Beyond standards. IEEE Intelligent
      Systems, 2017, 32.1: 88-91.
[5] Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) from
      27th January 2017.
[6] K. C. Webb. Products Liability and Autonomous Vehicles: Who's Driving Whom?. Rich. J.L. & Tech.
      2017, 23-9.
[7] Self-driving Safety Report by Nvidia, https://www.nvidia.com/en-us/self-driving-cars/safety-report/ ,
      accessed at 07-12-2018
[8] Martin, James, et al. Certification for autonomous vehicles. Automotive Cyber-physical Systems course
      paper, University of North Carolina, Chapel Hill, NC, USA, 2015
[9] Bose, Ujjayini. The black box solution to autonomous liability. Wash. UL Rev., 2014, 92: 1325.
[10] OICA          preliminary       input       on      automated/autonomous     vehicle      certification,
      https://globalautoregs.com/documents?meeting_id=1147 , accessed at 07-12-2018
[11] What is Hardware-in-Loop Simulation?, https://www.mathworks.com/help/physmod/simscape/ug/what-
      is-hardware-in-the-loop-simulation.html , accessed at 07-12-2018
[12] Why transparency is important in artificial intelligence, https://www.sentient.ai/blog/understanding-
      black-box-artificial-intelligence/ , accessed at 07-12-2018