A Framework for Safety Violation Identification and Assessment in Autonomous Driving

Lukas Heinzmann1, Sina Shafaei1, Mohd Hafeez Osman1, Christoph Segler2 and Alois Knoll1
1 Department of Computer Science, Technical University of Munich, Germany
2 BMW Group Research, New Technologies, Innovations, Germany
{lukas.heinzmann, sina.shafaei, hafeez.osman}@tum.de, christoph.segler@bmwgroup.com, knoll@in.tum.de

Abstract

Safety in self-driving cars is essential and an interdisciplinary matter. Nevertheless, there exists a massive gap between system developers' knowledge about safety concepts and safety engineers' knowledge of autonomous driving. Thus, an approach is needed that closes this gap and integrates new ideas and concepts from the safety-critical domain into self-driving cars. This work presents a framework for mapping safety-critical situations based on safety measures in the CARLA simulator. Through this framework, safety engineers can define basic safety measures such as respecting speed limits, keeping an appropriate distance to the vehicle ahead and keeping to the suitable lane. Developers can quickly integrate their agent(s), and the framework generates a mapping of the safety-critical states by running an agent over several episodes in a simulated environment while maintaining the considerations of both developers and safety engineers. In the simulation environment, our evaluations showed promising and intuitive results on the identification of safety violations of two machine learning agents. Several safety-critical situations could be identified and analysed according to the outcome of the mappings.

1 Introduction

Context. The formal concept of safety is not easy to grasp from a development perspective. In a general overview, however, safety can be seen as a feeling based on the individual's own experience. Typical metrics used for current self-driving car implementations are the accident-free kilometres driven, the count of necessary takeovers by the safety driver and the general well-being of the occupants [General Motors, 2018; Tesla, 2018]. From a safety engineering perspective, however, there are fewer insights into the technical functionality of such a system. Besides the technical complexity and closed-source problems, employing machine learning techniques in state-of-the-art approaches causes even bigger challenges. Machine learning-based approaches are seen as black boxes with input and output streams, while the actual inner logic remains unknown even to most of the developers. This leads to new challenges regarding the safety assessment of these systems.

Problem Statement. Establishing a safety framework for evaluating the developed applications of self-driving cars from a safety perspective is a challenging task due to the varying regulations of different countries, the complex and often unpredictable outcomes of the approaches, and the lack of proper standards. Machine learning-based approaches have several sources of uncertainty, and Reinforcement Learning (RL) is the blackest black box in this area, considering that the developer can only provide the "right" and "wrong" actions for the agent at the initialisation phase. In this context, the argument that the agent always learns safe actions is questionable and can often not be generalised, because encoding the whole knowledge into a single numerical function is highly error-prone. A good example is the problem called reward hacking, in which the RL algorithm collects high reward without reaching the actual goal by exploiting a bug in the reward function [Amodei et al., 2016]. From the automotive functional safety point of view [ISO 26262, 2011], the V-shaped development model is well accepted in product development. The V-shaped model relies on solid requirements as the main input of the product's safety validation. However, gathering a complete set of requirements for a machine learning-based application is difficult due to the uncertainty of these models. In autonomous driving, the responsibility for driving tasks shifts from the human driver to the car itself, and behavioural safety is a fundamental part of development. Here, an evaluation is all the more important to avoid incorrect behaviours that may lead to severe accidents.

Goal. This work aims to support the integration of safety concerns into the development phase of machine learning-based applications in autonomous cars. We provide a framework for an easy setup of safety measures and self-driving car agents, with an exclusive focus on RL-based scenarios in the CARLA simulator [Dosovitskiy et al., 2017]. To validate our approach, we mainly focus on reinforcement learning and, over several runs, gather safety-related information about the agent. The observed safety violations are mapped and visualised in the end and can be used by developers and safety engineers to analyse the performance of the agent regarding safety.

Outline. The remainder of this paper is structured as follows: Section 2 summarises related work, followed by the primary approach of this work in Section 3. Section 4 presents the evaluation with the conducted experiments and gained results, followed by their discussion in Section 5. Finally, we conclude the paper in Section 6.

2 Related Work

Coming up with a formal specification of safe behaviour is not a feasible task for humans, because humans learn most rules and behaviour through practical exercise, a.k.a. "learning by doing", instead of remembering a specification of safe behaviour. NHTSA [Thorn et al., 2018] has developed a set of "Behaviour Competencies" in which they list 28 competencies regarding correct behaviour on the roads. Some instances are Perform Low-Speed Merge, Perform Car Following (Including Stop and Go) or Navigate Roundabouts. Waymo extended this set by 18 additional competencies [Waymo, 2018]; for example, Detect and Respond to Animals, Detect and Respond to Unanticipated Weather or Lighting Conditions Outside of Vehicle's Capability (e.g. rainstorm) or Make Appropriate Reversing Manoeuvres are among the newly added competencies. These sets give an excellent overview of the competencies of an autonomous car but still lack a concrete definition of appropriate or critical behaviour. Further, these competencies result in a wide range of specific scenarios with variations of parameters like speed, road or weather conditions. Considering those, the number of testable situations is enormous. An autonomous car is normally evaluated for those scenarios either in a simulation environment or on closed courses and real roads. Besides Waymo, PEGASUS [PEGASUS, 2019] and AdaptIVe [AdaptIVe, 2019] are also among the projects that address the problem of testing autonomous cars with regard to safety, but there has been no evaluation measure or rating for the safety of agents that goes further than "x kilometres without collision" or "x takeovers of the safety driver". In recent years, several benchmarks and evaluation challenges have been proposed for ensuring the safety of autonomous cars. An outstanding example in this area, related to the core idea of our work, is the CoRL Driving Benchmark of the CARLA simulator [Codevilla, 2018], which was followed by the CARLA Autonomous Driving Challenge [CARLA, 2019] and The Grand Challenge for Autonomous Vehicles (real-world closed track) of DARPA [DARPA, 2019]. The CARLA challenge integrated several scenarios based on the NHTSA behavioural competencies into a typical driving task. Nevertheless, the main goal of these challenges is mostly comparing the overall performance of autonomous cars rather than addressing safety concerns.

As discussed before, identifying safety-critical situations is a crucial matter, because avoiding such situations would lead to a considerable improvement from a safety point of view; however, this remains a challenging task. In this work, we differentiate between the ideas of statistical and runtime approaches. Statistical approaches use existing data, such as reported accidents, and visualise them accordingly; while they are currently only relevant for safety from the perspective of planning and defusing dangerous road segments, they could still play a major role for automated vehicles. Traffic accident maps like Unfallatlas (Germany) [Statistische Ämter des Bundes und der Länder, 2019] or CrashMap (Great Britain) [Agilysis, 2019] can be seen as the most famous use cases of such approaches. These maps display the accidents based on their location and further information such as severity, affected means of transport and the date of the incident. Unfallatlas also represents the accident frequency for a given stretch of road. Runtime approaches evaluate the safety during driving, since some situations or locations are "labelled" as safer in comparison to others. Time to Collision (TTC) or Time to Brake (TTB) are also among the metrics employed by researchers to define the safety level of situations [Eggert, 2014; González et al., 2018; Hallerbach et al., 2018; Mario Morando et al., 2018]. One example of a runtime approach is the Responsibility-Sensitive Safety (RSS) proposed by Mobileye [Shalev-Shwartz et al., 2017]. This approach is based on safe distances to define dangerous situations, for which proper responses are defined. A similar approach is proposed by NVIDIA with the Safety Force Field (SFF) [NVIDIA, 2019], which predicts the environment and mitigates harmful scenarios. Other approaches observe autonomous driving safety by reading sensors or buses and evaluate it based on predefined rules [Kane et al., 2015].

3 The Framework

In this section we propose a framework for evaluating the safety of an agent and detecting safety-critical situations in a defined environment. This framework can be used to visualise and expose the safety risk of the unknown situations that may be observed by a reinforcement learning agent in a suitable number of iterations defined by the application developer.

3.1 Concept and Architecture

The proposed framework employs the concept of Safety Measures, which are activities, precautions or behavioural codes that are taken to avoid unnecessary risks and maintain safety. Moreover, it enables safety measures based on predefined rules, proven practices, and accepted guidelines in a real-world simulation environment. The concept of safety measures is not new and is already well established in the domain of behavioural safety, with prime examples like traffic rules or rules for defensive driving. Being quantifiable is the most important advantage of safety measures. For instance, it is possible to determine whether drivers are violating the speed limit or are tailgating. In our proposed framework, safety measures are based on integrating expert knowledge on top of simulated situations that statistically may carry higher risks of injuries. A severity level is assigned to each safety measure to quantify the negative impact on safety. The respective measures are seen as Safety Constraints in our development, and violating a constraint triggers a Safety Violation. The framework is separated into three stages: Initiation, Execution and Analysis. The architecture is represented in Figure 1.

Figure 1: Architecture of the framework — stages Initiation (Configuration), Execution (Core) and Analysis (Mapping, Visualisation), with the Agent, Safety Constraints, Simulator, Text and State-Map components exchanging measurements and commands. Roles: Developer, Safety Engineer

The Initiation phase consists of two different sections, one for application developers and the other for safety engineers. The Agent interface provides a platform for application developers to integrate the developed approach as an RL-based agent; the Safety Constraint interface respectively holds a set of safety restrictions. In the Execution phase, the agent has to drive in the predefined environment and is evaluated against the given safety constraints. This phase is completed after a stop criterion is matched. The safety constraints are evaluated against the current situation and trigger a safety violation that contains relevant information about the current situation, among others the involved agents, the type and the location. In the end, the framework persists the given events. In the last stage, Analysis, the safety violations are filtered, mapped and visualised. The location and the type are the primary parameters for the grouping but could vary in future implementations. The framework calculates different safety indicators for each group to make the groups comparable. The generated groups are visualised in a more intuitive way concerning the calculated safety indicators and give the developers and safety engineers the possibility to better understand the system. To achieve this, we use two types of safety measures that are implemented in the proposed framework.

Collision Avoidance

A major safety measure is directly derived from the definition of safety. If the current situation causes injury to any object (e.g. humans, cars, other objects in the environment or even immaterial goods), safety is violated. In the context of cars, any injury is usually related to a collision. A collision occurs if a vehicle collides with another vehicle, pedestrians or other objects in the environment such as trees or animals. There are different types of collision, such as a single-vehicle collision, where a vehicle collides with an object of the environment without the influence of another road user, or a longitudinal collision, where the vehicle collides with another vehicle driving in the same or the opposite direction. The severity of a collision depends on the collision type and parameters such as speed, crashworthiness or the involved road users. Therefore, the highest safety goal is to prevent collisions of any kind and to favour light damage to cars over heavy damage and casualties.

Since collisions are a violation of safety, avoiding collisions is indispensable for detecting safety-critical situations. Intuitive examples of collision avoidance safety measures are appropriate distances to the vehicle ahead and the position in the assigned lane. It is essential from a safety perspective to keep an appropriate distance to the leading vehicle, since the time for reactions and possible evasive manoeuvres is limited if the distance is too short. Computers are much faster in reaction; nevertheless, these systems rely on measurements from the environment (e.g. radar sensors), which introduce latencies between measuring, detecting, and acting. In this case, maintaining an appropriate distance reduces the risk of a collision in most situations. Defining appropriate in this context is not as straightforward as it seems at first. Also, legislators do not specify this exactly for human drivers. Most countries specify formal or informal rules of thumb; popular is the 2-second rule or, in countries with the metric system, the half-speedometer rule. The 2-second rule enforces a cushion of at least the distance the car drives in two seconds (for 100 km/h → 55.5 m). In the framework as well, the distance constraint is variable based on an x-second rule; the safety engineer can specify the exact number of seconds.

The position orthogonal to the movement of the car is an important safety consideration. The primary focus is on staying in the correct lane. However, there are several cases where it is necessary or accepted to violate this rule. Examples are an overtaking manoeuvre on a 2-lane road or the bypass over the side-walk if an accident or obstacle blocks the road. If the vehicle leaves the lane, either to the side-walk or to the other lane, this is declared as a major safety violation.

Safe Driving Behaviour

Traffic rules and guidelines for defensive driving are by far the biggest group of safety measures, and this is not a field that is only related to self-driving cars. The prevention of collisions has been the primary goal of road users and countries for decades, and many rules are designed to reduce collisions and maintain safety. Prominent examples are right-of-way regulations together with speed limits. Ignoring or misinterpreting right-of-way rules can cause hazardous or catastrophic accidents. Therefore, we enforce that agents respect these rules and follow them.

3.2 Mapping

In the mapping phase, we group the violations by type, severity and location. Clustering by type and severity is trivial, but for the location it is necessary to use a grid. The map is respectively divided into tiles of a predefined size. Each violation is added to a specific tile and grouped with the other violations of this tile.
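The tile-based grouping described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation: the violation records as plain dicts, the field names, and the tile size of 10 m are all assumptions for the sake of the example.

```python
from collections import defaultdict

TILE_SIZE = 10.0  # assumed tile edge length in metres


def tile_of(x, y, size=TILE_SIZE):
    """Return the integer grid indices of the tile containing (x, y)."""
    return (int(x // size), int(y // size))


def group_violations(violations, size=TILE_SIZE):
    """Group violation records by violation type and tile index."""
    groups = defaultdict(list)
    for v in violations:
        key = (v["type"], tile_of(v["x"], v["y"], size))
        groups[key].append(v)
    return groups


# Hypothetical violations logged during a run: two lane violations in
# the same tile and one collision at the same location.
violations = [
    {"type": "lane", "x": 3.0, "y": 4.0},
    {"type": "lane", "x": 7.5, "y": 9.9},
    {"type": "collision", "x": 3.0, "y": 4.0},
]
groups = group_violations(violations)
# Both lane violations land in tile (0, 0); the collision forms its own group.
```

Grouping on integer tile indices keeps the clustering trivial: per-tile counts can then be weighted by severity and normalised to obtain the safety indicators used in the analysis stage.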
The degree of safety is measured by the quantity of safety violations in the situation. The quantity index (score) defines how relevant the tile is. Equation (3) represents the calculation of the score function. A low index (score_s < 0) indicates that the violation occurred only a few times compared to the average and is rather unspectacular. On the other hand, a high index (score_s > 0) indicates an interesting situation. A score of 0 indicates an average situation regarding safety violations, yet does not imply any irrelevancy. The severities have weights to value more critical ones higher. Equation (1) depicts the weight for each severity:

    m(violation) = 1,  if severity is Negligible (S0)
                   2,  if severity is Minor (S0)
                   4,  if severity is Major (S1)
                   8,  if severity is Hazardous (S2)
                   16, if severity is Catastrophic (S3)                    (1)

Note: S0, S1, S2 and S3 indicate the severity class defined in ISO 26262 [ISO 26262, 2011].

    x_s = Σ_{v ∈ V_s} m(v)                                                 (2)

where V_s is the set of all violations at location s.

    score_s = (x_s − µ) / σ,  if σ ≠ 0;  score_s = 0, otherwise            (3)

where µ is the mean and σ the standard deviation of all x_s.

3.3 Visualisation

An indispensable part of this framework is the visualisation of the given mapping. Scores and counts are calculated in tiles with the given grid size. The visualisation helps the developer and safety engineer to identify and understand the problems of the agent in an intuitive way. We propose three different types of visualisation methods: a simple text output, a 2D map with highlights of the safety violations, and an overlay in which violations can be displayed directly in the simulation environment (cf. Figure 2).

4 Evaluation and Results

Evaluation Setup. To evaluate our approach, we compare two agents within the simulation environment CARLA. As agents we use a Reinforcement Learning (RL) agent [Dosovitskiy et al., 2017] and an Imitation Learning (IL) agent [Codevilla et al., 2018]. The RL agent was trained as a proof of concept in the context of the first CARLA draft. It is based on the asynchronous advantage actor-critic (A3C) algorithm and is trained for goal-directed navigation in CARLA. The reward is based on speed, distance to the goal, collision and position in the assigned lane. From our point of view, the agent drives acceptably for an evaluation of safety. Nevertheless, the agent faces a considerable number of issues, especially in the task of navigating, and it has only limited awareness regarding other road users. The second agent is trained using Conditional Imitation Learning (CIL) and is an improved version of an imitation agent presented in the first CARLA draft. Imitation Learning uses the knowledge of an expert and imitates the behaviour of that expert, a human driver in this case. This agent is much better at navigating, driving and awareness regarding other road users. Nevertheless, this agent also has several limitations, e.g. in preserving right-of-way rules.

Test Environment. The agents are initially set to drive a distance of 100 km in the simulated environment, with a number of iterations according to the preference of the safety engineer. This test environment uses a distance stop criterion over a time or episodic criterion, because the navigating capabilities of the agent strongly influence the episodes. We do not specify a time criterion to avoid punishing agents driving at higher speed. The episodes have a fixed number of critical situations (like intersections), and driving more slowly through them would decrease the number of critical situations in total. On the selected map, the route is set to be straight from the origin to the destination, therefore no advanced navigation capabilities are required. Nevertheless, the routes still contain critical situations such as intersections, pedestrians or slower driving vehicles. Situations with traffic are considered as well as traffic-free scenarios for the testing. The test environment with traffic includes 100 other cars and 40 pedestrians. With this configuration, the scenarios are crowded with cars and pedestrians but without stop-and-go traffic or traffic jams. We apply the safety constraints Distance, Lane, and Collision for evaluating the agents regarding safety and testing the framework. In the traffic-free scenarios, the distance constraint is not relevant since no other cars are involved. The value for an appropriate distance is set to two seconds, as a common practice.

Results. Figure 2a represents the violations of the RL agent and Figure 2b depicts the results of the IL agent in the traffic-free scenario. For the RL agent, only 19 out of 71 violations (∼26%) did not occur in this area, and the IL agent did not collide outside of this region. This is an indication of a problem for the agents here. The amount and distribution of lane violations of the RL agent imply a broader issue regarding lane keeping and collision avoidance. We assume there exists a relationship between the collisions and the lane violations, but there are plenty of lane violations observed without any related collision. We assume that no collisions are detected there because no other traffic is specified in these scenarios in which the car might face a collision possibility: driving in the wrong lane or on the side-walk causes no collisions if there are no objects to collide with. According to the results, it is obvious that the IL agent performs a safer drive in comparison to the RL agent, with better performance in lane keeping. There are no lane violations or collisions recorded outside the mentioned hot spot. In contrast to the mapped states of Figure 2, Figure 3 shows the safety violations of the IL agent separated by violation type in the scenario with traffic. Again, the IL agent performs much safer driving compared to the RL agent. The lane violations are similar to the traffic-free scenario: most violations occurred in the same area, but with a higher variance. There is a massive increase in collisions, and in this scenario many violations were recorded all over the map. The safety-critical areas are as before, but several new hot spots are added to the consideration afterwards.

Figure 2: State-Map of the evaluation scenario without traffic — (a) State-Map of the RL agent, (b) State-Map of the IL agent

Figure 3: State-Map of the IL agent with traffic — (a) Collision, (b) Lane violations, (c) Distance violations

Table 1: Total violations caused by the RL and IL agent

          Traffic       Time     Episodes  Collision  Distance  Lane
    RL    w/o traffic   243 min  586       71         –         10109
          w/ traffic    286 min  1201      626        17418     17332
    IL    w/o traffic   327 min  641       29         –         5758
          w/ traffic    437 min  836       154        14116     4951

5 Discussion

Interpretation of Results. The results reflect our previous intuition regarding the safety of the agents. The IL agent drives much safer than the RL agent, but both still have many limitations. The IL agent caused fewer violations in every category (cf. Table 1). This table only represents a high-level overview of the safety violations but can be extended with the type of a collision, the traffic situation, time and road conditions as well. Further, a relation is identified between a collision and distance/lane violations, which shows the direct connection to safety. It is worth mentioning that this framework is used to expose safety violations rather than to resolve the underlying uncertainty; however, the results can be used to increase the confidence in the developed intelligent features w.r.t. safety factors.

The framework was able to identify several safety-critical situations. Interesting to mention are the ones that were identified as safety-critical for both agents. There are no obvious causes of the turbulence in this area, but it seems to be a general problem. Furthermore, the framework demonstrates that the IL agent drives comparably much safer, which reflects the findings of Dosovitskiy et al. [Dosovitskiy et al., 2017]. We were able to illuminate the relation between collisions (the main symptom of insufficient safety) and the safety measures of lane and distance: in the vicinity of collisions, we also observed a hot spot of lane/distance violations. This demonstrates the impact of those two measures on safety. Additionally, the framework detected several hot spots of lane/distance violations without collisions in the predefined environment scenario. This indicates that either the safety measures are too strict or there are not sufficient episodes to provoke any collision.

As mentioned in Section 4, we evaluated our approach using three main safety constraints, namely Collision, Lane and Distance Violation. These safety constraints can be seen as high-level safety requirements. To this end, the approach does not provide an automated way to transform the high-level requirements into more detailed ones (e.g. safe distance violation (high level) to the 2-second rule). However, since the simulator is able to provide more information on the test environment as well as the features of the car (e.g. sensor data), we believe that detailed requirements could be achieved accordingly (top-down approach). As stated in ISO 26262:2011-3, the safety requirements should be evaluated to determine their effectiveness; therefore we suggest using our proposed framework as a prototype for this purpose.

This work is in line with the methodology of Salay and Czarnecki [Salay and Czarnecki, 2018] on considerations for developing safety-critical software and is also a suitable application for supporting the iterative Hazard Analysis and Requirement Refinement [Warg et al., 2016] in order to determine the hazardous conditions of autonomous driving applications. With the help of this framework, a system prototype can be used in the simulated environment, and a safety engineer can respectively analyse the safety level of the driving application in conjunction with road and environment conditions.

Threats to Validity. In terms of internal threats to validity, the evaluation of the approach may not be generalisable to diverse driving environments due to the limitation of the map and algorithms provided by the CARLA simulator. Nevertheless, we have minimised the risk of this threat by evaluating different driving environments within the provided map. In terms of external threats to validity, we attempt to introduce a generalised safety violation identification and assessment framework that can be used for multiple types of autonomous driving scenarios in a simulated environment. However, we demonstrate our development only in the CARLA simulator. Porting this framework to other autonomous driving simulators remains future work.

6 Conclusion

In this work, we presented a framework for evaluating the safety of agents. Safety engineers and Automated Driving Systems (ADS) developers can use this framework to develop, improve and evaluate an ADS. Initially, we highlighted the problem of quantifying safety and presented the concept of Safety Measures as a solution to this problem. We applied different types of safety measures and showed their relevance to safety; the most notable ones are Collision Avoidance and Safe Driving Behaviour. We evaluated the proposed framework by checking two learning agents over several episodes in the CARLA simulator based on the defined safety constraints. We could demonstrate promising results regarding the detection of safety violations, the identification of the relationships between them, and, respectively, the recognition of safety-critical situations. Furthermore, this framework allows an easy setup of self-driving car approaches by employing safety measures. Developers can simply set up a self-driving car agent, and safety engineers can build a framework of safety measures on top of it. Both groups can evaluate and improve their ideas and will be able to build better and safer approaches for the applications of autonomous driving.

For future work, we would like to (i) extend the framework to include other types of safety violations to be identified, (ii) improve the visualisation of safety violations by presenting more relevant information, and (iii) explore the possibility of this approach to support the determination of the Automotive Safety Integrity Level (ASIL) for autonomous driving functions. Furthermore, we would also like to evaluate the effectiveness and suitability of this approach in identifying and assessing safety violations of driving functions from a system developer and safety engineer perspective.

References

[AdaptIVe, 2019] AdaptIVe. AdaptIVe FP7 project — automated driving applications and technologies for intelligent vehicles. http://www.adaptive-ip.eu/, 2019. (Accessed on 03/26/2019).

[Agilysis, 2019] Agilysis. CrashMap — UK road safety map. https://www.crashmap.co.uk/, 2019. (Accessed on 03/26/2019).

[Amodei et al., 2016] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Francis Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. CoRR, abs/1606.06565, 2016.

[CARLA, 2019] CARLA. CARLA AD Challenge. https://carlachallenge.org/, 2019. (Accessed on 03/26/2019).

[Codevilla et al., 2018] Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In International Conference on Robotics and Automation (ICRA), 2018.

[Codevilla, 2018] Felipe Codevilla. CARLA 0.8.2: Driving benchmark. http://carla.org/2018/04/23/release-0.8.2/, 2018. (Accessed on 05/08/2019).

[DARPA, 2019] DARPA. The Grand Challenge for Autonomous Vehicles. https://www.darpa.mil/about-us/timeline/-grand-challenge-for-autonomous-vehicles, 2019. (Accessed on 03/01/2019).

[Dosovitskiy et al., 2017] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pages 1–16, 2017.

[Eggert, 2014] J. Eggert. Predictive risk estimation for intelligent ADAS functions. In 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), pages 711–718, 10 2014.

[General Motors, 2018] General Motors. 2018 self-driving car report, 2018.

[González et al., 2018] Leonardo González, Enrique Martí, Isidro Calvo, Alejandra Ruiz, and Joshue Pérez. Towards risk estimation in automated vehicles using fuzzy logic. In Barbara Gallina, Amund Skavhaug, Erwin Schoitsch, and Friedemann Bitsch, editors, Computer Safety, Reliability, and Security, pages 278–289. Springer International Publishing, 2018.

[Hallerbach et al., 2018] Sven Hallerbach, Yiqun Xia, Ulrich Eberle, and Frank Köster. Simulation-based identification of critical scenarios for cooperative and automated vehicles. 04 2018.

[ISO 26262, 2011] ISO 26262. ISO 26262:2011 Road vehicles – Functional safety, 2011.

[Kane et al., 2015] Aaron Kane, Omar Chowdhury, Anupam Datta, and Philip Koopman. A case study on runtime monitoring of an autonomous research vehicle (ARV) system. In Ezio Bartocci and Rupak Majumdar, editors, Runtime Verification, pages 102–117, Cham, 2015. Springer International Publishing.

[Mario Morando et al., 2018] Mark Mario Morando, Qingyun Tian, Long Truong, and Hai L. Vu. Studying the safety impact of autonomous vehicles using simulation-based surrogate safety measures. Journal of Advanced Transportation, 2018, 02 2018.

[NVIDIA, 2019] NVIDIA. Safety Force Field. https://www.nvidia.com/en-us/self-driving-cars/safety-force-field/, 2019. (Accessed on 06/19/2019).

[PEGASUS, 2019] PEGASUS. Home — PEGASUS-EN. https://www.pegasusprojekt.de/en/, 2019. (Accessed on 03/26/2019).

[Salay and Czarnecki, 2018] Rick Salay and Krzysztof Czarnecki. Using machine learning safely in automotive software: An assessment and adaption of software process requirements in ISO 26262. arXiv preprint arXiv:1808.01614, 2018.

[Shalev-Shwartz et al., 2017] Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. On a formal model of safe and scalable self-driving cars. CoRR, abs/1708.06374, 2017.

[Statistische Ämter des Bundes und der Länder, 2019] Statistische Ämter des Bundes und der Länder. Unfallatlas — Kartenanwendung. https://unfallatlas.statistikportal.de/, 2019. (Accessed on 03/26/2019).

[Tesla, 2018] Tesla. Tesla Vehicle Safety Report. https://www.tesla.com/VehicleSafetyReport?redirect=no, 2018. (Accessed on 03/11/2019).

[Thorn et al., 2018] Eric Thorn, Shawn Kimmel, and Michelle Chaka. A framework for automated driving system testable cases and scenarios, 09 2018.

[Warg et al., 2016] Fredrik Warg, Martin Gassilewski, Jörgen Tryggvesson, Viacheslav Izosimov, Anders Werneman, and Rolf Johansson. Defining autonomous functions using iterative hazard analysis and requirements refinement. In International Conference on Computer Safety, Reliability, and Security, pages 286–297. Springer, 2016.

[Waymo, 2018] Waymo. Waymo safety report, 2018.