Runtime Decision Making Under Uncertainty in Autonomous Vehicles

Vibhu Gautam, Youcef Gheraibia, Rob Alexander, and Richard Hawkins
Department of Computer Science, University of York, UK
{vibhu.gautam,youcef.gheraibia,rob.alexander,richard.hawkins}@york.ac.uk

Abstract

Autonomous vehicles (AVs) have the potential not only to increase safety, comfort and fuel efficiency, but also to utilise road bandwidth more efficiently. This, however, requires AV control software capable of coping with multiple sources of uncertainty, whether pre-existing or introduced as a result of processing. Such uncertainty can come from many sources, local or distant: for example, uncertainty about the actual observations of the AV's sensors, or uncertainty in the environment scenario communicated by peer vehicles. For an AV to function safely, this uncertainty needs to be taken into account during the decision making process. In this paper, we provide a generalised method for making safe decisions by estimating and integrating the Model and the Data Uncertainty.

Copyright © 2021, for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 Introduction

In an AV's software pipeline, the uncertainty arising from various sources is critical for safe decision making. Due to recent advances in Machine Learning (ML) techniques, especially Neural Networks (NN), the software pipeline of an AV is heavily dependent on data about its environment, and this data comes from sensors such as LIDAR, RADAR, GPS and cameras. As these sources are prone to measurement fluctuations, there is always some uncertainty or noise in the data they provide: for example, uncertainty due to variation in sensor resolution, internal sensor noise, or measurement fluctuations caused by changes in the weather such as rain or dust. This gives rise to uncertainty about how the sensor data corresponds to the ground truth. Although recent advances in sensor technology have greatly reduced such inaccuracies, they remain a significant concern (Schwarting, Alonso-Mora, and Rus 2018) (McAllister et al. 2017).

In the software pipeline of a typical AV, the perception task is heavily dependent on data and on advanced ML model techniques, both of which are prone to uncertainty. This uncertainty can lead to incorrect predictions and can therefore jeopardise the safety of the AV. Hence, for the safety of an AV, it is imperative that we incorporate these uncertainties into the decision making process (Macfarlane and Stroila 2016).

One recent ML technique, the Convolutional Neural Network (CNN), has been widely adopted across both industry and research, primarily because of its near human-level accuracy on various image recognition challenges and its robustness to large variations in the input data. The perception task of an AV also utilises CNN techniques for various classification and object detection tasks (Stallkamp et al. 2012).

For a safety-critical application like an AV, it is imperative that in perception tasks such CNN models not only have high accuracy but are also able to estimate and utilise the Data and Model Uncertainty for decision making. Recent advances in the area of Probabilistic Convolutional Neural Networks (PCNN) have provided a way to estimate the Data and Model Uncertainty for object classification (McAllister et al. 2017).

Data Uncertainty arises from sensor noise or measurement fluctuations caused by changes in weather conditions such as rain or dust, whereas Model Uncertainty arises because ML models learn from data and are not explicitly programmed to perform certain tasks (Kendall and Gal 2017). Like any other ML technique, CNNs are inherently uncertain because the model they have learned is always an imperfect representation of the complex world (Gauerhof, Munk, and Burton 2018).

Bayesian Networks (BN) are an effective technique for decision making under uncertainty and are utilised heavily for such tasks across domains (Koller and Friedman 2009). However, it has not yet been shown how to use BN to estimate and utilise uncertainties arising specifically from tasks like classification or object detection.

In this paper, we present a method that addresses the challenge of managing the uncertainty from a PCNN by using a BN for decision making. Our method links the outputs from a PCNN to a predefined BN. At runtime, the output from the PCNN is used as evidence for nodes of the BN. This allows us to estimate the probability of being in a certain state while taking into account uncertainties arising at runtime. These state probabilities can then be used to ensure that safe decisions are taken.

2 Background and related works

The challenge of decision making in an AV, which is safety critical in nature, is that it requires robust guarantees to assure safety, security, assurance and other dependability characteristics (Burton, Gauerhof, and Heinzemann 2017) (Gauerhof, Munk, and Burton 2018). Some recent work that bridges various decision making techniques with the safety of an AV has shown promising results. For example, Papadoulis et al. (Papadoulis, Quddus, and Imprialou 2019) proposed a runtime decision making control algorithm for AVs. The algorithm supported both lateral and longitudinal decision making and was shown to improve road safety by reducing road conflicts. For safer decision making in an AV, Furda et al. (Furda and Vlacic 2011) used Petri nets for choosing a safe manoeuvre and a Multi Criteria Decision Making (MCDM) model for improving comfort and efficiency under multiple criteria. Katrakazas et al. (Katrakazas, Quddus, and Chen 2019) proposed the usage of Dynamic Bayesian Networks (DBN) to enhance risk assessment for AVs. In order to increase the safety of automated driving, DBN were used to estimate the risk of collision by providing comprehensive reasoning for unsafe driving behaviour.

Though these techniques yield good results, none of these solutions addresses how to estimate uncertainties arising from perception tasks, or how to take these uncertainties into account during the decision making process. Work by (Kabir et al. 2019) tries to utilise uncertainties at runtime in an AV by proposing a conceptual framework for runtime safety analysis using BN and State Machines (SM) in a Platooning Scenario. The BN proposed in this architecture are used to address issues of uncertainty in data and to produce runtime probabilistic confidence of being in a certain state. However, the authors do not discuss the methods used for complex tasks like object detection. For example, in their framework, for detecting the speed limit from road signs, they depend on external sources such as roadside infrastructure. It is therefore not clear how various uncertainties can be captured. In our work, we extend this framework to show how these uncertainties can be estimated at runtime and integrated into a BN for safe decision making.

3 Proposed Method

Using the work of (Kabir et al. 2019) as a reference for our proposed method, at design time we model the failure behaviour of the system as a SM. The states of the SM are based on a detailed study of both the environment in which the AV system needs to function and the possible hazards and failures that the AV may encounter. SM have been used extensively to model the failures and faults of a complex system as a chain of simpler states.

Like (Kabir et al. 2019), we use an executable BN, which can be used at runtime to produce the probability of being in a certain state. BN provide a very powerful way to infer the relationships between a large number of random variables, represented in the form of a Directed Acyclic Graph (DAG). BN also allow us to factor large joint probability distributions by capturing the independence among the random variables.

In the framework of (Kabir et al. 2019), any safety failure in the system is defined using a SM, and an executable BN is then used to generate the probability of being in a certain state. We extend this framework by proposing a method for estimating both the Data and the Model Uncertainty from the classification task and utilising them for decision making using the BN. We use a PCNN to provide estimates of the Data and the Model Uncertainty along with the Label Prediction for the classification task.

A PCNN produces a probabilistic understanding of Deep Learning models by inferring the distribution over the NN parameters, i.e., the Weights and Biases. This distribution over the NN parameters allows us to estimate the Model and the Data Uncertainty. These estimates of Model and Data Uncertainty are added to give a single value of Total Uncertainty, which is then normalised using logistic regression to give a probability of correct classification (Gal and Ghahramani 2015). This probability becomes the runtime evidence for the nodes of the BN. In the next section, we discuss in detail how a PCNN is used to estimate the Model and the Data Uncertainty.

3.1 Estimating Model Uncertainty

Model Uncertainty captures our ignorance about which model parameters best fit the underlying data. In the case of NN, where the model training (learning) process is stochastic in nature, different values of the model parameters can lead to similar prediction accuracy. Using a PCNN, we can therefore estimate our ignorance about which model parameters generated our underlying data (Kendall and Gal 2017). Owing to their large parameter space, estimating Model Uncertainty is a non-trivial task, especially in the case of NN (Hinton and van Camp 1993).
In addition, as discussed in the previous section, similar to other ML techniques, any NN-based technique is inherently uncertain. Hence, for safety-critical applications, we need methods to estimate this uncertainty and use it for safe decision making.

In a PCNN, exact inference of the posterior distribution over a large parameter space, such as a Kernel in a PCNN, is intractable. The approximate methods that exist are Sampling Methods, Variational Inference Methods and Ensemble Methods (Graves 2011) (Osband et al. 2016). Sampling Methods and Ensemble Methods both suffer from very high latency in real-time usage, for example when used in an AV. A recent work (Gal and Ghahramani 2015) proposed Random Neuron Dropout at runtime as a method of Approximate Variational Inference. This method only requires dropout in the Forward Passes at runtime. The averaged stochastic Forward Passes are then interpreted as Bernoulli Approximate Variational Inference. Additionally, to handle any latency issues, the PCNN can be deployed at runtime in a distributed manner.
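The runtime dropout scheme described above can be sketched as follows. This is a minimal NumPy illustration using a made-up two-layer network (the paper's actual model is the Keras/AstroNN CNN of Table 2): dropout is kept active at inference time, N stochastic Forward Passes are averaged into a predictive mean, and the Shannon Entropy of that mean gives the Model Uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up two-layer network standing in for the CNN of Table 2.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_forward(x, p_drop=0.2):
    """One Forward Pass with Bernoulli dropout left ON at runtime."""
    h = np.maximum(x @ W1 + b1, 0.0)
    keep = rng.random(h.shape) >= p_drop          # Bernoulli dropout mask
    return softmax((h * keep / (1.0 - p_drop)) @ W2 + b2)

def mc_dropout_predict(x, n_passes=100):
    """Mean of N stochastic passes = mean of the predictive posterior;
    its Shannon Entropy is the Model Uncertainty estimate."""
    probs = np.stack([stochastic_forward(x) for _ in range(n_passes)])
    mean = probs.mean(axis=0)
    entropy = float(-(mean * np.log(mean)).sum())
    return mean, entropy

mean, model_uncertainty = mc_dropout_predict(np.ones(4))
```

When the stochastic passes agree, the averaged SoftMax is peaked and the entropy is low; disagreement between passes spreads the mean and raises the entropy.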
In a given dataset, the input feature space is defined by X = [x1, .., xn] and the output to be predicted is defined as Y = [y1, .., yn]. The use of dropout at runtime allows us to sample from the distribution over the Weights and Biases, which can then be used to calculate the Mean of the Predictive Posterior Distribution (y*) for any new data point (x*) by taking the Mean of the SoftMax output scores over N Forward Passes. Finally, the Model Uncertainty can be captured in the form of the Shannon Entropy (SE) (Feng, Rosenbaum, and Dietmayer 2018).

3.2 Estimating Data Uncertainty

Data Uncertainty captures the noise which is inherently present in the sensor data. A PCNN helps us to quantify the noise in the data, as it can be trained to learn this noise in an unsupervised manner. This uncertainty in the data, learned by modifying the loss function of the PCNN, tells us the noise inherently present in the data (Leung and Bovy 2019) (Kendall and Gal 2017). For classification tasks, in the output layer, in addition to the neurons corresponding to the number of classes, an extra neuron is added, and the loss function is modified to account for this additional neuron. This allows us to train the extra neuron in an unsupervised manner to learn the uncertainty in the data.

Unlike for Model Uncertainty, we do not need to run multiple Forward Passes to capture Data Uncertainty. Also, unlike Model Uncertainty, Data Uncertainty cannot be reduced by using additional data.
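The role of the extra neuron can be illustrated with a toy loss. The sketch below is a simplified form of the learned-loss-attenuation idea of (Kendall and Gal 2017), not the exact AstroNN loss: the extra output s acts as a log-noise estimate that down-weights the classification error while being penalised for claiming too much noise, so it learns the data noise without any label for it.

```python
import math

def attenuated_loss(class_probs, true_class, s):
    """Cross-entropy attenuated by a predicted log-noise term s.
    exp(-s) down-weights the error; +s/2 penalises large claimed noise.
    Illustrative only -- the paper's 'varianceoutput' loss differs in detail."""
    nll = -math.log(class_probs[true_class])   # standard cross-entropy term
    return math.exp(-s) * nll + 0.5 * s

# For a badly predicted sample, claiming more noise (larger s)
# lowers the loss, so noisy inputs push s up during training:
bad_prediction = [0.2, 0.8]                     # true class is index 0
assert attenuated_loss(bad_prediction, 0, s=1.0) < attenuated_loss(bad_prediction, 0, s=0.0)
```

Because the penalty term grows with s, the network cannot simply declare every input noisy; s settles at a level that reflects the noise actually present in the data.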
3.3 Decision Making using BN

An executable BN can be created to produce the system's probability of being in a certain SM state. The BN model and the PCNN used, as shown in Figure 2, contain both the quantitative and the probabilistic safety parameters for inferring the system's state at runtime. The BN nodes "Speed", "Speed Limit", "Distance from Follower" and "Safe Distance" are all quantitative parameters. These quantitative parameters are used for checking the safety conditions related to Speed and Distance, as specified in the SM. The "Leader detected by Follower", "Follower detected by Leader" and "Valid Speed Limit" nodes are all probabilistic parameters, used for checking the validity of the input data.

4 Experiments

In this section, we describe the implementation of our proposed method using the conceptual platooning case study of (Kabir et al. 2019). We extend the case study by using a PCNN to capture the Data and Model Uncertainty. We also perform an experiment to test whether the safety of our system is ensured when we utilise the uncertainty arising from both the Data and the Modelling tasks.

4.1 Platooning Case Study

The case study we use is a Platooning Scenario consisting of two vehicles, the Follower and the Leader. These vehicles operate in Cooperative Adaptive Cruise Control (CACC) and are tasked with ensuring that a Safe Distance is maintained between the two vehicles. For the Platooning Scenario, the following conditions (Reich 2016) must be ensured and verified at runtime:

– Condition 1: d ≥ ds, where d and ds are the distance between the two vehicles and the minimum safety distance respectively.
– Condition 2: Current Vehicle Speed ≤ Speed Limit, where the former is the current speed of the vehicle and the latter is the speed limit on the road.
– Condition 3: Any ambiguity arising while checking the validity of the input data is modelled to ensure the safety of the system, and the system utilises only correct input data for decision making.

A SM is used to model the failure behaviour (Machin et al. 2016) of the Platooning System. Based upon the three conditions above, the States and the corresponding Actions that ensure the safety of the system are summarised in Table 1 and in the SM diagram in Figure 1.

Figure 1: State Machine for the Platooning case study (Kabir et al. 2019)

State | Description | Action
S0 | The safety condition of safe distance is fulfilled and the Follower is driving within the speed limit of the road. | The state is safe, therefore, continue driving.
S1 | The safety condition of safe distance is fulfilled but the Follower is driving above the speed limit of the road. | Decelerate to fall within the speed limit.
S2 | The safety condition of safe distance is not fulfilled and the Follower is driving within the speed limit of the road. | Decelerate to increase the distance with the Leader until the safety condition is fulfilled.
S3 | The safety condition of safe distance is not fulfilled and the Follower is driving above the speed limit of the road. | Decelerate to achieve a safe distance with the Leader and fall within the speed limit.
S4 | The safety condition of safe distance is not fulfilled, the Follower is driving above the speed limit of the road, and is driving too close to the Leader. | Brake to stop driving.
S5 | The safety condition of safe distance and/or speed limit cannot be verified. | Switch to ACC mode.

Table 1: Various States and corresponding Actions in the Platooning Scenario (Kabir et al. 2019)

Figure 2 represents the BN for the runtime decision making of the Platooning System.

Figure 2: Bayesian Network from the (Kabir et al. 2019) framework with PCNN input; Test scenario B3

The inference is based on several parameters and inputs, i.e., the distance between the Follower and the Leader, the safe distance, the threshold (proximity in terms of distance), the allowed error in distance, the current speed, the speed limit, the validity of the speed values, and the detection quality of the Leader and the Follower. The system state is estimated based on the values of these parameters and inputs.

The safety of the Platooning Scenario, as defined in the SM in Figure 1, is based on three conditions, i.e., Safe Speed, Safe Distance and Ambiguity. These conditions are also represented in the BN. The "Speed Check" node is responsible for producing probabilistic guarantees of maintaining a safe speed. This is achieved through two child nodes, namely "Valid Speed Limit", representing a certificate about the validity of the speed limit, and "Speed Within Limit", monitoring the legality of the vehicle's current speed by comparing it with the current speed limit. Similarly, "IsSafe" is responsible for producing probabilistic guarantees of maintaining a Safe Distance between the Leader and the Follower. The condition for Ambiguity is monitored by the "Detection Quality" node, which provides a guarantee about the detection by both the Leader and the Follower vehicles.

In (Kabir et al. 2019) the validity of the estimate of the speed limit is determined by obtaining a "certificate" of the speed limit from the external infrastructure. Instead, in our method, we assess the validity of the detected speed limit by using the uncertainty estimates from the PCNN.

For the "Speed Limit" and the "Valid Speed Limit" nodes of the BN, the evidence comes from the PCNN. For all other nodes we have generated data artificially. We generated various test case scenarios and checked the working of the BN when using the Data and Model Uncertainty from the PCNN. As a simple rule, the state with the highest probability is selected. However, in cases where the probabilities of two or more different states are equal, the system designer can define a set of rules to avoid a deadlock. In cases where two states have the highest and approximately equal probability, safety goals can be ensured by using predefined rules to choose a particular state. For instance, the more safety-critical state can be chosen in the case of a tie.
4.2 Implementation of PCNN

In this section we discuss the implementation of the PCNN for the Platooning Case Study. For the platooning working example, we provide evidence of the speed limit detected from the speed sign board, which is used in the "Speed Limit" node of the BN, while the probabilistic confidence of this prediction is used in the "Valid Speed Limit" node of the BN. The PCNN was trained on a traffic sign dataset, with traffic signs detected from images and the uncertainty in the results quantified using the PCNN techniques discussed earlier. The German Traffic Sign Recognition Benchmark Dataset (Stallkamp et al. 2012) (Houben et al. 2013) is a well-established benchmark in the area of automatic traffic sign recognition. This dataset consists of about 50,000 traffic sign images, reflecting variations in the visual appearance of signs caused by weather conditions, occlusion, rotation, illumination, distance, etc. It consists of 43 classes with unbalanced class frequencies. By default, it is divided into a Training Dataset of 39,209 images and a Testing Dataset of 12,630 images.

For ease of implementation of the PCNN, we used the AstroNN API, which is built on top of Keras and TensorFlow. For estimating Model Uncertainty, the runtime dropout is implemented by the "MCDropout" layer of the AstroNN API (Leung and Bovy 2019). The dropout rate used was 20 percent. The Data Uncertainty is estimated in the last layer of the architecture, as shown in Table 2, and is represented as a "varianceoutput" layer in the AstroNN API. The Speed Sign Detection and the Total Uncertainty in the predictions are the outputs from the PCNN, and these become the evidence for the BN nodes "Speed Limit" and "Validity of speed limit".

The simple model, trained for 20 epochs, produced a training accuracy above 95 percent over multiple runs, and the test accuracy was 90 percent and above. Figure 3a) shows that high accuracy corresponds to low values of Total Uncertainty.
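The layer parameter counts reported in Table 2 can be cross-checked with the standard counting formulas, assuming 3x3 convolution kernels (a kernel size the table itself does not state):

```python
def conv2d_params(k, c_in, c_out):
    # (k*k*c_in weights + 1 bias) per output channel
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

assert conv2d_params(3, 3, 8) == 224       # first Conv2D in Table 2
assert conv2d_params(3, 8, 16) == 1168     # second Conv2D
assert dense_params(1600, 256) == 409856   # Flatten -> Dense(256)
assert dense_params(256, 128) == 32896     # Dense(128)
assert dense_params(128, 43) == 5547       # each 43-way output head
```

The three identical 5547 counts at the bottom of Table 2 correspond to the 43-class output heads, including the extra variance output used for Data Uncertainty.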
The uncertainty measures produced by the PCNN are numeric values, not the probability distribution required for probabilistic inference in a BN. To address this, we convert the uncertainty measure, i.e., the sum of the Model and the Data Uncertainty, into a probability of correct classification by using a logistic regression, implemented with the popular pymc3 library. The results in Figure 3b) show how low uncertainty correlates highly with the probability of correct prediction.

Figure 3: a) Total Uncertainty vs Accuracy for the test data; b) output of the logistic regression as Total Uncertainty vs Probability of Correct Classification

Layer                   Output Shape     Parameters
Input Layer             (None,40,40,3)   0
Conv2D                  (None,40,40,8)   224
Activation              (None,40,40,8)   0
MCDropout               (None,40,40,8)   0
Conv2D                  (None,40,40,16)  1168
Activation              (None,40,40,16)  0
MCDropout               (None,40,40,16)  0
MaxPooling2D            (None,10,10,16)  0
Flatten                 (None,1600)      0
Dense                   (None,256)       409856
MCDropout               (None,256)       0
Dense                   (None,128)       32896
Activation              (None,128)       0
Dense                   (None,43)        5547
Dense                   (None,43)        5547
varianceoutput (Dense)  (None,43)        5547

Table 2: Probabilistic Neural Network Architecture used in the Experiment

The remaining setup and assumptions for the experiment are the same as those used by (Kabir et al. 2019). In the next section, we discuss the results of the experiments we performed to show how our approach can incorporate Data and Model Uncertainty to ensure the overall safety of the Platooning System.

5 Results

To test the working of the proposed method, we generated two Test Scenarios: Scenario A, where the evidence provided to "Validity of speed limit" is taken to be 100% for each test case, and Scenario B, where the probabilistic uncertainty output from the PCNN is used. The results of the tests performed for each scenario are summarised in Table 3 and Table 4.

Parameter                       A1    A2    A3    A4
Distance by Follower (m)        5.0   3.0   3.0   2.0
Distance by Leader (m)          5.5   3.5   3.5   2.5
Safe distance (m)               4.0   4.0   4.0   4.0
Too close distance (m)          2.0   2.0   2.0   2.0
Allowed error in distances (m)  2.0   2.0   2.0   2.0
Speed (miles/h)                 55    45    55    55
Speed limit (miles/h)           50    50    50    50
Validity of speed limit         100%  100%  100%  100%
Leader detected by follower     100%  100%  100%  100%
Follower detected by leader     100%  100%  100%  100%
State Estimated                 S1    S2    S3    S4
                                100%  100%  100%  100%

Table 3: Test Cases in Scenario A and corresponding results

First, we discuss the results obtained for the test cases in Scenario A, where the evidence provided to "Validity of speed limit" is taken to be 100% for each of the following test cases:

– Test Case A1: the "Speed" of the Follower is more than the "Speed Limit" and all other safety conditions are met; therefore, State S1 (Decelerate to fall within the speed limit) is selected with 100% probability.

– Test Case A2: the "Distance detected by Leader" and the "Distance detected by Follower" are less than the "Safe distance", and all other safety conditions are met; therefore, State S2 (Decelerate to increase the distance with the Leader until the safety condition is fulfilled) is selected with 100% probability.

– Test Case A3: the "Distance detected by Leader" and the "Distance detected by Follower" are less than the "Safe distance", and the "Speed" of the Follower is more than the "Speed Limit"; therefore, State S3 (Decelerate to achieve a safe distance with the Leader and fall within the speed limit) is selected with 100% probability.

– Test Case A4: the Follower is driving above the "Speed Limit" and is also "Too close" to the Leader; therefore, State S4 (Brake to stop driving) is selected with 100% probability.
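The logistic mapping from Total Uncertainty to a probability of correct classification (Figure 3b) can be sketched as below. The coefficients a and b are made-up stand-ins for the pymc3 fit, chosen only so that the curve decreases with uncertainty:

```python
import math

# Hypothetical coefficients standing in for the fitted logistic
# regression; the paper fits these with pymc3.
A, B = -4.0, 2.0

def p_correct(total_uncertainty):
    """Map Total Uncertainty to P(correct classification)."""
    return 1.0 / (1.0 + math.exp(-(A * total_uncertainty + B)))

low_u, high_u = p_correct(0.1), p_correct(1.5)
assert low_u > high_u          # low uncertainty -> higher P(correct)
```

The resulting probability is what enters the "Validity of speed limit" node as runtime evidence.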
In Scenario B, all the safety conditions described in the SM are met, but instead of always taking the probability of the "Valid Speed Limit" detected to be 100%, the probabilistic uncertainty output from the PCNN is used. As seen below, the different test cases result in changes both to the probability of the output state and to the output state selected.

Parameter                       B1    B2    B3    B4
Distance by Follower (m)        5.0   5.0   5.0   5.0
Distance by Leader (m)          4.0   4.0   4.0   4.0
Safe distance (m)               2.0   2.0   2.0   2.0
Too close distance (m)          2.0   2.0   2.0   2.0
Allowed error in distances (m)  2.0   2.0   2.0   2.0
Speed (miles/h)                 40    40    40    40
Speed limit (miles/h)           50    50    50    50
Validity of speed limit         100%  70%   40%   0%
Leader detected by follower     100%  100%  100%  100%
Follower detected by leader     100%  100%  100%  100%
State Estimated                 S0    S0    S5    S5
                                100%  70%   60%   100%

Table 4: Test Cases in Scenario B and corresponding results

– Test Case B1: all the safety conditions are met and there is 100% confidence in the validity of the speed limit; therefore, State S0 (The state is safe, therefore, continue driving) is selected with 100% probability.

– Test Case B2: as in Test Case B1, the safety conditions are met and the evidence provided to the nodes of the BN is the same, except for the "Valid Speed Limit" node, which receives the normalised input from the PCNN. Here, the "Valid Speed Limit" node receives the probability of a correct "Speed Limit" detection as 70%. We see that the same final State S0 is selected, but with 70% probability. This result shows that, given sufficient probability from the PCNN, State S0 is correctly selected even when the probability is less than 100%. This ensures that the car is still able to move even when some uncertainty is observed.

– Test Case B3: as in Test Cases B1 and B2, most of the safety conditions are met and the evidence provided to the various nodes of the BN is the same, except for the "Valid Speed Limit" node. Here, the "Valid Speed Limit" node receives the probability of a correct "Speed Limit" detection as 40%, and we see that State S5 (Switch to ACC mode) is selected with 60% probability as the final output. Figure 2 shows that the state with the highest probability, i.e., State S5 (Switch to ACC mode), is selected. This represents the safest decision for this test case. Here, the selected output state changes because of the low confidence in the validity of the speed limit: the evidence provided to the "Valid Speed Limit" node is below 50%, which is the acceptable safety threshold used in this BN. This test case shows that putting blind trust in the "Speed Limit" detected from road sign boards, believing it to be always 100% accurate, is likely to lead to an unsafe output state. This was the behaviour of the original implementation of the platooning case study, and would typically be the result of using advanced ML techniques like NN without uncertainty estimation.

– Test Case B4: similar to the test cases above, most of the safety conditions are met; however, there is no confidence at all in the validity of the speed limit detected ("Validity of speed limit" is 0%), and therefore the final State S5 (Switch to ACC mode) is selected with 100% probability.

Test Scenarios A and B show that, when using a BN, the Model and the Data Uncertainty (provided as normalised/probabilistic input to the "Validity of speed limit" node) have a large influence on the probability of the output state selected. The results show that our method of using a PCNN to estimate both the Model and the Data Uncertainty, together with a BN, enables us to make safe decisions. Unlike deterministic models, BN are capable of handling uncertainty in the input, and are therefore a better choice for handling the uncertainty generated by a PCNN when making safe decisions.
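The Scenario B pattern in Table 4 reduces, when all other safety conditions hold, to a simple split of probability mass between S0 and S5 driven by the validity evidence. The toy below mirrors the Table 4 numbers only; it is not the full BN computation:

```python
def platoon_state_probs(p_valid_speed_limit):
    """Toy reduction of Scenario B: with all other safety conditions
    met, the BN output collapses to S0 (continue driving) with the
    validity probability and S5 (switch to ACC) with the remainder."""
    return {"S0": p_valid_speed_limit, "S5": 1.0 - p_valid_speed_limit}

def select_state(probs):
    return max(probs, key=probs.get)

# Test Cases B1-B4 from Table 4:
assert select_state(platoon_state_probs(1.0)) == "S0"   # B1: S0, 100%
assert select_state(platoon_state_probs(0.7)) == "S0"   # B2: S0, 70%
assert select_state(platoon_state_probs(0.4)) == "S5"   # B3: S5, 60%
assert select_state(platoon_state_probs(0.0)) == "S5"   # B4: S5, 100%
```

The 50% safety threshold falls out naturally: as soon as the validity evidence drops below 0.5, S5 becomes the most probable and hence the selected state.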
6 Conclusion

In this paper, we have described how we can utilise the estimated uncertainty arising from data and from complex ML models to improve safety in decision making. The proposed method allows the designers of an AV to improve the decision making process by integrating multiple sources of uncertainty. The efficacy of the proposed approach has been illustrated via an experimental analysis.

7 Acknowledgement

This research has received funding from the Assuring Autonomy International Programme (University of York) and the European Union's Horizon 2020 Programme for Research and Innovation under Grant Agreement No. 812.788.

References

Burton, S.; Gauerhof, L.; and Heinzemann, C. 2017. Making the case for safety of machine learning in highly automated driving. In International Conference on Computer Safety, Reliability, and Security, 5–16. Springer.

Feng, D.; Rosenbaum, L.; and Dietmayer, K. 2018. Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 3266–3273. IEEE.

Furda, A.; and Vlacic, L. 2011. Enabling safe autonomous driving in real-world city traffic using multiple criteria decision making. IEEE Intelligent Transportation Systems Magazine 3(1): 4–17.

Gal, Y.; and Ghahramani, Z. 2015. Bayesian convolutional neural networks with Bernoulli approximate variational inference. arXiv preprint arXiv:1506.02158.

Gauerhof, L.; Munk, P.; and Burton, S. 2018. Structuring validation targets of a machine learning function applied to automated driving. In International Conference on Computer Safety, Reliability, and Security, 45–58. Springer.

Graves, A. 2011. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, 2348–2356.

Hinton, G. E.; and van Camp, D. 1993. Keeping neural networks simple. In International Conference on Artificial Neural Networks, 11–18. Springer.

Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; and Igel, C. 2013. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In The 2013 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE.

Kabir, S.; Sorokos, I.; Aslansefat, K.; Papadopoulos, Y.; Gheraibia, Y.; Reich, J.; Saimler, M.; and Wei, R. 2019. A runtime safety analysis concept for open adaptive systems. In International Symposium on Model-Based Safety and Assessment, 332–346. Springer.

Katrakazas, C.; Quddus, M.; and Chen, W.-H. 2019. A new integrated collision risk assessment methodology for autonomous vehicles. Accident Analysis & Prevention 127: 61–79.

Kendall, A.; and Gal, Y. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems, 5574–5584.

Koller, D.; and Friedman, N. 2009. Probabilistic graphical models: principles and techniques. MIT Press.

Leung, H. W.; and Bovy, J. 2019. Deep learning of multi-element abundances from high-resolution spectroscopic data. Monthly Notices of the Royal Astronomical Society 483(3): 3255–3277.

Macfarlane, J.; and Stroila, M. 2016. Addressing the uncertainties in autonomous driving. SIGSPATIAL Special 8(2): 35–40.

Machin, M.; Guiochet, J.; Waeselynck, H.; Blanquart, J.-P.; Roy, M.; and Masson, L. 2016. Smof: A safety monitoring framework for autonomous systems. IEEE Transactions on Systems, Man, and Cybernetics: Systems 48(5): 702–715.

McAllister, R.; Gal, Y.; Kendall, A.; Van Der Wilk, M.; Shah, A.; Cipolla, R.; and Weller, A. 2017. Concrete problems for autonomous vehicle safety: Advantages of Bayesian deep learning. International Joint Conferences on Artificial Intelligence, Inc.

Osband, I.; Blundell, C.; Pritzel, A.; and Van Roy, B. 2016. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, 4026–4034.

Papadoulis, A.; Quddus, M.; and Imprialou, M. 2019. Evaluating the safety impact of connected and autonomous vehicles on motorways. Accident Analysis & Prevention 124: 12–22.

Reich, J. 2016. Systematic engineering of safe open adaptive systems shown for truck platooning. M.Sc. thesis, Technical University of Kaiserslautern, Kaiserslautern, Germany.

Schwarting, W.; Alonso-Mora, J.; and Rus, D. 2018. Planning and decision-making for autonomous vehicles. Annual Review of Control, Robotics, and Autonomous Systems.

Stallkamp, J.; Schlipsing, M.; Salmen, J.; and Igel, C. 2012. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks 32: 323–332.