<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Monitoring Safety of Autonomous Vehicles with Crash Prediction Network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Saasha Nair</string-name>
          <email>saasha.nair@tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sina Shafaei</string-name>
          <email>sina.shafaei@tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefan Kugele</string-name>
          <email>stefan.kugele@tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohd Hafeez Osman</string-name>
          <email>hafeez.osman@tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alois Knoll</string-name>
          <email>knoll@in.tum.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Technical University of Munich</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Automated systems must have safety built in, so that in unforeseen circumstances they behave at least as well as a diligent human, if not better. This requires the machine to behave with foresight, predicting future occurrences and acting accordingly; machine learning techniques must therefore address safety explicitly. Continued human development and the resulting environmental changes will only push safety requirements higher, demanding that artificial intelligence fill the gaps so generated. The purpose of this paper is to study the safety challenges and concerns of such systems from an artificial intelligence perspective through an extensive literature review, and to propose a forward-looking, easily adaptable system using deep learning techniques. The paper focuses primarily on the safety aspects of autonomous vehicles using a Bayesian deep learning method.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Current trends in the automotive industry are introducing
new, increasingly complex software functions into vehicles
        <xref ref-type="bibr" rid="ref2">(Broy 2006)</xref>
        . The ever-growing availability of computing
resources, memory, and newest technologies allows for new
levels of automated and intelligent systems. Driving at a
high level of driving automation (i. e., level 3 to 5)
according to SAE J3016
        <xref ref-type="bibr" rid="ref5">(Committee and others 2014)</xref>
        is just one
example that has been discussed recently and is no longer
just a future vision. Vehicles driving at levels 3 to 5 will be,
hereafter, referred to as Autonomous Vehicles (AV) and the
corresponding task as Autonomous Driving (AD).
      </p>
      <p>
Success stories in deep learning have made AVs more
or less a reality; commercializing such vehicles, however,
has not yet materialized. Recent accidents, especially those
involving cars driving at as low as SAE Level 2, show
that there are challenges engineers still face,
and that the major impediment standing in the way of
large-scale adoption of AVs is the associated safety concerns
        <xref ref-type="bibr" rid="ref14 ref16 ref19 ref8">(Kalra and Paddock 2016; Fagnant and Kockelman 2015;
McAllister et al. 2017)</xref>
        .
      </p>
      <p>
Although no concrete solution to the safety concern exists,
several researchers have outlined the
safety challenges and proposed recommendations to
consider. Salay et al.
        <xref ref-type="bibr" rid="ref13 ref15 ref22">(Salay, Queiroz, and Czarnecki 2017)</xref>
analyzed the impact that the use of ML-based software has on
various parts of ISO 26262, especially with respect to hazard
analysis and risk assessment (HARA). Within the scope of
highly automated driving (i. e., level 4), Burton et al.
        <xref ref-type="bibr" rid="ref13 ref15 ref3">(Burton, Gauerhof, and Heinzemann 2017)</xref>
        explored the
assurance case approaches that can be applied to the problem of
arguing the safety of machine learning. From the ISO 26262
V-model perspective, Koopman and Wagner
        <xref ref-type="bibr" rid="ref14 ref16">(Koopman and
Wagner 2016)</xref>
        identified several testing challenges for
autonomous vehicles. Monkhouse et al.
        <xref ref-type="bibr" rid="ref20">(Monkhouse et al.
2017)</xref>
reported, from the functional safety engineers' perspective,
several concerns in ensuring the safety
of highly automated driving. This paper explores the challenges in
developing and monitoring AI-based components for an
end-to-end deep learning AV. The presented approach can minimize
the apparent risk when dealing with machine-learning-based
components of Autonomous Driving; a more fine-grained
safety assessment, covering safety requirements and risk
assessment, remains future work. This research
endeavors to answer the following research questions (RQ):
RQ1 What are the challenges involved in ensuring safety of
highly critical systems when augmented with machine
learning based components?
RQ2 What are the existing approaches used to ensure safety
of learning systems?
RQ3 What are the shortcomings of the existing approaches
and how can they be overcome?
The main challenges (RQ1) associated with applying
traditional safety assurance methodologies to NNs, as
explained in
        <xref ref-type="bibr" rid="ref4">(Cheng et al. 2018)</xref>
        are as follows:
      </p>
<p>(i) Implicit Specification – Traditional Verification and
Validation (V&amp;V) methods (as suggested in the ISO 26262
V-model) place great importance on ensuring that the
functional requirements specified at design time of the
system are met. NN-based systems, however, rely solely on
the training data to infer the specification of the model
and do not depend on any explicit list of requirements, which
is problematic when applying traditional V&amp;V
methods. (ii) Black-Box Structure – When writing the code for
a NN, one specifies the details of the layers and the
activation functions, but, unlike traditional software, the control
flow is not explicitly coded, which is why NNs are referred to
as black-box structures. Traditional white-box testing
techniques such as code coverage and decision coverage cannot
be directly applied to NNs; new verification paradigms for
adaptive software systems are therefore needed.</p>
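Since statement and decision coverage do not transfer to NNs, structural surrogates such as neuron coverage (the fraction of neurons activated above a threshold by a test set) have been proposed in the testing literature; a minimal sketch, in which the threshold and the activation arrays are illustrative assumptions, not part of this paper's approach:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons whose activation exceeded `threshold`
    at least once over a batch of inputs.

    `activations`: list of (n_samples, n_neurons) arrays, one per layer.
    """
    covered = 0
    total = 0
    for layer_act in activations:
        fired = (layer_act > threshold).any(axis=0)  # did each neuron fire at least once?
        covered += int(fired.sum())
        total += fired.size
    return covered / total

# Hypothetical activations for a 2-layer net evaluated on 4 inputs.
rng = np.random.default_rng(0)
acts = [rng.standard_normal((4, 8)), rng.standard_normal((4, 3))]
print(neuron_coverage(acts))
```

A low coverage value indicates that the test inputs exercise only part of the network, analogous to untested branches in traditional code.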
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
<p>We distinguish between the existing approaches by
categorizing them into two groups: (i) ‘Training phase’,
i. e., approaches used solely during the development
and training of the neural network, and (ii)
‘Operational phase’, i. e., those used in the run-time
environment of the neural network to ensure continued proper
functioning (RQ2).</p>
      <sec id="sec-2-1">
        <title>Training Phase</title>
        <p>
          The existing approaches that fall under this category are:
(i) Train/Validation/Test split – This method ensures
that the developed adaptive system works satisfactorily
for a given set of inputs. It involves splitting the
available data into three sets: the largest is used solely
for training; of the remaining two, one is used for
fine-tuning the hyperparameters of the NN, and the other is
used to test the trained network on previously unseen
data points. Though this method helps verify the working of
the NN, it is not extensive enough to be considered a
guarantee for safety
          <xref ref-type="bibr" rid="ref25">(Taylor, Darrah, and Moats 2003)</xref>
in
high-criticality systems.
(ii) Automated test data generation – The lack of trust in the
train-validation-test split method stems from the fact that one
is left with very few data samples to test against, so that
cases of high interest may be missed in the testing phase.
A way to overcome this problem
is to use test data generation tools to produce synthetic data
points, which can be used for testing the trained neural
networks. Tools such as Automated Test Trajectory Generation
(ATTG)
          <xref ref-type="bibr" rid="ref26 ref7">(Taylor 2006)</xref>
          and the more recent approach of
generating scenes that an AV might encounter using ontologies
          <xref ref-type="bibr" rid="ref1 ref13 ref15">(Bagschik, Menzel, and Maurer 2017)</xref>
          fall under this
category. This approach can help the V&amp;V procedure for NNs
by unveiling missing knowledge in fixed NNs and increasing
confidence in the working of adaptive NNs
          <xref ref-type="bibr" rid="ref25">(Taylor, Darrah,
and Moats 2003)</xref>
          .
(iii) Formal Methods – Formal verification
          <xref ref-type="bibr" rid="ref21">(Ray 2010)</xref>
          refers to the use of mathematical specifications to model
and analyse a system. Though these methods work well
with traditional software, they have not shown much
success in the area of adaptive software systems. This is due
to challenges
          <xref ref-type="bibr" rid="ref14 ref16 ref23">(Seshia, Sadigh, and Sastry 2016)</xref>
          in modeling
the non-deterministic nature of the environment, difficulty
in establishing a formal specification to encode the desired
and undesired behavior of the system, and the need to
account for adaptive behavior of the system. Formal
verification techniques for NNs deal instead with proving
convergence and stability
          <xref ref-type="bibr" rid="ref12 ref7">(Fuller, Yerramalla, and Cukic 2006)</xref>
          of
the system, using methods such as Lyapunov analysis
(Yerramalla et al. 2003).
(iv) Rule extraction – Rules
          <xref ref-type="bibr" rid="ref26 ref7">(Darrah and Taylor 2006)</xref>
          are
viewed as a descriptive representation of the inner workings
of a neural network. Rule extraction algorithms, such as KT
          <xref ref-type="bibr" rid="ref10">(Fu 1994)</xref>
          , Validity Interval Analysis (VIA)
          <xref ref-type="bibr" rid="ref27">(Thrun 1995)</xref>
          ,
DeepRED
          <xref ref-type="bibr" rid="ref14 ref16 ref23 ref29">(Zilke, Mencía, and Janssen 2016)</xref>
          , can be used to
model the knowledge that a neural network has acquired
during the training phase. These rules can be expressed as
easy-to-understand ‘if-then’ statements that can either be
manually verified owing to the human-readable format or can be
automated with a model checker. This method can be helpful
to establish trust in the system, as it augments the
explainability of the system
          <xref ref-type="bibr" rid="ref13 ref15">(Gasser and Almeida 2017)</xref>
          . It also aids
requirements traceability, as one can verify if the rules
depict functional requirements specified for the system. They
can also help to examine the various functional modes of
the system and ensure that a safe operation mode is induced
by certain inputs, while respecting the expected safety
limits. Though this method brings enormous advantages, it
is more applicable to offline learning systems, wherein the
V&amp;V practitioner can extract rules from the network after
training is complete.
        </p>
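The data split described in (i) can be sketched as follows; the split proportions are illustrative choices, not values prescribed by any of the cited works:

```python
import numpy as np

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle `data` and split it into train/validation/test sets.
    The largest share goes to training; `val_frac` and `test_frac`
    are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_val = int(len(data) * val_frac)
    n_test = int(len(data) * test_frac)
    val = [data[i] for i in idx[:n_val]]           # hyperparameter tuning
    test = [data[i] for i in idx[n_val:n_val + n_test]]  # unseen data points
    train = [data[i] for i in idx[n_val + n_test:]]      # sole training set
    return train, val, test

train, val, test = train_val_test_split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

As the surrounding discussion notes, the small held-out test set is exactly why this method alone cannot serve as a safety guarantee.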
      </sec>
      <sec id="sec-2-2">
        <title>Operational Phase</title>
<p>The solutions that fall under this category can be more
accurately referred to as ‘Online monitoring techniques’, which
involve the use of one or more monitors working as an oracle
to ensure continued proper functioning of the neural network
over time (Cukic et al. 2006). The goal here is to ensure that
the adaptation dynamics do not cause the network to
diverge, thereby triggering unpredictable behavior.</p>
        <p>
          Data Sniffing
          <xref ref-type="bibr" rid="ref17">(Liu, Menzies, and Cukic 2002)</xref>
is an
example of this technique: it studies the
data entering and exiting a neural network. If a certain
input could produce negative results, the monitor raises
an alert and may even flag the data, thereby
preventing it from entering the system. This method is extremely
useful in cases where outliers could degrade the functioning
of the system.
        </p>
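A minimal monitor in this spirit might flag inputs that deviate strongly from the training distribution; the per-feature z-score test and its threshold below are illustrative assumptions, not the specific method of Liu, Menzies, and Cukic:

```python
import numpy as np

class DataSniffer:
    """Runtime monitor that flags inputs far from the training
    distribution, using a simple per-feature z-score test
    (the threshold is an illustrative choice).
    """
    def __init__(self, train_data, z_threshold=3.0):
        train_data = np.asarray(train_data, dtype=float)
        self.mean = train_data.mean(axis=0)
        self.std = train_data.std(axis=0) + 1e-9  # avoid division by zero
        self.z_threshold = z_threshold

    def check(self, x):
        """Return True if the input looks safe, False if it should be flagged."""
        z = np.abs((np.asarray(x, dtype=float) - self.mean) / self.std)
        return bool((z < self.z_threshold).all())

sniffer = DataSniffer(np.random.default_rng(1).normal(0.0, 1.0, size=(1000, 4)))
print(sniffer.check([0.1, -0.2, 0.3, 0.0]))   # typical input
print(sniffer.check([25.0, 0.0, 0.0, 0.0]))   # outlier in the first feature
```

An alert from `check` would let the system reject the data point before it reaches the network, as described above.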
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Proposed Approach</title>
      <p>
The majority of contemporary approaches, as evident from
the “Related Work” section, relate to testing a developed model
before it is deployed in the operational environment.
ML-based components, however, suffer from problems such as:
operational data or platforms differing from what the model
was trained on, uncertainty about new inferences drawn
from operational data, and even wear and tear of
hardware and software. This leaves ML-based components
vulnerable to errors. It is thus necessary to focus on
monitoring-based approaches, which have recently begun to gain
interest
        <xref ref-type="bibr" rid="ref13 ref15 ref9">(Fridman, Jenik, and Reimer 2017)</xref>
        , to help alleviate the
safety concerns associated with such systems.
      </p>
<p>To elaborate on the specifics of the proposed solution, an
end-to-end deep learning model for lane change maneuvers
has been chosen. Such a model uses a deep neural net that
takes input data from sensors representing the environment
around the ego vehicle and generates one of three actions,
allowing the ego vehicle to continue driving in the current
lane or to switch to the left or right lane depending on the
presence of obstacles.</p>
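The interface of such a component can be sketched as follows; the single linear layer, the feature vector, and the weights are placeholders for the actual end-to-end deep net:

```python
import numpy as np

# The three discrete outputs of the lane change component.
ACTIONS = ("keep_lane", "change_left", "change_right")

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def lane_change_policy(sensor_features, weights):
    """Map a sensor-derived feature vector to one of three lane actions.
    A single linear layer stands in for the deep neural net, for
    illustration only.
    """
    probs = softmax(weights @ np.asarray(sensor_features, dtype=float))
    return ACTIONS[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
action, probs = lane_change_policy(rng.normal(size=8), rng.normal(size=(3, 8)))
print(action, probs.round(3))
```

The point of the sketch is the input/output contract: environment features in, one of three discrete maneuvers out.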
      <p>This proposed solution, referred to as ‘Crash Prediction
Network’, involves a neural network model, tasked with
determining the likelihood and severity of a crash at any given
time step (RQ3). The model takes into consideration
multiple features such as output of the perception module of the
vehicle, planned trajectory/action of the ego vehicle,
predicted (or intended, if available via V2V communication)
trajectory of the obstacles, and possibly also information
such as number and severity of previous crashes that the ego
vehicle and obstacles were involved in. Specifics of the
system can be understood by distinguishing between the
training and operational (after deployment) phases of the model.</p>
      <p>The training phase (as shown in Fig. 1) relies on the model
receiving the required input values for the previously
described feature set, and also knowing whether a crash
occurred or not. The model thus requires an architecture
involving a Reinforcement Learning (RL) environment, which
lets the model observe the outcome at every time step for
a given set of feature values and allows the
vehicle to crash often, as is characteristic of RL agents,
especially at the start of training. We therefore propose to train
the model by allowing it to spar with an RL-agent such that
the ego vehicle closely imitates a real-world vehicle that can
perform tasks similar to the lane change maneuver use-case
described above. At each time step, both the RL agent and the
Crash Prediction Network have access to information
about the environment of the vehicle: the Crash Prediction
Network predicts whether a crash will occur, while,
simultaneously, the RL agent interacts with the
environment to determine whether a crash really occurred.
Based on the differences between the outputs of the two networks,
the Crash Prediction Network is updated until it can
eventually predict crashes with a high level of accuracy.</p>
<p>The operational stage (as shown in Fig. 2) of this model
is designed such that the inputs are, as usual, fed to the
ML-based component responsible for determining the lane change
maneuver to be carried out by the ego vehicle. The vehicle,
however, does not act directly on the generated lane change
action command. Instead, the action command, along with the
environmental inputs in the form of sensor data, is directed
to the Crash Prediction Network, which performs its task of
predicting the likelihood of a crash. Only if the likelihood
is low, is the vehicle allowed to perform the desired actions,
else the vehicle is pushed into Fail-safe mode which varies
depending on the predicted severity of the crash. It is
important to note that for the model to stay relevant to the
environment, it needs to learn and improve even in the
operational stage. Thus, similar to the training stage, the
difference between the actual and predicted outputs is used
to update the model.</p>
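The gating logic of the operational stage can be sketched as follows; the probability threshold and the fail-safe mode names are illustrative assumptions:

```python
def supervise_action(action, crash_probability, severity, p_threshold=0.05):
    """Operational-phase gate: the lane change command is executed only
    if the Crash Prediction Network considers a crash unlikely; otherwise
    the vehicle enters a fail-safe mode scaled to the predicted severity.
    Threshold and mode names are placeholders, not values from the paper.
    """
    if crash_probability < p_threshold:
        return action                  # low likelihood: act as planned
    if severity == "high":
        return "emergency_stop"        # severe predicted crash
    return "hand_over_to_driver"       # milder predicted crash

print(supervise_action("change_left", 0.01, "low"))
print(supervise_action("change_left", 0.40, "high"))
```

The supervisor thus sits between the lane change component and the actuators, exactly as Fig. 2 describes.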
      <p>
The Crash Prediction Network is based on Bayesian Deep
Learning (BDL), because other deep learning methods
currently in use are known to make hard
classifications based on what they perceive.
The disadvantage of hard classifications becomes apparent in a
system such as an AV, where multiple components come
together to form a complex whole: an error in one
component can have a snowball effect up the pipeline, leading to
catastrophic outputs in later components. A way to
overcome this problem is to use BDL
        <xref ref-type="bibr" rid="ref19">(McAllister et al. 2017)</xref>
        .
Bayesian models would provide better results
        <xref ref-type="bibr" rid="ref13 ref15 ref19">(Kendall and
Gal 2017)</xref>
, owing to the fact that such models output a
probability distribution with a measure of
uncertainty, which can be exploited for the crash-likelihood
output the model is expected to generate.
Additionally, the model would propagate
not only the classification output but also the associated
uncertainty, so that higher-level components can be
designed to make the system behave conservatively when the
uncertainty of earlier components in the pipeline is high.
      </p>
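One common way to obtain such a predictive distribution is Monte Carlo dropout, i. e., keeping dropout active at prediction time and sampling repeatedly; the toy single-layer model below is an illustrative sketch, not the proposed network:

```python
import numpy as np

def mc_dropout_predict(x, weights, n_samples=100, drop_rate=0.5, seed=0):
    """Approximate Bayesian inference by sampling dropout masks at
    prediction time (toy single-layer model).  Returns the mean crash
    probability and its standard deviation, so downstream components
    can act conservatively when the uncertainty is high.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > drop_rate      # random dropout mask
        logit = (weights * mask / (1.0 - drop_rate)) @ x  # rescaled stochastic pass
        samples.append(1.0 / (1.0 + np.exp(-logit)))      # crash probability sample
    samples = np.array(samples)
    return samples.mean(), samples.std()

mean_p, uncertainty = mc_dropout_predict(np.ones(6), np.linspace(-1, 1, 6))
print(mean_p, uncertainty)
```

Both the mean and the spread would be propagated up the pipeline, letting higher-level components react to the model's confidence rather than to a hard classification.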
<p>The proposed system has definite advantages. Most
importantly, it does not focus only on futuristic
autonomous vehicles but can also be used in present-day
Advanced Driver Assistance Systems (ADAS),
thereby allowing a smoother transition to Autonomous
Vehicles in the future. Secondly, the model can be seen as making
an intuitively ‘informed decision’ by taking into
consideration data from multiple sources. Additionally, such a system
would generalize and scale well to the different scenarios
that the vehicle might encounter. One of the major problems
to be expected during the development of the
model, however, is handling input data
received from different sources in varied formats. Next,
redundancy needs to be built in to compensate for sensor
failures and malfunctions, such that the failure of a sensor does
not affect the accuracy of the system. Another major aspect
of this methodology that needs experimentation and
validation is that of having one ML-based component
supervise another.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
<p>This work covered the different aspects of safety for
intelligent components that employ machine learning
techniques, in order to enable the integration of artificial
intelligence in autonomous driving. The focus was on the main
concerns and challenges in ensuring the safety of highly critical
applications based on machine learning methods,
with special emphasis on neural networks. Traditional safety
approaches are not well suited to such systems;
there is therefore a need for more concrete methods, such as
monitoring techniques like the proposed Crash
Prediction Network, which aims to guarantee an acceptable level of
safety for the system functions. The team is in the process
of implementing and evaluating the proposed approach.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Bagschik</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Menzel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Maurer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Ontology based scene creation for the development of automated vehicles</article-title>
          .
          <source>arXiv preprint arXiv:1704</source>
          .
          <fpage>01006</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Broy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Challenges in automotive software engineering</article-title>
          .
          <source>In Proceedings of the 28th international conference on Software engineering</source>
          ,
          <fpage>33</fpage>
          -
          <lpage>42</lpage>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Burton</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gauerhof</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Heinzemann</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Making the case for safety of machine learning in highly automated driving</article-title>
          . In International Conference on Computer Safety, Reliability, and Security,
          <fpage>5</fpage>
          -
          <lpage>16</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Cheng</surname>
            ,
            <given-names>C.-H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Diehl</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hinz</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hamza</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Nührenberg</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Rickert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ruess</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Truong-Le</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Neural networks for safety-critical applications: challenges, experiments and perspectives</article-title>
          .
          <source>In Design, Automation &amp; Test in Europe Conference &amp; Exhibition (DATE)</source>
          ,
          <year>2018</year>
          ,
          <fpage>1005</fpage>
          -
          <lpage>1006</lpage>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Committee</surname>
            ,
            <given-names>S. O.-R. A. V. S.</given-names>
          </string-name>
          , et al.
          <year>2014</year>
          .
          <article-title>Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems</article-title>
          .
          <source>SAE Standard J</source>
          <volume>3016</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          2006.
          <article-title>Run-Time Assessment of Neural Network Control Systems</article-title>
          . Boston, MA: Springer US.
          <volume>257</volume>
          -
          <fpage>269</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Darrah</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Taylor</surname>
            ,
            <given-names>B. J.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Rule Extraction as a Formal Method</article-title>
          . Boston, MA: Springer US.
          <volume>199</volume>
          -
          <fpage>227</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Fagnant</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kockelman</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations</article-title>
          .
          <source>Transportation Research Part A: Policy and Practice</source>
          <volume>77</volume>
          :
          <fpage>167</fpage>
          -
          <lpage>181</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Fridman</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Jenik</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Reimer</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Arguing machines: Perceptioncontrol system redundancy and edge case discovery in real-world autonomous driving</article-title>
          .
          <source>arXiv preprint arXiv:1710</source>
          .
          <fpage>04459</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Fu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>1994</year>
          .
          <article-title>Rule generation from neural networks</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          <volume>24</volume>
          (
          <issue>8</issue>
          ):
          <fpage>1114</fpage>
          -
          <lpage>1124</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Fuller</surname>
            ,
            <given-names>E. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yerramalla</surname>
            ,
            <given-names>S. K.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Cukic</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Stability properties of neural networks</article-title>
          .
          <source>In Methods and Procedures for the Verification and Validation of Artificial Neural Networks</source>
          . Springer.
          <fpage>97</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Gasser</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Almeida</surname>
            ,
            <given-names>V. A.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>A layered model for ai governance</article-title>
          .
          <source>IEEE Internet Computing</source>
          <volume>21</volume>
          (
          <issue>6</issue>
          ):
          <fpage>58</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Kalra</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Paddock</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability</article-title>
          ?
          <source>Transportation Research Part A: Policy and Practice</source>
          <volume>94</volume>
          :
          <fpage>182</fpage>
          -
          <lpage>193</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Kendall</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gal</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>What uncertainties do we need in bayesian deep learning for computer vision</article-title>
          ? In
          <source>Advances in neural information processing systems</source>
          ,
          <volume>5574</volume>
          -
          <fpage>5584</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Koopman</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Wagner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Challenges in autonomous vehicle testing and validation</article-title>
          .
          <source>SAE International Journal of Transportation Safety</source>
          <volume>4</volume>
          (
          <issue>1</issue>
          ):
          <fpage>15</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Menzies</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Cukic</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2002</year>
          .
          <article-title>Data sniffing - monitoring of machine learning for online adaptive systems</article-title>
          . In
          <source>Tools with Artificial Intelligence (ICTAI 2002), Proceedings of the 14th IEEE International Conference on</source>
          ,
          <fpage>16</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>McAllister</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gal</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kendall</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Van Der Wilk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cipolla</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Weller</surname>
            ,
            <given-names>A. V.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Concrete problems for autonomous vehicle safety: Advantages of bayesian deep learning</article-title>
          .
          <source>International Joint Conferences on Artificial Intelligence</source>
          , Inc.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Monkhouse</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Habli</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>McDermid</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Khastgir</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Dhadyalla</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Why functional safety experts worry about automotive systems having increasing autonomy</article-title>
          . In
          <source>International Workshop on Driver and Driverless Cars: Competition or Coexistence</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Ray</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2010</year>
          .
          <source>Scalable Techniques for Formal Verification</source>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Salay</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Queiroz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Czarnecki</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>An analysis of ISO 26262: Using machine learning safely in automotive software</article-title>
          .
          <source>CoRR abs/1709.02435</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Seshia</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sadigh</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Sastry</surname>
            ,
            <given-names>S. S.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Towards verified artificial intelligence</article-title>
          .
          <source>arXiv preprint arXiv:1606.08514</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Taylor</surname>
            ,
            <given-names>B. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Darrah</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Moats</surname>
            ,
            <given-names>C. D.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Verification and validation of neural networks: a sampling of research in progress</article-title>
          . In
          <source>Intelligent Computing: Theory and Applications</source>
          , volume
          <volume>5103</volume>
          ,
          <fpage>8</fpage>
          -
          <lpage>17</lpage>
          . International Society for Optics and Photonics.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Taylor</surname>
            ,
            <given-names>B. J.</given-names>
          </string-name>
          <year>2006</year>
          .
          <source>Automated Test Generation for Testing Neural Network Systems</source>
          . Boston, MA: Springer US.
          <fpage>229</fpage>
          -
          <lpage>256</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Thrun</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>1995</year>
          .
          <article-title>Extracting rules from artificial neural networks with distributed representations</article-title>
          . In Tesauro, G.; Touretzky, D.; and Leen, T., eds.,
          <source>Advances in Neural Information Processing Systems (NIPS) 7</source>
          . Cambridge, MA: MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <year>2003</year>
          .
          <article-title>Lyapunov analysis of neural network stability in an adaptive flight control system</article-title>
          . In
          <source>Symposium on Self-Stabilizing Systems</source>
          ,
          <fpage>77</fpage>
          -
          <lpage>92</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <surname>Zilke</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mencía</surname>
            ,
            <given-names>E. L.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Janssen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>DeepRED - rule extraction from deep neural networks</article-title>
          . In
          <source>International Conference on Discovery Science</source>
          ,
          <fpage>457</fpage>
          -
          <lpage>473</lpage>
          . Springer.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>