<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Out-of-Distribution Detection Using Deep Neural Network Latent Space Uncertainty</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fabio</forename><surname>Arnez</surname></persName>
							<email>fabio.arnez@cea.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Université Paris-Saclay</orgName>
								<orgName type="institution" key="instit2">CEA</orgName>
								<address>
									<addrLine>List</addrLine>
									<postCode>F-91120</postCode>
									<settlement>Palaiseau</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ansgar</forename><surname>Radermacher</surname></persName>
							<email>ansgar.radermacher@cea.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Université Paris-Saclay</orgName>
								<orgName type="institution" key="instit2">CEA</orgName>
								<address>
									<addrLine>List</addrLine>
									<postCode>F-91120</postCode>
									<settlement>Palaiseau</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">François</forename><surname>Terrier</surname></persName>
							<email>francois.terrier@cea.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Université Paris-Saclay</orgName>
								<orgName type="institution" key="instit2">CEA</orgName>
								<address>
									<addrLine>List</addrLine>
									<postCode>F-91120</postCode>
									<settlement>Palaiseau</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Out-of-Distribution Detection Using Deep Neural Network Latent Space Uncertainty</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">BF3A9F4CEAFE348F8AB3E9233312C2EF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-04-29T06:38+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>Uncertainty Estimation Latent Space Out-of-Distribution Detection Semantic Segmentation Automated Vehicle</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As automated systems increasingly incorporate deep neural networks (DNNs) to perform safety-critical tasks, confidence representation and uncertainty estimation in DNN predictions have become useful and essential to represent DNN ignorance. Predictive uncertainty has often been used to identify samples that can lead to wrong predictions with high confidence, i.e., Out-of-Distribution (OoD) detection. However, predictive uncertainty estimation at the output of a DNN might fail for OoD detection in computer vision tasks such as semantic segmentation due to the lack of information about semantic structures and contexts. We propose using the DNN uncertainty from intermediate latent representations to overcome this problem. Our experiments show promising results in OoD detection for the semantic segmentation task.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Over the last decade, Deep Neural Networks (DNNs) have enabled great advances in real-world applications such as Autonomous Vehicles (AVs), where they perform complex tasks like object detection and tracking or vehicle control. Despite this progress, DNNs still have significant safety shortcomings due to their complexity, opacity, and lack of interpretability. Moreover, it is well known that DNN models behave unpredictably under dataset shift <ref type="bibr" target="#b0">[1]</ref>. Deep Learning (DL) models carry training and data biases that directly impact model predictions and performance. This makes it difficult to ensure the reliability of DNN models, which is a precondition for safety-critical systems that must comply with industry safety standards and avoid jeopardizing human lives <ref type="bibr" target="#b1">[2]</ref>.</p><p>As highly automated systems (e.g., autonomous vehicles or autonomous mobile robots) increasingly rely on DNNs to perform safety-critical tasks, different methods have been proposed to represent confidence in DNN predictions. One way to represent DNN confidence is to capture the uncertainty associated with a prediction for a given input sample. Capturing information about "what the model does not know" is not only useful but essential in safety-critical tasks.</p><p>Bayesian Neural Networks (BNNs) and existing approximate Bayesian inference methods (Deep Ensembles, Monte Carlo Dropout, etc.) offer a principled approach to modeling and quantifying uncertainty in DNNs. However, quantifying uncertainty is challenging since we do not have access to ground-truth uncertainty estimates, i.e., there is no clear definition of what a good uncertainty estimate is. Moreover, computer vision tasks can add an extra level of complexity, since tasks such as semantic segmentation require a pixel-level understanding of an image. 
In this case, a Bayesian Deep Learning model for semantic segmentation classifies each pixel in the input image and generates an uncertainty estimate for each classified pixel.</p><p>In semantic segmentation, uncertainty estimation has been used for Out-of-Distribution (OoD) detection under the assumption that samples far away from the training distribution (anomalous or OoD samples) yield higher predictive uncertainty than samples observed in the training data <ref type="bibr" target="#b2">[3]</ref>. Approaches that use BNNs are able to capture aleatoric and epistemic uncertainties in the form of uncertainty maps (Figure <ref type="figure" target="#fig_0">1</ref>-top) but still fail to detect anomalies accurately. BNN methods for semantic segmentation are prone to yield false-positive predictions, as well as mismatches between anomaly instances and uncertain areas, caused by the lack of information on semantic structures and contexts <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>, as presented in Figure <ref type="figure" target="#fig_0">1</ref>-middle. Recently, embedding density estimation methods have been proposed and connected to the uncertainties obtained from Bayesian methods <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b2">3]</ref>. In this direction, methods that leverage metrics or statistics from a nonparametric embedding-space density have recently been proposed <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>, in contrast to distance-based methods that often assume a parametric embedding density <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>.</p><p>The present work combines the benefits of Bayesian methods for uncertainty estimation with methods for latent-space density estimation. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Semantic Segmentation with Probabilistic U-Net Architecture</head><p>Probabilistic U-Net <ref type="bibr" target="#b11">[12]</ref> is a DNN architecture for semantic segmentation that combines the U-Net architecture <ref type="bibr" target="#b12">[13]</ref> with the conditional variational autoencoder (CVAE) framework <ref type="bibr" target="#b13">[14]</ref>. The goal of Probabilistic U-Net is to handle input image ambiguities by leveraging the stochastic nature of the CVAE latent space. Figure <ref type="figure" target="#fig_1">2</ref> shows the Probabilistic U-Net architecture.</p><p>During training, depicted in Figure <ref type="figure" target="#fig_1">2a</ref>, Probabilistic U-Net finds a useful embedding of the segmentation variants in the latent space by introducing a Posterior Net. This network learns to recognize a segmentation variant and to map it to a position in the latent space (𝜇_post, 𝜎²_post). In addition, the KL divergence is used to penalize differences between the distributions at the outputs of the prior and posterior nets. The idea is to bring both distributions as close as possible so that the Prior Net distribution covers the space of all presented segmentation variants.</p><p>In general, the central component of this architecture is its latent space. Each value from the latent space encodes a segmentation variant. During inference, the Prior Net encodes each input image 𝑥_i and estimates the probability of these segmentation variants (𝜇_prior, 𝜎²_prior). To predict a set of segmentation outputs, a set of samples is drawn from the Prior Net probability distribution. 
Interestingly, we can draw a connection from this approach to other related work that aims to model complex aleatoric uncertainty (ambiguity, multi-modality) by handling stochastic input variables <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17]</ref>.</p></div>
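The KL penalty between the diagonal-Gaussian posterior and prior distributions over the latent space, described above, has a closed form. As a minimal sketch (the function name is ours, illustrative, and not from the Probabilistic U-Net code base):

```python
import numpy as np

def kl_diag_gaussians(mu_post, var_post, mu_prior, var_prior):
    """Closed-form KL( N(mu_post, diag(var_post)) || N(mu_prior, diag(var_prior)) ),
    summed over the latent dimensions."""
    mu_post, var_post = np.asarray(mu_post), np.asarray(var_post)
    mu_prior, var_prior = np.asarray(mu_prior), np.asarray(var_prior)
    return 0.5 * np.sum(
        np.log(var_prior / var_post)                          # log-variance ratio
        + (var_post + (mu_post - mu_prior) ** 2) / var_prior  # scaled spread + mean shift
        - 1.0
    )
```

Minimizing this term during training pulls the Prior Net distribution toward the Posterior Net's embedding of each presented segmentation variant.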
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methods</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Capturing Uncertainty from Intermediate Latent Representations</head><p>Despite the benefits of injecting random samples from the latent space into U-Net, aleatoric uncertainty alone is not enough: for the Out-of-Distribution detection task, epistemic uncertainty is needed <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>.</p><p>Although the Prior Net encoder 𝑞_prior employs Bayesian inference to obtain latent vectors z, it does not capture epistemic uncertainty since the encoder lacks a distribution over its parameters 𝜑. To overcome this problem, we took inspiration from Daxberger and Hernández-Lobato <ref type="bibr" target="#b19">[20]</ref> and Jesson et al. <ref type="bibr" target="#b20">[21]</ref>, and propose to capture uncertainty in the Probabilistic U-Net Prior Net encoder using 𝑀 Monte Carlo Dropout (MCD) samples <ref type="bibr" target="#b21">[22]</ref>, i.e., 𝑞_prior(𝑧 | 𝑥, 𝜑_m).</p><formula xml:id="formula_0">𝑞_Φ(z | x, 𝒟_p) = ∫ 𝑞(z | x, 𝜑) 𝑝(𝜑 | 𝒟_p) 𝑑𝜑<label>(1)</label></formula><p>In Eq. 1, we adapt the Prior Net encoder to capture the posterior 𝑞(z | x, 𝒟_p) using a set Φ = {𝜑_m}, m = 1, …, 𝑀, of encoder parameter samples 𝜑_m ∼ 𝑝(𝜑 | 𝒟_p) obtained by applying MCD at test time. At execution time, we forward-pass an input image 𝑥_i multiple times through the 𝑞_prior net. Each forward pass generates a new dropout mask and, in consequence, a new (𝜇_prior, 𝜎²_prior) prediction. From each (𝜇_prior, 𝜎²_prior) predicted for the same image, we sample a new latent vector z, as presented in Figure <ref type="figure" target="#fig_2">3</ref>.</p><p>MCD has been applied extensively for simple epistemic uncertainty estimation. However, dropout was found to be ineffective on convolutional neural networks (CNNs). 
Standard dropout is ineffective at removing semantic information from CNN feature maps because nearby activations contain closely related information. In contrast, dropping contiguous regions of the 2D feature maps helps remove semantic information and forces the remaining units to learn features for the assigned task <ref type="bibr" target="#b22">[23]</ref>. This effect is also desirable for capturing uncertainty; otherwise, we could obtain overconfident uncertainty estimates in the presence of samples that contain anomalies. To overcome this limitation of standard dropout, we followed the approach of Deepshikha et al. <ref type="bibr" target="#b23">[24]</ref> and used DropBlock2D to capture uncertainty from the Probabilistic U-Net. We applied MC DropBlock2D to the last feature map of the Prior Net, as shown in Figure <ref type="figure" target="#fig_1">2</ref> and Figure <ref type="figure" target="#fig_2">3</ref> (in red).</p><p>The average surprise, or uncertainty, of a random variable 𝑧 is determined by its probability distribution 𝑝(𝑧) and is called the entropy of 𝑧, i.e., H(𝑧). For continuous random variables, we use the differential entropy, as presented in Eq. 2:</p><formula xml:id="formula_1">H(𝑧) = ∫ 𝑝(𝑧) log (1 / 𝑝(𝑧)) 𝑑𝑧<label>(2)</label></formula><p>To quantify uncertainty from the Prior Net MCD samples, we used standard entropy estimators <ref type="bibr" target="#b24">[25]</ref> on 32 Monte Carlo samples (32 forward passes of the image through the Prior Net with MC DropBlock2D turned on). In Eq. 3, the entropy Ĥ_Φ(𝑧 | 𝑥) measures the average surprise of observing latent vector 𝑧 at the output of the Prior Net, given an input image 𝑥:</p><formula xml:id="formula_2">H(𝑧 | 𝑥) = ∫ 𝑝(𝑧 | 𝑥) log (1 / 𝑝(𝑧 | 𝑥)) 𝑑𝑧<label>(3)</label></formula></div>
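Under a Gaussian assumption, the per-dimension differential entropy of Eq. 3 can be estimated from the M Monte Carlo forward passes by moment matching. A minimal sketch of one such estimator (the paper relies on standard entropy estimators [25], which may differ; the function name is ours):

```python
import numpy as np

def latent_entropy_from_mc_samples(z_samples):
    """Moment-matched Gaussian entropy estimate from M Monte Carlo latent
    samples of shape (M, D): per-dimension differential entropy
    H(z_i | x) = 0.5 * log(2 * pi * e * var_i), plus their sum as a scalar score."""
    var = np.var(z_samples, axis=0, ddof=1)            # variance over the M MCD passes
    h_per_dim = 0.5 * np.log(2.0 * np.pi * np.e * var)
    return h_per_dim, float(np.sum(h_per_dim))
```

With the M = 32 forward passes used here, `z_samples` would stack the 32 latent vectors drawn from the 32 (𝜇_prior, 𝜎²_prior) predictions for one input image.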
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Bayesian Generative Classifier for OoD Detection</head><p>For OoD detection, we assume access to a dataset of normal (InD) and anomalous (OoD) samples, 𝑌 = {normal, anomaly}, with which we can train a Bayesian generative classifier (a "not so naive" Bayes classifier) using the empirical density of a metric or statistic 𝑇 of the latent representations z, i.e., 𝑇(z). To this end, we follow the approach of Morningstar et al. <ref type="bibr" target="#b6">[7]</ref> and use Kernel Density Estimation (KDE) to obtain the 𝑇(z) densities. Since we aim to leverage the uncertainty from intermediate latent representations, the 𝑇 statistic is the entropy at the output of the Prior Net (described in the previous section), with which we build the monitoring function ℳ_OOD, as presented in Figure <ref type="figure" target="#fig_1">2b</ref>.</p><p>For each label set, we fit a KDE to obtain a generative model of the data, i.e., we use the KDE to compute the likelihood 𝑝(𝑇(z) | 𝑦). Then, we compute the class-label prior probability 𝑝(𝑌), i.e., the marginal categorical distribution obtained by counting frequencies (the number of samples of each class in the complete training set). For an unknown latent vector, we can compute the posterior probability of each class, 𝑝(𝑦 | 𝑇(z)), using Bayes' rule in Eq. 4; for the OoD task, we use Eq. 5.</p><formula xml:id="formula_3">𝑝(𝑦 | 𝑇(z)) = 𝑝(𝑇(z) | 𝑦) 𝑝(𝑦) / 𝑝(𝑇(z))<label>(4)</label></formula><formula xml:id="formula_4">𝑝(𝑦 | 𝑇(z)) = 𝑝(𝑇(z) | 𝑦) 𝑝(𝑦) / ∑_{𝑦∈𝑌} 𝑝(𝑇(z) | 𝑦) 𝑝(𝑦)<label>(5)</label></formula><p>For a more detailed description of this approach to Bayesian generative classification, we refer the reader to the works of VanderPlas <ref type="bibr" target="#b25">[26]</ref> and Postels et al. <ref type="bibr" target="#b2">[3]</ref>. </p></div>
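The classifier above can be sketched as a per-class KDE combined through Bayes' rule (Eq. 5). A minimal illustration using SciPy's `gaussian_kde` on a scalar statistic T(z); the class name is ours and not from the paper's code:

```python
import numpy as np
from scipy.stats import gaussian_kde

class KDEGenerativeClassifier:
    """'Not so naive' Bayes classifier over a scalar statistic T(z), e.g. the
    latent entropy: fits one KDE per class for p(T | y) and combines the class
    likelihoods with the empirical priors p(y) via Bayes' rule (Eq. 5)."""

    def fit(self, t_values, labels):
        self.classes_ = np.unique(labels)
        self.kdes_ = {c: gaussian_kde(t_values[labels == c]) for c in self.classes_}
        self.priors_ = {c: float(np.mean(labels == c)) for c in self.classes_}
        return self

    def predict_proba(self, t_new):
        # Numerator of Eq. 4 for each class; normalizing over classes gives Eq. 5.
        joint = np.stack([self.kdes_[c](t_new) * self.priors_[c]
                          for c in self.classes_])
        return (joint / joint.sum(axis=0)).T
```

For example, fitting on entropy values from well-separated InD and OoD samples yields near-certain posteriors at either mode.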
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Early Experiments and Results</head><p>Dataset Building. To train the DNN model for semantic segmentation, we used the Valeo Woodscape dataset<ref type="foot" target="#foot_0">1</ref> <ref type="bibr" target="#b26">[27]</ref> with its semantic segmentation labels.</p><p>To train the monitoring function (i.e., the Bayesian generative classifier), our first choice was the Soiling Woodscape sub-dataset. However, after inspecting this dataset, we noticed that its samples were taken in short sequences. To improve dataset diversity, we therefore created a new, smaller sub-dataset by taking just one or two samples from the sampling sequences for each anomaly in Soiling Woodscape. We call this new dataset OoD-Woodscape; it combines samples from the Woodscape training set (normal class) and samples from the Soiling Woodscape validation set (anomaly class). The OoD-Woodscape training set has 280 samples, 140 for each class; the validation set has 120 samples in total, 60 for each class. The dataset-building procedure is depicted in Figure <ref type="figure" target="#fig_3">4</ref>.</p><p>Experiments. We quantify the entropy of intermediate latent vectors. Using the entropy values, we estimate the entropy density for each sub-dataset, i.e., for samples from the normal and anomaly sub-datasets. First, we quantify the entropy assuming a multivariate Gaussian distribution, Ĥ_𝜑(z | 𝑥), as presented in Figure <ref type="figure" target="#fig_4">5</ref> top-right. Next, we compute the entropy estimate for each variable in the latent vector, Ĥ_𝜑(𝑧_i | 𝑥), as shown in Figure <ref type="figure" target="#fig_4">5</ref>-bottom. Finally, for comparison, we also use the Mahalanobis distance, a multivariate measure of the distance between a point and a distribution. 
In this last case, we built the reference distribution by taking the intermediate representations z_i of each input image 𝑥_i from the Woodscape validation set (see Figure <ref type="figure" target="#fig_4">5</ref> top-left). Then, for a new input image x* and its predicted latent vector z*, we measure the distance to this reference distribution using 𝑑_M = √( (z* − 𝜇_z_val)ᵀ Σ⁻¹_z_val (z* − 𝜇_z_val) ).</p><p>For entropy, in both cases, we observe that the densities for InD and OoD samples differ. In the first case, the estimated latent-vector entropy density shows clear multimodality for OoD samples, with peaks in entropy intervals that denote under-confident (high-uncertainty) and overconfident (very-low-uncertainty) predictions. In the second case, the entropy of individual latent-vector variables, we observe that some variables exhibit multimodal densities for OoD samples, with peaks in entropy intervals different from those obtained with InD samples. Finally, the 𝑑_M density shows slight peaks or modes for OoD samples; however, the densities for InD and OoD samples overlap to a high degree.</p><p>Metrics. To evaluate our monitoring function, we used the validation set of OoD-Woodscape (the dataset we designed and built). We report results using the following metrics, as suggested by Ferreira et al. <ref type="bibr" target="#b27">[28]</ref> and Blum et al. <ref type="bibr" target="#b5">[6]</ref>: the Matthews correlation coefficient (MCC), the F1-score, the area under the Receiver Operating Characteristic curve (AUROC), and the False-Positive Rate at 90% True-Positive Rate (FPR90). Table <ref type="table" target="#tab_0">1</ref> summarizes the results for each statistic or feature employed in our classifier (monitoring function), and Figure <ref type="figure" target="#fig_5">6</ref> shows the ROC curve.</p><p>Results &amp; Discussion. 
We present the results of our monitoring function (classifier) in Table <ref type="table" target="#tab_0">1</ref> and in Figure <ref type="figure" target="#fig_5">6</ref>.</p><p>From the results, we can see that the latent-vector entropy-based methods outperform the Mahalanobis distance-based 𝑑_M method on almost all performance metrics. We believe the poor performance of the 𝑑_M method stems from the strong assumption that the embedding space is class-conditionally Gaussian when building the reference distributions used to compute the distance. On the other hand, the per-variable latent entropy achieves the best results: the classifier benefits from more expressive (entropy) information at the level of individual latent variables.  </p></div>
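For reference, the Mahalanobis-distance baseline 𝑑_M compared above can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def mahalanobis_distance(z_new, z_val):
    """d_M between a new latent vector z_new (shape (D,)) and the reference
    Gaussian fit on validation-set latent vectors z_val (shape (N, D))."""
    mu = z_val.mean(axis=0)                              # reference mean
    cov_inv = np.linalg.inv(np.cov(z_val, rowvar=False))  # inverse covariance
    diff = np.asarray(z_new) - mu
    return float(np.sqrt(diff @ cov_inv @ diff))
```

Because it summarizes the reference set with a single mean and covariance, this score inherits the class-conditional Gaussian assumption discussed above.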
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this work, we presented a method that uses the uncertainty from intermediate latent representations for Out-of-Distribution detection in a semantic segmentation task.</p><p>Our early results show that the entropy of latent features can be useful for building data-driven monitoring functions. In future work, we aim to explore the impact of the structure of the latent space by relaxing the Gaussian assumption <ref type="bibr" target="#b28">[29]</ref> and its effect on the metrics and statistics used for the OoD detection task. Moreover, it is important to analyze the applicability of our approach to other semantic segmentation architectures that do not include generative building blocks.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Semantic segmentation uncertainty estimation comparison for in-distribution and out-of-distribution data</figDesc><graphic coords="2,94.60,84.19,192.75,164.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Probabilistic U-Net [12], with Bayesian Prior Net for Semantic Segmentation: a. During training b. During inference with the monitoring function ℳ 𝑂𝑂𝐷 at the output of the Prior Net.</figDesc><graphic coords="3,107.71,84.19,379.84,163.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Prior Net latent vector z predictions with Monte Carlo DropBlock2D. The latent space at the output of the Prior Net is presented in 2D for illustration purposes.</figDesc><graphic coords="3,99.46,295.79,183.02,75.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Dataset for training the OoD monitoring function</figDesc><graphic coords="4,302.62,84.19,203.36,167.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Illustration of empirical densities with KDE: Mahalanobis distance 𝑑_M (top-left), the multivariate Gaussian entropy Ĥ_𝜑(z | 𝑥) (top-right), and the entropy from each latent vector variable Ĥ_𝜑(𝑧_i | 𝑥) (bottom).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: OoD detector ROC Curve analysis</figDesc><graphic coords="5,99.46,181.98,183.03,144.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Evaluation of OoD detection methods using DNN latent representations</figDesc><table><row><cell>Method</cell><cell>MCC</cell><cell>F1</cell><cell>AUROC</cell><cell>FPR90</cell></row><row><cell>𝑑_M</cell><cell>0.473</cell><cell>0.763</cell><cell>0.769</cell><cell>0.5</cell></row><row><cell>Ĥ_𝜑(z | 𝑥)</cell><cell>0.572</cell><cell>0.797</cell><cell>0.855</cell><cell>0.4</cell></row><row><cell>Ĥ_𝜑(𝑧_i | 𝑥)</cell><cell>0.685</cell><cell>0.849</cell><cell>0.946</cell><cell>0.16</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://woodscape.valeo.com/download</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgement</head><p>This work has been supported by the French government under the "France 2030" program as part of the SystemX Technological Research Institute within the Confiance.ai Program (www.confiance.ai).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Robustness to out-of-distribution inputs via task-aware generative uncertainty</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mcallister</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clune</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Levine</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 International Conference on Robotics and Automation (ICRA), IEEE</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2083" to="2089" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A comparison of uncertainty estimation approaches in deep learning components for autonomous vehicle applications</title>
		<author>
			<persName><forename type="first">F</forename><surname>Arnez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Espinoza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radermacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Terrier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshop on Artificial Intelligence Safety</title>
				<meeting>the Workshop on Artificial Intelligence Safety</meeting>
		<imprint>
			<date type="published" when="2020">2020. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Postels</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Blum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Strümpler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cadena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Siegwart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Gool</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Tombari</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2012.03082</idno>
		<title level="m">The hidden uncertainty in a neural networks activations</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Pixelwise anomaly detection in complex driving scenes</title>
		<author>
			<persName><forename type="first">G</forename><surname>Di Biase</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Blum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Siegwart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cadena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</title>
				<meeting>the IEEE/CVF conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="16918" to="16927" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Synthesize then compare: Detecting failures and anomalies for semantic segmentation</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Yuille</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Conference on Computer Vision</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="145" to="161" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The fishyscapes benchmark: measuring blind spots in semantic segmentation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Blum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P.-E</forename><surname>Sarlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Siegwart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cadena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">129</biblScope>
			<biblScope unit="page" from="3119" to="3135" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Density of states estimation for out of distribution detection</title>
		<author>
			<persName><forename type="first">W</forename><surname>Morningstar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gallagher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lakshminarayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dillon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="3232" to="3240" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ming</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2204.06507</idno>
		<title level="m">Out-of-distribution detection with deep nearest neighbors</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A simple unified framework for detecting out-of-distribution samples and adversarial attacks</title>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Out-of-distribution detection for automotive perception</title>
		<author>
			<persName><forename type="first">J</forename><surname>Nitsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Itkina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Senanayake</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Siegwart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Kochenderfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cadena</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 IEEE International Intelligent Transportation Systems Conference (ITSC)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="2938" to="2943" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">CutPaste: Self-supervised learning for anomaly detection and localization</title>
		<author>
			<persName><forename type="first">C.-L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sohn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yoon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pfister</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="9664" to="9674" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Kohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Romera-Paredes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Meyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>De Fauw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Ledsam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">H</forename><surname>Maier-Hein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Eslami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Rezende</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ronneberger</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1806.05034</idno>
		<title level="m">A probabilistic u-net for segmentation of ambiguous images</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">U-net: Convolutional networks for biomedical image segmentation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Ronneberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Brox</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Medical image computing and computer-assisted intervention</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="234" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Learning structured output representation using deep conditional generative models</title>
		<author>
			<persName><forename type="first">K</forename><surname>Sohn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="3483" to="3491" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Decomposition of uncertainty in bayesian deep learning for efficient and risk-sensitive learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Depeweg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-M</forename><surname>Hernández-Lobato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Doshi-Velez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Udluft</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1184" to="1193" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Model-predictive policy learning with uncertainty regularization for driving in dense traffic</title>
		<author>
			<persName><forename type="first">M</forename><surname>Henaff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Canziani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">7th International Conference on Learning Representations</title>
				<meeting><address><addrLine>ICLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Improving robustness of deep neural networks for aerial navigation by incorporating input uncertainty</title>
		<author>
			<persName><forename type="first">F</forename><surname>Arnez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Espinoza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radermacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Terrier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer Safety, Reliability, and Security. SAFECOMP 2021 Workshops</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="219" to="225" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">What uncertainties do we need in bayesian deep learning for computer vision?</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kendall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Gal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="5574" to="5584" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Can you trust your model&apos;s uncertainty? evaluating predictive uncertainty under dataset shift</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Ovadia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fertig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Nado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sculley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nowozin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dillon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lakshminarayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Snoek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="13991" to="14002" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Daxberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Hernández-Lobato</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1912.05651</idno>
		<title level="m">Bayesian variational autoencoders for unsupervised out-of-distribution detection</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Identifying causal-effect inference failure with uncertaintyaware models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jesson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mindermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Shalit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Gal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="11637" to="11649" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Dropout as a bayesian approximation: Representing model uncertainty in deep learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Gal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ghahramani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1050" to="1059" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Dropblock: A regularization method for convolutional networks</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ghiasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T.-Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="10727" to="10737" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Deepshikha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Yelleni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Srijith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">K</forename><surname>Mohan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2108.03614</idno>
		<title level="m">Monte carlo dropblock for modelling uncertainty in object detection</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Sample estimate of the entropy of a random vector</title>
		<author>
			<persName><forename type="first">L</forename><surname>Kozachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">N</forename><surname>Leonenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Problemy Peredachi Informatsii</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="9" to="16" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderplas</surname></persName>
		</author>
		<title level="m">Python data science handbook: Essential tools for working with data</title>
				<imprint>
			<publisher>O&apos;Reilly Media, Inc</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Woodscape: A multi-task, multi-camera fisheye dataset for autonomous driving</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yogamani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hughes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Horgan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sistu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Varley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>O'Dea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Uricar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Milz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Simon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Amende</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Witt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Rashed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chennupati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nayak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mansoor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Perrotton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Perez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)</title>
				<meeting>the IEEE/CVF International Conference on Computer Vision (ICCV)</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Benchmarking safety monitors for image classifiers with machine learning</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Ferreira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Arlat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guiochet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Waeselynck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="7" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">From variational to deterministic autoencoders</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Sajjadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vergari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Black</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Schölkopf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
