<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Validity of Bayesian Neural Networks for Uncertainty Estimation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>John Mitros</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Brian Mac Namee</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science University College Dublin</institution>
          ,
          <addr-line>Dublin, IR</addr-line>
        </aff>
      </contrib-group>
      <abstract>
<p>Deep neural networks (DNNs) are versatile parametric models applied successfully to a diverse range of tasks and domains. However, they have limitations, particularly their lack of robustness and their over-sensitivity to out-of-distribution samples. Bayesian neural networks, through their formulation under the Bayesian framework, provide a principled approach to building neural networks that addresses these limitations. This work provides an empirical study evaluating and comparing Bayesian neural networks to their equivalent point estimate deep neural networks, quantifying the predictive uncertainty induced by their parameters as well as their performance in view of this uncertainty. Specifically, we evaluated and compared three point estimate deep neural networks against comparable Bayesian neural network alternatives using well-known benchmark image classification datasets.</p>
      </abstract>
      <kwd-group>
        <kwd>Bayesian Neural Networks</kwd>
<kwd>Uncertainty Quantification</kwd>
        <kwd>OoD</kwd>
        <kwd>Robustness</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>With the advancement of technology and the abundance of data, our society has been transformed beyond recognition. From smart home assistance technologies to self-driving cars to smart mobile phones, a multitude of connected devices now assist us in our daily routines.</p>
<p>One thing that is common among these devices is the exponential explosion of data generated as a consequence of our activities. Predictive models rely on this data to capture patterns in our daily routines, from which they can offer us assistance tailored to our individual needs. Many of these predictive models are based on deep neural networks (DNNs).</p>
<p>The machine learning community, however, is becoming increasingly aware of issues associated with DNNs, ranging from fairness and bias to robustness and uncertainty estimation. Motivated by this, we set out to investigate the reliability and trustworthiness of the prediction confidence estimates produced by DNNs. We first assess the capability of current DNN models to provide confident (i.e. low calibration error) and reliable (i.e. low noise sensitivity, or the ability to predict out-of-sample instances with high uncertainty) predictions. Second, we compare this to the capability of equivalent recent Bayesian formulations (i.e. Bayesian neural networks (BNNs)) in terms of accuracy, calibration error, and the ability to recognise and flag out-of-sample instances.</p>
<p>
        There exist two types of uncertainty related to predictive models, aleatoric and epistemic uncertainty [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Aleatoric uncertainty is usually attributed to stochasticity inherent in the task or experiment to be performed; it can therefore be considered an irreducible error. Epistemic uncertainty is usually attributed to uncertainty induced by the model parameters; it can therefore be considered a reducible error, as it can be reduced by obtaining more data. The question under investigation in this work is whether BNNs can provide better calibrated and more reliable estimates for out-of-sample instances than point estimate DNNs, and it therefore relates to epistemic uncertainty.
      </p>
<p>The remaining sections of this paper are organised as follows. Section 2 outlines related work in the area of confidence estimation and Bayesian neural networks. In Section 3, we provide information on the datasets used throughout the experiments, including their respective sizes and types. In Section 4, we describe the metrics used to evaluate whether a classifier is calibrated (i.e. expected calibration error and reliability figures), as well as its ability to identify and predict out-of-sample instances (i.e. symmetric KL divergence and distributional entropy figures), along with their respective explanations. In Section 5, we introduce the three BNN approaches utilised in the experiments, providing detailed explanations of how they work. Finally, in Sections 6 and 7 we present the results related to confidence calibration (i.e. Table 1 and Figure 1) and reliability prediction estimates for out-of-sample instances (i.e. Table 3 and Figure 2), along with the concluding remarks.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
<p>
        Earlier findings [
        <xref ref-type="bibr" rid="ref1 ref13 ref17">13, 17, 20, 18, 1</xref>
        ] have demonstrated the incapacity of point estimate deep neural networks (DNNs) to provide confident and calibrated [
        <xref ref-type="bibr" rid="ref10 ref5 ref9">10, 5, 9</xref>
        ] uncertainty estimates in their predictions. Motivated by recent work in this area, we strive to demonstrate, first, that this is indeed a serious problem currently investigated in the machine learning community, and second, to provide a viable alternative solution (i.e. BNNs) that combines the best of both worlds: a principled and elegant formulation due to the Bayesian inference framework, and powerful and expressive models thanks to DNNs.
      </p>
<p>To help the reader with the terminology and semantics of uncertainty quantification in predictive models, it is helpful to view the variance present in the model parameters as the sum of both aleatoric and epistemic uncertainty. Additionally, whenever the reader encounters the term "point estimate DNN" in this document, it simply refers to a DNN model coupled with a softmax function in the final layer. For instance, suppose our DNN is defined by ŷ = f(x), where ŷ denotes the predictions for K possible classes. Then a "point estimate DNN" simply exponentiates each prediction and normalises it by the total sum of all exponentiated predictions.</p>
<p>p(y = i | x) = exp(ŷ_i) / Σ_{j=1}^{K} exp(ŷ_j), for i = 1, …, K and ŷ = (ŷ_1, …, ŷ_K) ∈ ℝ^K (1)</p>
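<p>For illustration, Eq. 1 can be computed as follows. This is a minimal NumPy sketch, not part of the original study; the logits are arbitrary example values.</p>

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; the shift cancels in the ratio,
    # so the result is identical to exp(logits) / sum(exp(logits)).
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # example logits for K = 3 classes
probs = softmax(scores)
print(probs.sum())  # entries lie in [0, 1] and sum to one
```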
<p>The reason for identifying these models as point estimate DNNs is that they are often misinterpreted as probabilistic models, because they provide predictions that resemble probabilities (i.e. estimates in [0, 1]). Furthermore, Eq. 1 is often misinterpreted as a categorical distribution, a view with which we disagree, since it would require a Dirichlet prior to be classified as a categorical distribution. Our view is that Eq. 1 is more of a mathematical convenience that allows a DNN model to emit predictions, rather than a well-defined probability distribution.</p>
<p>
        As previously stated, there are two main problems investigated in this work. The first is the inability of DNNs to predict probability estimates representative of the true correct likelihood (i.e. confidence calibration). For instance, in a binary classification task a classifier is considered uncalibrated when its predictions do not match the empirical proportion of the positive class among the instances on which it is asked to predict. Poor confidence calibration in DNNs can be affected by different choices made while constructing the DNN architecture [
        <xref ref-type="bibr" rid="ref5 ref9">5, 9</xref>
        ] (e.g. depth, width, regularisation or batch-normalisation). The second problem is the incapacity of DNNs to identify and reliably predict out-of-sample instances (i.e. noise sensitivity), which can be a consequence of noise in the data, noise in the model parameters, or noise constructed by an adversary in order to manipulate the model's predictions [
        <xref ref-type="bibr" rid="ref1 ref11 ref3">1, 11, 18, 3</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>Data</title>
<p>
        The data used in this empirical study include two well-established datasets in the machine learning literature, CIFAR-10 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and SVHN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Both datasets are comprised of colour images of dimensionality 32x32 and include 10 distinct categories. In addition, both are considered to represent real-world datasets, with CIFAR-10 being collected over the Internet while SVHN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] is a result of the Google Street View project, representing house numbers. Further details regarding the number of instances in each dataset, and equivalently their categories, are given below.
- The CIFAR-10 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] dataset consists of 60,000 colour images of dimensionality 32x32 with 10 classes. Each class contains 6,000 images. In total there are 50,000 training images and 10,000 test images.
- The SVHN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] dataset consists of 99,289 colour images of dimensionality 32x32 representing digits of house numbers. There exist 10 categories, one for each digit; in total there are 73,257 colour images representing digits for training, and equivalently 26,032 digits for testing.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Metrics</title>
<p>The evaluation metrics chosen for this empirical study are:
- Accuracy
- Expected calibration error
- Entropy
- Symmetric KL divergence</p>
<p>In particular, consider a neural network model ŷ = f(x; θ) of depth L defined as {W_L σ_{L-1}(W_{L-1} … σ_2(W_2 σ_1(W_1 x)))}, describing a composition of L functions with parameters θ = {W_1, …, W_L} and σ(·) a nonlinear function.</p>
      <p>The accuracy on the outputs ŷ_n is measured by the indicator function acc = (1/N) Σ_{n=1}^{N} 1(y_n = ŷ_n), averaged over the total number of instances N in the dataset. This metric is predominantly used in the machine learning community to evaluate the generalisation ability of a predictive model on a hold-out test set.</p>
<p>
        To capture whether a model is calibrated, we utilised the expected calibration error, ECE = Σ_{m=1}^{M} (|B_m| / N) |acc(B_m) − conf(B_m)|, in combination with the equivalent reliability plots shown in Figure 1, similar to [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. ECE is expressed as a weighted average of the gap between the accuracy and the confidence of a model across M bins for N samples. This metric captures any disagreement between the classifier's predictions and the true empirical proportion of instances for each class category, for every mini-batch of instances presented to the classifier.
      </p>
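<p>The ECE formula above can be sketched as follows. This is an illustrative implementation with equal-width confidence bins and toy inputs, not the exact binning used in the experiments.</p>

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE = sum_m |B_m|/N * |acc(B_m) - conf(B_m)| over equal-width bins (lo, hi]."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy within the bin
            conf = confidences[mask].mean()  # mean predicted confidence in the bin
            ece += mask.sum() / n * abs(acc - conf)
    return ece

# Toy example of an over-confident classifier: 90% confidence, 50% accuracy
conf = np.array([0.9, 0.9, 0.9, 0.9])
corr = np.array([1.0, 1.0, 0.0, 0.0])
print(expected_calibration_error(conf, corr))  # 0.4
```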
<p>
        Furthermore, to assess a model's ability to characterise out-of-sample data with a high degree of uncertainty, we follow the work of [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and use the information entropy H(ŷ) = −Σ_{k=1}^{K} p(ŷ_k) log p(ŷ_k) of the final predictions of a model to derive the uncertainty plots depicted in Figure 2. Essentially, for every input x we have a corresponding vector of predictions ŷ = (0.86, 0.23, …, ŷ_K), where each entry denotes the prediction of the classifier for one of the K classes. We split each dataset randomly into two halves: the first half contains K/2 of the classes and the other half the remainder. One half is used to train the classifier (these are the in-sample instances) and the remaining half (the out-of-sample instances) is used only during the testing phase. Therefore, after the classifier has been trained on one half (hence the 5+5 categories in Figure 2), we evaluate its generalisation ability on the remaining half, where for every input we obtain a corresponding entropy over the K classes. This provides a distribution over the total number of N inputs, allowing us to distinguish and evaluate the classifier's entropy on in-sample vs out-of-sample instances.
      </p>
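<p>The entropy computation and the random 5+5 class split described above can be sketched as follows. The prediction vectors and the random seed are illustrative assumptions, not values from the study.</p>

```python
import numpy as np

def predictive_entropy(probs):
    # H(p) = -sum_k p_k log p_k, clipping to avoid log(0)
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

confident = np.array([0.97, 0.01, 0.01, 0.01])  # low entropy: typical in-sample output
uniform = np.full(4, 0.25)                      # maximal entropy log(4): high uncertainty

# Hypothetical random 5+5 split of a 10-class label set into in-/out-of-sample halves
rng = np.random.default_rng(0)
classes = rng.permutation(10)
in_classes, out_classes = classes[:5], classes[5:]
```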
<p>
        Finally, in order to conveniently compare and summarise a model's performance at detecting out-of-sample instances with a single summary statistic, the scalar value of the symmetric KL divergence, D_KL(p ‖ q) + D_KL(q ‖ p) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], between two distributions p and q was selected as a sensible candidate. The KL divergence evaluates how dissimilar two distributions p and q are: the larger the KL divergence, the more distinct the distributions. Since we want to evaluate the ability of the classifier to recognise out-of-sample instances, we measure the KL divergence between the classifier's entropy distribution for in-sample instances, p, and that for out-of-sample instances, q. Larger KL values indicate that the classifier is better able to recognise out-of-sample instances.
      </p>
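<p>A minimal sketch of the symmetric KL divergence on discrete distributions follows; the two toy histograms stand in for the in-sample and out-of-sample entropy distributions and are not data from the experiments.</p>

```python
import numpy as np

def kl(p, q):
    # D_KL(p || q) for discrete distributions, clipped to avoid log(0)
    p = np.clip(p, 1e-12, 1.0)
    q = np.clip(q, 1e-12, 1.0)
    return float((p * np.log(p / q)).sum())

def symmetric_kl(p, q):
    return kl(p, q) + kl(q, p)

# Toy entropy histograms: in-sample mass at low entropy, out-of-sample at high entropy
p_in = np.array([0.7, 0.2, 0.1])
q_out = np.array([0.1, 0.2, 0.7])
d = symmetric_kl(p_in, q_out)  # large value: the two distributions are well separated
```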
    </sec>
    <sec id="sec-5">
      <title>Methods</title>
<p>This section provides the details of the empirical evaluation, comprised of the following three components: (i) models, (ii) calibration and (iii) uncertainty. The following (i) models were selected, of which three are point estimate deep neural networks (DNNs) and the remaining three their equivalent Bayesian neural networks (BNNs).</p>
      <p>- Point estimate deep neural networks:</p>
      <p>
        VGG16 [19]
PreResNet164 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
WideResnet28x10 [22]
      </p>
      <p>- Bayesian neural networks:</p>
      <p>
        VGG16 - Monte Carlo Dropout [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
VGG16 - Stochastic Weight Averaging of Gaussian samples [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
Stochastic Variational - Deep Kernel Learning [21]
      </p>
      <p>(ii) The calibration of each model was evaluated using the expected calibration error introduced in Section 4, in combination with the reliability plots shown in Figure 1. Each model was trained on 5 categories from CIFAR-10 and, equivalently, SVHN, with the remaining 5 categories being withheld in order to evaluate the models' ability to associate out-of-sample instances with high uncertainty, as these were not shown to the model at any step. Each model was trained for 300 epochs, with the best performing model on the validation set being selected as the final model for each architecture.</p>
<p>(iii) As already stated, in order to evaluate a model's ability to detect out-of-sample instances with high uncertainty, we computed the entropy of each model's predictions to derive Figure 2, for each dataset and model combination. In addition, the symmetric KL divergence described in Section 4 was used to provide a comparable scalar summary statistic of the overall essence of Figure 2.</p>
      <p>In the remainder of this section we introduce the three Bayesian neural network approaches utilised in the experimental study:</p>
      <sec id="sec-5-1">
        <title>1. Dropout as Bayesian approximation:</title>
<p>
          This approach views dropout at test time as approximate Bayesian inference [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. It is based on prior work [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] which established a relationship between neural networks with dropout and Gaussian processes (GPs). Given a dataset (X, Y), the GP posterior is formulated as
        </p>
        <p>F | X ~ N(0, K(X, X)), Y | F ~ N(F, τ⁻¹ I), ŷ | Y ~ Categorical(·)</p>
        <p>where ŷ denotes a class label. An integral part of the GP is the choice of the covariance matrix K, representing the similarity between two inputs as a scalar value. The key insight for drawing connections between neural networks and Gaussian processes is to let the kernel represent a non-linear function; for instance, for the rectified linear (ReLU) function σ, the kernel is expressed as ∫ p(w) σ(wᵀx) σ(wᵀx′) dw with p(w) = N(μ, Σ). Because this integral is usually intractable, a conventional approach is to approximate it by Monte Carlo integration, k̂ = (1/T) Σ_{t=1}^{T} σ(w_tᵀ x) σ(w_tᵀ x′), hence the name Monte Carlo Dropout. Consider now a one-hidden-layer neural network with dropout, ŷ = (ẑ₂W₂) σ(x(ẑ₁W₁)), where ẑ₁, ẑ₂ ~ Bernoulli(p₁,₂). Utilising the approximate kernel k̂, the parameters W₁,₂ can be expressed as Gaussian variables A₁,₂ + ε₁,₂, with A₁,₂, ε₁,₂ ~ N(0, I), masked by the Bernoulli variables ẑ₁,₂ ~ Bernoulli(p₁,₂), closely resembling the NN formulation. Therefore, to establish the final connection between NNs trained with stochastic gradient descent (SGD) and dropout and GPs, one simulates Monte Carlo sampling by drawing samples from the trained model at test time, [ŷ_t = f(x; θ ẑ_t)]_{t=1}^{T} with ẑ_t ~ Bernoulli(p). The samples ŷ_t resulting from the different dropout masks ẑ_t are averaged over the T resulting models in order to approximate the posterior predictive distribution.</p>
      </sec>
      <sec id="sec-5-2">
        <title>2. Stochastic weight averaging of Gaussian samples</title>
<p>
          Stochastic weight averaging of Gaussian samples (SWAG) [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] is an extension of stochastic weight averaging (SWA) [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], in which the weights of a NN are averaged over different SGD iterates, which can itself be viewed as approximate Bayesian inference [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], with ideas traced back to [
          <xref ref-type="bibr" rid="ref15 ref16">16, 15</xref>
          ]. In order to understand SWAG we first need to explain SWA. At a high level, SWA can be viewed as averaged SGD [
          <xref ref-type="bibr" rid="ref15 ref16">16, 15</xref>
          ]. The main difference between SWA and averaged SGD is that SWA utilises a simple moving average instead of an exponential one, in conjunction with a high constant learning rate instead of a decaying one. In essence, SWA maintains a running average θ̄ = (1/T) Σ_{t=1}^{T} θ_t over the weights of a NN during the last 25% of the training process, which is then used to update the first and second moments of batch-normalisation. This leads to better generalisation, since the SGD iterates are smoothed out by the averaging process, leading to wider optima in the optimisation landscape of the NN. Now that we have established what SWA is, let us introduce SWAG. SWAG is an approximate Bayesian inference technique for estimating the covariance of the weight parameters of a NN. SWAG additionally maintains a running average of the squared weights, v = (1/T) Σ_{t=1}^{T} θ_t², in order to compute the covariance Σ = diag(v − θ̄²), which yields the approximate Gaussian posterior N(θ̄, Σ). At test time the weights of the NN are drawn from this posterior, θ̃_n ~ N(θ̄, Σ), in order to perform Bayesian model averaging, retrieving the final posterior of the model as well as uncertainty estimates from the first and second moments.
        </p>
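<p>The SWAG moment computation above can be sketched as follows. The SGD iterates here are a random-walk stand-in, an assumption for illustration only; a real run would collect the actual weight vectors from the tail of training.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SGD iterates of a flattened weight vector; a random walk
# stands in for the tail of training with a high constant learning rate.
theta_t = np.cumsum(rng.normal(scale=0.1, size=(50, 5)), axis=0) + 1.0

# SWAG moments over the iterates
theta_bar = theta_t.mean(axis=0)             # running mean of the weights
second_moment = (theta_t ** 2).mean(axis=0)  # running mean of the squared weights
var_diag = np.maximum(second_moment - theta_bar ** 2, 0.0)  # diag(v - theta_bar^2)

def sample_weights():
    # Draw weights from the approximate Gaussian posterior N(theta_bar, diag(var_diag))
    return theta_bar + np.sqrt(var_diag) * rng.normal(size=5)

# Bayesian model averaging: predictions would be averaged over such draws
draws = np.stack([sample_weights() for _ in range(30)])
```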
      </sec>
      <sec id="sec-5-3">
        <title>3. Deep kernel learning</title>
<p>The deep kernel learning method [21] combines NN architectures and GPs, trained jointly, in order to derive kernels with GP properties, overcoming the need to perform approximate Bayesian inference. The first part is composed of any NN (i.e. task dependent) whose output is utilised in the second part to approximate the covariance of the GP in the additive layer. As explained earlier for the MC-Dropout approach, a kernel between inputs x and x′ can be expressed via a non-linear mapping function thanks to the kernel trick, k(x, x′) → k(f(x; w), f(x′; w) | w); therefore combining NNs with GPs seems like a natural evolution, permitting scalable and flexible kernels represented as neural networks to be utilised directly in Gaussian processes. Finally, given that the formulation of GPs represents a distribution over a function space, it is possible to derive uncertainty estimates from the moments of this distribution, informing the models about the uncertainty in their parameters and its impact on the final posterior distribution.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Results</title>
<p>In this section we describe the results of our experiments and the findings that arise from them for our two initial questions. Let us recall them here for clarity:
- Do point estimate deep neural networks suffer from pathologies of poor calibration and an inability to identify out-of-sample instances?
- Are Bayesian neural networks better calibrated and more resilient to out-of-sample instances?</p>
<p>
        To answer the first question we draw the reader's attention to Figure 1 and, equivalently, Table 1. Figure 1 shows the reliability plots for all models and datasets. In these plots a perfectly calibrated model is indicated by the diagonal line. Anything below the diagonal represents an over-confident model, while anything above the diagonal represents an under-confident model. The expected calibration errors (ECE) in Table 1 (which measure the degree of miscalibration present) appear to be in accordance with the results from [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. All of the models are somewhat miscalibrated. Some of the Bayesian approaches, however, in particular the models based on MC-Dropout and SWAG, are better calibrated than their point estimate DNN counterparts.
      </p>
<p>
        Notice that all models exhibit high accuracy on the final test set (shown in Table 2). This illustrates that a model can be very accurate but miscalibrated, or, equivalently, very well calibrated but inaccurate. There is no real correlation between the calibration and the accuracy of a model. It is also known [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] that as the complexity of the model increases, the calibration error increases as well.
      </p>
<p>In order to evaluate and demonstrate the ability of the models to handle out-of-sample instances, we divided each of the CIFAR-10 and SVHN datasets into two halves containing 5 categories each. These partitions represent in-distribution and out-of-distribution samples. In the discussion that follows this is indicated by the parenthesis (5 + 5) next to the dataset name, to denote that the model was trained on only 5 categories representing in-distribution samples, and at test time was evaluated on the other 5 categories to simulate out-of-sample instances. The results are illustrated in Figure 2, for CIFAR-10 and SVHN respectively, and summarised in Table 3. Table 3 summarises Figure 2 by measuring the symmetric KL divergence between the distributions of class confidence entropies of each model for the in-sample and out-of-sample instances.</p>
<p>Together these results suggest that the Bayesian methods are better at identifying out-of-sample instances. Although the result is not clear cut, since in some cases the point estimate networks obtain higher divergence scores than the Bayesian ones, overall the results point in the Bayesian direction. In conclusion, we have shown that point estimate deep neural networks indeed suffer from poor calibration and an inability to identify out-of-sample instances with high uncertainty. Bayesian deep neural networks provide a principled and viable alternative that allows the models to be informed about the uncertainty in their parameters, while at the same time exhibiting a lower degree of sensitivity to noisy samples compared to their point estimate DNN counterparts. This suggests a promising research direction for improving the performance of deep neural networks.</p>
<p>Acknowledgement. This work was supported by Science Foundation Ireland under Grant No. 15/CDA/3520 and Grant No. 12/RC/2289.</p>
      <p>18. Shafaei, A., Schmidt, M., Little, J.J.: Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of "Outlier" Detectors. arXiv:1809.04729 [cs, stat] (Sep 2018)</p>
      <p>19. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv e-prints (Sep 2014)</p>
      <p>20. Stracuzzi, D.J., Darling, M.C., Peterson, M.G., Chen, M.G.: Quantifying Uncertainty to Improve Decision Making in Machine Learning. Tech. Rep. SAND2018-11166, 1481629, Sandia National Laboratories (Oct 2018)</p>
      <p>21. Wilson, A.G., Hu, Z., Salakhutdinov, R., Xing, E.P.: Stochastic Variational Deep Kernel Learning. arXiv e-prints (Nov 2016)</p>
      <p>22. Zagoruyko, S., Komodakis, N.: Wide Residual Networks. arXiv e-prints (May 2016)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Choi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jang</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alemi</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          : WAIC, but Why?
          <article-title>Generative Ensembles for Robust Anomaly Detection</article-title>
          . arXiv:
          <year>1810</year>
          .01392 [cs, stat] (
          <year>Oct 2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Damianou</surname>
            ,
            <given-names>A.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lawrence</surname>
          </string-name>
          , N.D.: Deep Gaussian Processes. arXiv e-prints (
          <year>Nov 2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Fawzi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fawzi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fawzi</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
<article-title>Adversarial vulnerability for any classifier</article-title>
          .
          <source>Neural Information Processing Systems (Feb</source>
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Gal</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghahramani</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning</article-title>
          . arXiv e-prints (
          <year>Jun 2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pleiss</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weinberger</surname>
            ,
            <given-names>K.Q.</given-names>
          </string-name>
          :
          <source>On Calibration of Modern Neural Networks. International Conference on Machine Learning (Jun</source>
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>He</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
          </string-name>
          , J.:
          <article-title>Identity Mappings in Deep Residual Networks</article-title>
          . arXiv e-prints (
          <year>Mar 2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Izmailov</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Podoprikhin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garipov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vetrov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , Wilson,
          <string-name>
            <surname>A.G.</surname>
          </string-name>
          :
          <article-title>Averaging Weights Leads to Wider Optima and Better Generalization</article-title>
          . arXiv:
          <year>1803</year>
          .05407 [cs, stat] (
          <year>Mar 2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Krizhevsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Learning multiple layers of features from tiny images pp</article-title>
          .
          <volume>32</volume>
          {
          <issue>33</issue>
          (
          <year>2009</year>
          ), https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Verified Uncertainty Calibration</article-title>
          . In: arXiv:1909.10155 [cs, stat]. vol.
          <volume>33</volume>
          . Vancouver, Canada (Sep
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples</article-title>
          . arXiv:1711.09325 [cs, stat] (
          <year>Nov 2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Maddox</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garipov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Izmailov</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vetrov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          :
          <article-title>A Simple Baseline for Bayesian Uncertainty in Deep Learning</article-title>
          . arXiv:1902.02476 [cs, stat] (
          <year>Feb 2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Mandt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffman</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blei</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          :
          <article-title>Stochastic Gradient Descent as Approximate Bayesian Inference</article-title>
          .
          arXiv e-prints (Apr
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Nalisnick</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matsukawa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Teh</surname>
            ,
            <given-names>Y.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorur</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lakshminarayanan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Do Deep Generative Models Know What They Don't Know?</article-title>
          International Conference on Learning Representations (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Netzer</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coates</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bissacco</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          :
          <article-title>Reading digits in natural images with unsupervised feature learning</article-title>
          (
          <year>2011</year>
          ), http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Polyak</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Juditsky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Acceleration of Stochastic Approximation by Averaging</article-title>
          .
          <source>SIAM Journal on Control and Optimization</source>
          <volume>30</volume>
          (
          <issue>4</issue>
          ), pp. 838–855 (Jul
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Ruppert</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Efficient Estimations from a Slowly Convergent Robbins-Monro Process</article-title>
          .
          <source>Technical Report TR000781</source>
          , Cornell (Feb
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Schulam</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saria</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Can You Trust This Prediction? Auditing Pointwise Reliability After Learning</article-title>
          .
          <source>In: Proc. of Artificial Intelligence and Statistics</source>
          . p. 10
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>