<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Exploring the Potential of Bilevel Optimization for Calibrating Neural Networks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Gabriele</forename><surname>Sanguin</surname></persName>
							<email>gabriele.sanguin@math.unipd.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Matematica</orgName>
								<orgName type="institution">Università degli Studi di Padova</orgName>
								<address>
									<addrLine>Via 8 Febbraio 2</addrLine>
									<postCode>35122</postCode>
									<settlement>Padova</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Arjun</forename><surname>Pakrashi</surname></persName>
							<email>arjun.pakrashi@ucd.ie</email>
							<affiliation key="aff1">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">University College Dublin</orgName>
								<address>
									<settlement>Belfield</settlement>
									<region>Dublin</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marco</forename><surname>Viola</surname></persName>
							<email>marco.viola@dcu.ie</email>
							<affiliation key="aff3">
								<orgName type="department">School of Mathematical Sciences</orgName>
								<orgName type="institution">Dublin City University</orgName>
								<address>
									<addrLine>Collins Avenue Ext</addrLine>
									<settlement>Dublin</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francesco</forename><surname>Rinaldi</surname></persName>
							<email>rinaldi@math.unipd.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Matematica</orgName>
								<orgName type="institution">Università degli Studi di Padova</orgName>
								<address>
									<addrLine>Via 8 Febbraio 2</addrLine>
									<postCode>35122</postCode>
									<settlement>Padova</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Exploring the Potential of Bilevel Optimization for Calibrating Neural Networks</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">8C5A3BA19941B48E2B2D7621894926CB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Bilevel optimization</term>
					<term>confidence</term>
					<term>calibration</term>
					<term>neural networks</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Handling uncertainty is critical for ensuring reliable decision-making in intelligent systems. Modern neural networks are known to be poorly calibrated, resulting in predicted confidence scores that are difficult to use. This article explores improving confidence estimation and calibration through the application of bilevel optimization, a framework designed to solve hierarchical problems with interdependent optimization levels. A self-calibrating bilevel neural-network training approach is introduced to improve a model's predicted confidence scores. The effectiveness of the proposed framework is analyzed using toy datasets, such as Blobs and Spirals, as well as more practical simulated datasets, such as Blood Alcohol Concentration (BAC). It is compared with a well-known and widely used calibration strategy, isotonic regression. The reported experimental results reveal that the proposed bilevel optimization approach reduces the calibration error while preserving accuracy.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Machine learning classification algorithms with increasingly high discriminative power, especially neural networks, have developed rapidly in the last decade. Such models are generally meant to assist humans in making decisions. Despite their high discriminative power, however, they sometimes make completely incorrect predictions with a fairly high confidence score. This is a serious problem in highly regulated and sensitive real-world applications (e.g. healthcare diagnostics, autonomous vehicles, financial forecasting) <ref type="bibr" target="#b0">[1]</ref>. It is therefore important for these models to provide a meaningful confidence score, based on which they can say "I don't know" when they are not confident enough, so that a human expert can inspect the case and make the final decision. This is sometimes called learning to reject, or abstention <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>.</p><p>Confidence is a probabilistic score that a model assigns to each prediction, quantifying how certain the model is about that prediction. One straightforward way to use such a score is to define an interval, a rejection window: any prediction whose confidence falls within it is marked as rejected. It is hard, however, to fix such a rejection window, especially for modern neural networks, because these models are known to have poor confidence calibration <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>Confidence calibration is the process of adjusting confidence scores to better align with the actual likelihood of correctness.
Well-calibrated confidence is crucial for effective decision making and is important for the interpretation of the model, since humans have a natural cognitive intuition for probabilities <ref type="bibr" target="#b4">[5]</ref>. Accurate confidence scores make it easier for users to comprehend how confident the model was during prediction and to establish trust in the decisions being made. Moreover, accurate confidence scores are essential if such a rejection window needs to be defined by a human expert.</p><p>There are two types of confidence calibration. The first is post-calibration, in which the output scores/probabilities of the main model are re-adjusted by an external calibration model; the main model itself does not need to be modified. In <ref type="bibr" target="#b3">[4]</ref>, the authors have demonstrated how well-known post-calibration methods can be used to calibrate existing models. The second is self-calibration, referring to algorithms which integrate the confidence calibration process into the model training itself. These methods aim to ensure that the model's probabilities are calibrated during the training phase, without the need for a separate calibration step.</p><p>The objective of this article is to present an initial study on whether it is possible to self-calibrate neural network models by exploiting bilevel optimization (BO), a mathematical framework specifically designed to solve hierarchical two-level decision-making optimization problems. BO has recently gained importance in machine learning, particularly in hyperparameter optimization and meta-learning <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>.
Such a framework fits the problem naturally: the inner-level optimization problem tackles the training of the model, while the outer-level optimization problem addresses the model's confidence calibration.</p><p>To the best of our knowledge, in the context of uncertainty scores, the only work in the literature that uses BO is <ref type="bibr" target="#b8">[9]</ref>. There, a BO framework is defined to train two different architectures, one for classification and one for the uncertainty score (which is then tested as a rejection function), thus leading to a significant increase in the number of parameters to be trained; the focus is on studying the selective potential of such a score. The work presented in the current article is the first of its kind and is quite distinct from the one in <ref type="bibr" target="#b8">[9]</ref>. The objective of the current work is to propose a BO framework to train a single self-calibrated deep neural network, BO4SC, and to provide an initial, but crucial, analysis of the applicability of the approach.</p><p>The article is organized as follows. Section 2 discusses the mathematical foundations of confidence estimation, calibration methods, and bilevel optimization (BO). Section 3 introduces BO4SC, a BO framework for confidence estimation in neural networks. In Section 4 the initial experiments and analysis are presented and discussed. Finally, Section 5 concludes the article.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Theoretical Background</head><p>This section will briefly introduce and discuss the relevant parts of confidence estimation, model calibration evaluation, calibration methods, and bilevel optimization.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Confidence Estimation</head><p>For a machine learning model, the standard process (referred to as the standard method later in Section 4) to accurately predict an output involves learning a mapping function that can generalize well from training data to unseen samples. Let {X, Y } = {(x i , y i )} n i=1 be a labeled dataset, where x i represents a data point and y i ∈ {1, 2, . . . , C} is its corresponding class label, with C the total number of classes and n the number of data points. The objective is usually to learn a function f that maps an unseen data point x t to a predicted output ŷt = f (x t ). This function f can be efficiently and effectively trained by minimizing the empirical loss over all training data.</p><p>Confidence estimation involves assigning a probabilistic score to each prediction, reflecting the model's certainty about the predicted output. Confidence is a score function p̂i = g(f, x i , ŷi ), which measures the likelihood of the prediction ŷi being correct given the features x i and the classifier f . Ideally, the confidence score should be continuous and fall within the range [0, 1].</p><p>A variety of methods have been developed to enable confidence estimation across different model types. Among these approaches, we find distance-based methods, which use the distance of a data point from other points, decision boundaries, or centroids of classes to estimate confidence <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14]</ref>. Bayesian uncertainty methods use Bayesian principles to model uncertainty, providing a probabilistic interpretation of confidence <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18]</ref>.
Reconstruction error techniques rely on the error of reconstructing the input data to obtain a confidence score, and are often used in models with an encoder-decoder structure, such as autoencoders; the idea is that a high reconstruction error indicates a lower confidence in the model's prediction <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20]</ref>. Ensemble methods utilize the variance among predictions from multiple models to estimate confidence <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref>. Extreme value theory (EVT) approaches assess confidence by modeling the tail distributions of prediction scores <ref type="bibr" target="#b22">[23]</ref>. Finally, logits-based techniques make use of the logits, i.e. the raw scores produced in particular by neural networks; these scores can be transformed or analyzed to estimate the confidence of the predictions <ref type="bibr" target="#b23">[24]</ref>.</p><p>Logits-based methods are the ones gaining the most attention, owing to the recent extensive use of neural networks. The experiments in the current work use a smooth version of the maximum class probability (MCP), which is a common approach in many classification tasks. We define the MCP as</p><formula xml:id="formula_0">p̂(x) = max_c P (y = c | x),<label>(1)</label></formula><p>where P (y = c | x) represents the predicted probability of class c for input x after applying a softmax function to the logits.</p></div>
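As an illustration, the (hard) MCP of Eq. (1) is simply the largest softmax probability. The following minimal sketch uses our own function names, not code from the paper:

```python
import math

def softmax(logits):
    """Convert raw logits into class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def mcp(logits):
    """Maximum class probability (Eq. 1): the largest softmax output."""
    return max(softmax(logits))
```

In the experiments this hard maximum is later replaced by a smooth surrogate so that the score stays differentiable with respect to the model parameters.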
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Evaluating Model Calibration</head><p>Unlike classification functions, which can be efficiently learned from labeled data, there is no supervisory information available for directly learning a confidence function, and the first challenge is how to estimate it from pre-trained models (e.g. with MCP). Unfortunately, in many cases, especially in modern deep neural networks, the calculated confidences tend to be overestimated, meaning that the models are over-confident (see <ref type="bibr" target="#b1">[2]</ref>). This can be formally described as</p><formula xml:id="formula_1">P (ŷ = y | p̂ = p) &lt; p,<label>(2)</label></formula><p>where p̂ denotes the model's predicted confidence.</p><p>To address this issue, it is necessary to employ methods to calibrate the confidence. A model is considered calibrated if p̂i accurately reflects the true likelihood of correctness:</p><formula xml:id="formula_2">P (ŷ = y | p̂ = p) = p, ∀p ∈ [0, 1].<label>(3)</label></formula><p>To understand whether a model is well-calibrated, one can exploit metrics quantifying the degree to which the model's predicted probabilities align with the actually observed outcomes. It is important to note that there is no single, universally accepted metric for assessing calibration.</p><p>In this work, we will use two of the most common calibration metrics in the literature, namely reliability diagrams and the expected calibration error (ECE).</p><p>Reliability diagrams are a visual tool used to assess model calibration <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref> by plotting the expected accuracy of samples against their predicted confidence levels. The predictions are grouped into M interval bins, each of size 1/M.
By letting B m represent the set of indices of samples whose predicted confidence falls within the interval</p><formula xml:id="formula_3">I_m = ((m−1)/M, m/M], the accuracy for bin B_m is calculated as acc(B_m) = (1/|B_m|) Σ_{i∈B_m} 1(ŷ_i = y_i),</formula><p>where ŷi and y i are the predicted and true class labels for data point x i , respectively. According to basic probability theory, acc(B m ) serves as an unbiased and consistent estimator of</p><formula xml:id="formula_4">P (ŷ = y | p̂ ∈ I m ).</formula><p>The average confidence within the bin B m is given by</p><formula xml:id="formula_5">conf(B_m) = (1/|B_m|) Σ_{i∈B_m} p̂_i,</formula><p>where p̂i represents the confidence of sample i. For a perfectly calibrated model, the relationship acc(B m ) = conf(B m ) should hold for all m ∈ {1, . . . , M }, i.e., the plot should follow the identity line.</p><p>It is important to note that reliability diagrams do not display the proportion of samples in each bin. This is why they are often paired with a density plot of the predicted confidences, called a confidence histogram.</p><p>The ECE is the weighted average of the absolute difference between accuracy and confidence over all prediction bins. Formally, it is defined as</p><formula xml:id="formula_6">ECE = Σ_{m=1}^{M} (|B_m|/n) |acc(B_m) − conf(B_m)|,<label>(4)</label></formula><p>where n is the total number of samples. Although the ECE is widely adopted due to its simplicity and interpretability, it is sensitive to the choice of the number of bins M , which can affect the accuracy of the measurement.</p></div>
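The binned quantities above translate directly into code. Below is a small sketch of the ECE of Eq. (4) with M equal-width bins, following the binning convention ((m−1)/M, m/M]; the function name and binning helper are ours, not from the paper:

```python
import math

def ece(confidences, correct, M=10):
    """Expected calibration error with M equal-width bins.

    confidences: predicted confidence per sample, each in [0, 1]
    correct: 1 if the corresponding prediction was right, else 0
    """
    bins = [[] for _ in range(M)]
    for p, c in zip(confidences, correct):
        # bin m collects confidences in ((m-1)/M, m/M]; p == 0 goes to bin 0
        m = min(int(math.ceil(p * M)) - 1, M - 1) if p > 0 else 0
        bins[m].append((p, c))
    n = len(confidences)
    err = 0.0
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)   # conf(B_m)
            acc = sum(c for _, c in b) / len(b)    # acc(B_m)
            err += (len(b) / n) * abs(acc - conf)  # weighted gap, Eq. (4)
    return err
```

The per-bin pairs (conf(B_m), acc(B_m)) computed inside the loop are exactly the points plotted in a reliability diagram.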
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Calibration Methods</head><p>Calibration methods can be categorized into two types, namely post-calibration and self-calibration.</p><p>Post-calibration: These methods adjust the output probabilities of a pretrained model using a separate calibration model, applied after the initial model has been trained. This adjustment aims to align the predicted probabilities with the true likelihood of events.</p><p>Among the most common approaches we find histogram binning <ref type="bibr" target="#b26">[27]</ref>, Bayesian binning into quantiles (BBQ) <ref type="bibr" target="#b27">[28]</ref>, Platt scaling <ref type="bibr" target="#b28">[29,</ref><ref type="bibr" target="#b25">26]</ref> and its derivatives matrix, vector, and temperature scaling <ref type="bibr" target="#b3">[4]</ref>. Other recent methods include beta calibration <ref type="bibr" target="#b29">[30]</ref>, shape-restricted polynomial regression <ref type="bibr" target="#b30">[31]</ref> and neural calibration <ref type="bibr" target="#b31">[32]</ref>.</p><p>The experiments in the current work make use of isotonic regression <ref type="bibr" target="#b32">[33]</ref>, because of its simplicity and effectiveness. Isotonic regression learns a piecewise constant function f to transform uncalibrated outputs into calibrated ones, by minimizing the squared loss subject to the constraint that f is a non-decreasing function.</p><p>Self-calibration: These methods integrate the calibration process into the model training itself, aiming to ensure that the model's probabilities are calibrated during the training phase, without the need for a separate calibration step. Self-calibration often requires modifying the loss function or the training procedure to directly incorporate the calibration objectives.
Techniques such as Bayesian neural networks, which incorporate uncertainty directly into the model predictions through probabilistic inference, inherently produce better-calibrated probabilities <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b33">34]</ref>.</p><p>The main objective of this article is to explore a new self-calibration strategy for neural networks that makes use of a bilevel optimization framework.</p></div>
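The isotonic fit described above can be sketched with the classic pool-adjacent-violators (PAV) algorithm. The snippet below is a minimal pure-Python illustration with our own naming, not the implementation used in the experiments:

```python
def pav(values):
    """Pool adjacent violators: least-squares fit of a non-decreasing
    sequence to `values` (assumed already ordered by the uncalibrated score)."""
    blocks = []  # each block holds [sum, count] of a pooled segment
    for v in values:
        blocks.append([v, 1])
        # merge while the last two block means violate monotonicity
        # (means compared by cross-multiplication to avoid division)
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit
```

Sorting samples by the model's predicted confidence and applying `pav` to their 0/1 correctness labels yields the piecewise-constant calibration map used for post-calibration.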
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Bilevel Optimization</head><p>Bilevel optimization (BO) is a mathematical approach designed to address hierarchical decision-making processes, where decisions made at an outer level influence the outcomes of an inner level, which in turn affect the outer level. This hierarchical structure is prevalent in many real-world scenarios, such as economics, engineering, management, and various public and private sector operations. The distinctive feature of bilevel optimization lies in its two interconnected levels of optimization. Each level has its own objectives and constraints, and there are two classes of decision vectors: the leader's (outer-level) decision vectors and the follower's (inner-level) decision vectors. The inner-level optimization is a parametric optimization problem solved with respect to the inner-level decision vectors, while the outer-level decision vectors act as parameters. The inner-level optimization problem acts as a constraint on the outer-level optimization problem, so that only solutions that are optimal for the inner level are considered feasible.</p><p>By denoting the outer and inner parameters as w and θ, respectively, we can define an unconstrained BO problem as</p><formula xml:id="formula_8">min_w f (w, θ*)  s.t.  θ* ∈ arg min_θ g(w, θ),<label>(5)</label></formula><p>where θ* is one of the minimizers of g. Gradient-based approaches are now the most commonly used methods for solving bilevel optimization problems. A particularly compelling one is to replace the inner problem with a dynamical system.
This idea, discussed, e.g., in <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b34">35,</ref><ref type="bibr" target="#b35">36]</ref>, involves approximating the bilevel problem with a sequence of optimization steps, which allows for efficient gradient computation.</p><p>Specifically, consider a prescribed positive integer T and let [T ] = {1, 2, . . . , T }. We now rewrite the bilevel problem Eq.( <ref type="formula" target="#formula_8">5</ref>) with the following approximation:</p><formula xml:id="formula_9">min_w f (w, θ_T(w))  s.t.  θ_0(w) = Φ_0(w), θ_t(w) = Φ_t(θ_{t−1}(w), w), t ∈ [T ],<label>(6)</label></formula><p>where Φ_0 : R^n → R^m is a smooth initialization mapping and, for each t ∈ [T ], Φ_t : R^m × R^n → R^m represents the operation performed by the t-th step of an optimization algorithm. For example, if the optimization dynamics is gradient descent, we might have</p><formula xml:id="formula_10">Φ_t(θ_{t−1}, w) = θ_{t−1} − η_t ∇_θ g(w, θ_{t−1}),<label>(7)</label></formula><p>where (η_t)_{t∈[T ]} is a sequence of step sizes. This approximation of the bilevel problem makes it possible to also use gradient descent on the outer objective. To this end, one has to compute a hypergradient, i.e. the gradient of the outer objective f (w, θ_T(w)) with respect to the hyperparameters w:</p><formula xml:id="formula_11">∇_w f (w, θ_T(w)) = ∇_w f (w, θ_T) + [J_{θ_T(w)}(w)]^⊤ ∇_θ f (w, θ_T),<label>(8)</label></formula><p>where the rows of the Jacobian matrix J_{θ_T(w)}(w) contain the gradients of the entries of θ_T with respect to w.</p><p>The reformulation (6) allows for efficient computation of the hypergradient using reverse- or forward-mode algorithmic differentiation.</p></div>
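To make Eqs. (6)–(8) concrete, consider a one-dimensional toy instance where the hypergradient of the unrolled dynamics has a closed form to check against: inner loss g(w, θ) = ½(θ − w)², outer loss f(w, θ) = ½θ², and Φ_t given by gradient descent as in Eq. (7). The sketch below is our own construction (not code from the paper); it accumulates the Jacobian dθ_t/dw in forward mode alongside the iterates:

```python
def unrolled_hypergradient(w, theta0, eta, T):
    """Hypergradient of f(w, theta_T(w)) = 0.5*theta_T**2 through T steps
    of gradient descent on g(w, theta) = 0.5*(theta - w)**2."""
    theta, J = theta0, 0.0  # J_0 = dtheta_0/dw = 0 (initialization independent of w)
    for _ in range(T):
        # dynamics (Eq. 7): theta_t = theta_{t-1} - eta*(theta_{t-1} - w)
        # forward-mode Jacobian propagation: J_t = (1 - eta)*J_{t-1} + eta
        J = (1.0 - eta) * J + eta
        theta = theta - eta * (theta - w)
    # Eq. (8): f has no direct dependence on w here, so the
    # hypergradient reduces to J^T * grad_theta f = J * theta_T
    return J * theta, theta
```

Here θ_T(w) = w + (1−η)^T (θ_0 − w), so the exact hypergradient is (1 − (1−η)^T) · θ_T, which the loop reproduces.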
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">BO4SC: A Bilevel Optimization Framework for Self-Calibration</head><p>We introduce here the bilevel optimization framework we designed to enhance confidence estimation, which we name BO4SC.</p><p>We assume that the prediction models are characterized by a dual-output structure: one output provides the prediction for the data point, the other estimates the confidence of that prediction. This is essential because we want both the class predictions and the confidence estimation to depend on the same model parameters. For the model m, parametrized by θ, we denote the output relative to the sample x i with</p><formula xml:id="formula_12">m(x_i, θ) = (ŷ(x_i, θ), p̂(x_i, θ)) = (ŷ_i, p̂_i),<label>(9)</label></formula><p>where ŷi is the class prediction and p̂i is its confidence estimate. Now consider the optimization problem in Eq. ( <ref type="formula" target="#formula_8">5</ref>), where the outer parameters are the training-sample weights w and the inner parameters are the parameters θ of the model m θ .
The inner loss function g is evaluated on the training set (D train ) and consists of the weighted cross-entropy (CE) loss over the model's prediction output, minimized with respect to the model's parameters θ:</p><formula xml:id="formula_13">g(w, θ) = (1/|D_train|) Σ_{i∈D_train} w_i · CE(ŷ(x_i, θ), y_i),<label>(10)</label></formula><p>where we suppose θ* to be unique and the CE loss is defined as</p><formula xml:id="formula_14">CE(ŷ(x_i, θ), y_i) = − Σ_{c=1}^{C} y_{i,c} log(ŷ(x_i, θ)_c).<label>(11)</label></formula><p>Here, C represents the number of classes, y i,c is the binary indicator (0 or 1) of whether the class label c is the correct classification for input x i , and ŷ(x i , θ) c is the predicted (post-softmax) probability for class c given input x i according to the model ŷ(•, θ).</p><p>The outer loss function f , on the other hand, is evaluated on the validation set (D val ), where it aims to minimize a binary cross-entropy (BCE) loss on the model's confidence output p̂(•, θ). The objective is to learn weights for each sample in the training set that can effectively balance the trade-off between prediction accuracy and confidence calibration:</p><formula xml:id="formula_15">f (w, θ*) = (1/|D_val|) Σ_{j∈D_val} BCE(p̂(x_j, θ*_w), y_j),<label>(12)</label></formula><p>where θ*_w are the model parameters found by the inner problem, which depend on the weights w assigned to the training samples, and p̂(•, θ) is the confidence output of the model.</p><p>The binary cross-entropy (BCE) loss is defined as</p><formula xml:id="formula_16">BCE(p̂(x_j, θ*), y_j) = − [ y^B_j log(p̂(x_j, θ*)) + (1 − y^B_j) log(1 − p̂(x_j, θ*)) ].<label>(13)</label></formula><p>In this equation, y^B_j is the true binary label (0 or 1) for the sample x j , indicating whether x j has been correctly classified (i.e. 
ŷj = y j ); p̂(x j , θ * ) represents the model's predicted confidence that x j has been correctly classified.</p><p>The difficulty in solving this bilevel optimization problem usually lies in the accurate computation of the hypergradient ∇ w L outer (w) = ∇ w f (w, θ * w ), which typically requires sophisticated approaches with large time and memory costs.</p><p>We schematize as Algorithm 1 the approximate hypergradient descent algorithm we implemented to solve the BO4SC problem.</p></div>
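The two objectives in Eqs. (10)–(13) are plain weighted losses and can be written down directly. Below is a minimal sketch operating on post-softmax probabilities; the function names are ours (the actual experiments work on PyTorch tensors):

```python
import math

def inner_loss(weights, class_probs, labels):
    """Weighted cross-entropy of Eq. (10): per-sample weights w_i
    multiply the CE of the predicted class probabilities."""
    n = len(labels)
    return sum(w * -math.log(p[y])
               for w, p, y in zip(weights, class_probs, labels)) / n

def outer_loss(confidences, correct):
    """BCE of Eqs. (12)-(13): predicted confidence vs. 0/1 correctness."""
    return -sum(c * math.log(q) + (1 - c) * math.log(1 - q)
                for q, c in zip(confidences, correct)) / len(confidences)
```

Note that `outer_loss` sees only correctness labels, so minimizing it pushes the confidence head toward the empirical probability of being right, which is exactly the calibration objective.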
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1 BO4SC via Approximate Hypergradient Descent</head><p>Initialize: Set initial weights w 0 and model parameters θ 0 . for j = 0, 1, . . . do for k = 0 to T − 1 do {Inner loop: gradient descent on the inner loss} Compute the gradient of the inner loss w.r.t. θ k :</p><formula xml:id="formula_17">∇_θ g(w^j, θ_k) = (1/|D_train|) Σ_{i∈D_train} w^j_i · ∇_θ CE(ŷ(x_i, θ_k), y_i)</formula><p>Update the model parameters θ k by gradient descent: θ k+1 = θ k − η θ · ∇ θ g(w j , θ k ) end for Set θ j w = θ T {final inner solution after T iterations, as a function of the outer parameters w} Compute the hypergradient, i.e. the gradient of the outer loss w.r.t. w, using the approximate solution θ j w :</p><formula xml:id="formula_18">∇_w f (w^j, θ^j_w) = (1/|D_val|) Σ_{i∈D_val} ∇_w BCE(p̂(x_i, θ^j_w), y_i)<label>(14)</label></formula><p>Update the outer parameters w j by gradient descent: w j+1 = w j − η w · ∇ w f (w j , θ j w ) end for</p></div>
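The control flow of Algorithm 1 can be exercised end-to-end on a deliberately tiny instance: a one-parameter linear "model" ŷ_i = θx_i with two weighted training samples, where the hypergradient of T unrolled inner steps is accumulated in forward mode. Everything below is our own toy construction to show the loop structure, not the networks used in the experiments:

```python
def run_outer_step(w, theta, x, y, x_val, y_val, eta_th=0.1, eta_w=0.05, T=20):
    """One iteration of the outer loop of Algorithm 1 for the toy model
    yhat_i = theta * x_i, inner loss g = (1/n) * sum_i w_i*(theta*x_i - y_i)**2,
    outer loss f = (theta_T * x_val - y_val)**2."""
    n = len(x)
    J = [0.0] * n  # J[k] = d theta / d w_k, propagated in forward mode
    for _ in range(T):  # inner loop: T gradient-descent steps on g
        grad = (2.0 / n) * sum(wi * xi * (theta * xi - yi)
                               for wi, xi, yi in zip(w, x, y))
        # curvature term shared by every d/dw_k of the step
        curv = (2.0 / n) * sum(wi * xi * xi for wi, xi in zip(w, x))
        # product rule on the step w.r.t. each sample weight w_k
        J = [Jk - eta_th * ((2.0 / n) * x[k] * (theta * x[k] - y[k]) + curv * Jk)
             for k, Jk in enumerate(J)]
        theta = theta - eta_th * grad
    # hypergradient of the outer loss via the chain rule (Eq. 14 analogue)
    dfdtheta = 2.0 * (theta * x_val - y_val) * x_val
    hypergrad = [dfdtheta * Jk for Jk in J]
    w_new = [wk - eta_w * hk for wk, hk in zip(w, hypergrad)]
    return w_new, theta, hypergrad
```

The hypergradient computed this way can be verified against finite differences of the outer loss, mimicking what torchopt does automatically for the full networks.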
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments and Results</head><p>In this section we present our experimental process and the results. First, we give an overview of the training approaches we compared and the datasets we used. Then we analyse the results.</p><p>This work mainly focuses on the proposed method along with two others, which are described as follows:</p><p>• Standard: the standard training procedure, in which the model's parameters are updated using backpropagation based on a single loss function. • Isotonic Regression (IsoReg): a non-parametric method used to calibrate confidence scores after the initial training phase of a model with the Standard method (see Section 2.3). • BO4SC: the method proposed in this work (Algorithm 1).</p><p>In the implementation of the BO4SC algorithm, particularly for the explicit calculation of the gradient of the outer loss with respect to w (that is, the gradient of θ T w with respect to w), the Python package torchopt <ref type="bibr" target="#b36">[37]</ref> was used. torchopt is a library that extends PyTorch <ref type="bibr" target="#b37">[38]</ref> by providing tools for higher-order optimization, specifically tailored to problems involving complex optimization hierarchies such as bilevel optimization, and it enables efficient computation of hypergradients. By leveraging torchopt, we can accurately and efficiently compute the required gradients in Eq. ( <ref type="formula" target="#formula_18">14</ref>), thereby facilitating the optimization process in our experiments.</p><p>To facilitate the initial investigation, several toy datasets were built and used as a diagnostic tool to understand the behaviour of BO4SC, as well as how it compares with the other methods.
These datasets have two features to facilitate visual inspection.</p><p>The first two datasets are Blobs 1.3 and Blobs 1.7, each of which has two dimensions and five classes, where the blobs are generated from a normal distribution with standard deviations of 1.3 and 1.7, respectively. The third and fourth are two-class datasets named Spiral 2.5 and Spiral 3.5, consisting of two interlocking spiral-shaped regions, each corresponding to one class, with the values 2.5 and 3.5 indicating the standard deviation from the center of the spiral, thus controlling the amount of overlap between the regions. These datasets are used for diagnostic purposes to understand the behaviour of the algorithm. Finally, we used the Blood Alcohol Concentration (BAC) dataset, which is commonly utilized in decision-making and confidence estimation tasks. The data were first collected by Nugent and Cunningham <ref type="bibr" target="#b38">[39]</ref> and can be used for regression or binary classification, depending on whether a threshold is set on the BAC level to distinguish between classes. Both the toy datasets, Blobs and Spirals, and BAC consist of 2000 samples in total: 700 are used for training, 300 for validation, and 1000 for the test set.</p><p>For each dataset a feed-forward neural network has been implemented, with a softmax function applied to the final logits. The MCP is extracted with a smooth maximum function, namely the Boltzmann operator <ref type="bibr" target="#b39">[40]</ref>, to keep the confidence score differentiable with respect to the model parameters. The Adam <ref type="bibr" target="#b40">[41]</ref> optimizer was used in the standard training and in the inner loop of BO4SC (to optimize the model parameters θ). All hyperparameters have been selected through a grid search. Besides the number of epochs, in the bilevel approach it is important to tune the number of inner iterations (T ) and the learning rate η w for the update of the outer parameters.</p></div>
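The smooth maximum used above can be illustrated with the Boltzmann operator, an exponentially weighted average that approaches the hard max as the temperature parameter α grows. The sketch below uses our own naming; in the experiments the operator is applied to the softmax outputs:

```python
import math

def boltzmann_max(probs, alpha=100.0):
    """Boltzmann operator: sum_i p_i * exp(alpha*p_i) / sum_i exp(alpha*p_i).

    alpha -> infinity recovers max(probs); alpha = 0 gives the plain mean.
    """
    m = max(probs)  # shift the exponent for numerical stability
    ws = [math.exp(alpha * (p - m)) for p in probs]
    return sum(p * w for p, w in zip(probs, ws)) / sum(ws)
```

Unlike a hard max, this score has nonzero gradients with respect to every class probability, which is what allows the confidence output to be trained through the bilevel objective.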
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Confidence Estimation and Calibration</head><p>What interests us in our experiments is assessing how well the methods predict calibrated confidence estimates. We begin with an analysis using one of the toy datasets, where we can observe how well the models differentiate between high-confidence and low-confidence regions.</p><p>The toy dataset Blobs 1.7 provides an excellent case for this analysis. Figure <ref type="figure" target="#fig_0">1</ref> presents three plots, one per method, each showing the confidence estimation results. These plots visually demonstrate the predicted confidence levels across the entire input space, highlighting areas where each model is more or less confident in its predictions.</p><p>The Standard model is highly confident in most regions, as indicated by the yellow areas. These regions reflect the areas where the model predicts class membership with high certainty (confidence value in (0.9, 1]). However, this confidence is sharply reduced in very narrow areas corresponding to the decision boundaries, represented by the green regions. These 'lines' of uncertainty appear consistently thin across different parts of the dataset, irrespective of the degree of overlap between classes. In contrast, the BO4SC model's confidence regions show a different pattern. Here, the uncertainty regions are considerably broader, especially in areas where the classes overlap more. This broader distribution of uncertainty better reflects the true complexity and intersections within the data, suggesting that the BO4SC approach is more sensitive to the nuances of the dataset's distribution. This ability, which also characterizes the Isotonic Regression post-calibration method, represents a significant improvement over the Standard model, highlighting the advantages of a post- or self-calibration technique in addressing the confidence estimation challenge.
A more detailed examination using quantitative metrics is essential to rigorously evaluate the effectiveness of these methods, and of bilevel optimization in particular, in producing well-calibrated models.</p><p>The first step is to examine confidence calibration through reliability diagrams (Section 2.2) and confidence histograms. These visual tools directly depict the relationship between predicted probabilities and actual observed frequencies, allowing for a straightforward assessment of a model's calibration. The plots are consistently similar across all datasets, so in Figure <ref type="figure" target="#fig_1">2</ref> we present the reliability diagrams for the Spiral 3.5 dataset, which highlight a drawback of Isotonic Regression. A critical aspect to consider is the gap between the two dashed vertical lines in the confidence histogram: the darker line represents the average accuracy, while the lighter grey line indicates the average confidence. For a model to be considered well-calibrated, these two lines should ideally overlap, or at least be very close to each other. The closer these lines are, the better aligned the model's predicted confidence is with its actual performance. When we examine the toy datasets, the gap between these two lines becomes particularly noticeable. The Standard model consistently displays the largest gap between average accuracy and average confidence across all datasets. This wide gap, with the darker line staying on the left, implies that the model's confidence scores are overly optimistic and do not accurately reflect its true performance.</p><p>On the other hand, the BO4SC model shows the smallest gap, indicating a more accurate alignment between confidence and accuracy. The IsoReg method also achieves a relatively close alignment between these two metrics. 
However, there is a nuanced difference between the confidence distribution obtained through bilevel optimization and that achieved by post-calibration methods like IsoReg. Although IsoReg effectively narrows the gap between accuracy and confidence, it does not always adjust the confidence predictions appropriately. In the Spiral datasets, for example, the IsoReg model produces confidence scores that fall within the (0, 0.5] range. Since these datasets are binary classification tasks, the minimum reasonable confidence score should be around 0.5, reflecting the baseline probability of a random guess. The presence of lower confidence scores indicates an improper adjustment by the IsoReg model, which underestimates the required confidence and thereby deviates from a reasonable calibration.</p><p>The reliability diagrams further reinforce the conclusions drawn from the confidence histograms. The Standard model demonstrates a clear tendency toward overconfidence, evident from the prevalence of orange gaps, especially in the higher confidence bins. In contrast, the bilevel optimization approach exhibits much better calibration, with visibly more balanced reliability diagrams. Interestingly, while IsoReg effectively reduces the overconfidence seen in the Standard model, it introduces occasional calibration issues of its own. In particular, it may undercorrect or overcorrect certain confidence levels, leading to gaps that are not entirely aligned with the model's true accuracy.</p><p>With regard to confidence calibration metrics, we report in Table 1 the results for the Expected Calibration Error (ECE) and the accuracy of the models. The ECE values show that the bilevel optimization method generally achieves lower values than the Standard and IsoReg methods, indicating better calibration and confidence scores closer to the true probabilities, while keeping good accuracy overall.</p></div>
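The binned ECE reported above can be sketched as follows. This is a minimal NumPy sketch with equal-width bins; the bin count is an assumption, not necessarily the setting used in the experiments.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over equal-width
    confidence bins, as read off a reliability diagram."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return float(ece)

# An overconfident model: average confidence 0.95, accuracy 0.75.
conf = np.array([0.95, 0.95, 0.95, 0.95])
hit = np.array([1, 1, 1, 0])
ece_over = expected_calibration_error(conf, hit)  # ≈ |0.75 - 0.95| = 0.20
```

A perfectly calibrated model (confidence equal to accuracy in every bin) yields an ECE of zero, which is why the darker and lighter dashed lines of the confidence histogram overlapping corresponds to good calibration.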
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Training Weights</head><p>We can make some additional observations about the training approach that exploits bilevel optimization. One of these concerns the role of the weights assigned to each training sample. The weighted approach allows the model to prioritize certain samples over others during training, potentially leading to better calibration and improved performance on more challenging or ambiguous classifications. By studying the evolution of these weights in BO4SC, we can better understand how the method operates.</p><p>In Figure <ref type="figure" target="#fig_2">3</ref> the history of the weight values (left panel) and their final distribution (right panel) are reported for the Blobs 1.7 dataset. In the left panel, the red lines indicate the weights associated with the samples that are ultimately misclassified. One can clearly see that the weights often move in groups, creating bundles of lines that follow the same trend; these may correspond to groups of samples that share the same characteristics or lie close to each other in feature space. The main observation is that most of the red lines end between 0 and 0.5, while the darker lines mostly lie above the middle value.</p><p>Looking at the right panel of Figure <ref type="figure" target="#fig_2">3</ref>, one can observe that the BO4SC approach assigns a weight of 1.0 to samples that are clearly and confidently classified into a single class, typically those located near the center of each cluster, far from the decision boundaries. As samples approach these boundaries, their weights decrease, converging towards 0.5 or even lower. This trend reflects BO4SC's strategy of diminishing the influence of samples that are ambiguous or more likely to be misclassified. 
This is visually evident, as many of these samples are marked with a red contour to indicate their misclassification (i.e., they lie inside a cluster of a different class) and appear as dark-colored (black) points, indicative of their low weight.</p></div>
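The effect of the sample weights on the inner-loop objective can be illustrated with a weighted cross-entropy. This is an illustrative NumPy sketch, not the authors' implementation; the normalization by the weight sum is an assumption.

```python
import numpy as np

def weighted_cross_entropy(logits, labels, weights):
    """Cross-entropy where each sample's loss is scaled by its outer-loop
    weight: down-weighted (ambiguous) samples pull less on the parameters."""
    z = logits - logits.max(axis=1, keepdims=True)        # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]       # per-sample loss
    return float(np.sum(weights * nll) / np.sum(weights))

logits = np.array([[2.0, 0.0],    # confidently classified sample
                   [0.2, 0.0]])   # near-boundary, ambiguous sample
labels = np.array([0, 0])
uniform = weighted_cross_entropy(logits, labels, np.array([1.0, 1.0]))
focused = weighted_cross_entropy(logits, labels, np.array([1.0, 0.2]))
# Down-weighting the ambiguous sample lowers its pull on the objective.
```

This mirrors the pattern in Figure 3: samples far from the decision boundaries keep weights near 1.0, while ambiguous or misclassified ones are driven towards 0.5 and below, reducing their influence on the fitted parameters.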
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>In this article, we explored a novel bilevel optimization approach to address the challenge of self-calibrating a neural network in classification tasks. The objective was to improve the confidence predicted by a model so that it better reflects the actual accuracy and is more meaningful in ambiguous scenarios. We carried out experiments and analyses across a variety of datasets, ranging from toy datasets like Blobs and Spirals to more complex ones like BAC, and demonstrated the effectiveness of bilevel methods, particularly their ability to refine confidence by dynamically adjusting sample weights during training.</p><p>We used the Expected Calibration Error (ECE) to quantitatively assess the models' performance. The consistent superiority of the bilevel approach over traditional methods highlights its ability to enhance classifier reliability while maintaining good accuracy overall.</p><p>The bilevel approach also compares favourably with post-calibration techniques. It produces better results and, more importantly, does not suffer from the typical issues that arise when post-calibrating the confidence. In fact, we found that fine-tuning with post-calibration methods, like isotonic regression, occasionally leads to over-adjustments, resulting in overly cautious confidence estimates. For this reason, the confidence produced by the bilevel optimization method would be more trustworthy in a real-world scenario.</p><p>While the results are promising, future research should focus on further refining these techniques. 
There is indeed still room for improvement on the computational side: the execution time and memory footprint of our bilevel approach are not always competitive with those of traditional training.</p><p>Another future research direction points towards reject-option classification, which allows models to refrain from making uncertain predictions.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Confidence region estimation on the Blobs 1.7 dataset for different approaches. Each plot represents the spatial distribution of confidence levels across the dataset. The background color represents the confidence value that the model associates with a point located at that position.</figDesc><graphic coords="8,72.00,65.61,148.92,115.06" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Confidence Histograms (top) and Reliability Diagrams (bottom) for the Spiral 3.5 test set. Orange sections represent the overconfidence gap, while red represents underconfidence.</figDesc><graphic coords="9,223.18,66.75,148.93,273.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Left: evolution of training weights found by the BO4SC method for the Blobs 1.7 dataset (1 epoch unit = 10 training epochs). Right: Final weight distribution.</figDesc><graphic coords="10,72.00,56.00,221.12,173.49" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Comparison</figDesc><table /><note>of Expected Calibration Error (ECE) and Accuracy across different datasets for Standard, IsoReg, and BO4SC methods. The best performance for each dataset and metric is highlighted in bold.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Method Blobs 1.3 Blobs 1.7 Spiral 2.5 Spiral 3.5 BAC Expected Calibration Error (ECE)</head><label></label><figDesc></figDesc><table><row><cell>Standard</cell><cell>0.026</cell><cell>0.074</cell><cell>0.064</cell><cell>0.109</cell><cell>0.018</cell></row><row><cell>IsoReg</cell><cell>0.023</cell><cell>0.039</cell><cell>0.039</cell><cell>0.143</cell><cell>0.004</cell></row><row><cell>BO4SC</cell><cell>0.017</cell><cell>0.016</cell><cell>0.025</cell><cell>0.067</cell><cell>0.012</cell></row><row><cell></cell><cell></cell><cell>Accuracy</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Standard</cell><cell>0.94</cell><cell>0.876</cell><cell>0.91</cell><cell>0.815</cell><cell>0.989</cell></row><row><cell>IsoReg</cell><cell>0.94</cell><cell>0.876</cell><cell>0.91</cell><cell>0.815</cell><cell>0.989</cell></row><row><cell>BO4SC</cell><cell>0.931</cell><cell>0.859</cell><cell>0.923</cell><cell>0.801</cell><cell>0.994</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work was supported by the STEM Challenge Fund 2023, University College Dublin.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Revisiting the calibration of modern neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Minderer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Djolonga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Romijnders</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hubis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Houlsby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lucic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="15682" to="15694" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A survey on learning to reject</title>
		<author>
			<persName><forename type="first">X.-Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G.-S</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-L</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the IEEE</title>
		<imprint>
			<biblScope unit="volume">111</biblScope>
			<biblScope unit="page" from="185" to="215" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Machine learning with a reject option: A survey</title>
		<author>
			<persName><forename type="first">K</forename><surname>Hendrickx</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Perini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Van Der Plas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Meert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Davis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">113</biblScope>
			<biblScope unit="page" from="3073" to="3110" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On calibration of modern neural networks</title>
		<author>
			<persName><forename type="first">C</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pleiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Q</forename><surname>Weinberger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1321" to="1330" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Are humans good intuitive statisticians after all? rethinking some conclusions from the literature on judgment under uncertainty</title>
		<author>
			<persName><forename type="first">L</forename><surname>Cosmides</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tooby</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognition</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="1" to="73" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Hyperparameter optimization with approximate gradient</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pedregosa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="737" to="746" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Forward and reverse gradient-based hyperparameter optimization</title>
		<author>
			<persName><forename type="first">L</forename><surname>Franceschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Donini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Frasconi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pontil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1165" to="1173" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Bilevel programming for hyperparameter optimization and meta-learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Franceschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Frasconi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Salzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Grazzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pontil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1568" to="1577" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shenoy</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2212.05987</idno>
		<title level="m">Selective classification using a robust meta-learning approach</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Distance metric learning for large margin nearest neighbor classification</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">Q</forename><surname>Weinberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">K</forename><surname>Saul</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of machine learning research</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Nearest neighbors distance ratio open-set classifier</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Mendes Júnior</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>De Souza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">D O</forename><surname>Werneck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">V</forename><surname>Stein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Pazinato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">R</forename><surname>De Almeida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">A</forename><surname>Penatti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">D S</forename><surname>Torres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rocha</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page" from="359" to="386" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">To trust or not to trust a classifier</title>
		<author>
			<persName><forename type="first">H</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Guan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Mandelbaum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weinshall</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1709.09844</idno>
		<title level="m">Distance-based confidence score for neural network classifiers</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Papernot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mcdaniel</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1803.04765</idno>
		<title level="m">Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Dropout as a bayesian approximation: Representing model uncertainty in deep learning</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Gal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ghahramani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">international conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1050" to="1059" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Weight uncertainty in neural network</title>
		<author>
			<persName><forename type="first">C</forename><surname>Blundell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cornebise</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kavukcuoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wierstra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1613" to="1622" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Being bayesian, even just a bit, fixes overconfidence in relu networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kristiadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hennig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="5436" to="5446" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Riquelme</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tucker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Snoek</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1802.09127</idno>
		<title level="m">Deep bayesian bandits showdown: An empirical comparison of bayesian deep networks for thompson sampling</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Learning discriminative reconstructions for unsupervised outlier removal</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE international conference on computer vision</title>
				<meeting>the IEEE international conference on computer vision</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="1511" to="1519" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Classification-reconstruction learning for open-set recognition</title>
		<author>
			<persName><forename type="first">R</forename><surname>Yoshihashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Shao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kawakami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>You</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Naemura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4016" to="4025" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Dropout: a simple way to prevent neural networks from overfitting</title>
		<author>
			<persName><forename type="first">N</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Salakhutdinov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The journal of machine learning research</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="1929" to="1958" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Simple and scalable predictive uncertainty estimation using deep ensembles</title>
		<author>
			<persName><forename type="first">B</forename><surname>Lakshminarayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pritzel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Blundell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Towards open set deep networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bendale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">E</forename><surname>Boult</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1563" to="1572" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">To reject or not to reject: that is the question-an answer in case of neural classifiers</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">De</forename><surname>Stefano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sansone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="84" to="94" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">The comparison and evaluation of forecasters</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Degroot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">E</forename><surname>Fienberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Royal Statistical Society: Series D (The Statistician)</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="12" to="22" />
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Predicting good probabilities with supervised learning</title>
		<author>
			<persName><forename type="first">A</forename><surname>Niculescu-Mizil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Caruana</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd international conference on Machine learning</title>
				<meeting>the 22nd international conference on Machine learning</meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="625" to="632" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zadrozny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Elkan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICML</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="609" to="616" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Obtaining well calibrated probabilities using bayesian binning</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Naeini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Cooper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hauskrecht</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI conference on artificial intelligence</title>
				<meeting>the AAAI conference on artificial intelligence</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">29</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods</title>
		<author>
			<persName><forename type="first">J</forename><surname>Platt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in large margin classifiers</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="61" to="74" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kull</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Silva Filho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Flach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial intelligence and statistics</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="623" to="631" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Calibrating classification probabilities with shape-restricted polynomial regression</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="1813" to="1827" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Field-aware calibration: a simple and empirically strong method for reliable probabilistic predictions</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The Web Conference 2020</title>
				<meeting>The Web Conference 2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="729" to="739" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Transforming classifier scores into accurate multiclass probability estimates</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zadrozny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Elkan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the eighth ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="694" to="699" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Uncertainty quantification using bayesian neural networks in classification: Application to biomedical image segmentation</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Kwon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-H</forename><surname>Won</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">J</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Paik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Statistics &amp; Data Analysis</title>
		<imprint>
			<biblScope unit="volume">142</biblScope>
			<biblScope unit="page">106816</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Generic methods for optimization-based modeling</title>
		<author>
			<persName><forename type="first">J</forename><surname>Domke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence and Statistics</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="318" to="326" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Gradient-based hyperparameter optimization through reversible learning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Maclaurin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Duvenaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Adams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International conference on machine learning</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="2113" to="2122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Torchopt: An efficient library for differentiable optimization</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Mai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="1" to="14" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<title level="m" type="main">Automatic differentiation in pytorch</title>
		<author>
			<persName><forename type="first">A</forename><surname>Paszke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chintala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>DeVito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Desmaison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Antiga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lerer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>NIPS-W</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">A case-based explanation system for black-box systems</title>
		<author>
			<persName><forename type="first">C</forename><surname>Nugent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cunningham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artif. Intell. Rev</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="163" to="178" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">An alternative softmax operator for reinforcement learning</title>
		<author>
			<persName><forename type="first">K</forename><surname>Asadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Littman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="243" to="252" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ba</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.6980</idno>
		<title level="m">Adam: A method for stochastic optimization</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
