<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Ellipsoidal distribution-free set</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Dmitriy</forename><surname>Klyushin</surname></persName>
							<email>dmytroklyushin@knu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska Street</addrLine>
									<postCode>01033</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Tymoshenko</surname></persName>
							<email>andriitymoshenko@knu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska Street</addrLine>
									<postCode>01033</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Ellipsoidal distribution-free set</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C9C859DF0DD3410BF3B757270A51281A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>data mining</term>
					<term>prediction set</term>
					<term>Petunin ellipsoid</term>
					<term>outlier detection</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper introduces a distribution-free approach based on Hill's assumption and Petunin ellipsoids. Points are generated from several distributions and used to build ellipsoids; test points drawn from the same distribution are then checked for membership in the largest ellipsoid. As a result, a new prediction set is constructed in the form of a Petunin ellipsoid, whose confidence level is determined by the number of points. The method described here works effectively for the chosen distributions. Moreover, a statistical analysis of the number of points falling inside is performed. The method is a useful tool for solving many urgent problems of machine learning, e.g. generalization of training samples and effective cross-validation.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Construction of prediction sets is a popular problem in machine learning, often studied alongside neural networks, and in recent years many researchers have addressed it. For example, Adam Khakhar, Stephen Mell and Osbert Bastani <ref type="bibr" target="#b5">[1]</ref> used a trained code generation model in an algorithm that leverages the abstract syntax tree of the programming language to create a set of programs that contains the correct program with high confidence. As another example, Soroush H. Zargarbashi, Mohammad Sadegh Akhondzadeh and Aleksandar Bojchevski <ref type="bibr" target="#b6">[2]</ref> derive provably robust conformal sets by bounding the worst-case change of the conformity scores.</p><p>Another important direction is distribution-free classification and prediction sets. Here we can mention the work of Chirag Gupta, Aleksandr Podkopaev and Aaditya Ramdas <ref type="bibr" target="#b7">[3]</ref>, who study calibration and prediction sets combined with confidence intervals. Their research addresses binary classification in the distribution-free setting; based on the theorems they demonstrate, confidence intervals for binned probabilities make distribution-free calibration possible.</p><p>In <ref type="bibr">[4]</ref> Hongxiang Qiu, Edgar Dobriban and Eric Tchetgen Tchetgen offer a novel, flexible distribution-free method named PredSet-1Step for constructing prediction sets whose asymptotic coverage is guaranteed under unknown covariate shift.</p><p>In <ref type="bibr" target="#b9">[5]</ref> A. N. Angelopoulos, S. Bates, J. Malik and M. I. Jordan present an algorithm that modifies a chosen classifier to output a predictive set containing the true label with a user-specified probability. This simple and fast algorithm is reminiscent of Platt scaling but yields a formal finite-sample coverage guarantee for every model and dataset.</p><p>The approach described in <ref type="bibr" target="#b10">[6]</ref> chooses the size of the prediction sets by analysing a holdout set and leads to explicit finite-sample guarantees. As a result, simple, distribution-free and rigorous error control is obtained for many tasks, demonstrated on five large-scale machine learning problems. Further works related to prediction offer various approaches: prediction based on language models <ref type="bibr" target="#b11">[7]</ref>, neural networks compared to calibrated predictions <ref type="bibr" target="#b12">[8]</ref>, distribution-free uncertainty quantification and conformal prediction <ref type="bibr" target="#b13">[9]</ref>, conformal risk control <ref type="bibr" target="#b14">[10]</ref>, conformal predictors applied to medical imaging <ref type="bibr" target="#b15">[11]</ref>, confident prediction under distribution shift <ref type="bibr" target="#b16">[12]</ref>, conformal prediction robust to label noise <ref type="bibr" target="#b17">[13]</ref>, and conformal prediction via probabilistic circuits <ref type="bibr" target="#b18">[14]</ref>.</p><p>Some related topics are also worth mentioning: uncertainty quantification over graphs using conformalized graph neural networks <ref type="bibr" target="#b19">[15]</ref>, adversarial robustness of randomly smoothed classifiers <ref type="bibr" target="#b20">[16]</ref>, randomized smoothing for graphs and images <ref type="bibr" target="#b21">[17]</ref>, and adversarially trained smoothed classifiers <ref type="bibr" target="#b22">[18]</ref>.</p><p>The purpose of our paper is to describe a method for constructing an ellipsoidal prediction set for a set of randomly generated points based on a chosen distribution. The main tools of our forecast are predictive sets represented by ellipses constructed from the generated points. Test points are generated from the same distribution; the more ellipses contain a point, the higher the probability that it belongs to the same class. Consider the problem of creating a conformal prediction set based on points</p><formula xml:id="formula_0">x_1, x_2, \ldots, x_m \in \mathbb{R}^d.</formula><p>The aim is to find a prediction set</p><formula xml:id="formula_1">E\left(x_1, x_2, \ldots, x_m\right) \subset \mathbb{R}^d</formula><p>such that</p><formula xml:id="formula_2">p\left(x_{m+1} \in E\right) \ge 1 - \alpha,</formula><p>where 0 &lt; \alpha &lt; 1 is a chosen significance level, so that</p><formula xml:id="formula_3">p\left(x_{m+1} \in E\right)</formula><p>is the confidence level of the predictive set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Hill's Assumption A^(m)</head><p>Let x_1, x_2, \ldots, x_m denote a sample drawn from a population generated according to an absolutely continuous distribution F. Arranging it in increasing order gives the variational series</p><formula xml:id="formula_4">x_{(1)} \le x_{(2)} \le \ldots \le x_{(m)},</formula><p>where x_{(i)} is the i-th order statistic. The distribution function of the k-th order statistic x_{(k)} can be calculated as</p><formula xml:id="formula_5">F_{(k)}(u) = \sum_{i=k}^{m} C_m^i F(u)^i \left(1 - F(u)\right)^{m-i},</formula><p>where F(u) = p(x &lt; u). Hill's assumption A^(m) <ref type="bibr" target="#b23">[19]</ref> states that if x is chosen from the population according to distribution F, then</p><formula xml:id="formula_6">p\left(x \in \left(x_{(i)}, x_{(j)}\right)\right) = \frac{j-i}{m+1}, \quad j > i. <label>(1)</label></formula><p>A^(n) was proven in the papers of Yu. I. Petunin <ref type="bibr" target="#b24">[20]</ref> and by several other scientists. Let us recall the proof. For independent random variables \xi and \eta,</p><formula xml:id="formula_7">p(\xi &lt; \eta) = \int_{-\infty}^{\infty} F_\xi(u)\, dF_\eta(u), <label>(2)</label></formula><p>where F_\xi(u) and F_\eta(u) denote the distribution functions of \xi and \eta, respectively. The probability density of the i-th order statistic is</p><formula xml:id="formula_8">f_{(i)}(u) = \frac{d}{du} \sum_{k=i}^{m} C_m^k F(u)^k \left(1 - F(u)\right)^{m-k} = \sum_{k=i}^{m} G_k'(u), <label>(3)</label></formula><p>where</p><formula xml:id="formula_9">G_k'(u) = C_m^k \left[ k\, F(u)^{k-1} \left(1 - F(u)\right)^{m-k} f(u) - (m-k)\, F(u)^{k} \left(1 - F(u)\right)^{m-k-1} f(u) \right]. <label>(4)</label></formula><p>In the sum the negative term for index k is compensated by the positive term for index k + 1, since</p><formula xml:id="formula_10">(k+1)\, C_m^{k+1} - (m-k)\, C_m^k = \frac{m!}{k!\,(m-k-1)!} - \frac{m!}{k!\,(m-k-1)!} = 0,</formula><p>and the negative term for k = m vanishes identically, because it contains the factor (m - m) = 0. Thus only the positive term for k = i survives:</p><formula xml:id="formula_11">f_{(i)}(u) = i\, C_m^i\, F(u)^{i-1} \left(1 - F(u)\right)^{m-i} f(u) = m\, C_{m-1}^{i-1}\, F(u)^{i-1} \left(1 - F(u)\right)^{m-i} f(u).</formula><p>Let us find p(x &lt; x_{(i)}). Using (2) and substituting v = F(u), we have</p><formula xml:id="formula_12">p\left(x &lt; x_{(i)}\right) = \int_{-\infty}^{\infty} F(u)\, f_{(i)}(u)\, du = m\, C_{m-1}^{i-1} \int_0^1 v^{i} (1-v)^{m-i}\, dv.</formula><p>It is known that</p><formula xml:id="formula_13">\int_0^1 x^{p-1} (1-x)^{q-1}\, dx = \frac{(p-1)!\,(q-1)!}{(p+q-1)!}.</formula><p>Applying this equation with p = i + 1 and q = m - i + 1 gives</p><formula xml:id="formula_14">p\left(x &lt; x_{(i)}\right) = m\, \frac{(m-1)!}{(i-1)!\,(m-i)!} \cdot \frac{i!\,(m-i)!}{(m+1)!} = \frac{i}{m+1}.</formula><p>Similarly, p(x &lt; x_{(j)}) = j/(m+1), and therefore</p><formula xml:id="formula_15">p\left(x \in \left(x_{(i)}, x_{(j)}\right)\right) = p\left(x &lt; x_{(j)}\right) - p\left(x &lt; x_{(i)}\right) = \frac{j-i}{m+1}.</formula><p>So, if a random variable x is independent of x_1, x_2, \ldots, x_m and is chosen by sampling from the same population with distribution F(u), then</p><formula xml:id="formula_16">p\left(x \in \left(x_{(1)}, x_{(m)}\right)\right) = \frac{m-1}{m+1}.</formula></div>
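As a quick sanity check of equation (1), the probability p(x in (x_(i), x_(j))) = (j - i)/(m + 1) can be estimated by Monte Carlo simulation. The following minimal Python sketch is not from the paper; the sample size m, indices i and j, and the use of a Gaussian population are illustrative choices (the result should not depend on the continuous distribution F):

```python
import random

def hill_probability(m, i, j, trials=20000, rng=random):
    """Monte Carlo estimate of p(x in (x_(i), x_(j))) for a fresh point x
    drawn from the same absolutely continuous distribution as the sample."""
    hits = 0
    for _ in range(trials):
        sample = sorted(rng.gauss(0.0, 1.0) for _ in range(m))
        x = rng.gauss(0.0, 1.0)
        # x_(i) and x_(j) are the i-th and j-th order statistics (1-based)
        if x > sample[i - 1] and sample[j - 1] > x:
            hits += 1
    return hits / trials

# Hill's assumption predicts (j - i)/(m + 1) for any continuous F,
# so hill_probability(9, 2, 8) should be close to (8 - 2)/10 = 0.6.
```

Replacing `rng.gauss` with, say, `rng.expovariate(1.0)` leaves the estimate essentially unchanged, which is exactly the distribution-free property the derivation establishes.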
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Petunin Ellipsoids</head><p>The algorithm for constructing an ellipsoid containing a set of m random points was proposed by Yu. I. Petunin; statistical and geometrical properties of the Petunin ellipsoids were investigated in <ref type="bibr" target="#b25">[21]</ref>. Here we apply the two-dimensional case. First, find the two points of the set that are farthest from each other. Connect them with a line segment (referred to below as the diameter). To simplify the construction, we can rotate all the objects together around the segment's center so that the diameter becomes horizontal; the rotation is undone at the end (Figure <ref type="figure" target="#fig_2">1</ref>). Next, find the points farthest from the diameter and draw lines through them parallel to the diameter; likewise, draw lines orthogonal to the diameter through the two points farthest from each other. As a result, we obtain a rectangle that covers the given set of points and lies on a two-dimensional plane (Figure <ref type="figure" target="#fig_5">2</ref>). Rescale one coordinate so that the rectangle becomes a square (Figure <ref type="figure" target="#fig_6">3</ref>). Find the square's center and the distances from the center to every image of a point. Then take the maximal such distance and create the circle whose center coincides with the square's center and whose radius equals this maximal distance (Figure <ref type="figure" target="#fig_7">4</ref>). Finally, undoing the rescaling and the rotation maps the circle into an ellipse that covers the initial points (Figure <ref type="figure" target="#fig_8">5</ref>).</p></div>
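The steps above can be sketched in Python for the two-dimensional case. This is a minimal illustration of the construction as described, not the authors' implementation; the function name, return convention (center, semi-axes, rotation angle) and the O(n^2) farthest-pair search are our own choices:

```python
import math

def petunin_ellipse(points):
    """Sketch of the 2-D Petunin ellipse construction: diameter, rotation,
    bounding rectangle, squeeze to a square, covering circle, map back."""
    # 1. Diameter: the two points farthest from each other (brute force).
    (p, q) = max(((u, v) for u in points for v in points),
                 key=lambda uv: math.dist(uv[0], uv[1]))
    angle = math.atan2(q[1] - p[1], q[0] - p[0])
    c, s = math.cos(-angle), math.sin(-angle)
    # 2. Rotate all points so the diameter becomes horizontal.
    rot = [(x * c - y * s, x * s + y * c) for (x, y) in points]
    xs, ys = [x for x, _ in rot], [y for _, y in rot]
    # 3. Axis-aligned bounding rectangle of the rotated points.
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    cx, cy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
    # 4. Squeeze the rectangle into a square (scale y by w/h), then take
    #    the smallest circle centered at the square's center that covers
    #    every squeezed image of a point.
    k = w / h if h else 1.0
    r = max(math.hypot(x - cx, (y - cy) * k) for x, y in rot)
    # 5. Mapping the circle back gives an ellipse with semi-axes r and r/k,
    #    rotated by `angle` about the rectangle's center (rotated back too).
    ccx = cx * c + cy * s
    ccy = -cx * s + cy * c
    return (ccx, ccy), (r, r / k), angle
```

For the four illustrative points (0, 0), (4, 0), (2, 1) and (2, -1) the diameter is already horizontal, and the sketch returns the center (2, 0) with semi-axes 2 and 1, so all four points lie on the boundary of the resulting ellipse.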
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Numerical results</head><p>This section describes the testing results. First, we generate a set of 1000 points from a chosen distribution and build an ellipse through each point. Then we generate 1000 more points with the same distribution and count how many fall inside the largest ellipse. Statistical characteristics of these results are shown below for three different distributions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Normal distribution</head><p>The first test was performed on the normal distribution with the horizontal and vertical scales in proportion 3 to 1. We generate 12 random numbers from 0 to 1, calculate their sum and subtract 6; the result is then multiplied by a scale factor and shifted so that the horizontal and vertical coordinate values are generated in proportion 3 to 1.</p><p>More tests were performed for these distributions with other parameters and the results were alike. Ellipse areas increased smoothly at first, but a faster increase was reported for the last 100-200 most distant points. As for accuracy, we expected values of approximately 0.998 and obtained results close to this.</p></div>
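The point generator described above can be sketched as follows. The function names and the concrete scale factors are illustrative assumptions, not the paper's code; the sum of 12 uniform variates minus 6 has mean 0 and variance 1 and is approximately standard normal by the central limit theorem:

```python
import random

def approx_normal(rng=random):
    """Sum of 12 U(0, 1) variates minus 6: mean 0, variance 1,
    approximately standard normal (the classic CLT-based trick)."""
    return sum(rng.random() for _ in range(12)) - 6.0

def point_3_to_1(rng=random):
    # Horizontal and vertical coordinates in proportion 3 to 1;
    # the unit scale here is an assumed, illustrative choice.
    return (3.0 * approx_normal(rng), 1.0 * approx_normal(rng))
```

Generating 1000 such points, building the ellipse, and testing 1000 fresh points reproduces the coverage statistics reported in the table below.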
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Constructing the Petunin ellipsoid is a useful approach for arranging data and detecting anomalies by means of statistical depth. According to the obtained results, the algorithm leads to effective prediction</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>1 .</head><label>1</label><figDesc>The confidence level of the tolerance interval ( ) this interval is less than 0.05.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The furthest from each other points</figDesc><graphic coords="4,190.90,62.50,218.80,137.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Rectangle covering the images of the points</figDesc><graphic coords="4,190.90,302.75,218.80,128.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Square covering the images of the points</figDesc><graphic coords="4,212.75,520.55,175.20,164.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The circle covering the images of the points</figDesc><graphic coords="5,212.75,62.40,175.20,164.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Petunin ellipse covering the initial points</figDesc><graphic coords="5,178.80,288.60,243.10,148.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Average ellipse areas Normal distribution, 100 tests</figDesc><graphic coords="6,170.30,414.60,260.15,260.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Average ellipse areas Exponential distribution, 100 tests</figDesc><graphic coords="7,158.05,331.10,284.65,284.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>4.3.</head><label>3</label><figDesc>Gamma distribution. The gamma distribution was used here based on pseudorandom numbers with parameters 50 and 90 for the horizontal and vertical values, respectively.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Average ellipse areas Gamma distribution, 100 tests</figDesc><graphic coords="8,156.70,270.60,287.30,287.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Normal distribution, horizontal and vertical values in proportion 3 to 1.</figDesc><table><row><cell>Expected probability</cell><cell>0.99762</cell></row><row><cell>Mean</cell><cell>0.99762</cell></row><row><cell>Mean Deviation</cell><cell>0.001873</cell></row><row><cell>Mode</cell><cell>0.996477</cell></row><row><cell>Median</cell><cell>0.998</cell></row><row><cell>Standard Deviation</cell><cell>0.0022867</cell></row><row><cell>Variance</cell><cell>0.0000052279</cell></row><row><cell>Kurtosis</cell><cell>3.24629</cell></row><row><cell>Skewness</cell><cell>-0.892779</cell></row><row><cell>Range</cell><cell>0.01</cell></row><row><cell>Maximum</cell><cell>1</cell></row><row><cell>Minimum</cell><cell>0.989</cell></row><row><cell>Geometric Mean</cell><cell>0.9976174</cell></row><row><cell>Harmonic Mean</cell><cell>0.9976148</cell></row></table></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>sets based on the Petunin ellipsoid. The confidence level reached is theoretically precise for the tested distributions. The method allows us to compute a statistical depth for every point and to detect outliers of the set. The experimental results confirmed the theoretical properties of the Petunin ellipses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>The authors have not employed any Generative AI tools.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Exponential distribution, with parameters -17 and -50 as multipliers for the logarithm of a random value from 0 to 1</title>
		<imprint/>
	</monogr>
	<note>Expected probability 0.998158; Mean 0.998158; Mean Deviation 0.00138457; Mode 0.99943; Median 0.999; Standard Deviation 0.0018747; Variance 0.00000351465; Kurtosis 5.98476568; Skewness -1.5228786; Range 0.01; Maximum 1; Minimum 0.989999; Geometric Mean 0.998; Harmonic Mean 0.998</note>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">PAC Prediction Sets for Large Language Models of Code</title>
		<author>
			<persName><forename type="first">A</forename><surname>Khakhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Bastani</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2302.08703</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Robust Yet Efficient Conformal Prediction Sets</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Zargarbashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Akhondzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bojchevski</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2407.09165</idno>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Distribution-free binary classification: prediction sets, confidence intervals and calibration</title>
		<author>
			<persName><forename type="first">C</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Podkopaev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramdas</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2020/file/26d88423fc6da243ffddf161ca712757-Paper.pdf" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Prediction sets adaptive to unknown covariate shift</title>
		<author>
			<persName><forename type="first">H</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Dobriban</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Tchetgen</surname></persName>
		</author>
		<idno type="DOI">10.1093/jrsssb/qkad069</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of the Royal Statistical Society Series B: Statistical Methodology</title>
		<imprint>
			<biblScope unit="volume">85</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page">1705</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Uncertainty sets for image classifiers using conformal prediction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Angelopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Malik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Jordan</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2009.14193" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Distribution-free, risk-controlling prediction sets</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Angelopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Malik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Jordan</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2101.02703" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Autoregressive structured prediction with language models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Monath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sachan</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2210.14698" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Pac confidence sets for deep neural networks via calibrated prediction</title>
		<author>
			<persName><forename type="first">S</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Bastani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Matni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Lee</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2001.00106" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">A gentle introduction to conformal prediction and distributionfree uncertainty quantification</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Angelopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bates</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2107.07511" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Angelopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fisch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schuster</surname></persName>
		</author>
		<idno>ArXiv, abs/2208.02814</idno>
		<ptr target="https://api.semanticscholar.org/CorpusID:251320513" />
		<title level="m">Conformal risk control</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Fair conformal predictors for applications in medical imaging</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lemay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hobel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kalpathy-Cramer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="12008" to="12016" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Robust validation: Confident predictions even when distributions shift</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cauchois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Aliand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Duchi</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2008.04267</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Conformal prediction is robust to label noise</title>
		<author>
			<persName><forename type="first">B.-S</forename><surname>Einbinder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Angelopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gendler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Romano</surname></persName>
		</author>
		<idno>ArXiv, abs/2209.14295</idno>
		<ptr target="https://api.semanticscholar.org/CorpusID:262091979" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">COLEP: Certifiably robust learning-reasoning conformal prediction via probabilistic circuits</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">M</forename><surname>Gurel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=XN6ZPINdSg" />
	</analytic>
	<monogr>
		<title level="m">The Twelfth International Conference on Learning Representations</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Uncertainty quantification over graph with conformalized graph neural networks</title>
		<author>
			<persName><forename type="first">K</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Candes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Leskovec</surname></persName>
		</author>
		<ptr target="https://arxiv.org/pdf/2305.14535" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Tight certificates of adversarial robustness for randomly smoothed classifiers</title>
		<author>
			<persName><forename type="first">G.-H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Jaakkola</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bojchevski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gasteiger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Günnemann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
		<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1003" to="1013" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Provably robust deep learning via adversarially trained smoothed classifiers</title>
		<author>
			<persName><forename type="first">H</forename><surname>Salman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Razenshteyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bubeck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Posterior distribution of percentiles: Bayes' theorem for sampling from a population</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Hill</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Statistical Association</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="page" from="677" to="691" />
			<date type="published" when="1968">1968</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Characterization of the uniform distribution using order statistics</title>
		<author>
			<persName><forename type="first">I</forename><surname>Madreimov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">I</forename><surname>Petunin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory of Probability and Mathematical Statistics</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="105" to="110" />
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Minimal Ellipsoids and Maximal Simplexes in 3D Euclidean Space</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Lyashko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">V</forename><surname>Rublev</surname></persName>
		</author>
		<idno type="DOI">10.1023/B:CASA.0000020224.83374.d7</idno>
	</analytic>
	<monogr>
		<title level="j">Cybernetics and Systems Analysis</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="831" to="834" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
