<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Ellipsoidal distribution-free set</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmitriy Klyushin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Tymoshenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>60 Volodymyrska Street, 01033, Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>28</fpage>
      <lpage>37</lpage>
      <abstract>
        <p>This paper introduces a distribution-free approach based on Hill's assumption and Petunin ellipsoids. Several distributions are used to generate points and build ellipsoids, which are then used to check whether test points drawn from the same distribution fall inside the largest ellipsoid. As a result, a new prediction set is constructed in the form of a Petunin ellipsoid, whose confidence level depends only on the number of points. The method described here works effectively for the chosen distributions. Moreover, a statistical analysis of the number of points falling inside is performed. This method is a useful tool for solving many urgent problems of machine learning, e.g. generalization of training samples, effective cross-validation, etc.</p>
      </abstract>
      <kwd-group>
        <kwd>data mining</kwd>
        <kwd>prediction set</kwd>
        <kwd>Petunin ellipsoid</kwd>
        <kwd>outlier detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Rigorous error control is obtained for many tasks, as demonstrated on five large-scale machine
learning problems.</p>
      <p>
        Further works related to prediction offer various approaches: prediction based on language
models [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], neural networks compared to calibrated predictions [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], distribution-free uncertainty
quantification and conformal prediction [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], conformal risk control [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], conformal predictors
applied to medical imaging [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], confident predictions under distribution shift [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], conformal
prediction robust to label noise [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], conformal prediction via probabilistic circuits [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        Predictions are not the only topic attracting modern researchers. Some related topics are also worth
mentioning: uncertainty quantification over graphs using conformalized graph neural
networks [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], certificates of adversarial robustness for randomly smoothed classifiers [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], randomized
smoothing for graphs and images [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], adversarially trained smoothed classifiers [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>The purpose of our paper is to describe a method for constructing an ellipsoidal prediction set for a set
of randomly generated points drawn from a chosen distribution. The main tools of our forecast are
predictive sets represented by ellipses constructed from the generated points. Test points are
generated from the same distribution; the more ellipses contain a point, the higher the probability
that it belongs to the same class. Consider the problem of creating a conformal prediction
based on points $x_1, x_2, \ldots, x_m \in \mathbb{R}^d$. The aim is to find a prediction set $E(x_1, x_2, \ldots, x_m) \subset \mathbb{R}^d$ such that
$p(x_{m+1} \in E) \ge 1 - \varepsilon$, where $0 &lt; \varepsilon &lt; 1$ is a chosen significance level, so that
$p(x_{m+1} \in E)$ is the confidence level of the predictive set.</p>
    </sec>
    <sec id="sec-1a">
      <title>2. Hill's Assumption A(m)</title>
      <p>By $x_1, x_2, \ldots, x_m$ we denote a sample drawn from a population with an absolutely
continuous distribution $F$. We arrange it in increasing order to obtain the variational series
$x_{(1)} \le x_{(2)} \le \ldots \le x_{(m)}$, where $x_{(i)}$ is the $i$-th order statistic. The resulting order statistics $x_{(1)}, x_{(2)}, \ldots, x_{(m)}$
are dependent. The distribution $F_k(x)$ of the $k$-th order statistic $x_{(k)}$ can be calculated as
$$F_k(u) = \sum_{i=k}^{m} C_m^i F(u)^i \big(1 - F(u)\big)^{m-i}, \quad \text{where } F(u) = p(x &lt; u).$$</p>
      <p>Hill's assumption $A(m)$ [19] states that if $x_{m+1}$ is chosen from the same population according to
the distribution $F$, then
$$p\big(x_{m+1} \in (x_{(i)}, x_{(j)})\big) = \frac{j - i}{m + 1}, \quad j > i. \qquad (1)$$</p>
      <p>
        $A(m)$ was proven in papers of Yu. I. Petunin [
        <xref ref-type="bibr" rid="ref19">20</xref>
        ] and by several other
scientists. Let us recall the proof. For independent random variables $\xi$ and $\eta$,
$$p(\xi &lt; \eta) = \int_{-\infty}^{\infty} F_{\xi}(u)\, dF_{\eta}(u), \qquad (2)$$
where $F_{\xi}(u)$ and $F_{\eta}(u)$ denote the distribution functions of $\xi$ and $\eta$, respectively. Writing
$$F_k(u) = \sum_{i=k}^{m} C_m^i F(u)^i \big(1 - F(u)\big)^{m-i} = \sum_{i=k}^{m} G_i(u),$$
the probability density of the $k$-th order statistic is $f_k(u) = F_k'(u) = \sum_{i=k}^{m} G_i'(u)$, where
$$G_k'(u) = C_m^k \Big[k F(u)^{k-1} \big(1 - F(u)\big)^{m-k} f(u) - F(u)^k (m - k) \big(1 - F(u)\big)^{m-k-1} f(u)\Big]. \qquad (3)$$
      </p>
      <sec id="sec-1-1">
        <title>The second term is compensated by the first term:</title>
        <p>−Cmk (m − k ) + Cmk+1 (k +1) = − m!(m − k ) + m!(k +1)
(m − k )!k ! (m − k −1)!(k +1)!</p>
      </sec>
      <sec id="sec-1-2">
        <title>The last term of the previous sum is equal to zero</title>
        <p>( F (u))m (1− F (u))0 (m − m) f (u ) = 0 .</p>
      </sec>
      <sec id="sec-1-3">
        <title>Thus,</title>
        <p>= −</p>
        <p>m! + m!
(m − k −1)!k! (m − k −1)!k!
= 0.</p>
        <p>fk (u) = mCmk−−11 F (u)k−1 1− F (u)m−k f (u) .</p>
        <p>Let us find p( x  x(i) ) and p ( x  x( j) ) . Using the above equations, we have
  1
p ( x  x(i) ) =  F (u ) dFi (u ) = mCmi−−11  F (u )i 1− F (u )m−i dF (u ) = mCmi−−11  vi (1− v)m−1 dv.</p>
        <p>− − 0</p>
      </sec>
      <sec id="sec-1-4">
        <title>It is proven that,</title>
        <p>1 x p−1 (1− x)q−1 dx = ( p) (q)
0 ( p + q −1)
We can apply this equation as
( p −1)!(q −1)!
=
( p + q −1)!
.
1 xi+1−1 (1− x)m−i+1−1 dx = B(i +1, m − i +1) = (i +1)(m − i +1) = (i +1)(m − i +1) = i!(m − i)!
0 (i +1+ m +1− i) (m + 2) (m +1)!
p ( xm+1  x( j) ) = mCmi−−11 j(!(mm+−1)j!)! = (mm−(mj)!−( 1j)−!(1)j!−m1()m!j+!(1m)(−mj−)!1)! = m +j1.</p>
        <p>Previous equation was obtained by multiplying numerator and denominator by (j 1). So, we get
p ( xm+1  ( x(i) , x( j) )) = p ( xm+1  xj ) − p ( xm+1  xi ) = m +j1 − mi+1 = mj −+i1.</p>
        <p>So, in case a random variable x is independent from x1, x2 ,..., xm and it is chosen by sampling from
the same population based on distribution F (u) , then
p ( x  ( x(1) , x(m) )) = m −1 .</p>
        <p>m +1
Remark 1. The confidence level of the tolerance interval ( x(1) , x(m) ) is m −1 , thus for m  39 the
m +1
confidence level of this interval is less than 0.05.</p>
      </sec>
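      <p>As a quick numerical sanity check (not part of the original derivation), the coverage $\frac{m - 1}{m + 1}$ of the
tolerance interval $(x_{(1)}, x_{(m)})$ can be verified by simulation. The following is a minimal Python sketch
assuming only NumPy; the function name and the choice of the exponential distribution are ours and purely illustrative.</p>
      <preformat>
import numpy as np

def coverage_of_min_max_interval(m, trials=100_000, seed=0):
    """Estimate p(x_{m+1} in (x_(1), x_(m))) for an i.i.d. sample of size m."""
    rng = np.random.default_rng(seed)
    # Any absolutely continuous F works; the exponential is an arbitrary choice.
    sample = rng.exponential(size=(trials, m))
    new_point = rng.exponential(size=trials)
    inside = np.logical_and(new_point > sample.min(axis=1),
                            sample.max(axis=1) > new_point)
    return inside.mean()

m = 39
print("empirical coverage:", coverage_of_min_max_interval(m))
print("theoretical (m - 1)/(m + 1):", (m - 1) / (m + 1))
      </preformat>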
    </sec>
    <sec id="sec-2">
      <title>3. Petunin Ellipsoids</title>
      <p>
        The algorithm for constructing the ellipsoid that contains a set of $m$ random points with
probability $\frac{m - 1}{m + 1}$ was proposed by Yu. I. Petunin. The statistical and geometrical properties of
Petunin ellipsoids were investigated in [
        <xref ref-type="bibr" rid="ref20">21</xref>
        ].
      </p>
      <p>Here we apply the two-dimensional case. First, we find the two points $x_k$ and $x_l$ of the set
$M_m = \{x_1, \ldots, x_m\}$ that are farthest from each other. We connect them with a line segment (referred to
below as the diameter) and project all the points onto the hyperplane orthogonal to this line. To
simplify this, we can rotate all objects together around the center of the segment to make it horizontal,
and rotate everything back at the end (Figure 1).</p>
      <p>Next, we find the points farthest from the diameter and construct lines parallel to the diameter
through them. We also create lines orthogonal to the diameter that pass through the points farthest
from each other along it. As a result, we obtain a rectangle that covers the given set of points and
lies in a two-dimensional plane (Figure 2).</p>
      <p>Dividing the shorter side length by the longer side length gives the shrinking coefficient. We
translate, rotate and shrink the rectangle mentioned above to construct a square covering the
given points.</p>
      <p>We then find the center of the square and the distances from the center to every image of a point,
take the maximum of these distances, and create a circle whose center coincides with the square center
and whose radius equals this maximum distance. Performing the inverse transformations of this circle
yields the Petunin ellipse (Figure 5).</p>
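      <p>The following is a minimal Python sketch of the two-dimensional construction described above
(the function and variable names are ours, not the authors' implementation). Rather than inverting the
transformations explicitly, it returns the forward map, so that a point lies inside the Petunin ellipse exactly
when its image lies inside the circle.</p>
      <preformat>
import numpy as np

def petunin_ellipse_2d(points):
    """Sketch of the 2-D Petunin ellipse construction.

    Returns (forward, center, radius): `forward` maps original coordinates into
    the space in which the covering region is the circle of the given radius
    around `center`; the Petunin ellipse is the preimage of that circle.
    """
    pts = np.asarray(points, dtype=float)

    # Step 1: the diameter is the pair of points farthest from each other.
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    k, l = np.unravel_index(np.argmax(d2), d2.shape)

    # Step 2: rotate all points so that the diameter becomes horizontal.
    dx, dy = pts[l] - pts[k]
    angle = np.arctan2(dy, dx)
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, s], [-s, c]])          # rotation by -angle
    q = pts @ rot.T

    # Step 3: axis-aligned bounding rectangle of the rotated points.
    lo, hi = q.min(axis=0), q.max(axis=0)
    side = hi - lo

    # Step 4: shrink the longer side by (shorter side / longer side) -> square.
    shrink = np.ones(2)
    if side[0] > side[1]:
        shrink[0] = side[1] / side[0]
    else:
        shrink[1] = side[0] / side[1]

    def forward(x):
        """Map original coordinates into the 'square/circle' space."""
        return (np.asarray(x, dtype=float) @ rot.T - lo) * shrink

    # Step 5: circle centered at the square center with radius equal to the
    # largest distance from the center to the image of a point.
    q2 = forward(pts)
    center = (q2.min(axis=0) + q2.max(axis=0)) / 2.0
    radius = np.linalg.norm(q2 - center, axis=1).max()
    return forward, center, radius
      </preformat>
      <p>A new point $y$ then belongs to the Petunin ellipse exactly when
<monospace>np.linalg.norm(forward(y) - center)</monospace> does not exceed <monospace>radius</monospace>.</p>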
      <p>In the high-dimensional case, we construct a minimum-volume axis-aligned orthogonal
parallelepiped that contains the images of the initial points $x_1, \ldots, x_m$. We perform a shrinking
transformation from the orthogonal parallelepiped to a hypercube, find its center and the distances
from it to the images of $x_1, \ldots, x_m$, and take the maximum distance. After that, we construct a
hypersphere whose center is the center of the hypercube and whose radius equals this maximum
distance. Performing the inverse transformations (translation, rotation and stretching) yields the
Petunin ellipsoid $E_m$. Hill's assumption $A(m)$ holds, so $P(x_{m+1} \in E_m) = \frac{m - 1}{m + 1}$.</p>
      <p>Since at the last stage of the construction of the Petunin ellipsoid we obtain concentric spheres,
each with one unique point on its surface, the Petunin ellipsoids allow us to arrange the points by
their statistical depth. The median point of the set (the most probable point) is the point nearest to
the center of the Petunin ellipsoid, and the outlier is the point at the boundary of the Petunin
ellipsoid.</p>
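      <p>A possible sketch of this depth ordering, reusing the <monospace>forward</monospace> map and
<monospace>center</monospace> from the construction sketch above (our own illustrative code, not the authors'):</p>
      <preformat>
import numpy as np

def depth_order(points, forward, center):
    """Order points by statistical depth: the smallest distance to the center
    corresponds to the median-like (deepest) point, the largest to the outlier
    lying on the boundary of the Petunin ellipse."""
    dist = np.linalg.norm(forward(points) - center, axis=1)
    order = np.argsort(dist)   # indices from the deepest point outwards
    return order, dist
      </preformat>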
    </sec>
    <sec id="sec-3">
      <title>4. Numerical results</title>
      <p>In this section the testing results are described. First, we generate a set of 1000 points from a chosen
distribution and build ellipses through each point. Then we generate 1000 more points from the
same distribution and check the number of points inside the largest ellipse. Statistical
characteristics of these results are shown below for three different distributions.</p>
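      <p>A minimal sketch of this test protocol, reusing the <monospace>petunin_ellipse_2d</monospace> sketch above
and NumPy's built-in generators instead of the generators described below (illustrative only):</p>
      <preformat>
import numpy as np

rng = np.random.default_rng(0)

# 1000 training points from the chosen distribution (standard normal here).
train = rng.normal(size=(1000, 2))

# The largest ellipse is the one built from the whole training set.
forward, center, radius = petunin_ellipse_2d(train)

# 1000 fresh test points from the same distribution.
test = rng.normal(size=(1000, 2))
dist = np.linalg.norm(forward(test) - center, axis=1)
print("fraction of test points inside the largest ellipse:", (radius >= dist).mean())
      </preformat>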
      <sec id="sec-3-1">
        <title>4.1. Normal distribution</title>
        <p>The first test was performed with a normal distribution, with coordinates in a 3-to-1 proportion. We
generate 12 random numbers from 0 to 1, calculate their sum and subtract 6. Then we scale the
result and add an offset so that the horizontal and vertical coordinate values are generated in a
3-to-1 proportion.</p>
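        <p>A sketch of this generator (the concrete scale and shift values below are our illustrative choice;
the paper only fixes the 3-to-1 proportion):</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)

def approx_standard_normal(n, rng):
    """Sum of 12 uniform(0, 1) values minus 6: mean 0, variance 1
    (Irwin-Hall approximation of the standard normal)."""
    return rng.random((n, 12)).sum(axis=1) - 6.0

# Horizontal and vertical coordinates in a 3-to-1 proportion.
x = 3.0 * approx_standard_normal(1000, rng)
y = 1.0 * approx_standard_normal(1000, rng)
points = np.column_stack((x, y))
        </preformat>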
        <p>As we can see, the largest ellipse contains almost all points, and the ellipse areas grow slowly
until the last 50 points that define the largest ellipses.</p>
      </sec>
      <sec id="sec-3-2">
        <title>4.2. Exponential distribution</title>
        <p>The exponential samples were generated with parameters -17 and -50 as multipliers of the logarithm
of a random value from 0 to 1. Here the largest ellipse again contains almost all points, but the
ellipse areas grow much faster.</p>
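        <p>Read literally, this corresponds to inverse-transform sampling, $-c \ln U$ with $c = 17$ and $c = 50$
for the two coordinates; a short illustrative sketch:</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((1000, 2))
# -c * ln(U) is exponentially distributed with mean c.
points = np.column_stack((-17.0 * np.log(u[:, 0]), -50.0 * np.log(u[:, 1])))
        </preformat>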
      </sec>
      <sec id="sec-3-3">
        <title>4.3. Gamma distribution</title>
        <p>A gamma distribution based on pseudorandom numbers was used here, with parameters 50 and 90
for the horizontal and vertical values, respectively.</p>
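        <p>Assuming that 50 and 90 are the shape parameters of the two coordinates (the scale is not stated in
the paper, so unit scale is our assumption), a sketch:</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(0)
points = np.column_stack((rng.gamma(shape=50.0, size=1000),   # horizontal
                          rng.gamma(shape=90.0, size=1000)))  # vertical
        </preformat>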
        <p>Expected probability: 0.99776.</p>
        <sec id="sec-3-3-1">
          <title>Gamma distribution, 100 tests</title>
          <p>In this test the largest ellipse contains almost all points, and the ellipse areas increase very fast
after the ellipse based on 800 points.</p>
          <p>More tests were performed for these distributions with other parameters, and the results were
similar. The ellipse areas increased smoothly at first, but a faster increase was observed for the last
100-200 most distant points. As for accuracy, we expected values of approximately 0.998 and
obtained similar results.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>Constructing the Petunin ellipsoid is a useful approach for arranging data and detecting anomalies
using statistical depth. According to the obtained results, the algorithm leads to effective prediction
sets based on the Petunin ellipsoid. The confidence level reached is theoretically precise for the tested
distributions. It allows us to compute the statistical depth of every point and to detect outliers of
the set. The experimental results confirm the theoretical properties of the Petunin ellipses.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <sec id="sec-5-1">
        <title>The authors have not employed any Generative AI tools.</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Khakhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Bastani</surname>
          </string-name>
          ,
          <source>PAC Prediction Sets for Large Language Models of Code</source>
          .
          <year>2023</year>
          . doi:10.48550/arXiv.2302.08703
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Zargarbashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Akhondzadeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bojchevski</surname>
          </string-name>
          ,
          <source>Robust Yet Efficient Conformal Prediction Sets</source>
          .
          <year>2024</year>
          . doi:10.48550/arXiv.2407.09165.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Podkopaev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramdas</surname>
          </string-name>
          ,
          <article-title>Distribution-free binary classification: prediction sets, confidence intervals and calibration</article-title>
          ,
          <year>2020</year>
          . URL: https://proceedings.neurips.cc/paper/2020/file/26d88423fc6da243ffddf161ca712757-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Dobriban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tchetgen</surname>
          </string-name>
          ,
          <article-title>Prediction sets adaptive to unknown covariate shift</article-title>
          ,
          <source>Journal of the Royal Statistical Society Series B: Statistical Methodology</source>
          <volume>85</volume>
          (
          <issue>5</issue>
          ) (
          <year>2023</year>
          )
          <fpage>1680</fpage>
          <lpage>1705</lpage>
          . doi:10.1093/jrsssb/qkad069.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Angelopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. I. Jordan</surname>
          </string-name>
          ,
          <article-title>Uncertainty sets for image classifiers using conformal prediction</article-title>
          ,
          <year>2020</year>
          . URL: https://arxiv.org/abs/2009.14193.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Angelopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. I. Jordan</surname>
          </string-name>
          ,
          <article-title>Distribution-free, risk-controlling prediction sets</article-title>
          ,
          <year>2021</year>
          . URL: https://arxiv.org/abs/2101.02703.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Monath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cotterell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sachan</surname>
          </string-name>
          ,
          <article-title>Autoregressive structured prediction with language models</article-title>
          ,
          <year>2022</year>
          . URL: https://arxiv.org/abs/2210.14698.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Bastani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Matni</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Lee</surname>
          </string-name>
          ,
          <article-title>PAC confidence sets for deep neural networks via calibrated prediction</article-title>
          ,
          <year>2020</year>
          . URL: https://arxiv.org/abs/2001.00106.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Angelopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bates</surname>
          </string-name>
          ,
          <article-title>A gentle introduction to conformal prediction and distributionfree uncertainty quantification</article-title>
          ,
          <year>2021</year>
          . URL: https://arxiv.org/abs/2107.07511.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Angelopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lei</surname>
          </string-name>
          , T. Schuster,
          <article-title>Conformal risk control</article-title>
          .
          <source>ArXiv, abs/2208.02814</source>
          ,
          <year>2022</year>
          . URL: https://api.semanticscholar.org/CorpusID:251320513.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lemay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hobel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kalpathy-Cramer</surname>
          </string-name>
          ,
          <article-title>Fair conformal predictors for applications in medical imaging</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>36</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>12008</fpage>
          <lpage>12016</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cauchois</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aliand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Duchi</surname>
          </string-name>
          ,
          <article-title>Robust validation: Confident predictions even when distributions shift</article-title>
          .
          <source>arXiv preprint arXiv:2008.04267</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.-S.</given-names>
            <surname>Einbinder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Angelopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gendler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <article-title>Conformal prediction is robust to label noise</article-title>
          .
          <source>ArXiv, abs/2209.14295</source>
          ,
          <year>2022</year>
          . URL: https://api.semanticscholar.org/CorpusID:262091979.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Gurel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>COLEP: Certifiably robust learning-reasoning conformal prediction via probabilistic circuits</article-title>
          .
          <source>In The Twelfth International Conference on Learning Representations</source>
          ,
          <year>2024</year>
          . URL: https://openreview.net/forum?id=XN6ZPINdSg.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Candes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <article-title>Uncertainty quantification over graph with conformalized graph neural networks</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/pdf/2305.14535.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>G.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Jaakkola</surname>
          </string-name>
          ,
          <article-title>Tight certificates of adversarial robustness for randomly smoothed classifiers</article-title>
          .
          <source>Advances in Neural Information Processing Systems</source>
          ,
          <volume>32</volume>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bojchevski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gasteiger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gunnemann</surname>
          </string-name>
          ,
          <article-title>Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more</article-title>
          .
          <source>In: International Conference on Machine Learning</source>
          , pp.
          <fpage>1003</fpage>
          <lpage>1013</lpage>
          . PMLR,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Salman</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Razenshteyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Bubeck,
          <string-name>
            <surname>G</surname>
          </string-name>
          . Yang,
          <article-title>Provably robust deep learning via adversarially trained smoothed classifiers</article-title>
          .
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>32</volume>
          ,
          <year>2019</year>
          . population,
          <source>Journal of the American Statistical Association</source>
          <volume>63</volume>
          (
          <year>1968</year>
          ) 677
          <fpage>691</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <surname>I. Madreimov</surname>
          </string-name>
          ,
          <string-name>
            <surname>Yu. I. Petunin</surname>
          </string-name>
          ,
          <article-title>Characterization of the uniform distribution using order statistics</article-title>
          ,
          <source>Theory of Probability and Mathematical Statistics</source>
          <volume>27</volume>
          (
          <year>1983</year>
          )
          <fpage>105</fpage>
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Lyashko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. V.</given-names>
            <surname>Rublev</surname>
          </string-name>
          ,
          <article-title>Minimal Ellipsoids and Maximal Simplexes in 3D Euclidean Space</article-title>
          ,
          <source>Cybernetics and Systems Analysis</source>
          <volume>39</volume>
          (
          <year>2003</year>
          )
          <fpage>831</fpage>
          <lpage>834</lpage>
          . doi:10.1023/B:CASA.0000020224.83374.d7.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>