<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Nonparametric Change Point Detection in Time Series Using Dempster-Hill procedure</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmitriy Klyushin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irina Martynenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University of Life and Environmental Sciences of Ukraine</institution>
          ,
          <addr-line>Ukraine, 03041, Kyiv, Henerala Rodimtseva, 19, build.1</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University of Kyiv, Ukraine</institution>
          ,
          <addr-line>03680, Kyiv, Akademika Glushkova Avenue, 4D</addr-line>
        </aff>
      </contrib-group>
      <fpage>350</fpage>
      <lpage>357</lpage>
      <abstract>
<p>The paper describes a nonparametric test for recognizing the change point of a time series, i.e. the point before and after which the values of the time series obey different distributions. The test is based on the Matveychuk-Petunin scheme, which is a generalization of the Bernoulli scheme, combined with the Dempster-Hill procedure. To recognize the change point, a simplified Klyushin-Petunin homogeneity criterion based on an exact confidence interval is used. The test works equally well with samples that have no ties and with samples having ties, and it allows both online and offline implementations. The test compares segments of the time series with high accuracy at a significance level of no more than 0.05. The sensitivity and stability of the proposed test are higher than those of its classical counterparts. The test provides high accuracy of recognition of two heterogeneous random samples under both the location shift hypothesis and the scale shift hypothesis. The proposed approach has wide practical applications in all areas where time series arise.</p>
      </abstract>
      <kwd-group>
        <kwd>Time series</kwd>
        <kwd>change point</kwd>
        <kwd>Dempster-Hill procedure</kwd>
        <kwd>Klyushin‒Petunin test</kwd>
        <kwd>Bernoulli scheme</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The problem of finding change points in time series has now become ubiquitous. It arises, for
example, in medical applications in which it is necessary to continuously monitor the vital signs of
patients. This task is typical for technological processes monitoring also. Early recognition of changes
in the distribution of time series values makes it possible to identify and prevent unfavorable
situations, including deterioration in the condition of patients, disruption of the flow of technological
processes, industrial accidents, etc. Therefore, the development of accurate and stable algorithms for
detecting the appearance of change points in time series is an urgent task.</p>
      <p>The problem of finding points of change in the time series is posed as follows: to find points
before and after which the values of the time series obey different distributions. To do this, it is
necessary to test the hypothesis that the distributions of random values of the time series in adjacent
intervals are identical. If this hypothesis is rejected, the point separating these intervals is called the
change point. The paper describes a new approach to finding change points in a time series based on
an exact confidence interval.</p>
      <p>Change point recognition methods are divided into online and offline methods. Online methods
find change points by analyzing data streams in real time. Offline methods detect change points by
analyzing the time series as a whole. For an overview of the corresponding algorithms, see [1]. In this
paper, we consider one-dimensional random variables. Since multidimensional time series are
widespread in various subject areas, there are many methods for finding transition points in
multivariate data streams. An overview of modern methods for finding transition points in
multidimensional time series are published in [2, 3].</p>
      <p>Our approach uses the Dempster–Hill procedure (also known as Hill's assumption A(n), or Nonparametric
Predictive Inference) [4], which has been thoroughly investigated and applied to various problems in the
papers of F. Coolen (see, for example, [5–8]) and V. Vovk (see, for example, [9–11]).</p>
      <p>Coolen, Coolen-Maturi, and Alqifari [5] presented nonparametric predictive inference for future
order statistics and joint and conditional probabilities for events involving multiple future order
statistics. The authors showed the use of predictive probabilities for order statistics in statistical
inference. Bakera, Coolen-Maturi, and Coolen [6] introduced nonparametric predictive inference
(NPI) for stock returns and presented the inference on future stock returns, illustrating the proposed
NPI methods on historical stock market data. Yin, Coolen, and Coolen-Maturi [8] provided an
exploration of statistical methods based on imprecise probabilities for accelerated life testing,
applying nonparametric predictive inference. Alqifari and Coolen [7] considered the robustness of
NPI, in particular inference involving future order statistics.
The authors introduced new concepts for assessing the robustness of NPI-based statistical procedures
and demonstrated that most of their nonparametric inferences have good robustness to small changes in
the data. Vovk et al. [9] derived predictive distributions that are valid under a nonparametric
assumption using conformal prediction. The authors introduced and explored predictive
distribution functions that always guarantee coverage for i.i.d. observations. Their algorithm
generalizes the classical Dempster-Hill predictive distributions. Vovk et al. [10] proposed schemes
based on exchangeability martingales. Their method is general and may be applied to any prediction
algorithm. Vovk [11] described a universal probability forecasting system, i.e., a system that is
consistent for any distribution, provided that the observations are i.i.d., and proved the existence of
universal conformal predictive systems.</p>
      <p>In contrast to the papers mentioned above, we build our approach on the Matveychuk–Petunin
and Johnson–Kotz models [12–15], which are generalized Bernoulli schemes.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Homogeneity and change-point detection test</title>
      <p>Consider two samples x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn) drawn from the distributions F1 and
F2, respectively. The null hypothesis H0 states that F1 = F2; the alternative hypothesis is F1 ≠ F2.
The Matveychuk–Petunin and Johnson–Kotz models [12–15] allow the construction of a two-sided
confidence interval (p1, p2) with a given significance level for both the true and the false null
hypothesis H0.</p>
      <p>Let x1 , x2 ,..., xn be variance series constructed using the sample x . If H is true and the sample
0
x obeys an exchangeable continuous distribution, then the Hill‘s assumption [4] states that
P x  xi , x j  
j  i
n 1</p>
      <p>, j  i,</p>
      <p>If the null hypothesis H0 is false, then the probability of the random event Aij = {x ∈ (x(i), x(j))}
significantly deviates from (1). To estimate this deviation we construct N = n(n − 1)/2 confidence
intervals Iij = (pij(1), pij(2)) for the binomial proportion pij corresponding to a given significance level
β, using various formulas [16]. Since these intervals have different coverage probabilities and lengths,
the most natural choice is to use an exact confidence interval, like the Clopper–Pearson interval [17].
It allows avoiding the problems connected with varying coverage probability and the selection of
parameters. Let L be the number of intervals Iij containing pij; then ρ(x, y) = L/N is the relative
frequency of the random event B = {pij ∈ Iij}, which has probability 1 − β. Using the arguments described
above, we can construct a confidence interval I for the probability p(B) with the given significance
level; if the value 1 − β falls outside I, the hypothesis H0 is rejected. The statistic ρ(x, y) is a
heterogeneity measure of the samples x and y.</p>
      <p>Since N can be quite large, the original version of the test may require rather long computations.
Therefore, it is desirable to simplify the test. We propose not to use all intervals (x(i), x(j)) but to
randomly choose a fixed number M of such intervals.</p>
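The construction above can be sketched in code. This is a minimal illustration of ours, not the authors' implementation; to stay self-contained it substitutes a closed-form Wilson-type interval with z = 3 for the exact Clopper–Pearson interval, while the statistic ρ(x, y) = L/N and the optional random choice of M intervals follow the text:

```python
import itertools
import math
import random

def proximity_measure(x, y, m_pairs=None, z=3.0, rng=None):
    """Fraction of index pairs (i, j), i below j, whose z-based Wilson-type
    confidence interval for the share of y-values inside (x_(i), x_(j))
    covers the Hill probability (j - i) / (n + 1)."""
    xs = sorted(x)
    n, m = len(xs), len(y)
    pairs = list(itertools.combinations(range(1, n + 1), 2))
    if m_pairs is not None:
        # Simplified version: randomly choose a fixed number M of intervals.
        rng = rng or random.Random(0)
        pairs = rng.sample(pairs, m_pairs)
    covered = 0
    for i, j in pairs:
        h = sum(xs[j - 1] > v > xs[i - 1] for v in y) / m
        # Wilson interval with z = 3 (stand-in for Clopper-Pearson here).
        denom = 1 + z * z / m
        center = (h + z * z / (2 * m)) / denom
        half = z * math.sqrt(h * (1 - h) / m + z * z / (4 * m * m)) / denom
        p_ij = (j - i) / (n + 1)
        if p_ij >= center - half and center + half >= p_ij:
            covered += 1
    return covered / len(pairs)

rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(40)]
y_same = [rng.gauss(0, 1) for _ in range(40)]
y_shift = [rng.gauss(4, 1) for _ in range(40)]
rho_same = proximity_measure(x, y_same)
rho_shift = proximity_measure(x, y_shift)
```

For homogeneous samples ρ stays close to 1, while a location shift pulls it down; this contrast is what the change point procedure of Section 3 exploits.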
      <p>Consider the process of constructing an exact confidence interval for the binomial proportion based on
the 3σ-rule. Let x be a unimodal random variable. Then the 3σ-rule holds [18]:

P(|x − m_x| ≤ 3σ_x) ≥ 0.95,

where m_x is the mean and σ_x is the standard deviation of x. Therefore, the coverage probability of
the confidence interval (m_x − 3σ_x, m_x + 3σ_x) is greater than 0.95.</p>
      <p>In the classical Bernoulli model we have m_x = np and σ_x = √(np(1 − p)), so that

a = m_x − 3σ_x = np − 3√(np(1 − p)), b = m_x + 3σ_x = np + 3√(np(1 − p)).

The number of successes x follows the binomial distribution, so the significance level of the
confidence interval I = (a, b) does not exceed 0.05. Re-stating the random event x ∈ I in terms of the
relative frequency h = x/n, in the Bernoulli model we have

|h − p| ≤ ψ(p), where ψ(p) = 3√(p(1 − p)/n).</p>
      <p>The graph of ψ(p) = 3√(p(1 − p)/n) is the upper half of an ellipse E with the center (1/2, 0); the
coordinates of the points A, B, C, and D through which it passes are determined by n. The graph of
φ(p) is the restriction of this graph to the segment [0, 1], obtained by stretching or shrinking it and
shifting it by 1/(2n). Therefore, the graph of the function φ(p), which does not depend on h, is an arc
of an ellipse such that φ(p) reaches its minimum at the point p = 1/2 and is symmetric with respect to
this point.</p>
      <p>The lower confidence limit p1 is a root of the quadratic equation

(1 + 9/n) p² − (9/n + 2h) p + h² − h/n + 1/(2n²) = 0. (2)

If h > 1/(2n), then the lower confidence limit p1 is the least root of (2); if h ≤ 1/(2n), then p1 = 0.</p>
      <p>The upper confidence limit p2 is a root of the quadratic equation

(1 + 9/n) p² − (9/n + 2h) p + h² − h/n + 1/(2n²) = 0. (3)

If 1 − h > 1/(2n), then the upper confidence limit p2 is the largest root of (3); if 1 − h ≤ 1/(2n), then
p2 = 1.</p>
      <p>For the generalized Bernoulli model similar reasoning gives the following quadratic equation for
the lower confidence limit:

(1 + 9(n + 1)/((n + 2)m)) p² − (9(n + 1)/((n + 2)m) + 2h) p + h² − h/m + 1/(2m²) = 0.

If h > 1/(2m), then the lower confidence limit p1 for the generalized Bernoulli model is the least root
of this equation; if h ≤ 1/(2m), then p1 = 0.</p>
      <p>Similarly, the upper confidence limit p2 for the generalized Bernoulli model is a root of the
equation

(1 + 9(n + 1)/((n + 2)m)) p² − (9(n + 1)/((n + 2)m) + 2h) p + h² − h/m + 1/(2m²) = 0. (4)

If 1 − h > 1/(2m), then the upper confidence limit p2 is the largest root of (4); if 1 − h ≤ 1/(2m), then
p2 = 1. By virtue of the previous results the significance level of the confidence interval does not
exceed 0.05.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Comparison of the sensitivity of versions of Klyushin–Petunin test</title>
      <p>In [19] we compared the sensitivity and precision of the Klyushin–Petunin test based on the
Wilson confidence interval with the sensitivity and precision of the Kolmogorov–Smirnov test and the
Wilcoxon test. Now we compare the sensitivity of the original Klyushin–Petunin test, the
Klyushin–Petunin test based on the exact confidence interval derived from the 3σ-rule with complete
selection of the intervals (x(i), x(j)), and its simplified version, in which only a given number of
randomly selected intervals (x(i), x(j)) is used. In the simplified version we do not make an exhaustive
selection of the intervals (x(i), x(j)) but randomly select 100 intervals, relying on the practical
observation that the relative frequency almost exactly approximates the probability after 100 trials
[20]. We generated samples (n = 40) drawn from distributions which have the same location and
different scales, the same scale and different locations, and different scales and locations. Hereinafter
we use the following notation: N(μ, σ) is the normal distribution, where μ is the mean and σ is the
standard deviation; U(a, b) is the uniform distribution on the interval [a, b]; LN(μ, σ) is the lognormal
distribution; E(λ) is the exponential distribution with the parameter λ; and G(k, Θ) is the gamma
distribution with parameters k and Θ.</p>
      <p>Consider a segment of the time series (x1, x2, ...). The change point of this time series is the point
xm such that (x1, x2, ..., xm), m &lt; n, has the distribution F1 and (xm+1, xm+2, ...) has a different
distribution F2. We propose to find a change point in the following way. Consider the sample
(x1, x2, ..., xk) and a sliding segment (xi, xi+1, ..., xi+k), where i = 1, ..., n. As i increases, the sliding
window sample becomes “contaminated” by the elements of the second sample. Ideally, when we
reach a change point the homogeneity measure attains its minimum value, and when the sliding
window moves across the change point the homogeneity measure increases. Therefore, the graph of
the homogeneity measure shows a sawtooth pattern. The Klyushin–Petunin homogeneity measure is
monotonically decreasing before the change point and monotonically increasing after the change
point. In Table 1 we show the results of the comparison of the sensitivity of the various versions of
the Klyushin–Petunin test. If a test detects a change point earlier than its counterparts, it is considered
more sensitive. Table 1 presents, for various distributions and the 5% significance level, the order
numbers of the contaminants detected by the Klyushin–Petunin test considering all the intervals
(x(i), x(j)) (original version), by the complete exact Klyushin–Petunin test, and by the simplified exact
version with 100 randomly selected intervals. The change point in Table 1 is the point xk such that the
test accepts the hypothesis H0 for the samples (x1, x2, ..., xk) ∈ F1 and (xk+1, xk+2, ..., xn) ∈ F2, k ≤ n.</p>
      <p>Note that this fact does not affect the precision of the change point detection because, despite the
detection of a contamination, the homogeneity measure monotonically decreases until the left end of
the sliding window attains the change point. After this the Klyushin–Petunin homogeneity measure
becomes monotonically increasing.</p>
      <p>For instance, when the first segment (x1, x2, ..., x40) has the distribution N(0,1) and the second
segment (x41, x42, ..., x80) has the distribution N(3,1), the sample (x1, x2, ..., x40) is considered
contaminated when m > 15 according to the complete Klyushin–Petunin original test (see Table 1). It
is easy to see that the Klyushin–Petunin test is more stable than its counterparts in all considered
cases. If an entry of Table 1 is 40, then the corresponding test did not reject the null hypothesis H0.</p>
      <p>Table 1 shows that the Klyushin–Petunin test is more sensitive than its counterparts for shifted
distributions with different means and the same standard deviation (N(0,1) vs N(1,1), N(2,1), and
N(3,1); LN(0,1) vs LN(1,1), LN(2,1), and LN(3,1); U(0,1) vs U(2,3), U(1,2), and U(0.5,1.5)). For the
exponential and gamma distributions the exact Klyushin–Petunin test is on average more
sensitive.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>We considered a nonparametric test for recognizing the change point of a time series, before
and after which the values of the time series obey different distributions. This test uses the
Matveychuk–Petunin scheme, which is a generalization of the Bernoulli scheme, together with the
Dempster–Hill procedure. To recognize the change point, we used the original Klyushin–Petunin
test, the exact Klyushin–Petunin test, and the simplified Klyushin–Petunin homogeneity test based on
the proposed exact confidence interval. All the tests compared segments of the time series with high
accuracy at a significance level of no more than 0.05. The sensitivity and stability of the proposed
tests are higher than those of their classical counterparts. The tests provide high accuracy of
recognition of two heterogeneous segments under both the location shift hypothesis and the scale
shift hypothesis. The proposed approach has wide practical applications in all areas where time series
arise.</p>
      <p>The original Klyushin–Petunin test based on the Wilson confidence interval is the most sensitive,
robust, and accurate for almost all considered distributions. The modifications of this test using the
exact Klyushin–Petunin confidence interval have the same precision and require less computation,
but are less robust. Therefore, they could be used as tools for detecting change points in data streams
in situations where the speed of computation is more important than robustness. Nevertheless, future
work will focus on improving the robustness of the proposed test and investigating the multivariate
case.</p>
    </sec>
    <sec id="sec-5">
      <title>5. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Truong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Oudre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vayatis</surname>
          </string-name>
          ,
          <article-title>A review of change point detection methods</article-title>
          , CoRR abs/
          <year>1801</year>
          .00718 (
          <year>2018</year>
          ), http://arxiv.org/abs/
          <year>1801</year>
          .00718.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Truong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Oudre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vayatis</surname>
          </string-name>
          ,
          <article-title>Selective review of offline change-point detection methods</article-title>
          ,
          <source>Signal Processing 167</source>
          (
          <year>2020</year>
          )
          <article-title>107299</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.sigpro.
          <year>2019</year>
          .
          <volume>107299</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Aminikhanghahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Cook</surname>
          </string-name>
          ,
          <article-title>A Survey of Methods for Time Series Change Point Detection, Knowl</article-title>
          . Inf. Syst.
          <volume>51</volume>
          (
          <year>2017</year>
          )
          <fpage>339</fpage>
          -
          <lpage>367</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10115-016-0987-z.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H. N.</given-names>
            <surname>Alqifari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. P. A.</given-names>
            <surname>Coolen</surname>
          </string-name>
          ,
          <source>Robustness of Nonparametric Predictive Inference for Future Order Statistics, J Stat Theory Pract</source>
          <volume>12</volume>
          (
          <year>2019</year>
          )
          <article-title>13</article-title>
          . https://doi.org/10.1007/s42519-018-0011-x
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hill</surname>
          </string-name>
          ,
          <article-title>Posterior distribution of percentiles: Bayes' theorem for sampling from a population</article-title>
          ,
          <source>Journal of the American Statistical Association</source>
          <volume>63</volume>
          (
          <year>1968</year>
          )
          <fpage>677</fpage>
          -
          <lpage>691</lpage>
          . doi:
          <volume>10</volume>
          .1080/01621459.
          <year>1968</year>
          .
          <volume>11009286</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Bakera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Coolen-Maturi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. P. A.</given-names>
            <surname>Coolen</surname>
          </string-name>
          .
          <article-title>Nonparametric Predictive Inference for Stock Returns</article-title>
          ,
          <source>Journal of Applied Statistics</source>
          <volume>44</volume>
          (
          <year>2017</year>
          )
          <fpage>1333</fpage>
          -
          <lpage>1349</lpage>
          . doi:
          <volume>10</volume>
          .1080/02664763.
          <year>2016</year>
          .
          <volume>1204429</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.-C.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. P. A.</given-names>
            <surname>Coolen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Coolen-Maturi</surname>
          </string-name>
          .
          <article-title>An imprecise statistical method for accelerated life testing using the power-Weibull model</article-title>
          ,
          <source>Reliability Engineering &amp; System Safety</source>
          <volume>167</volume>
          (
          <year>2017</year>
          )
          <fpage>158</fpage>
          -
          <lpage>167</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.ress.
          <year>2017</year>
          .
          <volume>05</volume>
          .045.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F. P. A.</given-names>
            <surname>Coolen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Coolen-Maturi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. N.</given-names>
            <surname>Alqifari</surname>
          </string-name>
          ,
          <article-title>Nonparametric predictive inference for future order statistics</article-title>
          ,
          <source>Communications in Statistics - Theory and Methods</source>
          ,
          <volume>47</volume>
          (
          <year>2018</year>
          )
          <fpage>2527</fpage>
          -
          <lpage>2548</lpage>
          . doi:
          <volume>10</volume>
          .1080/03610926.
          <year>2017</year>
          .
          <volume>1342834</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vovk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Manokhin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Min-ge</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <article-title>Nonparametric predictive distributions based on conformal prediction</article-title>
          ,
          <source>Machine Learning</source>
          <volume>108</volume>
          (
          <year>2019</year>
          )
          <fpage>445</fpage>
          -
          <lpage>474</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vovk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Petej</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Nouretdinov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ahlberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Carlsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gammerman</surname>
          </string-name>
          ,
          <article-title>Retrain or not retrain: Conformal test martingales for change-point detection</article-title>
          , in: L. Carlsson, Z. Luo, G. Cherubin, K. Nguyen (Eds.),
          <source>Proceedings of Machine Learning Research</source>
          <volume>152</volume>
          (
          <year>2021</year>
          )
          <fpage>191</fpage>
          -
          <lpage>210</lpage>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2102.10439
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vovk</surname>
          </string-name>
          ,
          <article-title>Universal predictive systems</article-title>
          .
          <source>Pattern Recognition</source>
          .
          <volume>126</volume>
          (
          <year>2022</year>
          )
          <article-title>108536</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.patcog.
          <year>2022</year>
          .
          <volume>108536</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Matveichuk</surname>
          </string-name>
          , Yu. Petunin,
          <article-title>A generalization of the Bernoulli model</article-title>
          occurring in
          <source>order statistics. I, Ukrainian Mathematical Journal</source>
          <volume>42</volume>
          (
          <year>1990</year>
          )
          <fpage>459</fpage>
          -
          <lpage>466</lpage>
          . doi:
          <volume>10</volume>
          .1007/BF01071335.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Matveichuk</surname>
          </string-name>
          , Yu. Petunin,
          <article-title>A generalization of the Bernoulli model occurring in order statistics</article-title>
          . II,
          <source>Ukrainian Mathematical Journal</source>
          <volume>43</volume>
          (
          <year>1991</year>
          )
          <fpage>728</fpage>
          -
          <lpage>734</lpage>
          . doi:
          <volume>10</volume>
          .1007/BF01058940.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , S. Kotz,
          <article-title>Some generalizations of Bernoulli and Polya eggenberger contagion models</article-title>
          ,
          <source>Statistical Papers</source>
          <volume>32</volume>
          (
          <year>1991</year>
          )
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          . doi:
          <volume>10</volume>
          .1007/BF02925473.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yu.</given-names>
            <surname>Petunin</surname>
          </string-name>
          ,
          <article-title>A nonparametric test for the equivalence of populations based on a measure of proximity of samples</article-title>
          ,
          <source>Ukrainian Mathematical Journal</source>
          <volume>55</volume>
          (
          <year>2003</year>
          )
          <fpage>181</fpage>
          -
          <lpage>198</lpage>
          . doi:10.1023/A:1025495727612.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pires</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Amado</surname>
          </string-name>
          ,
          <article-title>Interval estimators for a binomial proportion: Comparison of twenty methods</article-title>
          ,
          <source>REVSTAT-Statistical Journal</source>
          <volume>6</volume>
          (
          <year>2008</year>
          )
          <fpage>165</fpage>
          -
          <lpage>197</lpage>
          . doi:10.1080/01621459.1968.11009286.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Andrushkiw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yu.</given-names>
            <surname>Petunin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Savkina</surname>
          </string-name>
          ,
          <article-title>The exact confidence limits for unknown probability Bernoulli models</article-title>
          ,
          <source>Proceedings of the International Conference on Information Technology Interfaces (ITI)</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>164</fpage>
          -
          <lpage>168</lpage>
          . doi:10.1109/ITI.2005.1491116.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Vysochanskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Yu.</given-names>
            <surname>Petunin</surname>
          </string-name>
          ,
          <article-title>Justification of the 3-sigma rule for unimodal distributions</article-title>
          ,
          <source>Theory of Probability and Mathematical Statistics</source>
          <volume>21</volume>
          (
          <year>1980</year>
          )
          <fpage>25</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Martynenko</surname>
          </string-name>
          ,
          <article-title>Nonparametric test for change-point detection in data stream</article-title>
          ,
          <source>2020 IEEE Third International Conference on Data Stream Mining and Processing (DSMP)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>286</lpage>
          . doi:10.1109/DSMP47368.2020.9204193.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>H.</given-names>
            <surname>Freudenthal</surname>
          </string-name>
          ,
          <article-title>The empirical law of large numbers or the stability of frequencies</article-title>
          ,
          <source>Educational Studies in Mathematics</source>
          <volume>4</volume>
          (
          <year>1972</year>
          )
          <fpage>484</fpage>
          -
          <lpage>490</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>