<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CITRisk'</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Non-parametric Methods of Changepoint Detection in Multidimensional Time Series</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmitriy Klyushin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Urazovskyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>prospekt Glushkova, 4D, 03680, Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>3</volume>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>Modern companies run a huge number of processes. The larger and more complex the business, the less feasible it becomes for one person to keep track of all its processes manually. Every element is at risk of failure, and to prevent breakdowns or respond to them quickly, errors must be recognized automatically, so that in critical situations a specialist who can fix the problem can be called in. Systems that matter every second and constantly process huge flows of information need a method that quickly recognizes changes of state, for example a deterioration in a patient's condition or a server failure. To solve these and similar problems, a new method based on Fisher's linear discriminant and Petunin's statistics is proposed. To simulate the process, a multidimensional time series is generated, modeling sensors that capture a continuous flow of data; the method then recognizes changepoints indicating that the observed object has changed its state. A clear probabilistic interpretation of the method underlying this classification greatly expands its capabilities within the framework of risk-informed systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The use of automatic systems and artificial intelligence to recognize changepoints in
multidimensional time series opens great opportunities for risk-informed systems, for example in
medicine, engineering, economics, and cybersecurity, in narrowly focused applications or in regions
that lack qualified employees. This helps to optimize the use of human resources, which can then be
directed to management or to solving critical issues, for example those related to people's lives.</p>
      <p>Nuclear power plants provide another important example of the responsible use of
risk-informed systems. In the design, operation, economics, and licensing of such energy sources, safety
plays a key role. Since such facilities stay in service for several decades, it is necessary to ensure the
integrity and operability of vital elements of nuclear power plants in order to prevent accidents, or
otherwise to reduce and mitigate their consequences. Historically, plant designers have engineered
nuclear power plant systems for reliability in the form of redundant and diverse safety features, so that
even in abnormal and unplanned situations the health and safety of workers and the public can be
protected with a high degree of confidence.</p>
      <p>For a method to be useful, it must have the following properties:
1. High accuracy, to minimize the possibility of false positive and false negative results.
2. Stability, so that single outliers or anomalies cannot severely corrupt the data series and create a
false changepoint.
3. Independence from the underlying distributions, so that the method is as versatile as possible and
can be applied in different areas and to different situations, processes, and objects.
4. Low computational cost, so that it can work online without using a lot of computing power and
without overloading the server.</p>
      <p>
        5. Balanced sensitivity: not so high that it reacts when the patient simply turns over on
the other side, but not so low that it misses the explosion of a reactor at a nuclear power plant.
This article presents a new non-parametric method for detecting changepoints in multivariate
time series based on the metric developed in
        <xref ref-type="bibr" rid="ref1">(Klyushin and Petunin, 2003)</xref>
        , which demonstrated
advantages over the Kolmogorov-Smirnov and Wilcoxon statistics
        <xref ref-type="bibr" rid="ref2">(Klyushin and Urazovskyi, 2021)</xref>
        , and
demonstrates its benefits in medicine.
      </p>
      <p>In section 2.1, we describe the state of the art in the detection of changepoints in
multivariate time series. In section 2.2 we consider the algorithm for calculating the Petunin
statistic and its properties. In section 2.3 we consider the algorithm for constructing the Fisher
linear discriminant. In section 3.1 we present the results of numerical experiments with a
wide range of distributions. Section 3.2 considers possible applications of the proposed algorithm;
for example, we can investigate the presence of various diseases in a virtual patient by measuring
parameters that need to be monitored, such as heart rate, blood oxygen saturation, and body
temperature.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Theoretical part</title>
    </sec>
    <sec id="sec-3">
      <title>2.1. Literature review</title>
      <p>
        The task of searching for and recognizing change points of random
multidimensional time series admits many applications. A change point of a
time series is a point that separates two pieces of the series that have different distributions.
Methods for finding a change point in multidimensional time series are usually divided into
algorithms for streaming data (points arriving one by one) and for fully known data (the whole
series is given in advance). A detailed review of change point detection methods for fully known
time series was given in
        <xref ref-type="bibr" rid="ref7">(Truong, Oudre and Vayatis, 2020)</xref>
        .
Since we want a method for streaming data that is independent of the initial distributions, we
consider the streaming algorithms discussed in articles over the past few years.
A common and difficult problem arising in this setting is the growth of the data dimension,
which can, for example, slow down the calculations. In
        <xref ref-type="bibr" rid="ref8">(Alippi et al,
2016)</xref>
        , the problem of determining the change point in a time series was discussed using the
Kullback–Leibler divergence and the log-likelihood function with different distributions. The authors
showed that the more the data dimension grows, the harder it becomes to notice changes in the
monitored value.
In
        <xref ref-type="bibr" rid="ref11 ref9">(Wang and Zwetsloot, 2021)</xref>
        , attention is also paid to the problems associated with increasing
data dimension. The authors described a method for detecting a change point using control charts.
Their algorithm makes it possible to detect sparse shifts of the mean vector.
Detection of a change point can be considered in different variants: simply testing for the presence
of a change point anywhere in the time series, or localizing the exact coordinate of the desired
point. In
        <xref ref-type="bibr" rid="ref12">(Jaehyeok, Ramdas, and Rinaldo, 2022)</xref>
        , attention is paid to the presence of a change point
somewhere, but the coordinate of the desired point is not determined with sufficient
accuracy. A Bayesian method proposed in
        <xref ref-type="bibr" rid="ref13">(Sorba and Geissler, 2021)</xref>
        has linear
computational complexity with respect to the number of points, but puts the researcher in a tradeoff
between speed and accuracy.
      </p>
      <p>
        The method discussed in (Navarro, Allen, and Weylandt, 2021) showed excellent performance of
convex network clustering, which unfortunately requires large computational costs. One
of the main methods for finding change points, discussed in
        <xref ref-type="bibr" rid="ref10">(Tickle, Eckley, and Fearnhead, 2021)</xref>
        ,
like many of today's change point detection methods (applied in that article to terrorism research),
relies on the assumption that the data flow is independent over time. Although the criterion may be
resistant to violations of this condition, its power may decrease.
      </p>
      <p>
        The proposed method is designed to process a continuous stream of data, can track outliers, and
does not rely on any specific assumptions or known facts about the distribution of the data. Below we
consider articles whose authors approached the problem from the same angle. In
        <xref ref-type="bibr" rid="ref14">(Wendelberger et al.,
2021)</xref>
        an extended Bayesian Online Changepoint Detection was developed. A method proposed in
        <xref ref-type="bibr" rid="ref15">(Adams and MacKay, 2007)</xref>
        is dedicated to the exploration of geographical data. In
        <xref ref-type="bibr" rid="ref16">(Cooney and
White, 2021)</xref>
        the authors considered algorithms proposed exclusively for exponential models. To
increase the accuracy of functioning, in
        <xref ref-type="bibr" rid="ref17">(Castillo-Matteo, 2021)</xref>
        the authors made assumptions about the data
distribution. In (Hallgren, Heard, and Turcotte, 2021), knowledge of the type of distribution helped to
optimize the computational complexity of the algorithm. The paper (Fotoohinasab, Hocking, and
Afghah, 2021) has the same drawback: it requires a priori assumptions about the data in order to find
the changepoints in the model. To determine the points of change in a multivariate time series more
precisely, it is often necessary to pre-process the data (Fearnhead and Rigaill, 2018). In (Harle et al.,
2014), the authors reviewed a Bayesian method for segmenting multivariate time series using the
MCMC method and Gibbs sampling. The authors demonstrated that change points are stably
detected and their coordinates localized by implicitly examining the dependency structure. Similar
ideas were proposed in (Renz et al., 2021) for gesture recognition. Change point estimation
using the Yule-Walker moment estimator (Gallagher et al., 2021) is unstable under large shifts in
the means.
      </p>
      <p>
        In (Wang et al, 2019), the authors consider an algorithm for streaming data that uses a huge
matrix whose size depends on the dimension of the source data space. Similar methods were discussed in
        <xref ref-type="bibr" rid="ref10">(Romano
et al., 2021)</xref>
        , where the authors proposed a method called Functional Online CuSUM (FOCuS). The
idea is to roll the window and run the previously developed methods in parallel for all window
sizes. The efficiency and applicability of the algorithm was shown by detecting anomalies in
computer server data.
      </p>
      <p>Analysis of papers on this topic shows that the most desirable qualities when searching for
change points in our problem are 1) stability, 2) high accuracy of calculations, 3) speed of work, and 4)
independence from the underlying distributions. Below, we describe such an algorithm based on the
so-called Petuninʼs statistics.</p>
    </sec>
    <sec id="sec-4">
      <title>2.2. Petunin’s statistics</title>
      <p>The Petunin statistic (p-statistic) is a measure of proximity between samples proposed by the
Ukrainian mathematician Yuriy Petunin. It is used to test the hypothesis that the distribution functions
of two samples are equal.</p>
      <p>Let us consider two general populations G and G′ with corresponding distribution functions F
and F′.</p>
      <p>Let there be two samples x = (x_1, x_2, …, x_n) from G and x′ = (x′_1, x′_2, …, x′_n) from G′,
with x_(1) ≤ x_(2) ≤ … ≤ x_(n) and x′_(1) ≤ x′_(2) ≤ … ≤ x′_(n) the corresponding order statistics,
and suppose it is necessary to determine whether the samples belong to the same distribution. If
F = F′, then for the random event A_ij = {x′ ∈ (x_(i), x_(j))} we have
p_ij = P(x′ ∈ (x_(i), x_(j))) = (j − i) / (n + 1).</p>
      <p>
        Given the sample x′ = (x′_1, x′_2, …, x′_n), we can find the frequency h_ij of the random event
A_ij and a confidence interval (Δ_ij^(1), Δ_ij^(2)) for the probability p_ij at a given significance
level β, i.e. B = {p_ij ∈ (Δ_ij^(1), Δ_ij^(2))}, P(B) = 1 − β.
According to
        <xref ref-type="bibr" rid="ref4">(Van der Waerden, 1969)</xref>
        ,
      </p>
      <p>Δ_ij^(1,2) = (h_ij n + g²/2 ∓ g √(h_ij (1 − h_ij) n + g²/4)) / (n + g²),
where g is a quantile of the normal distribution that determines the significance level of the
confidence interval.</p>
      <p>
        According to the 3σ rule
        <xref ref-type="bibr" rid="ref5">(Petunin, Klyushin, Ganina, Borodai and Andrushkiv, 2001)</xref>
        , at g = 3 the significance
level of this interval does not exceed 0.05. Denote by N the number of all the confidence
intervals I_ij = (Δ_ij^(1), Δ_ij^(2)). It is clear that N = n(n − 1)/2. Denote by L the number of those
intervals I_ij which contain the probability p_ij.
      </p>
      <p>The statistic h(x, x′) = L/N we will call the p-statistic; it is a measure of
closeness ρ(x, x′) between the samples x and x′. Substituting the obtained value h into the formula for
calculating confidence intervals, we get a confidence interval I = (Δ^(1), Δ^(2)) to test the
hypothesis</p>
      <p>
        with a significance level approximately equal to 0.05
        <xref ref-type="bibr" rid="ref1">(Klyushin and Petunin, 2003)</xref>
        .
      </p>
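      <p>As an illustration of the construction above, the p-statistic can be sketched in a few lines of code. This is our own minimal sketch, not the authors' implementation: it assumes equal sample sizes and g = 3 per the 3σ rule, and the name petunin_statistic is ours.</p>

```python
import numpy as np

def petunin_statistic(x, y, g=3.0):
    """Proximity measure h(x, y) = L / N between two samples (a sketch of
    the p-statistic described above; g = 3 follows the 3-sigma rule)."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    hits, total = 0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            # If F = F', then P(x' in (x_(i), x_(j))) = (j - i) / (n + 1).
            p_ij = (j - i) / (n + 1)
            # Observed frequency of y falling inside the open interval.
            freq = float(np.mean((y > x[i]) & (y < x[j])))
            # Van der Waerden confidence interval for the proportion.
            root = g * np.sqrt(freq * (1.0 - freq) * m + g * g / 4.0)
            lo = (freq * m + g * g / 2.0 - root) / (m + g * g)
            hi = (freq * m + g * g / 2.0 + root) / (m + g * g)
            hits += int(lo <= p_ij <= hi)
            total += 1  # total ends up equal to N = n(n - 1) / 2
    return hits / total
```

      <p>For samples drawn from the same distribution the value concentrates near 0.95, while for clearly different samples it drops well below that level.</p>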
      <p>
        2.3. Fisher’s linear discriminant.
The terms Fisher's linear discriminant and LDA are often used interchangeably, although Fisher's
original article
        <xref ref-type="bibr" rid="ref3">(Fisher, 1936)</xref>
        actually describes a slightly different discriminant, which does not
make some of the assumptions of LDA, such as normally distributed classes or equal class
covariances.
      </p>
      <p>Suppose two classes of observations have means μ_0, μ_1 and covariances Σ_0, Σ_1. Then the
linear combination of features w · x will have means w · μ_i and variances wᵀ Σ_i w for i = 0, 1.
Fisher defined the separation between these two distributions as the ratio of the variance between the
classes to the variance within the classes:
S = σ²_between / σ²_within = (w · μ_1 − w · μ_0)² / (wᵀ Σ_1 w + wᵀ Σ_0 w)
= (w · (μ_1 − μ_0))² / (wᵀ (Σ_0 + Σ_1) w).</p>
      <p>This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can
be shown that the maximum separation occurs when</p>
      <p>w ∝ (Σ_0 + Σ_1)⁻¹ (μ_1 − μ_0).
When the assumptions of LDA are satisfied, the above equation is equivalent to LDA.</p>
      <p>Note that the vector w is the normal to the discriminant hyperplane. For example, in a
two-dimensional problem, the line that best divides the two groups is perpendicular to w.</p>
      <p>Generally, the data points to be discriminated are projected onto w; then the threshold that best
separates the data is chosen from analysis of the one-dimensional distribution. There is no general rule
for the threshold. However, if projections of points from both classes exhibit approximately the same
distribution, a good choice would be the hyperplane between the projections of the two means,
w · μ_0 and w · μ_1. In this case the parameter c in the threshold condition w · x &gt; c can be found
explicitly:
c = w · (μ_0 + μ_1)/2 = (1/2) μ_1ᵀ Σ_1⁻¹ μ_1 − (1/2) μ_0ᵀ Σ_0⁻¹ μ_0.</p>
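      <p>As a small illustration, the discriminant direction and the midpoint threshold can be estimated directly from two samples. This is a sketch under our own naming; it uses sample means and covariances in place of the population quantities above.</p>

```python
import numpy as np

def fisher_direction(x0, x1):
    """Direction w maximizing Fisher's separation: w ~ (S0 + S1)^(-1) (m1 - m0)."""
    mu0, mu1 = x0.mean(axis=0), x1.mean(axis=0)
    sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(sw, mu1 - mu0)
    return w / np.linalg.norm(w)  # the scale of w does not affect the ratio S

def midpoint_threshold(w, x0, x1):
    """Threshold c halfway between the projected class means w·mu0 and w·mu1."""
    return 0.5 * (w @ x0.mean(axis=0) + w @ x1.mean(axis=0))
```

      <p>Projecting well-separated classes onto w and cutting at c then classifies almost all points correctly.</p>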
    </sec>
    <sec id="sec-5">
      <title>3. Practical part</title>
    </sec>
    <sec id="sec-6">
      <title>3.1. Numerical experiments</title>
      <p>The purpose of our experiments is to demonstrate the accuracy of the following algorithm for a
stationary time series, which should find the first changepoint and test the homogeneity hypothesis.</p>
      <p>At the beginning we take a window width h and designate the elements x_1, …, x_h as the
starting ones, with which we will continue to work using the sliding window method. For each
sample (x_{k+1}, x_{k+2}, …, x_{k+h}) we do the following:
 Construct a Fisher linear discriminant for the samples (x_1, x_2, …, x_h) and
(x_{k+1}, x_{k+2}, …, x_{k+h}) and find the projections onto the resulting line.
 Rotate the resulting straight line so that only one coordinate remains, making the rest the
same, and obtain the projections (y_1, y_2, …, y_h) and (y_{k+1}, y_{k+2}, …, y_{k+h}).
 Calculate the Petunin statistic p for the resulting sets of projections.
 If p ≥ 0.95, we say that the new sample has the same distribution as the original one;
otherwise we say that it has a different one.
 Shift the sample (x_{k+1}, x_{k+2}, …, x_{k+h}) one position to the right and start the algorithm
from the beginning. We do this until all the data are processed.</p>
      <p>If the samples after the element x_τ become inhomogeneous, then the point x_{τ+1} is regarded as a changepoint.</p>
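      <p>The steps above can be sketched as follows. This is our own illustrative implementation, not the authors' code: the Petunin statistic on projections is recomputed per window, a tiny ridge term stabilizes the covariance inversion, and the returned index is the start of the first window flagged as inhomogeneous, an approximation to the changepoint location.</p>

```python
import numpy as np

def p_statistic(u, v, g=3.0):
    # Compact Petunin proximity between one-dimensional samples (section 2.2).
    u, v = np.sort(np.asarray(u, float)), np.asarray(v, float)
    n, m = len(u), len(v)
    hits, total = 0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            p = (j - i) / (n + 1)
            freq = float(np.mean((v > u[i]) & (v < u[j])))
            root = g * np.sqrt(freq * (1.0 - freq) * m + g * g / 4.0)
            lo = (freq * m + g * g / 2.0 - root) / (m + g * g)
            hi = (freq * m + g * g / 2.0 + root) / (m + g * g)
            hits += int(lo <= p <= hi)
            total += 1
    return hits / total

def first_changepoint(series, h, threshold=0.95):
    """Slide a window of width h over a multivariate series; project the
    starting and current windows on the Fisher direction and compare them."""
    x0 = series[:h]
    d = series.shape[1]
    for k in range(1, len(series) - h + 1):
        xk = series[k:k + h]
        mu0, mu1 = x0.mean(axis=0), xk.mean(axis=0)
        sw = np.cov(x0, rowvar=False) + np.cov(xk, rowvar=False)
        w = np.linalg.solve(sw + 1e-9 * np.eye(d), mu1 - mu0)  # ridge for stability
        if p_statistic(x0 @ w, xk @ w) < threshold:
            return k  # start of the first inhomogeneous window
    return None
```
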
      <p>To demonstrate how the algorithm works, we take a series of length n = 400 and divide it into 4
equal intervals with different distributions. Then we run our algorithm 100 times and average the
values of Petunin's statistic (p-statistic), after which we display the obtained values in two colors:
blue for values not less than 0.95, that is, for samples that have the same distribution as the original
one, and red for values less than 0.95, i.e. samples having a different distribution.</p>
      <p>For each experiment, we calculated five measures of error: mean absolute error (MAE), mean
squared error (MSE), mean squared deviation (MSD), root mean squared error (RMSE), and
normalized root mean squared error (NRMSE). To demonstrate the effectiveness of the described
algorithm, we rely on the latter value. As is well known, if NRMSE &gt; 0.5 the results can be
considered random; if the NRMSE is close to 0, the results are considered good.</p>
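      <p>For reference, the error measures can be computed as follows. This is a sketch with two assumptions that the text does not pin down: MSD is taken here as the mean signed deviation, and NRMSE as the RMSE normalized by the true changepoint value.</p>

```python
import numpy as np

def error_measures(estimates, true_value):
    """MAE, MSE, MSD, RMSE and NRMSE of changepoint estimates."""
    e = np.asarray(estimates, dtype=float) - true_value
    mae = float(np.mean(np.abs(e)))
    mse = float(np.mean(e ** 2))
    msd = float(np.mean(e))      # assumption: mean signed deviation (bias)
    rmse = mse ** 0.5
    nrmse = rmse / true_value    # assumption: normalized by the true value
    return {"MAE": mae, "MSE": mse, "MSD": msd, "RMSE": rmse, "NRMSE": nrmse}
```
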
    </sec>
    <sec id="sec-7">
      <title>3.1.1. Almost non-overlapping uniform distributions with different means</title>
      <p>Let us consider a saltatory time series composed of uniform distributions that practically
do not overlap. On this time series, we will be able to test the shift hypothesis.</p>
      <p>Table 1</p>
      <sec id="sec-7-1">
        <title>Time intervals and uniform distributions with different means</title>
        <p>Time interval Distribution x1 Distribution x2 Distribution x3
0-99 U(65;75) U(96.5;97.5) U(36.4;36.7)
100-199 U(100;110) U(97.0;99.0) U(38.0;39.0)
200-299 U(65;75) U(96.5;97.5) U(36.4;36.7)
300-399 U(70;90) U(97.5;99.0) U(37.0;37.5)</p>
      </sec>
      <sec id="sec-7-3">
        <title>Value</title>
        <p>21.13
583.21
16.95
21.13
0.21
As can be seen from Table 1 and Figure 1, the desired change point is 100. In Figure 2, we see that
the p-statistic takes values greater than 0.95 only near intervals that have a distribution similar to the
first one, and the measures of error can be seen in Table 2.</p>
      </sec>
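      <p>A series like the one in Table 1 can be simulated in a few lines; this sketch (our own code, with numpy's uniform generator standing in for the simulation described above) produces four blocks of 100 three-dimensional points with changes of distribution at t = 100, 200 and 300.</p>

```python
import numpy as np

def saltatory_series(rng):
    """Three-dimensional series of Table 1: four blocks of 100 points,
    columns x1, x2, x3 drawn from the uniform distributions per interval."""
    blocks = [
        [(65, 75), (96.5, 97.5), (36.4, 36.7)],    # t = 0..99
        [(100, 110), (97.0, 99.0), (38.0, 39.0)],  # t = 100..199
        [(65, 75), (96.5, 97.5), (36.4, 36.7)],    # t = 200..299
        [(70, 90), (97.5, 99.0), (37.0, 37.5)],    # t = 300..399
    ]
    parts = [np.column_stack([rng.uniform(lo, hi, 100) for lo, hi in b])
             for b in blocks]
    return np.vstack(parts)
```
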
    </sec>
    <sec id="sec-8">
      <title>3.1.2. Uniform distributions with different means that initially strongly overlap, then slightly overlap, and finally do not overlap</title>
      <p>Let us consider a saltatory time series composed of uniform distributions that initially
strongly overlap, then slightly overlap, and finally do not overlap. On this time series, we will be
able to test the shift hypothesis.</p>
      <p>Table 3</p>
      <sec id="sec-8-1">
        <title>Time intervals and uniform distributions with different means that initially strongly overlap, then slightly overlap, and finally do not overlap</title>
        <sec id="sec-8-1-1">
          <title>Time interval Distribution x1 Distribution x2 Distribution x3</title>
          <p>0-99 U(60;70) U(96.0;97.0) U(36.4;36.7)
100-199 U(63;73) U(96.3;97.3) U(36.5;36.8)
200-299 U(70;80) U(97.0;98.0) U(36.7;37.0)
300-399 U(85;95) U(99.0;99.9) U(37.5;37.8)
As can be seen from Table 3 and Figure 3, the desired change point is 100. In Figure 4, we see that
the p-statistic takes values greater than 0.95 only in the first interval, and the measures of error can
be seen in Table 4.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>3.1.3. Normal distributions with different means that almost do not overlap</title>
      <p>Let us consider a saltatory time series composed of normal distributions with different
means that almost do not overlap. On this time series, we will be able to test the shift hypothesis.
Table 5</p>
      <sec id="sec-9-1">
        <title>Time intervals and normal distributions with different means that almost do not overlap</title>
        <sec id="sec-9-1-1">
          <title>Time interval Distribution x1 Distribution x2 Distribution x3</title>
          <p>0-99 N(70;2) N(96.0;0.15) N(36.5;0.05)
100-199 N(105;2) N(96.3;0.33) N(38.5;0.15)
200-299 N(70;2) N(97.0;0.15) N(36.5;0.05)
300-399 N(80;4) N(99.0;0.25) N(37.3;0.98)</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>3.1.4. Normal distributions with the same means, but with variances that gradually begin to differ</title>
    </sec>
    <sec id="sec-11">
      <title>3.1.5. Normal distributions with the same means, but with variances that differ more strongly</title>
      <p>Let us consider a saltatory time series composed of normal distributions with the same
means, but with variances that differ more strongly. On this time series, we will be able to test
the scale hypothesis.</p>
      <p>Table 9</p>
      <sec id="sec-11-1">
        <title>Time intervals and normal distributions with the same means, but with variances that differ more strongly</title>
        <sec id="sec-11-1-1">
          <title>Time interval Distribution x1 Distribution x2 Distribution x3</title>
          <p>0-99 N(70;1) N(97.0;0.10) N(36.55;0.05)
100-199 N(70;5) N(97.0;0.50) N(36.55;0.25)
200-299 N(70;7) N(97.0;1.00) N(36.55;0.5)
300-399 N(70;10) N(97.0;1.50) N(36.55;0.75)
As can be seen from Table 9 and Figure 9, the desired change point is 100. In Figure 10, we see that
the p-statistic takes values greater than 0.95 only in the first interval, and the measures of error can
be seen in Table 10.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-12">
      <title>4. Conclusion</title>
      <p>In this paper, an algorithm for finding changepoints using Fisher's linear discriminant and Petunin's
statistics was described. Experiments demonstrate fairly fast and accurate recognition of changes in
the distribution function for a wide range of distributions. The method gives a clear probabilistic
interpretation of its results, which means that this algorithm can be applied in risk-informed systems,
in particular, to work in clinics to monitor the condition of patients with coronavirus.</p>
      <p>[18] K. L. Hallgren, N. A. Heard, M. J. M. Turcotte, Changepoint detection on a graph of time series,
2021. arXiv preprint arXiv:2102.04112v1. DOI: 10.48550/arXiv.2102.04112
[19] A. Fotoohinasab, T. Hocking, F. Afghah, A Greedy Graph Search Algorithm Based on
Changepoint Analysis for Automatic QRS Complex Detection, Computers in Biology and
Medicine, 130, 2021, 104208. DOI: 10.1016/j.compbiomed.2021.104208
[20] P. Fearnhead, G. Rigaill, Changepoint Detection in the Presence of Outliers, Journal of the
American Statistical Association, 114, 2018, pp. 169-183. DOI: 10.1080/01621459.2017.1385466
[21] F. Harlé, F. Chatelain, C. Gouy-Pailler, S. Achard, Rank-based multiple change-point detection
in multivariate time series, 22nd European Signal Processing Conference (EUSIPCO), 2014, pp.
1337-1341. DOI: 10.5281/zenodo.43927
[22] K. Renz, N. C. Stache, N. Fox, G. Varol, S. Albanie, Sign Segmentation with
Changepoint-Modulated Pseudo-Labelling, 2021. arXiv preprint arXiv:2104.13817v1.
DOI: 10.48550/arXiv.2104.13817
[23] C. Gallagher, R. Killick, R. Lund, X. Shi, Autocovariance Estimation in the Presence of
Changepoints, 2021. arXiv preprint arXiv:2102.10669v2. DOI: 10.48550/arXiv.2102.10669
[24] M. Navarro, G. I. Allen, M. Weylandt, Network Clustering for Latent State and Changepoint
Detection, 2021. arXiv preprint arXiv:2111.01273v1. DOI: 10.48550/arXiv.2111.01273
[25] S. O. Tickle, I. A. Eckley, P. Fearnhead, A computationally efficient, high-dimensional multiple
changepoint procedure with application to global terrorism incidence, 2020. arXiv
preprint arXiv:2011.03599v2. DOI: 10.1111/rssa.12695</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.A.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.I.</given-names>
            <surname>Petunin</surname>
          </string-name>
          ,
          <article-title>Nonparametric population equivalence test based on measure of closeness between samples</article-title>
          ,
          <source>Ukrainian Mathematical Journal</source>
          ,
          2nd ed.,
          <year>2003</year>
          , pp.
          <fpage>147</fpage>
          -
          <lpage>163</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.A.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.V.</given-names>
            <surname>Urazovskyi</surname>
          </string-name>
          ,
          <article-title>Nonparametric Test for Change-Point Detection of IoT TimeSeries Data</article-title>
          , Chapter in: Kumar P.,
          <string-name>
            <surname>Obaid</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cengiz</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Balas</surname>
            <given-names>A</given-names>
          </string-name>
          . (Eds.)
          <source>A Fusion of Artificial Intelligence and Internet of Things for Emerging Cyber Systems, Intelligent Systems Reference Library</source>
          , volume
          <volume>210</volume>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>122</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Fisher</surname>
          </string-name>
          ,
          <article-title>The Use of Multiple Measurements in Taxonomic Problems</article-title>
          .
          <source>Annals of Eugenics</source>
          , volume
          <volume>7</volume>
          , 2nd. ed.,
          <year>1936</year>
          , pp.
          <fpage>179</fpage>
          -
          <lpage>188</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>B.L. Van der Waerden</surname>
          </string-name>
          , Mathematische Statistik, Springer-Verlag, Berlin,
          <year>1957</year>
          ; English. transl. of 2nd. ed. (
          <year>1965</year>
          ) Springer-Verlag, Berlin and New York,
          <year>1969</year>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y. I.</given-names>
            <surname>Petunin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Ganina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. V.</given-names>
            <surname>Borodai</surname>
          </string-name>
          ,
          <string-name>
            <surname>R. I. Andrushkiv</surname>
          </string-name>
          , Computer diagnosis of breast cancer, Bulletin of Kyiv University, Ser. cybernetics, 2,
          <year>2001</year>
          , pp.
          <fpage>58</fpage>
          -
          <lpage>68</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mika</surname>
          </string-name>
          ,
          <article-title>Fisher Discriminant Analysis with Kernels</article-title>
          ,
          <source>IEEE Conference on Neural Networks for Signal Processing IX</source>
          ,
          <year>1999</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>48</lpage>
          . DOI: 10.1109/NNSP.1999.788121
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Truong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Oudre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vayatis</surname>
          </string-name>
          ,
          <article-title>Selective review of offline change point detection methods</article-title>
          ,
          <source>Signal Processing</source>
          , volume
          <volume>167</volume>
          ,
          <year>2020</year>
          , 107299. DOI: 10.1016/j.sigpro.2019.107299.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Alippi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Boracchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Carrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Roveri</surname>
          </string-name>
          ,
          <article-title>Change Detection in Multivariate Datastreams: Likelihood and Detectability Loss</article-title>
          , Twenty-Fifth
          <source>International Joint Conference on Artificial Intelligence (IJCAI-16)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1368</fpage>
          -
          <lpage>1374</lpage>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.1510.04850</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sriharsha</surname>
          </string-name>
          ,
          <article-title>Online Changepoint Detection on a Budget</article-title>
          ,
          <source>2021 International Conference on Data Mining Workshops (ICDMW)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>414</fpage>
          -
          <lpage>420</lpage>
          . DOI:
          <pub-id pub-id-type="doi">10.1109/ICDMW53433.2021.00057</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Romano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Eckley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fearnhead</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Rigaill</surname>
          </string-name>
          ,
          <article-title>Fast Online Changepoint Detection via Functional Pruning CUSUM statistics</article-title>
          ,
          <year>2021</year>
          .
          <source>arXiv preprint arXiv:2110.08205v2</source>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.2110.08205</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Zwetsloot</surname>
          </string-name>
          ,
          <article-title>A Change-Point Based Control Chart for Detecting Sparse Changes in High-Dimensional Heteroscedastic Data</article-title>
          ,
          <year>2021</year>
          .
          <source>arXiv preprint arXiv:2101.09424v1</source>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.2101.09424</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramdas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rinaldo</surname>
          </string-name>
          ,
          <article-title>E-detectors: a nonparametric framework for online changepoint detection</article-title>
          ,
          <source>arXiv preprint arXiv:2203.03532v1</source>
          ,
          <year>2022</year>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.2203.03532</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>O.</given-names>
            <surname>Sorba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Geissler</surname>
          </string-name>
          ,
          <article-title>Online Bayesian inference for multiple changepoints and risk assessment</article-title>
          .
          <source>arXiv preprint arXiv:2106.05834v1</source>
          ,
          <year>2021</year>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.2106.05834</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wendelberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <article-title>Monitoring Deforestation Using Multivariate Bayesian Online Changepoint Detection with Outliers</article-title>
          ,
          <year>2021</year>
          .
          <source>arXiv preprint arXiv:2112.12899v2</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Adams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>MacKay</surname>
          </string-name>
          ,
          <article-title>Bayesian Online Changepoint Detection</article-title>
          ,
          <year>2007</year>
          .
          <source>arXiv preprint arXiv:0710.3742v1</source>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.0710.3742</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Cooney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <article-title>Change-point Detection for Piecewise Exponential Models</article-title>
          ,
          <year>2021</year>
          .
          <source>arXiv preprint arXiv:2112.03962v1</source>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.2112.03962</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Castillo-Mateo</surname>
          </string-name>
          ,
          <article-title>Distribution-Free Changepoint Detection Tests Based on the Breaking of Records</article-title>
          ,
          <year>2021</year>
          .
          <source>arXiv preprint arXiv:2105.08186v1</source>
          . DOI:
          <pub-id pub-id-type="doi">10.48550/arXiv.2105.08186</pub-id>.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>