<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Locating Changepoints in Multidimensional Time Series Using Non-parametric Methods</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Dmitriy</forename><surname>Klyushin</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>prospekt Glushkova, 4D</addrLine>
									<postCode>03680</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Andrii</forename><surname>Urazovskyi</surname></persName>
							<email>urazovskya@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>prospekt Glushkova, 4D</addrLine>
									<postCode>03680</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Locating Changepoints in Multidimensional Time Series Using Non-parametric Methods</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">4D629C3708D794C8657EC49D8D14D5C5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-12-29T06:04+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Time series</term>
					<term>changepoint</term>
					<term>nonparametric statistics</term>
					<term>computer modeling</term>
					<term>intelligent systems</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In many fields, from finance to healthcare to engineering, there is a growing need to monitor and analyze large and complex multivariate time series. These time series often contain critical information that can be used to improve decision-making and optimize system performance. However, they can also be noisy and subject to various forms of interference, making it difficult to extract meaningful insights. One important challenge is identifying the moments when the underlying process changes, also known as changepoints. Detecting these changepoints in real time is crucial for timely intervention and improved outcomes. In this paper, we explore the use of Fisher's linear discriminant and Petunin statistics for detecting changepoints in multivariate time series. We show how this approach can be applied to computer modeling and intelligent systems to improve the accuracy and efficiency of decision-making in a wide range of fields.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Automatic systems and artificial intelligence can be used to recognize changepoints in multidimensional time series, providing valuable opportunities in various fields such as medicine, engineering, economics, and cybersecurity. This can help optimize the allocation of human resources, allowing them to focus on management and critical issues that affect people's lives. Nuclear power plants serve as an important example of the responsible use of computer modeling and intelligent systems. Safety is crucial in the design, use, economics, and licensing of such energy sources. To prevent and mitigate the consequences of accidents, it is essential to ensure the integrity and operability of vital elements within nuclear power plants. Designers have historically incorporated redundant and diverse safety features into these plants to provide reliability, ensuring that the health and safety of workers and the public can be protected with a high level of confidence even in abnormal and unplanned situations.</p><p>To be practical, a method should possess several characteristics: 1. High precision, to minimize false negative and false positive outcomes. 2. Robustness, to withstand individual outlying data points that may skew the entire data series and generate false changepoints.</p><p>3. Insensitivity to underlying distributions, to maximize its applicability across different domains, scenarios, and objects.</p><p>4. Low computational cost, to enable real-time processing without excessive resource utilization or server overload. 5. Optimal sensitivity, neither so high that it flags insignificant changes nor so low that it misses critical events, such as nuclear reactor meltdowns or medical emergencies. 
This paper will introduce a novel method for detecting changepoints in multivariate time series, based on a metric developed in a previous study <ref type="bibr" target="#b0">[1]</ref>. This method has been shown to outperform the Kolmogorov-Smirnov and Wilcoxon statistics, as demonstrated in a recent study <ref type="bibr" target="#b1">[2]</ref>. The paper will also discuss potential applications of this method in the field of computer modeling and intelligent systems, specifically in the area of medicine. Section 2.1 will describe the algorithm used to calculate the Petunin statistic and its properties, while Section 2.2 will discuss the algorithm for constructing the Fisher linear discriminant. Section 2.3 will provide an overview of the current state of research on detecting changepoints in multivariate time series. Section 3.1 will present the results of various numerical experiments involving different distributions.</p><p>Through a series of experiments and tests, we demonstrate the strength and effectiveness of our method, showing that it reliably locates changepoints across a range of distributions. Overall, we believe that our method represents a step forward, offering an innovative approach to a challenging problem. We hope that our work will inspire others to explore new and unconventional methods, leading to further discoveries in the future.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Theoretical part</head><p>This chapter provides an overview of the existing literature and of the statistical tools, namely Petunin's statistic and Fisher's linear discriminant analysis, that are used in this study. The paper presents novel theoretical results that combine Fisher's linear discriminant analysis and Petunin's statistic for changepoint detection. To our knowledge, this is the first study to use these two statistical tools in combination for this task, and our results demonstrate the benefits of this approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Petunin's statistic</head><p>Yuriy Petunin, a mathematician from Ukraine, introduced the p-statistic, which measures the closeness between two samples. The p-statistic is employed to test the hypothesis that the distribution functions of two samples are identical.</p><p>Let us consider two general populations 𝐺 and 𝐺′ with corresponding distribution functions 𝐹 𝐺 and 𝐹 𝐺′ .</p><p>Let there be two samples 𝑥 = (𝑥 1 , 𝑥 2 , … , 𝑥 𝑛 ) and 𝑥′ = (𝑥 1 ′ , 𝑥 2 ′ , … , 𝑥 𝑚 ′ ), where 𝑥 (1) ≤ 𝑥 (2) ≤ ⋯ ≤ 𝑥 (𝑛) are the corresponding order statistics, and it is necessary to determine whether the samples come from the same distribution. Suppose that 𝐹 𝐺 (𝑢) = 𝐹 𝐺 ′ (𝑢); then</p><formula xml:id="formula_0">𝑃 (𝐴 𝑖𝑗 (𝑘) ) = 𝑃 (𝑥 𝑘 ′ ∈ (𝑥 (𝑖) , 𝑥 (𝑗) )) = 𝑝 𝑖𝑗 (𝑛) = (𝑗 − 𝑖) / (𝑛 + 1)</formula><p>Given the ordered sample 𝑥′ = (𝑥 (1) ′ , 𝑥 (2) ′ , 𝑥 (3) ′ , … , 𝑥 (𝑚) ′ ), we can find the frequency ℎ 𝑖𝑗 of the random event 𝐴 𝑖𝑗 and a confidence interval (Δ 𝑖𝑗 (1) , Δ 𝑖𝑗 (2) ) for the probability 𝑝 𝑖𝑗 at a given significance level 𝛽, i.e. 𝐵 = {𝑝 𝑖𝑗 ∈ (Δ 𝑖𝑗 (1) , Δ 𝑖𝑗 (2) )}, 𝑃(𝐵) = 1 − 𝛽.</p><formula xml:id="formula_1">According to [4], Δ 𝑖𝑗 (1) = (ℎ 𝑖𝑗 (𝑛) 𝑛 + 𝑔 2 /2 − 𝑔 √(ℎ 𝑖𝑗 (𝑛) (1 − ℎ 𝑖𝑗 (𝑛) ) 𝑛 + 𝑔 2 /4)) / (𝑛 + 𝑔 2 ), Δ 𝑖𝑗 (2) = (ℎ 𝑖𝑗 (𝑛) 𝑛 + 𝑔 2 /2 + 𝑔 √(ℎ 𝑖𝑗 (𝑛) (1 − ℎ 𝑖𝑗 (𝑛) ) 𝑛 + 𝑔 2 /4)) / (𝑛 + 𝑔 2 )</formula><p>Choosing 𝑔 so that Φ(𝑔) = 1 − 𝛽/2, where Φ is the standard normal distribution function, determines the significance level of the confidence interval 𝐼 𝑖𝑗 (𝑛,𝑚) = (Δ 𝑖𝑗 (1) , Δ 𝑖𝑗 (2) ). As per the 3𝜎 rule <ref type="bibr" target="#b4">[5]</ref>, at 𝑔 = 3 the significance level of this interval is no more than 0.05. Let 𝑁 = 𝑛(𝑛 − 1)/2 be the total number of confidence intervals 𝐼 𝑖𝑗 = (Δ 𝑖𝑗 (1) , Δ 𝑖𝑗 (2) ), and let 𝐿 be the number of intervals 𝐼 𝑖𝑗 that contain the probability 𝑝 𝑖𝑗 (𝑛) . The p-statistic, ℎ (𝑛) = 𝐿/𝑁, is a measure of closeness 𝜌(𝑥, 𝑥′) between samples 𝑥 and 𝑥′. By substituting the obtained value of ℎ (𝑛) into the formula for the confidence intervals, we obtain the confidence interval 𝐼 = (Δ (1) , Δ (2) ) to test the hypothesis 𝐻 with an approximate significance level of 0.05 <ref type="bibr" target="#b0">[1]</ref>.</p></div>
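The computation above can be sketched in a few lines of Python. This is a minimal illustration of the formulas in this section, not the authors' code; the function name is ours, and g = 3 gives the approximate 0.05 level mentioned in the text:

```python
import numpy as np

def petunin_p_statistic(x, y, g=3.0):
    """Petunin p-statistic: closeness of sample y to sample x.

    For every pair of order statistics x_(i), x_(j) of x, compare the
    theoretical probability p_ij = (j - i)/(n + 1) with a Wilson-type
    confidence interval built from the frequency of y falling inside
    (x_(i), x_(j)); the statistic is the fraction of intervals that
    cover p_ij.  g = 3 corresponds to a significance level near 0.05.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    covered, total = 0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            p_ij = (j - i) / (n + 1.0)
            # frequency of y strictly inside the open interval (x_(i), x_(j))
            h = float(np.mean((y > x[i]) * (x[j] > y)))
            centre = (h * m + g * g / 2.0) / (m + g * g)
            half = g * np.sqrt(h * (1.0 - h) * m + g * g / 4.0) / (m + g * g)
            covered += int((p_ij >= centre - half) and (centre + half >= p_ij))
            total += 1
    return covered / total
```

For two samples drawn from the same distribution the statistic stays close to 1, while well-separated samples drive it down, which is what the changepoint test exploits.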
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Fisher's linear discriminant</head><p>Fisher's linear discriminant and LDA (linear discriminant analysis) are terms that are often used interchangeably, but Fisher's original paper <ref type="bibr" target="#b2">[3]</ref> describes a discriminant that differs slightly from LDA. Fisher's method does not rely on some of the assumptions of LDA, such as normally distributed classes or equal class covariances.</p><p>Consider two classes of observations with means 𝜇 0 , 𝜇 1 and covariances Σ 0 , Σ 1 . If we use the linear combination of features 𝑤 ⋅ 𝑥 , the means of the resulting distributions will be 𝑤 ⋅ 𝜇 𝑖 , and the variances will be 𝑤 𝑇 Σ 𝑖 𝑤 for 𝑖 = 0,1. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes:</p><formula xml:id="formula_2">𝑆 = 𝜎 𝑏𝑒𝑡𝑤𝑒𝑒𝑛 2 / 𝜎 𝑤𝑖𝑡ℎ𝑖𝑛 2 = (𝑤 ⋅ 𝜇 1 − 𝑤 ⋅ 𝜇 0 ) 2 / (𝑤 𝑇 Σ 1 𝑤 + 𝑤 𝑇 Σ 0 𝑤) = (𝑤 ⋅ (𝜇 1 − 𝜇 0 )) 2 / (𝑤 𝑇 (Σ 0 + Σ 1 )𝑤)</formula><p>This measure evaluates the effectiveness of a class labelling by comparing the separation between the two sets of observations to the variance within each set. The maximum separation is achieved when the linear combination of features uses the weight vector 𝑤 ∝ (Σ 0 + Σ 1 ) −1 (𝜇 1 − 𝜇 0 ), which is the normal to the discriminant hyperplane. In a two-dimensional problem, this hyperplane is represented by a line perpendicular to this vector. The data points are then projected onto this hyperplane, and a threshold is chosen based on analysis of the one-dimensional distribution of the projections. 
One possible way to set this threshold is to place it between the projections of the means of the two sets of observations.</p><formula xml:id="formula_3">𝑐 = 𝑤 ⋅ (𝜇 0 + 𝜇 1 )/2 = (1/2) 𝜇 1 𝑇 Σ 1 −1 𝜇 1 − (1/2) 𝜇 0 𝑇 Σ 0 −1 𝜇 0</formula><p>The value of the parameter 𝑐 in the threshold condition 𝑤 ⋅ 𝑥 &gt; 𝑐 can then be determined explicitly. It should again be noted that Fisher's original discriminant makes fewer assumptions than LDA about the classes being normally distributed or having equal covariances.</p></div>
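A minimal sketch of these formulas in Python (the helper name is ours; the paper publishes no code):

```python
import numpy as np

def fisher_discriminant(X0, X1):
    """Fisher's linear discriminant for two classes (rows = observations).

    Implements the formulas of this section: w is proportional to
    (Sigma_0 + Sigma_1)^(-1) (mu_1 - mu_0), and the threshold c is placed
    between the projections of the two class means.
    """
    X0, X1 = np.asarray(X0, dtype=float), np.asarray(X1, dtype=float)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(S, mu1 - mu0)   # normal to the separating hyperplane
    c = w @ (mu0 + mu1) / 2.0           # midpoint threshold on the projections
    return w, c
```

The returned rule assigns a point x to class 1 when w · x exceeds c, matching the threshold condition above.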
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Related works</head><p>There are numerous approaches to detecting changepoints in multidimensional time series of random values. A changepoint of a time series is a point before and after which the values of the series have different distributions. Methods for detecting changepoints in multidimensional time series can be classified into online and offline algorithms. Online algorithms process portions of the data as they arrive in time, whereas offline algorithms work on complete data sets. Article <ref type="bibr" target="#b5">[6]</ref> provided a comprehensive review of offline methods for changepoint detection. For online algorithms, we aim to consider a method that is independent of the initial distributions.</p><p>The article <ref type="bibr" target="#b6">[7]</ref> proposes a novel approach to discriminant analysis, called the Kernel Fisher Discriminant, which shows competitive performance compared to other classification techniques and has potential for further extensions to multi-class discriminants and generalization error bounds.</p><p>Increasing the dimension can slow down computations. Article <ref type="bibr" target="#b7">[8]</ref> discussed the application of divergence measures to detect a changepoint in a time series. Changepoint detection can be performed in different ways, such as detecting that a changepoint exists in a time series or localizing its coordinates. Article <ref type="bibr" target="#b8">[9]</ref> focused on detecting a changepoint, but the method does not localize it with high accuracy. Article <ref type="bibr" target="#b9">[10]</ref> proposed a Bayesian method with linear computational complexity, but its accuracy is insufficient. Article <ref type="bibr" target="#b10">[11]</ref> developed an effective convex network clustering algorithm, but it is computationally complex. 
The article <ref type="bibr" target="#b11">[12]</ref> proposes a changepoint-based control chart for monitoring sparse changes in a high-dimensional mean vector in HDLSS scenarios. The chart is robust to correlation, non-normality, and heteroscedasticity, and efficiently detects large sparse shifts with accurate estimation of the changepoint and the potentially out-of-control variables, as shown by experimentation and a real case study.</p><p>To process online data and identify outliers without restrictive assumptions about the data distribution, we examine papers that consider the problem from the same point of view. Articles <ref type="bibr" target="#b12">[13]</ref> and <ref type="bibr" target="#b13">[14]</ref> developed a Bayesian method for exploring geographical data. Article <ref type="bibr" target="#b14">[15]</ref> considered algorithms for exponential models only, while article <ref type="bibr" target="#b15">[16]</ref> required information about the type of distribution to increase the accuracy of its method. Article <ref type="bibr" target="#b16">[17]</ref> made assumptions on the distribution to decrease computational complexity. Article <ref type="bibr" target="#b17">[18]</ref> also required prior assumptions about the data. Pre-processing the data can increase the precision of changepoint detection in multivariate time series <ref type="bibr" target="#b18">[19]</ref>. Articles <ref type="bibr" target="#b19">[20]</ref> and <ref type="bibr" target="#b20">[21]</ref> proposed Bayesian methods for segmenting multivariate time series with implicit examination of the dependency structure.</p><p>The study <ref type="bibr" target="#b21">[22]</ref> examined an algorithm for streaming data that relies on a massive matrix whose size depends on the size of the original data space. 
Comparable techniques were explored in a different research paper by Romano and others <ref type="bibr" target="#b22">[23]</ref>.</p><p>Petunin's statistic is a measure of distance between two distributions and can be used to detect a changepoint in a time series. The proposed algorithm is based on this statistic and has the following properties:</p><p>1. Stability: The algorithm is designed to be stable over time, meaning that it can accurately detect changepoints even when the underlying distribution of the data changes over time.</p><p>2. High accuracy: Petunin's statistic is a robust measure of distance between distributions. It allows accurate detection of changepoints even in the presence of outliers or other noise in the data.</p><p>3. Speed: The algorithm is designed to be computationally efficient, allowing for real-time processing of streaming data.</p><p>4. Independence from underlying distributions: The algorithm does not require any assumptions or prior knowledge about the underlying distribution of the data, making it applicable to a wide range of time series data.</p><p>In summary, the proposed algorithm based on Petunin's statistic offers a stable, accurate, and computationally efficient method for detecting changepoints in time series data, without requiring any specific assumptions about the underlying distribution of the data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Practice part</head><p>In this chapter, we present a method for detecting changepoints in time series data and conduct several numerical experiments to evaluate its performance with different distributions.</p><p>Our method is based on a combination of statistical tools, namely Petunin's statistic and Fisher's linear discriminant analysis. By using these tools in combination, we can identify changepoints in time series data with high accuracy.</p><p>The purpose of our experiments is to demonstrate the accuracy of the following algorithm for a stationary time series, which should find the first changepoint and test the homogeneity hypothesis.</p><p>At the beginning we choose a window size 𝑤𝑖𝑑𝑡ℎ and designate the elements 𝑥 1 , … , 𝑥 𝑤𝑖𝑑𝑡ℎ as the starting sample, with which we will continue to work using the sliding window method. When we have a sample (𝑥 𝑖+1 , 𝑥 𝑖+2 , … , 𝑥 𝑖+𝑤𝑖𝑑𝑡ℎ ), we do the following with it:</p><p>1. Build Fisher's linear discriminant for the samples (𝑥 1 , 𝑥 2 , … , 𝑥 𝑤𝑖𝑑𝑡ℎ ) and (𝑥 𝑖+1 , 𝑥 𝑖+2 , … , 𝑥 𝑖+𝑤𝑖𝑑𝑡ℎ ) and find the projections onto the discriminant line. 2. Rotate the resulting line so that only one coordinate remains, obtaining the projections (𝑝 1 , 𝑝 2 , … , 𝑝 𝑤𝑖𝑑𝑡ℎ ) and (𝑝 𝑖+1 , 𝑝 𝑖+2 , … , 𝑝 𝑖+𝑤𝑖𝑑𝑡ℎ ). 3. Calculate Petunin's statistic 𝑝 𝑠𝑡𝑎𝑡 for the resulting sets of projections. 4. If 𝑝 𝑠𝑡𝑎𝑡 ≥ 0.95, we conclude that the new sample has the same distribution as the original one; otherwise we conclude that it does not, and shift the initial sample to position (𝑝 𝑖+𝑤𝑖𝑑𝑡ℎ+1 , … , 𝑝 𝑖+2⋅𝑤𝑖𝑑𝑡ℎ ). 5. Shift the window one position to the right and start the algorithm from the beginning. We repeat this until all the data have been processed. If the sample becomes inhomogeneous after element 𝑥 𝑛 , then the point 𝑥 𝑛+1 is regarded as a changepoint.</p><p>To demonstrate how the algorithm works, we take a series of length 𝑁 = 400 and divide it into 4 equal intervals with different distributions. 
Then we run our algorithm 100 times and average the values of Petunin's statistic (P statistics), after which we display the obtained values in two colors: blue for values not less than 0.95, that is, samples that have the same distribution as the original, and red for values less than 0.95, indicating a different distribution.</p><p>For each experiment, we calculated five measures of error: mean absolute error (MAE), mean squared error (MSE), mean squared deviation (MSD), root mean squared error (RMSE), and normalized root mean squared error (NRMSE). To demonstrate the effectiveness of the described algorithm, we rely on the latter value. As is well known, if NRMSE &gt; 0.5 the results can be considered random, while an NRMSE close to 0 indicates good results.</p><p>We aim to conduct a numerical experiment that involves analyzing several time series with jumps of varying distributions. In the first scenario, we analyze a time series consisting of nearly non-overlapping uniform distributions and test the hypothesis of a shift in this series. In the second scenario, we analyze a time series with jumps of uniform distributions initially exhibiting significant overlap, followed by mild overlap, and ultimately no overlap, again testing the hypothesis of a shift. Additionally, we examine time series with jumps of normal distributions, where distinct means exhibit minimal overlap, to test the hypothesis of a shift. In the fourth scenario, we examine time series with jumps that comprise normal distributions with identical means but gradually differing variances, aiming to test the scale hypothesis. Finally, in the fifth scenario, we analyze a time series with jumps of normal distributions with the same means but more strongly differing variances, testing the scale hypothesis on this series.</p></div>
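The sliding-window procedure described above can be sketched as follows. This is an illustrative reading of the algorithm, not the authors' implementation: the restart behaviour after a detection and all names are our own choices.

```python
import numpy as np

def petunin_stat(a, b, g=3.0):
    """Compact Petunin p-statistic between one-dimensional samples a and b."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    covered, total = 0, 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            p = (j - i) / (n + 1.0)
            h = float(np.mean((b > a[i]) * (a[j] > b)))
            centre = (h * m + g * g / 2.0) / (m + g * g)
            half = g * np.sqrt(h * (1.0 - h) * m + g * g / 4.0) / (m + g * g)
            covered += int((p >= centre - half) and (centre + half >= p))
            total += 1
    return covered / total

def detect_changepoints(X, width=50, threshold=0.95):
    """Sliding-window changepoint detection on a multivariate series X.

    The reference block and each sliding window are projected onto
    Fisher's discriminant direction, and a window is flagged when the
    Petunin statistic of the two projection sets drops below 0.95.
    After a detection the reference sample restarts from the flagged
    window (our reading of the shift step in the text).
    """
    X = np.asarray(X, dtype=float)
    ref = X[:width]
    changepoints = []
    i = width
    while len(X) >= i + width:
        win = X[i:i + width]
        mu0, mu1 = ref.mean(axis=0), win.mean(axis=0)
        S = np.cov(ref, rowvar=False) + np.cov(win, rowvar=False)
        # small ridge term keeps the solve stable for degenerate covariances
        w = np.linalg.solve(S + 1e-9 * np.eye(S.shape[0]), mu1 - mu0)
        stat = petunin_stat(ref @ w, win @ w)
        if threshold > stat:
            changepoints.append(i)   # first inhomogeneous window position
            ref = win                # restart the reference from the new regime
            i += width
        else:
            i += 1
    return changepoints
```

On a clear mean shift, the flagged positions cluster near the true changepoint, which is the behaviour the figures in this section illustrate.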
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Nearly non-overlapping uniform distributions with different means</head><p>We will analyze a time series with jumps that consists of nearly non-overlapping uniform distributions. The aim is to test the hypothesis of a shift in this series.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1 Time intervals and uniform distributions with different means</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Time interval</head><p>Distribution</p><p>Table <ref type="table">1</ref> and Figure <ref type="figure" target="#fig_0">1</ref> illustrate that the intended changepoints are 100, 200 and 300. In Figure <ref type="figure" target="#fig_1">2</ref>, we see that almost all found changepoints are close to the actual ones, while the measures of error are presented in Table <ref type="table" target="#tab_1">2</ref>.</p></div>
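The error measures reported in the tables can be computed along these lines; matching each detected point to the nearest true changepoint and normalizing the RMSE by the range of the true changepoints are our assumptions, since the paper does not state its exact conventions:

```python
import numpy as np

def changepoint_errors(true_cps, found_cps):
    """MAE, MSE, RMSE and NRMSE for detected changepoints.

    Each detected point is matched to the nearest true changepoint
    (an assumption), and NRMSE is taken as RMSE divided by the range
    of the true changepoints (also an assumption).
    """
    t = np.asarray(true_cps, dtype=float)
    f = np.asarray(found_cps, dtype=float)
    # index of the nearest true changepoint for each detection
    nearest = t[np.argmin(np.abs(f[:, None] - t[None, :]), axis=1)]
    err = f - nearest
    mae = float(np.mean(np.abs(err)))
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    nrmse = rmse / (t.max() - t.min())
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "NRMSE": nrmse}
```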
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Uniform distributions with distinct means that display significant overlap at the outset, followed by mild overlap, and ultimately no overlap</head><p>We will analyze a time series with jumps that consists of uniform distributions initially exhibiting significant overlap, followed by mild overlap, and ultimately no overlap. The purpose is to test the hypothesis of a shift in this series.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 4</head><p>Measures of error for uniform distributions with distinct means that exhibit significant overlap initially, followed by mild overlap, and ultimately no overlap.</p><p>Figure <ref type="figure" target="#fig_2">3</ref> and Table <ref type="table" target="#tab_3">3</ref> show that the desired changepoints are 100, 200, and 300. Figure <ref type="figure" target="#fig_3">4</ref> demonstrates that almost all detected changepoints are close to the actual ones, and the corresponding error measures are displayed in Table <ref type="table">4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Normal distributions with distinct means that exhibit minimal overlap</head><p>We will analyze time series with jumps composed of normal distributions with distinct means that exhibit minimal overlap. The aim is to test the hypothesis of a shift in this series.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 5</head><p>Time intervals and normal distributions with distinct means that exhibit minimal overlap.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Time interval</head><p>Distribution</p><p>Table <ref type="table">5</ref> and Figure <ref type="figure" target="#fig_5">5</ref> indicate that the intended changepoints are 100, 200, and 300. In Figure <ref type="figure" target="#fig_6">6</ref>, we observe that almost all detected changepoints are in close proximity to the true ones, and Table <ref type="table" target="#tab_5">6</ref> displays the corresponding error measures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Normal distributions with identical means, but whose variances begin to differ gradually</head><p>We will examine time series with jumps that comprises normal distributions with identical means, but with gradually differing variances. The objective is to test the scale hypothesis on this series.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 7</head><p>Time intervals and normal distributions with same means, but whose variances gradually begin to differ.</p><p>Table <ref type="table">7</ref> and Figure <ref type="figure" target="#fig_7">7</ref> indicate that the intended changepoints are 100, 200, and 300. In Figure <ref type="figure" target="#fig_8">8</ref>, we observe that almost all detected changepoints are in close proximity to the true ones, and Table <ref type="table" target="#tab_7">8</ref> displays the corresponding error measures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Normal distributions with the same means, but with variances that differ more strongly</head><p>We consider a time series with jumps composed of normal distributions with the same means, but with variances that differ more strongly. On this series we test the scale hypothesis. As can be seen from Table <ref type="table" target="#tab_9">9</ref> and Figure <ref type="figure" target="#fig_9">9</ref>, the desired changepoint is 100. In Figure <ref type="figure" target="#fig_10">10</ref>, we see that the p-statistic takes values greater than 0.95 only in the first interval, and the measures of error are shown in Table <ref type="table" target="#tab_10">10</ref>. It is worth noting that hypotheses about scale are generally more challenging to test than those about shift. However, our algorithm is designed to detect changepoints even in scenarios where the scale hypothesis is being tested, allowing for a comprehensive analysis of the time series. By identifying these points, we can gain valuable insights into the behavior of the series and validate or reject our hypotheses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>In conclusion, our study has presented a novel algorithm for detecting changepoints in time series data that combines Fisher's linear discriminant and Petunin's statistics. Our numerical experiments have shown that this algorithm can accurately and quickly detect changes in a wide range of distribution functions.</p><p>Additionally, our algorithm has several advantages over existing changepoint detection methods. Firstly, our algorithm does not require any assumptions about the distribution of the data, making it more flexible and applicable to a wider range of scenarios. Secondly, the computational complexity of our algorithm is relatively low, which makes it efficient and scalable to larger datasets. Finally, our algorithm provides interpretable results, which can help researchers and practitioners to better understand the nature of changes in the time series data.</p><p>The implications of our results are significant, as our algorithm could have practical applications in monitoring the health status of COVID-19 patients in clinics. By accurately detecting changes in vital signs or symptoms, medical professionals could intervene earlier and improve patient outcomes. Furthermore, we have evaluated the performance of our algorithm using NRMSE, which measures the accuracy of the detected changepoints. Our NRMSE values demonstrate that our algorithm works accurately.</p><p>However, we acknowledge that there are limitations to our study, such as using simulated data in our experiments. Therefore, the performance of our algorithm may differ when applied to real-world data. 
Nevertheless, our algorithm provides a valuable contribution to the field of changepoint detection, and we plan to evaluate its performance on real-world data in future research.</p><p>We hope that our combination of Fisher's linear discriminant and Petunin's statistics will inspire further research in this area and contribute to improving the accuracy and efficiency of changepoint detection algorithms. Overall, our study provides a promising foundation for future research in this field.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Time series composed of samples from nearly non-overlapping uniform distributions with varying means and their respective changepoints.</figDesc><graphic coords="5,110.76,554.64,373.44,186.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Time series composed of samples derived from nearly non-overlapping uniform distributions that have distinct means and corresponding changepoints, as denoted by blue crosses using the algorithm.</figDesc><graphic coords="6,83.64,72.00,427.68,214.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Time series composed of samples derived from uniform distributions that exhibit distinct means, initially showing significant overlap, followed by mild overlap, and ultimately no overlap with respective changepoints.</figDesc><graphic coords="7,72.00,72.00,459.84,230.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Time series composed of samples derived from uniform distributions that exhibit distinct means, initially showing significant overlap, followed by mild overlap, and ultimately no overlap and corresponding changepoints, as denoted by blue crosses using the algorithm.</figDesc><graphic coords="7,83.40,355.92,428.16,214.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3</head><label>3</label><figDesc>Figure 3 and Table3show that the desired changepoints are 100, 200, and 300. Figure4demonstrates that the almost all detected changepoints are close to the actual ones, and the corresponding error measures are displayed in Table4.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Time series composed of samples derived from normal distributions that exhibit distinct means and almost no overlap.</figDesc><graphic coords="8,119.04,325.92,356.64,178.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Time series composed of samples derived from normal distributions that exhibit distinct means and almost no overlap and corresponding changepoints, as denoted by blue crosses using the algorithm</figDesc><graphic coords="8,115.08,544.80,364.80,182.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Time series consisting of samples from normal distributions with the same means, but with variances that gradually begin to differ</figDesc><graphic coords="9,72.00,477.24,474.36,237.36" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Time series composed of samples derived from normal distributions with identical means, but whose variances begin to differ gradually and corresponding changepoints, as denoted by blue crosses using the algorithm</figDesc><graphic coords="10,83.40,72.00,428.16,214.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Time series consisting of samples from normal distributions with the same means, but with variances that differ more strongly</figDesc><graphic coords="11,72.00,123.36,476.52,238.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Time series composed of samples derived from normal distributions with the same means but with variances that differ more strongly, with the changepoints detected by the algorithm denoted by blue crosses</figDesc><graphic coords="11,72.00,402.24,460.44,230.52" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Measures of error for nearly non-overlapping uniform distributions that have distinct means.</figDesc><table><row><cell>Error measure</cell><cell>Value</cell></row><row><cell>MAE</cell><cell>44.74</cell></row><row><cell>MSE</cell><cell>2002.73</cell></row><row><cell>MSD</cell><cell>20.73</cell></row><row><cell>RMSE</cell><cell>42.41</cell></row><row><cell>NRMSE</cell><cell>0.21</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc>Time intervals and uniform distributions with distinct means that initially display significant overlap, followed by mild overlap, and ultimately no overlap</figDesc><table><row><cell>Time interval</cell><cell>Distribution 𝑇 1</cell><cell>Distribution 𝑇 2</cell><cell>Distribution 𝑇 3</cell></row><row><cell>0-99</cell><cell>U(60;70)</cell><cell>U(96.0;97.0)</cell><cell>U(36.4;36.7)</cell></row><row><cell>100-199</cell><cell>U(63;73)</cell><cell>U(96.3;97.3)</cell><cell>U(36.5;36.8)</cell></row><row><cell>200-299</cell><cell>U(70;80)</cell><cell>U(97.0;98.0)</cell><cell>U(36.7;37.0)</cell></row><row><cell>300-399</cell><cell>U(85;95)</cell><cell>U(99.0;99.9)</cell><cell>U(37.5;37.8)</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 6</head><label>6</label><figDesc>Measures of error for normal distributions with distinct means that exhibit minimal overlap.</figDesc><table><row><cell>Error measure</cell><cell>Value</cell></row><row><cell>MAE</cell><cell>43.67</cell></row><row><cell>MSE</cell><cell>2077.24</cell></row><row><cell>MSD</cell><cell>22.14</cell></row><row><cell>RMSE</cell><cell>42.81</cell></row><row><cell>NRMSE</cell><cell>0.21</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>Table 8</head><label>8</label><figDesc>Error measures for normal distributions with the same means, but with variances that gradually begin to differ</figDesc><table><row><cell>Error measure</cell><cell>Value</cell></row><row><cell>MAE</cell><cell>47.75</cell></row><row><cell>MSE</cell><cell>2468.69</cell></row><row><cell>MSD</cell><cell>19.96</cell></row><row><cell>RMSE</cell><cell>46.73</cell></row><row><cell>NRMSE</cell><cell>0.23</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_9"><head>Table 9</head><label>9</label><figDesc>Time intervals and normal distributions with the same means, but with variances that differ more strongly</figDesc><table><row><cell>Time interval</cell><cell>Distribution 𝑇 1</cell><cell>Distribution 𝑇 2</cell><cell>Distribution 𝑇 3</cell></row><row><cell>0-99</cell><cell>N(70;1)</cell><cell>N(97.0;0.10)</cell><cell>N(36.55;0.05)</cell></row><row><cell>100-199</cell><cell>N(70;5)</cell><cell>N(97.0;0.50)</cell><cell>N(36.55;0.25)</cell></row><row><cell>200-299</cell><cell>N(70;7)</cell><cell>N(97.0;1.00)</cell><cell>N(36.55;0.5)</cell></row><row><cell>300-399</cell><cell>N(70;10)</cell><cell>N(97.0;1.50)</cell><cell>N(36.55;0.75)</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_10"><head>Table 10</head><label>10</label><figDesc>Error measures for normal distributions with the same means, but with variances that differ more strongly</figDesc><table><row><cell>Error measure</cell><cell>Value</cell></row><row><cell>MAE</cell><cell>43.49</cell></row><row><cell>MSE</cell><cell>2166.74</cell></row><row><cell>MSD</cell><cell>20.34</cell></row><row><cell>RMSE</cell><cell>43.19</cell></row><row><cell>NRMSE</cell><cell>0.22</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Nonparametric population equivalence test based on measure of closeness between samples</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Klyushin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">I</forename><surname>Petunin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Ukrainian Mathematical Journal</title>
		<imprint>
			<biblScope unit="page" from="147" to="163" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Nonparametric Test for Change-Point Detection of IoT Time-Series Data</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Klyushin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Urazovskyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">A Fusion of Artificial Intelligence and Internet of Things for Emerging Cyber Systems, Intelligent Systems Reference Library</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Kumar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Obaid</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Cengiz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Balas</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">210</biblScope>
			<biblScope unit="page" from="99" to="122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The Use of Multiple Measurements in Taxonomic Problems</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Fisher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Eugenics</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="179" to="188" />
			<date type="published" when="1936">1936</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Van Der Waerden</surname></persName>
		</author>
		<title level="m">Mathematische Statistik</title>
				<meeting><address><addrLine>Berlin and New York</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1957">1957. 1965. 1969</date>
		</imprint>
	</monogr>
	<note>English transl. of 2nd ed.</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Computer diagnosis of breast cancer</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">I</forename><surname>Petunin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Klyushin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Ganina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">V</forename><surname>Borodai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">I</forename><surname>Andrushkiv</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of Kyiv University, Ser. cybernetics</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="58" to="68" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Selective review of offline changepoint detection methods</title>
		<author>
			<persName><forename type="first">C</forename><surname>Truong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Oudre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Vayatis</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.sigpro.2019.107299</idno>
	</analytic>
	<monogr>
		<title level="j">Signal Processing</title>
		<imprint>
			<biblScope unit="volume">167</biblScope>
			<biblScope unit="page">107299</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Fisher Discriminant Analysis with Kernels</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mika</surname></persName>
		</author>
		<idno type="DOI">10.1109/NNSP.1999.788121</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Neural Networks for Signal Processing IX</title>
				<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="41" to="48" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Change Detection in Multivariate Datastreams: Likelihood and Detectability Loss</title>
		<author>
			<persName><forename type="first">C</forename><surname>Alippi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Boracchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Carrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Roveri</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1510.04850</idno>
	</analytic>
	<monogr>
		<title level="m">Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1368" to="1374" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Online Changepoint Detection on a Budget</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sriharsha</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICDMW53433.2021.00057</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Data Mining Workshops (ICDMW)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="414" to="420" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Shin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramdas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rinaldo</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2203.03532</idno>
		<idno type="arXiv">arXiv:2203.03532v1</idno>
		<title level="m">E-detectors: a nonparametric framework for online changepoint detection</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Online Bayesian inference for multiple changepoints and risk assessment</title>
		<author>
			<persName><forename type="first">O</forename><surname>Sorba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Geissler</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2106.05834</idno>
		<idno type="arXiv">arXiv:2106.05834v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Network Clustering for Latent State and Changepoint Detection</title>
		<author>
			<persName><forename type="first">M</forename><surname>Navarro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">I</forename><surname>Allen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Weylandt</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2111.01273</idno>
		<idno type="arXiv">arXiv:2111.01273v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">A Change-Point Based Control Chart for Detecting Sparse Changes in High-Dimensional Heteroscedastic Data</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">M</forename><surname>Zwetsloot</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2101.09424</idno>
		<idno type="arXiv">arXiv:2101.09424v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Monitoring Deforestation Using Multivariate Bayesian Online Changepoint Detection with Outliers</title>
		<author>
			<persName><forename type="first">L</forename><surname>Wendelberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Reich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wilson</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2112.12899v2</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Bayesian Online Changepoint Detection</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Adams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>MacKay</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.0710.3742</idno>
		<idno type="arXiv">arXiv:0710.3742v1</idno>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Change-point Detection for Piecewise Exponential Models</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cooney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>White</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2112.03962</idno>
		<idno type="arXiv">arXiv:2112.03962v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Distribution-Free Changepoint Detection Tests Based on the Breaking of Records</title>
		<author>
			<persName><forename type="first">J</forename><surname>Castillo-Mateo</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2105.08186</idno>
		<idno type="arXiv">arXiv:2105.08186v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Changepoint detection on a graph of time series</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">L</forename><surname>Hallgren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Heard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J M</forename><surname>Turcotte</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2102.04112</idno>
		<idno type="arXiv">arXiv:2102.04112v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">A Greedy Graph Search Algorithm Based on Changepoint Analysis for Automatic QRS Complex Detection</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fotoohinasab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hocking</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Afghah</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compbiomed.2021.104208</idno>
	</analytic>
	<monogr>
		<title level="j">Computers in Biology and Medicine</title>
		<imprint>
			<biblScope unit="volume">130</biblScope>
			<biblScope unit="page">104208</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Changepoint Detection in the Presence of Outliers</title>
		<author>
			<persName><forename type="first">P</forename><surname>Fearnhead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rigaill</surname></persName>
		</author>
		<idno type="DOI">10.1080/01621459.2017.1385466</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Statistical Association</title>
		<imprint>
			<biblScope unit="volume">114</biblScope>
			<biblScope unit="page" from="169" to="183" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Rank-based multiple change-point detection in multivariate time series</title>
		<author>
			<persName><forename type="first">F</forename><surname>Harlé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Chatelain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gouy-Pailler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Achard</surname></persName>
		</author>
		<idno type="DOI">10.5281/zenodo.43927</idno>
	</analytic>
	<monogr>
		<title level="m">22nd European Signal Processing Conference (EUSIPCO)</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1337" to="1341" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Sign Segmentation with Changepoint-Modulated Pseudo-Labelling</title>
		<author>
			<persName><forename type="first">K</forename><surname>Renz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">C</forename><surname>Stache</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fox</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Albanie</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2104.13817</idno>
		<idno type="arXiv">arXiv:2104.13817v1</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Fast Online Changepoint Detection via Functional Pruning CUSUM statistics</title>
		<author>
			<persName><forename type="first">G</forename><surname>Romano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Eckley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fearnhead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rigaill</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2110.08205</idno>
		<idno type="arXiv">arXiv:2110.08205v2</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
