<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The Use of The Kolmogorov-Wiener Filter for Prediction of Heavy-Tail Stationary Processes</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vyacheslav</forename><surname>Gorev</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Alexander</forename><surname>Gusev</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Valerii</forename><surname>Korniienko</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Dnipro University of Technology</orgName>
								<address>
									<addrLine>19 Dmytra Yavornytskoho Ave</addrLine>
									<postCode>49005</postCode>
									<settlement>Dnipro</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">International Workshop on Intelligent Information Technologies and Systems of Information Security</orgName>
								<address>
									<addrLine>March 23-25</addrLine>
									<postCode>2022</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The Use of The Kolmogorov-Wiener Filter for Prediction of Heavy-Tail Stationary Processes</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6162EA3C021E603E765382A5E3AC184C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:25+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Kolmogorov-Wiener filter</term>
					<term>prediction</term>
					<term>heavy-tail stationary random process</term>
					<term>power-law correlation function</term>
					<term>telecommunication traffic</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We investigate the possibility of the practical use of the Kolmogorov-Wiener filter for the prediction of a heavy-tail stationary random process. A discrete process and a discrete filter are considered. Nowadays telecommunication traffic in telecommunication systems with data packet transfer is considered to be a heavy-tail random process, so the problem under consideration may be applied to the prediction of telecommunication traffic, which may be important, for example, for the prevention of network congestion, for the maximization of the network utilization rate and for cyber security, because a comparison of the actual traffic with the predicted one may help to detect cyber-attacks. There are a lot of different and rather sophisticated approaches to traffic prediction, for example, the ARIMA approach, neural network approaches and so on, which may be applicable to the prediction of a non-stationary traffic in various cases. However, in the rather simple case of a stationary telecommunication traffic, more simple approaches may be applied. For example, such a simple prediction approach as the Kolmogorov-Wiener filter is not sufficiently developed in the literature. In this paper it is shown that if a stationary heavy-tail random process is smooth enough, then the Kolmogorov-Wiener filter may be used for its practical prediction. The obtained results may be taken into account for practical telecommunication traffic prediction in telecommunication systems with data packet transfer.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction and related works</head><p>The problem of telecommunication traffic prediction is important for telecommunications. For example, it is important for the prevention of network congestion and for the maximization of the network utilization rate <ref type="bibr" target="#b0">[1]</ref>; it is significant for understanding future market dynamics and reducing the decision risks <ref type="bibr" target="#b1">[2]</ref>. The telecommunication traffic prediction is also important for cyber security <ref type="bibr" target="#b2">[3]</ref> because the comparison of the actual traffic with the predicted one may help to detect cyber-attacks.</p><p>There are a lot of different approaches to traffic prediction. For example, the following ones can be indicated: Auto Regressive Integrated Moving Average (ARIMA), Markov Modulated Poisson Process models (MMPP), Kalman filtering, Seasonal ARIMA (SA), a neural network approach (including deep neural networks <ref type="bibr" target="#b3">[4]</ref>), wavelet transforms <ref type="bibr" target="#b0">[1]</ref>, the least-squares support vector machine (LSSVM), gray models <ref type="bibr" target="#b1">[2]</ref>, Holt-Winters models <ref type="bibr" target="#b2">[3]</ref>. Of course, rather complicated approaches should be used for non-stationary randomly fluctuating traffic prediction. But if the traffic is stationary and rather smooth, sophisticated approaches may not be needed. For example, in <ref type="bibr" target="#b1">[2]</ref> some methods are presented for a description of rather simple cases. In <ref type="bibr" target="#b1">[2]</ref> it is stressed that in stationary cases the ARMA approach may be used too, and in the case of a smooth monotone process the gray model may be applied.</p><p>As is known <ref type="bibr" target="#b4">[5]</ref>, such a simple filter as the Kolmogorov-Wiener one may be used for the prediction of stationary random processes. 
However, as far as we know, such an approach is not sufficiently developed in the literature for traffic prediction even for rather simple cases. The Kolmogorov-Wiener filter is widely used for signal extraction in different fields of knowledge <ref type="bibr" target="#b5">[6]</ref>; in particular, it is used in econometric analyses <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> and in image restoration <ref type="bibr" target="#b8">[9]</ref>. The theoretical fundamentals of the Kolmogorov-Wiener filter for continuous telecommunication traffic prediction are developed in our recent paper <ref type="bibr" target="#b9">[10]</ref>. The paper <ref type="bibr" target="#b9">[10]</ref> is dedicated to the solution of the Wiener-Hopf integral equation for the unknown filter weight function for two telecommunication traffic models: the power-law structure function model and the model of fractional Gaussian noise; the solutions based on the truncated polynomial expansion method and the truncated trigonometric Fourier series method are obtained.</p><p>However, the possibility of using the Kolmogorov-Wiener filter for practical traffic prediction remains an open question. The aim of this work is to show that the Kolmogorov-Wiener filter may be applicable to traffic prediction if the traffic is stationary and smooth enough. As is known <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>, the telecommunication traffic in systems with data packet transfer is considered to be a self-similar heavy-tail random process. So, if we show that the Kolmogorov-Wiener filter is applicable to the prediction of simulated data of a stationary random self-similar heavy-tail process, then we will be able to conclude that it may be applied to practical telecommunication traffic prediction. In this paper we restrict ourselves to the investigation of a discrete process and a discrete filter. 
The corresponding simulated data may be generated via the symmetric moving average approach <ref type="bibr" target="#b12">[13]</ref>; the generated process is in fact similar to the fractional Gaussian noise process, which may describe telecommunication traffic, see <ref type="bibr" target="#b13">[14]</ref>.</p><p>The paper is organized as follows. In Sec. 1 the introduction and the literature review are given. In Sec. 2 the discrete Kolmogorov-Wiener filter and the symmetric moving average approach for obtaining simulated stationary heavy-tail data are described. In Sec. 3 heavy-tail simulated data are obtained. In Sec. 4 the prediction results are described, and in Sec. 5 conclusions are made.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Description of the discrete Kolmogorov-Wiener filter and of the method of generation of heavy-tail simulated data</head><p>Let the filter input 𝑥′ 𝑡 be a stationary random process which is the sum of the signal 𝑠 𝑡 and the noise 𝑛 𝑡 :</p><p>𝑥′ 𝑡 = 𝑠 𝑡 + 𝑛 𝑡 .</p><p>(1) The Kolmogorov-Wiener filter output 𝑦 𝑡 should be «the closest» to the value 𝑠 𝑡+𝑧 , where 𝑧 is the number of points for which the prediction is made, so we have the following requirement:</p><p>〈(𝑦 𝑡 − 𝑠 𝑡+𝑧 ) 2 〉 → min.</p><p>(2) The correlation function 𝑅 𝑥′ (𝑡) of the filter input 𝑥′ 𝑡 and the cross-correlation function 𝑅 𝑠𝑥′ (𝑡) of the processes 𝑠 𝑡 and 𝑥′ 𝑡 are considered to be given. The Kolmogorov-Wiener filter is considered to be a linear one, so the filter output is expressed in terms of the filter input as follows:</p><formula xml:id="formula_0">𝑦 𝑡 = ∑ ℎ 𝑖 𝑥′ 𝑡−𝑖 𝑇 𝑖=0 (3)</formula><p>where ℎ 𝑖 are the unknown filter weight coefficients and the input data are given for 𝑡 = 0,1,2, … , 𝑇. The coefficients ℎ 𝑖 should minimize expression (2). The term 〈𝑠 𝑡+𝑧 2 〉 is a constant and does not depend on the weight coefficients ℎ 𝑖 , so (2) can be rewritten as</p><formula xml:id="formula_1">〈𝑦 𝑡 2 〉 − 2〈𝑦 𝑡 𝑠 𝑡+𝑧 〉 → min,<label>(4)</label></formula><p>The function 𝑓(ℎ 0 , ℎ 1 , … , ℎ 𝑇 ) is a quadratic one, and thus it has one minimum, which is described by the conditions 𝜕𝑓(ℎ 0 , ℎ 1 , … , ℎ 𝑇 )/𝜕ℎ 𝑘 = 0; 𝑘 = 0,1,2, … , 𝑇.</p><p>These conditions, with account for the evenness of the correlation function and the fact that</p><formula xml:id="formula_3">𝜕ℎ 𝑖 /𝜕ℎ 𝑗 = 𝛿 𝑖𝑗 = { 1, 𝑖 = 𝑗 0, 𝑖 ≠ 𝑗 (10) lead to ∑ ℎ 𝑖 𝑅 𝑥′ (𝑖 − 𝑘) 𝑇 𝑖=0 = 𝑅 𝑠𝑥′ (𝑘 + 𝑧); 𝑘 = 0,1,2, … , 𝑇,<label>(11)</label></formula><p>which is a set of linear equations in the unknown coefficients ℎ 𝑖 . 
In matrix form, this set may be presented as</p><formula xml:id="formula_4">𝑅 𝑥′ • ℎ = 𝑅 𝑠𝑥′<label>(12)</label></formula><p>where</p><formula xml:id="formula_5">𝑅 𝑥′ = ( 𝑅 𝑥′ (0) 𝑅 𝑥′ (1) 𝑅 𝑥′ (2) ⋮ 𝑅 𝑥′ (𝑇) 𝑅 𝑥′ (1) 𝑅 𝑥′ (0) 𝑅 𝑥′ (1) ⋮ 𝑅 𝑥′ (𝑇 − 1) 𝑅 𝑥′ (2) 𝑅 𝑥′ (1) 𝑅 𝑥′ (0) ⋮ 𝑅 𝑥′ (𝑇 − 2) ⋯ ⋯ ⋯ ⋱ ⋯ 𝑅 𝑥′ (𝑇) 𝑅 𝑥′ (𝑇 − 1) 𝑅 𝑥′ (𝑇 − 2) ⋮ 𝑅 𝑥′ (0) )<label>(13)</label></formula><p>is the correlation matrix <ref type="bibr" target="#b4">[5]</ref>, ℎ is the vector column of the unknown weight coefficients, and 𝑅 𝑠𝑥′ is the vector column of the free terms:</p><formula xml:id="formula_6">ℎ = ( ℎ 0 ℎ 1 ℎ 2 ⋮ ℎ 𝑇) , 𝑅 𝑠𝑥′ = ( 𝑅 𝑠𝑥′ (𝑧) 𝑅 𝑠𝑥′ (𝑧 + 1) 𝑅 𝑠𝑥′ (𝑧 + 2) ⋮ 𝑅 𝑠𝑥′ (𝑧 + 𝑇)) .<label>(14)</label></formula><p>So, the vector column ℎ may be found as ℎ = 𝑅 𝑥′ −1 • 𝑅 𝑠𝑥′ .</p><p>(15) Then the filter output may be obtained by formula (3).</p><p>It should be noticed that all the above-mentioned calculations are described in <ref type="bibr" target="#b5">[6]</ref>. The Kolmogorov-Wiener filter may be used both for the extraction of a signal from the sum of a signal and a noise and for the signal prediction. In the case where the input signal is non-noisy, the Kolmogorov-Wiener filter may be used for the prediction of the stationary process given at the filter input. In the non-noisy case, the filter weight coefficients are given by formula (15) with account for the fact that 𝑅 𝑠𝑥′ = (𝑅 𝑥′ (𝑧) 𝑅 𝑥′ (𝑧 + 1) 𝑅 𝑥′ (𝑧 + 2) … 𝑅 𝑥′ (𝑧 + 𝑇)) 𝑇 .</p><p>(16) Now let us describe the method of the generation of heavy-tail simulated data which is used in the paper. We use the symmetric moving average approach, which is described in detail in <ref type="bibr" target="#b12">[13]</ref>. Such an approach was chosen because of its simplicity.</p><p>Let 𝑉 𝑡 be a stationary white noise process with an average value equal to zero and a variance equal to 1. 
Then a heavy-tail process 𝑋 𝑖 similar to the fractional Gaussian noise may be generated as follows <ref type="bibr" target="#b12">[13]</ref>:</p><formula xml:id="formula_8">𝑋 𝑖 = ∑ 𝑎 |𝑗| 𝑉 𝑖+𝑗 𝑞 𝑗=−𝑞 = 𝑎 𝑞 𝑉 𝑖−𝑞 + 𝑎 𝑞−1 𝑉 𝑖−𝑞+1 + ⋯ + 𝑎 𝑞 𝑉 𝑖+𝑞 ,<label>(17)</label></formula><p>theoretically, 𝑞 should be infinite; in practical calculations it may be a rather large, but finite number. The coefficients 𝑎 𝑗 are as follows:</p><formula xml:id="formula_9">𝑎 0 = √((2 − 2𝐻)𝛾 0 )/(1.5 − 𝐻)<label>(18)</label></formula><p>and</p><formula xml:id="formula_10">𝑎 𝑗 = (𝑎 0 /2)((𝑗 + 1) 𝐻+0.5 + (𝑗 − 1) 𝐻+0.5 − 2𝑗 𝐻+0.5 ),<label>(19)</label></formula><p>here, 𝛾 0 is the variance and 𝐻 is the Hurst exponent of the process 𝑋 𝑖 . The number 𝑞 may be very large; it is estimated as follows <ref type="bibr" target="#b12">[13]</ref>:</p><formula xml:id="formula_11">𝑞 ≥ max (𝑚, ((𝐻 2 − 0.25)/(2𝛽)) 1/(1.5−𝐻) )<label>(20)</label></formula><p>where 𝑚 is the number of correlation function points of the process 𝑋 𝑖 which should be obtained and the small number 𝛽 is in fact the given accuracy of the coefficients 𝑎 𝑗 in (17); the values 𝑎 𝑗&gt;𝑞 should be less than 𝛽𝑎 0 . The accuracy of this method depends on 𝑞, and the method is not exact even in the case where 𝑞 → ∞. However, for a rather large 𝑞 the method may lead to good practical results <ref type="bibr" target="#b12">[13]</ref>.</p><p>3. The generation of non-smooth and smooth heavy-tail simulated data</p><p>10 6 points of the white noise process 𝑉 𝑡 with an average value equal to 0 and a variance equal to 1 are generated on the basis of the generator built into the Wolfram Mathematica package. The following parameters were chosen: 𝑚 = 10 5 , 𝛽 = 10 −4 , 𝐻 = 0.8, 𝛾 0 = 1.</p><p>(21) The corresponding number 𝑞 = 3 • 10 5 is chosen. In fact, the inequality (20) holds even for 𝑞 = 10 5 ; the value 𝑞 = 3 • 10 5 was chosen for higher accuracy. 
On the basis of the idea (17)-(19), 10 5 points of the process 𝑋 𝑖 were generated as follows:</p><formula xml:id="formula_13">𝑋 𝑖 = ∑ 𝑎 |𝑗| 𝑉 𝑖+𝑗+𝑞 𝑞 𝑗=−𝑞 ,<label>(22)</label></formula><p>in fact, since 𝑉 𝑡 is white noise, the quantities 𝑉 𝑖+𝑗+𝑞 and 𝑉 𝑖+𝑗 are statistically equivalent, so it does not matter whether formula (17) or formula (22) is used; formula (22) is chosen in order to avoid indices beyond the bounds of the array 𝑉 𝑖 . The coefficients 𝑎 𝑗 are calculated on the basis of (19).</p><p>The average value of 𝑋 𝑖 is close to zero. We have to construct simulated data that may describe telecommunication traffic, which is obviously non-negative. So we build the array 𝑥 𝑖 as follows:</p><p>𝑥 𝑖 = 𝑋 𝑖 + |min(𝑋)| + 10 −3 , (23) a small summand 10 −3 is added in order to avoid obtaining an infinite value of the prediction mean absolute percentage error (MAPE). The process 𝑥 𝑖 is a non-negative random stationary heavy-tail process; its graph is given in Fig. <ref type="figure">1</ref>.</p><p>Let us make sure that the generated process 𝑥 𝑖 is a heavy-tail one. Let us consider the corresponding centralized process 𝑥𝑐 𝑖 :</p><p>𝑥𝑐 𝑖 = 𝑥 𝑖 − 〈𝑥〉 (24) where the average value 〈𝑥〉 is</p><formula xml:id="formula_15">〈𝑥〉 = 1 10 5 ∑ 𝑥 𝑖 10 5 𝑖=1 ,<label>(25)</label></formula><p>here we take into account the fact that the number of points of the generated array 𝑥 𝑖 is equal to 10 5 . The correlation function of the process 𝑥𝑐 𝑖 is built as follows:</p><formula xml:id="formula_16">𝑅 𝑥 (𝜏) = 〈𝑥𝑐 𝑖 • 𝑥𝑐 𝑖+𝜏 〉 = 1 10 5 − 𝜏 ∑ (𝑥𝑐 𝑖 • 𝑥𝑐 𝑖+𝜏 ) 10 5 −𝜏 𝑖=1 .<label>(26)</label></formula><p>The corresponding correlation function and its least-square fit are given in Fig. <ref type="figure">2</ref>. The least-square fit is sought in the form 𝑅 fit (𝑡) = 𝑎 • 𝑡 𝑏 , (27) and the following numerical coefficients were obtained: 𝑎 = 0.39, 𝑏 = −0.44, (28) here, the coefficients are rounded off to two significant digits. 
So, 𝑅 fit (𝑡) = 0.39 • 𝑡 −0.44 , (29) and on the basis of formula (29) and Fig. <ref type="figure">2</ref> one can conclude that the correlation function exhibits a power-law decay rather than an exponential one. So, indeed, the generated process is a heavy-tail one.</p><p>It should also be noticed that according to <ref type="bibr" target="#b12">[13]</ref> the following property should be valid for 𝑡 ≥ 1 : 𝑅 𝑥 (𝑡)~𝑡 2𝐻−2 , (30) so, according to the least-square fit 2𝐻 − 2 = −0.44, (31) which leads to 𝐻 = 0.78, (32) which is very close to the value 0.8, see (21). The variance of the process is equal to 𝑅 𝑥 (0) = 0.93, (33) which is rather close to the value 𝛾 0 = 1, see (21). So one can conclude that the generated process is close to the fractional Gaussian noise with the given variance and Hurst exponent.</p><p>The generated process is non-smooth, i.e. it is highly fluctuating, so it is rather difficult to predict. It is therefore reasonable to investigate smooth heavy-tail processes. In order to obtain smoother processes, we use a very simple smoothing algorithm <ref type="bibr" target="#b14">[15]</ref>:</p><formula xml:id="formula_18">𝑋 ̃𝑖 = 1 2𝑙 + 1 ∑ 𝑋 𝑖+𝑗 𝑙 𝑗=−𝑙 (34)</formula><p>where 𝑋 ̃𝑖 are the values of the smooth process; expression (34) is valid for every point except for the first 𝑙 and the last 𝑙 ones. The first 𝑙 and the last 𝑙 points of the process 𝑋 ̃𝑖 may be obtained as the corresponding linear least-square fit of the first 𝑙 and the last 𝑙 points of the process 𝑋 𝑖 , respectively. The corresponding non-negative process may be expressed similarly to (<ref type="formula">23</ref>), see (35)-(37).</p><p>The simulated data for the process 𝑥 ̃𝑖 for 𝑙 = 3 are given in Fig. <ref type="figure">3</ref>. It should be stressed that the obtained smooth process 𝑥 ̃𝑖 is also a heavy-tail one. Let us consider the corresponding correlation function:</p><formula xml:id="formula_19">𝑅 𝑥 ̃(𝜏) = 〈𝑥𝑐 ̃𝑖 • 𝑥𝑐 ̃𝑖+𝜏 〉 = 1 10 5 − 𝜏 ∑ (𝑥𝑐 ̃𝑖 • 𝑥𝑐 ̃𝑖+𝜏 ) 10 5 −𝜏 𝑖=1 . 
<label>(38)</label></formula><p>For example, for 𝑙 = 3 the following correlation function and its fit are obtained, see Fig. <ref type="figure">4</ref>. The least-square fit is sought in the form (27); the following numerical coefficients were obtained: 𝑎 = 0.43, 𝑏 = −0.46, (39) here, the coefficients are rounded off to two significant digits. So, 𝑅 fit (𝑡) = 0.43 • 𝑡 −0.46 . (40) As can be seen from Fig. <ref type="figure">4</ref>, the correlation function of the smooth process is also well described by a power-law function; hence the obtained smooth process 𝑥 ̃𝑖 is also a heavy-tail one and, in fact, may also be roughly considered as fractional Gaussian noise.</p></div>
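As an illustration, the generation and smoothing procedure of Secs. 2-3 can be sketched in Python. This is a minimal sketch, not the authors' code: the function names are ours, formula (18) is read as 𝑎 0 = √((2 − 2𝐻)𝛾 0 )/(1.5 − 𝐻), and, for simplicity, the first and last 𝑙 points are left unsmoothed instead of being replaced by the linear least-square fit described above.

```python
import numpy as np

def sma_coefficients(q, H, gamma0=1.0):
    """Symmetric moving average weights a_0..a_q, eqs. (18)-(19) of [13]."""
    a = np.empty(q + 1)
    a[0] = np.sqrt((2.0 - 2.0 * H) * gamma0) / (1.5 - H)
    j = np.arange(1, q + 1, dtype=float)
    a[1:] = 0.5 * a[0] * ((j + 1) ** (H + 0.5)
                          + (j - 1) ** (H + 0.5)
                          - 2.0 * j ** (H + 0.5))
    return a

def generate_heavy_tail(n, q, H, gamma0=1.0, seed=None):
    """X_i = sum_{j=-q}^{q} a_|j| V_{i+j+q}, eq. (22): an fGn-like process."""
    rng = np.random.default_rng(seed)
    a = sma_coefficients(q, H, gamma0)
    kernel = np.concatenate([a[::-1], a[1:]])    # a_q, ..., a_1, a_0, a_1, ..., a_q
    v = rng.standard_normal(n + 2 * q)           # white noise V_t: zero mean, unit variance
    return np.convolve(v, kernel, mode="valid")  # length n; kernel is symmetric

def smooth(x, l):
    """Centered moving average, eq. (34); endpoints kept as-is in this sketch."""
    out = x.astype(float).copy()
    out[l:len(x) - l] = np.convolve(x, np.ones(2 * l + 1) / (2 * l + 1), mode="valid")
    return out

# Shift to a non-negative process as in eq. (23)
X = generate_heavy_tail(n=10_000, q=2_000, H=0.8, seed=1)
x = X + abs(X.min()) + 1e-3
```

Here `n` and `q` are kept small for the sketch; the paper itself uses 10^5 generated points and 𝑞 = 3 • 10^5 with the parameters (21).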
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Prediction on the basis of the Kolmogorov-Wiener filter</head><p>The prediction for non-smooth data is built as follows. In fact, the prediction for the centralized process is used. The filter weight coefficients are built on the basis of (<ref type="formula" target="#formula_5">13</ref>)-(<ref type="formula">16</ref>); the corresponding correlation function is taken in the form (26).</p><p>First of all, the points 𝑥𝑐 1 , 𝑥𝑐 2 ,…, 𝑥𝑐 𝑇+1 of the simulated process 𝑥𝑐 are taken as the filter input, and the points 𝑥𝑐 𝑇+2 , 𝑥𝑐 𝑇+3 ,…, 𝑥𝑐 𝑇+𝑧+1 are predicted. Then the points 𝑥𝑐 2 , 𝑥𝑐 3 , … , 𝑥𝑐 𝑇+2 are taken from the simulated data, and the points 𝑥𝑐 𝑇+3 , 𝑥𝑐 𝑇+4 ,…, 𝑥𝑐 𝑇+𝑧+2 are predicted, and so on.</p><p>At the 𝑖 th iteration of the algorithm the prediction is calculated as follows. The filter input data are</p><formula xml:id="formula_21">𝑥′ 0 = 𝑥𝑐 𝑖 , 𝑥′ 1 = 𝑥𝑐 𝑖+1 , … , 𝑥′ 𝑇 = 𝑥𝑐 𝑖+𝑇 ,<label>(41)</label></formula><p>so 𝑥′ 𝑗 = 𝑥𝑐 𝑖+𝑗 .</p><p>(42) The filter output 𝑦 𝑡 is the predicted value for 𝑥′ 𝑡+𝑧 (the non-noisy case is investigated). According to (3) we have</p><formula xml:id="formula_22">𝑦 𝑡 = ∑ ℎ 𝑘 𝑥′ 𝑡−𝑘 , 𝑡 𝑘=0<label>(43)</label></formula><p>here, the upper bound of summation is changed in order to avoid obtaining indices beyond the array of the input data. Such a change of the bound does not lead to a significant error for the prediction under consideration. On the basis of (41)-(43) one can conclude that</p><formula xml:id="formula_23">𝑥𝑐 ̂𝑖+𝑡+𝑧 = ∑ ℎ 𝑘 𝑥𝑐 𝑡+𝑖−𝑘 𝑡 𝑘=0 (44)</formula><p>where 𝑥𝑐 ̂𝑖+𝑡+𝑧 is the predicted value of 𝑥𝑐 𝑖+𝑡+𝑧 . Obviously, the prediction is made only for the values 𝑖 + 𝑡 + 𝑧 = 𝑇 + 1 + 𝑖, 𝑇 + 2 + 𝑖, … , 𝑇 + 𝑧 + 𝑖. We should also remember that we should make the prediction for the non-negative simulated data. 
So, the predicted non-negative data may be expressed as</p><formula xml:id="formula_24">𝑥 ̂𝑖+𝑡+𝑧 = 𝑥𝑐 ̂𝑖+𝑡+𝑧 + 〈𝑥〉 = 〈𝑥〉 + ∑ ℎ 𝑘 𝑥𝑐 𝑡+𝑖−𝑘 𝑡 𝑘=0 , 𝑡 = 𝑇 + 1 − 𝑧, … , 𝑇.<label>(45)</label></formula><p>The MAPE and MAE errors for the corresponding prediction are calculated as</p><formula xml:id="formula_25">MAPE = 1 𝑧 ∑ | 𝑥 ̂𝑖+𝑡+𝑧 − 𝑥 𝑖+𝑡+𝑧 𝑥 𝑖+𝑡+𝑧 | 𝑇 𝑡=𝑇+1−𝑧 • 100%<label>(46)</label></formula><p>and</p><formula xml:id="formula_26">MAE = 1 𝑧 ∑ |𝑥 ̂𝑖+𝑡+𝑧 − 𝑥 𝑖+𝑡+𝑧 | 𝑇 𝑡=𝑇+1−𝑧 .<label>(47)</label></formula><p>The corresponding prediction errors are calculated at each iteration. Let us say a few words about why the above-mentioned change of the upper bound of summation has no significant effect on the result. In order to make the prediction for 𝑥𝑐 ̂𝑇+1+𝑖 one should calculate the sum of 𝑇 + 2 − 𝑧 summands; in order to make the prediction for 𝑥𝑐 ̂𝑇+2+𝑖 one should calculate the sum of 𝑇 + 3 − 𝑧 summands, and so on. We obviously deal with the case where 𝑇 ≫ 𝑧, so the value 𝑇 + 1 − 𝑧 is rather close to 𝑇 + 1, and the above-mentioned change of the upper bound is not significant for the calculations.</p><p>The prediction for the smooth heavy-tail process is made similarly at each iteration, on the basis of the corresponding correlation function (38).</p><p>The following results are obtained. The MAPE and MAE histograms in the case of the non-smooth process are shown in Fig. <ref type="figure" target="#fig_0">5</ref>. The y-axes of the histograms indicate the number of MAPE and MAE values that belong to the corresponding intervals. For the non-smooth process the average MAPE is 24.7%, and the average MAE is 0.70 (the average value of the process is 〈𝑥〉 = 3.88). It should also be stressed that for some points the MAPE is more than 100%. So one can conclude that the prediction accuracy is not high in the case of the non-smooth process. 
So, if the process is a highly fluctuating one, then the prediction based on the Kolmogorov-Wiener filter may not lead to good results.</p><p>But if the process is rather smooth, the prediction results are much better. The corresponding results are given in Table <ref type="table" target="#tab_0">1</ref>. In Table <ref type="table" target="#tab_0">1</ref> 𝑙 is the parameter used in (34), i.e. 2𝑙 + 1 is the number of smoothing points. As can be seen, the smoother the process is, the better the prediction results are: the prediction accuracy increases with 𝑙. The corresponding histograms for 𝑙 = 3 are given in Fig. <ref type="figure">6</ref>. The predictions for 𝑙 ≥ 6 have an average MAPE value less than 3%.</p></div>
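The prediction scheme of eqs. (13)-(16) and (41)-(47) can be sketched in Python as follows. This is a minimal illustration with hypothetical function names, assuming the non-noisy case (16) and using the full window (upper summation bound 𝑡 = 𝑇) rather than the truncated bound of (43).

```python
import numpy as np

def empirical_acf(xc, m):
    """R(tau) = <xc_i * xc_{i+tau}>, tau = 0..m-1, as in eq. (26)."""
    n = len(xc)
    return np.array([(xc[: n - tau] * xc[tau:]).mean() for tau in range(m)])

def kw_weights(R, T, z):
    """Solve eq. (11) for h: sum_i h_i R(i - k) = R(k + z), k = 0..T."""
    idx = np.arange(T + 1)
    Rmat = R[np.abs(idx[:, None] - idx[None, :])]  # Toeplitz correlation matrix, eq. (13)
    rhs = R[idx + z]                               # non-noisy free terms, eq. (16)
    return np.linalg.solve(Rmat, rhs)              # eq. (15)

def sliding_prediction(x, T=100, z=1):
    """Predict x_{i+T+z} from the window x_i..x_{i+T}, eqs. (41)-(45)."""
    mean = x.mean()
    xc = x - mean                                  # centralized process, eq. (24)
    h = kw_weights(empirical_acf(xc, T + z + 1), T, z)
    # y = sum_k h_k x'_{T-k}: reverse the window so h[0] meets the newest point
    preds = np.array([mean + h @ xc[i : i + T + 1][::-1]
                      for i in range(len(x) - T - z)])
    actual = x[T + z :]
    return preds, actual

def mape_mae(pred, actual):
    """Prediction errors, eqs. (46)-(47)."""
    return (np.mean(np.abs((pred - actual) / actual)) * 100.0,
            np.mean(np.abs(pred - actual)))
```

For a sufficiently smooth positive input, the resulting average MAPE should be small, in line with the tendency reported in Table 1; the paper itself uses 𝑇 = 100 and 𝑧 = 1.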
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 6:</head><p>The MAPE and MAE histograms for the prediction of a smooth heavy-tail process (𝑙 = 3).</p><p>For example, for 𝑙 = 3 the average MAPE is less than 5%. As can be seen from the corresponding histogram, the MAPE for the overwhelming majority of points is less than 10%. For some very rare points the MAPE may be rather high (up to 40%), but in our opinion this may be explained as follows. As can be seen from Fig. <ref type="figure">3</ref>, the values for some points of the process 𝑥 ̃ are rather close to zero, and the MAPE may not be an adequate characteristic for the prediction of points close to zero. So, one can conclude that the Kolmogorov-Wiener filter may give good results for the prediction of a stationary heavy-tail random process if the process is smooth enough.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and plans for the future</head><p>The use of the Kolmogorov-Wiener filter for the prediction of stationary random heavy-tail processes is considered. Attention is paid to the discrete case. The problem under consideration is connected with telecommunication traffic prediction, which is important, for example, for cyber security, see <ref type="bibr" target="#b2">[3]</ref>. There are many rather sophisticated approaches to telecommunication traffic prediction <ref type="bibr" target="#b0">[1]</ref>. For rather simple cases (stationary or smooth traffic) the ARMA or gray model approaches may be used <ref type="bibr" target="#b1">[2]</ref>. The traffic in telecommunication systems with data packet transfer is considered to be a self-similar heavy-tail process, see <ref type="bibr" target="#b10">[11]</ref>. Such a simple filter as the Kolmogorov-Wiener one may be used for the prediction of stationary random processes <ref type="bibr" target="#b5">[6]</ref>. However, as far as we know, the corresponding approach to traffic prediction is not sufficiently developed in the literature.</p><p>In this paper we generate data for a stationary heavy-tail process on the basis of the symmetric moving average approach <ref type="bibr" target="#b12">[13]</ref>. The corresponding non-smooth and smooth data are generated. The prediction for 1 point forward on the basis of the previous 101 points is investigated. It is shown that the Kolmogorov-Wiener filter is not good for non-smooth processes, but may give a good prediction for a stationary random heavy-tail process if the process is rather smooth. So, if the traffic is stationary and rather smooth, the Kolmogorov-Wiener filter may be used for its prediction. The advantage of the corresponding approach is the simplicity of the method in contrast with, for example, neural networks or ARIMA models.</p><p>The plans for the future are as follows. 
In this paper only the values T = 100 and z = 1 are investigated. So the prediction investigation for a wider range of parameters may be a plan for the future. In our recent paper <ref type="bibr" target="#b9">[10]</ref> the theoretical approach to the Kolmogorov-Wiener filter construction in the continuous case is considered. In this paper we generated a large number of data points, which may allow one to try to investigate the continuous case, so the investigation of the applicability of the method <ref type="bibr" target="#b9">[10]</ref> may be another plan for the future. This paper is based on the generation of simulated data, so the investigation of real experimental traffic data may be another plan for the future. It should also be stressed that the use of the Kolmogorov-Wiener filter for the prediction of stationary</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>5 )</head><label>5</label><figDesc>) which in view of (3) gives ∑ ℎ 𝑖 ℎ 𝑗 〈𝑥′ 𝑡−𝑖 𝑥′ 𝑡−𝑗 〉 𝑇 𝑖,𝑗=0 − 2 ∑ ℎ 𝑖 〈𝑥′ 𝑡−𝑖 𝑠 𝑡+𝑧 〉 = 𝑓(ℎ 0 , ℎ 1 , … , ℎ 𝑇 ) → min. 𝑇 𝑖=0 (With account for the facts that 〈𝑥′ 𝑡−𝑖 𝑥′ 𝑡−𝑗 〉 = 𝑅 𝑥′ (𝑖 − 𝑗) (6) and 〈𝑥′ 𝑡−𝑖 𝑠 𝑡+𝑧 〉 = 𝑅 𝑠𝑥′ (𝑖 + 𝑧) (7) one can finally write ∑ ℎ 𝑖 ℎ 𝑗 𝑅 𝑥′ (𝑖 − 𝑗) 𝑇 𝑖,𝑗=0 − 2 ∑ ℎ 𝑖 𝑅 𝑠𝑥′ (𝑖 + 𝑧) = 𝑓(ℎ 0 , ℎ 1 , … , ℎ 𝑇 ) → min.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :Figure 2 :</head><label>12</label><figDesc>Figure 1: The values of the simulated non-smooth heavy-tail non-negative random process</figDesc><graphic coords="5,140.55,72.00,313.90,204.73" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>): 𝑥 ̃𝑖 = 𝑋 ̃𝑖 + |min(𝑋 ̃)| + 10 −3 , (35) and the corresponding centralized process 𝑥𝑐 ̃𝑖 = 𝑥 ̃𝑖 − 〈𝑥 ̃〉 (</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 : 3 Figure 4 :</head><label>334</label><figDesc>Figure 3: The values of the simulated smooth heavy-tail non-negative random process for 𝑙 = 3</figDesc><graphic coords="6,153.88,321.06,287.22,180.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: The MAPE and MAE histograms for the prediction of a non-smooth process</figDesc><graphic coords="8,72.00,337.19,451.00,139.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="9,72.00,148.63,451.00,139.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>The prediction results for a smooth heavy-tail process</figDesc><table><row><cell>𝑙</cell><cell>〈𝑥 ̃〉</cell><cell>Average MAPE, %</cell><cell>Average MAE</cell></row><row><cell>1</cell><cell>2.98</cell><cell>9.11</cell><cell>0.235</cell></row><row><cell>2</cell><cell>2.52</cell><cell>6.26</cell><cell>0.142</cell></row><row><cell>3</cell><cell>2.34</cell><cell>4.85</cell><cell>0.103</cell></row><row><cell>4</cell><cell>2.31</cell><cell>3.92</cell><cell>0.081</cell></row><row><cell>5</cell><cell>2.22</cell><cell>3.37</cell><cell>0.067</cell></row><row><cell>6</cell><cell>2.11</cell><cell>2.98</cell><cell>0.057</cell></row><row><cell>7</cell><cell>2.04</cell><cell>2.68</cell><cell>0.050</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Prediction of Data Traffic in Telecom Networks based on Deep Neural Networks</title>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">H</forename><surname>Do</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">T H</forename><surname>Doan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">V A</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">T</forename><surname>Duong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Van Linh</surname></persName>
		</author>
		<idno type="DOI">10.3844/jcssp.2020.1268.1277</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer Science</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="1268" to="1277" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Telecommunication Traffic Prediction Based on Improved LSSVM</title>
		<author>
			<persName><forename type="first">J.-X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z.-H</forename><surname>Jia</surname></persName>
		</author>
		<idno type="DOI">10.1142/S0218001418500076</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Pattern Recognition and Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">1850007</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Holt-Winters Traffic Prediction on Aggregated Flow Data</title>
		<author>
			<persName><forename type="first">H</forename><surname>Brugner</surname></persName>
		</author>
		<idno type="DOI">10.2313/NET-2017-09-1_04</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Seminars Future Internet and Innovative Internet Technologies and Mobile Communication, Focal Topic: Advanced Persistent Threats, Summer Semester 2017</title>
		<imprint>
			<biblScope unit="page" from="25" to="32" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Traffic Prediction in Telecom Systems Using Deep Learning</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kaushik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Yadav</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICRITO.2018.8748386</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 7th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions)</title>
				<meeting>7th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions)<address><addrLine>Noida, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">August 29-31, 2018</date>
			<biblScope unit="page" from="302" to="307" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Adaptive Filtering: Algorithms and Practical Implementation</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S R</forename><surname>Diniz</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-29057-3</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Springer Nature Switzerland AG</publisher>
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
	<note>5th ed</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Signal extraction: experimental evidence</title>
		<author>
			<persName><forename type="first">T</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Duffy</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11238-020-09785-x</idno>
	</analytic>
	<monogr>
		<title level="j">Theory and Decision</title>
		<imprint>
			<biblScope unit="volume">90</biblScope>
			<biblScope unit="page" from="219" to="232" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Filters, Waves and Spectra</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Pollock</surname></persName>
		</author>
		<idno type="DOI">10.3390/econometrics6030035</idno>
	</analytic>
	<monogr>
		<title level="j">Econometrics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">35</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Wiener-Kolmogorov Filter for Seasonal Adjustment and the Cholesky Decomposition of a Toeplitz Matrix</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Pollock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mise</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10614-020-10087-1</idno>
	</analytic>
	<monogr>
		<title level="j">Computational Economics</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="913" to="933" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Microscopy Image Restoration with Deep Wiener-Kolmogorov Filters</title>
		<author>
			<persName><forename type="first">V</forename><surname>Pronina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Kokkinos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Dylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lefkimmiatis</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-58565-5_12</idno>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>A. Vedaldi, H. Bischof, T. Brox, J.-M. Frahm</editor>
		<imprint>
			<biblScope unit="volume">12365</biblScope>
			<biblScope unit="page" from="185" to="201" />
			<date type="published" when="2020">2020</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Kolmogorov-Wiener Filter Weight Function for Stationary Traffic Forecasting: Polynomial and Trigonometric Solutions</title>
		<author>
			<persName><forename type="first">V</forename><surname>Gorev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gusev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Korniienko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Aleksieiev</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-76343-5_7</idno>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Networks and Systems</title>
		<editor>P. Vorobiyenko, M. Ilchenko, I. Strelkovska</editor>
		<imprint>
			<biblScope unit="volume">212</biblScope>
			<biblScope unit="page" from="111" to="129" />
			<date type="published" when="2021">2021</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Loss Analysis for Networks based on Heavy-Tailed and Self-Similar Traffic</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zhuang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1088/1742-6596/1584/1/012054</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Physics: Conference Series</title>
		<imprint>
			<biblScope unit="volume">1584</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">012054</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Advanced models and algorithms for self-similar IP network traffic simulations and performance analysis</title>
		<author>
			<persName><forename type="first">D</forename><surname>Radev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Lokshina</surname></persName>
		</author>
		<idno type="DOI">10.2478/v10187-010-0053-0</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Electrical Engineering</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="341" to="349" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The Hurst phenomenon and fractional Gaussian noise made easy</title>
		<author>
			<persName><forename type="first">D</forename><surname>Koutsoyiannis</surname></persName>
		</author>
		<idno type="DOI">10.1080/02626660209492961</idno>
	</analytic>
	<monogr>
		<title level="j">Hydrological Sciences Journal</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page" from="573" to="595" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Generalized fractional Gaussian noise and its application to traffic modeling</title>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.physa.2021.126138</idno>
	</analytic>
	<monogr>
		<title level="j">Physica A</title>
		<imprint>
			<biblScope unit="volume">579</biblScope>
			<biblScope unit="page">126138</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Statistical Techniques for Transportation Engineering</title>
		<author>
			<persName><forename type="first">K</forename><surname>Molugaram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">S</forename><surname>Rao</surname></persName>
		</author>
		<idno type="DOI">10.1016/B978-0-12-811555-8.00012-X</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>Elsevier</publisher>
			<pubPlace>Oxford</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Mathematical modeling of power supply reliability at low voltage quality</title>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">A</forename><surname>Papaika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">H</forename><surname>Lysenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ye</forename><forename type="middle">V</forename><surname>Koshelenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Olishevskyi</surname></persName>
		</author>
		<idno type="DOI">10.33271/nvngu/2021-2/097</idno>
	</analytic>
	<monogr>
		<title level="j">Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="97" to="103" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
