<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Filter for Prediction of Heavy-Tail Stationary Processes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vyacheslav Gorev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexander Gusev</string-name>
          <email>gusev1950@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valerii Korniienko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dnipro University of Technology</institution>
          ,
          <addr-line>19 Dmytra Yavornytskoho Ave., 49005 Dnipro</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We investigate the possibility of the practical use of the Kolmogorov-Wiener filter for the prediction of a heavy-tail stationary random process. A discrete process and a discrete filter are considered. Nowadays telecommunication traffic in telecommunication systems with data packet transfer is considered to be a heavy-tail random process, so the problem under consideration may be applied to the prediction of telecommunication traffic, which may be important, for example, for the prevention of network congestion, for the maximization of the network utilization rate and for cyber security, because a comparison of the actual traffic with the predicted one may help to detect cyber-attacks. There are a lot of different and rather sophisticated approaches to traffic prediction, for example, the ARIMA approach, neural network approaches and so on, which may be applicable to the prediction of a non-stationary traffic in various cases. However, in the rather simple case of a stationary telecommunication traffic, simpler approaches may be applied. For example, such a simple prediction approach as the Kolmogorov-Wiener filter is not sufficiently developed in the literature. In this paper it is shown that if a stationary heavy-tail random process is smooth enough, then the Kolmogorov-Wiener filter may be used for its practical prediction. The obtained results may be taken into account for practical telecommunication traffic prediction in telecommunication systems with data packet transfer.</p>
        <p>IntelITSIS'2022: 3rd International Workshop on Intelligent Information Technologies and Systems of Information Security, March 23-25. ORCID: 0000-0002-9528-9497 (V. Gorev); 0000-0002-0548-728X (A. Gusev); 0000-0002-0800-3359 (V. Korniienko)</p>
      </abstract>
      <kwd-group>
        <kwd>Kolmogorov-Wiener filter</kwd>
        <kwd>prediction</kwd>
        <kwd>heavy-tail stationary random process</kwd>
        <kwd>power-law correlation function</kwd>
        <kwd>telecommunication traffic</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and related works</title>
      <p>
        The problem of telecommunication traffic prediction is important for telecommunications. For
example, it is important for the prevention of network congestion and for the maximization of the
network utilization rate [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]; it is significant for understanding future market dynamics and reducing
the decision risks [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The telecommunication traffic prediction is also important for cyber security [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
because the comparison of the actual traffic with the predicted one may help to detect cyber-attacks.
      </p>
      <p>
        There are a lot of different approaches to traffic prediction. For example, the following ones can
be indicated: Auto Regressive Integrated Moving Average (ARIMA), Markov Modulated Poisson
Process models (MMPP), Kalman filtering, Seasonal ARIMA (SA), a neural network approach
(including deep neural networks [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]), wavelet transforms [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the least-squares support vector machine
(LSSVM), gray models [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], Holt-Winters models [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Of course, rather complicated approaches
should be used for non-stationary randomly fluctuating traffic prediction. But if the traffic is
stationary and rather smooth, sophisticated approaches may not be needed. For example, in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] some
methods are presented for a description of rather simple cases. In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] it is stressed that in stationary
cases the ARMA approach may be used too, and in the case of a smooth monotone process the gray
model may be applied.
      </p>
      <p>
        As is known [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], such a simple filter as the Kolmogorov–Wiener one may be used for the
prediction of stationary random processes. However, as far as we know, such an approach is not
sufficiently developed in the literature for traffic prediction even for rather simple cases. The
Kolmogorov–Wiener filter is widely used for signal extraction in different fields of knowledge [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. It
is widely used in econometric analyses [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ] and in image restoration [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The theoretical
fundamentals of the Kolmogorov–Wiener filter for continuous telecommunication traffic prediction
are developed in our recent paper [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The paper [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is dedicated to the solution of the Wiener–Hopf
integral equation in the unknown filter weight function for two telecommunication traffic models: the
power-law structure function model and the model of fractional Gaussian noise; the solutions based
on the truncated polynomial expansion method and the truncated trigonometric Fourier series method
are obtained.
      </p>
      <p>
        However, the possibility of using the Kolmogorov–Wiener filter for practical traffic prediction is
still under question. The aim of this work is to show that the Kolmogorov–Wiener filter may be
applicable to traffic prediction if the traffic is stationary and smooth enough. As is known [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], the
telecommunication traffic in systems with data packet transfer is considered to be a self-similar
heavy-tail random process. So, if we show that the Kolmogorov–Wiener filter is applicable to the
prediction of simulated data of a stationary random self-similar heavy-tail process, then we will be
able to conclude that it may be applied to practical telecommunication traffic prediction. In this paper
we restrict ourselves to the investigation of a discrete process and a discrete filter. The corresponding
simulated data may be generated via the symmetric moving average approach [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]; the generated process is in fact similar to the fractional Gaussian noise process, which may describe telecommunication traffic, see [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>The paper is organized as follows. In Sec. 1 the introduction and the literature review are given. In
Sec. 2 the discrete Kolmogorov–Wiener filter and the symmetric moving average approach for
obtaining simulated stationary heavy-tail data are described. In Sec. 3 heavy-tail simulated data are
obtained. In Sec. 4 the prediction results are described, and in Sec. 5 conclusions are made.</p>
      <p>2. Description of the discrete Kolmogorov–Wiener filter and of the method of generation of heavy-tail simulated data</p>
      <p>Let the filter input $x'_t$ be a stationary random process which is the sum of the signal $x_t$ and the noise $n_t$:
$$x'_t = x_t + n_t. \tag{1}$$
The Kolmogorov–Wiener filter output $y_t$ should be «the closest» to the value $x_{t+z}$, where $z$ is the number of points for which the prediction is made, so we have the following requirement:
$$\left\langle \left( y_t - x_{t+z} \right)^2 \right\rangle \to \min. \tag{2}$$
The correlation function $R_{x'}(\tau)$ of the filter input $x'_t$ and the cross-correlation function $R_{xx'}(\tau)$ of the processes $x_t$ and $x'_t$ are considered to be given. The Kolmogorov–Wiener filter is considered to be a linear one, so the filter output is expressed in terms of the filter input as follows:
$$y_t = \sum_{k=0}^{T} h_k x'_{t-k}, \tag{3}$$
where $h_k$ are the unknown filter weight coefficients and the input data are given for $k = 0, 1, 2, \dots, T$.</p>
      <p>The coefficients $h_k$ should minimize expression (2). The term $\langle x_{t+z}^2 \rangle$ is a constant and does not depend on the weight coefficients $h_k$, so (2) can be rewritten as
$$\langle y_t^2 \rangle - 2 \langle y_t x_{t+z} \rangle \to \min, \tag{4}$$
which in view of (3) gives
$$\sum_{k=0}^{T} \sum_{j=0}^{T} h_k h_j \langle x'_{t-k} x'_{t-j} \rangle - 2 \sum_{k=0}^{T} h_k \langle x'_{t-k} x_{t+z} \rangle \to \min. \tag{5}$$
With account for the facts that
$$\langle x'_{t-k} x'_{t-j} \rangle = R_{x'}(k-j) \tag{6}$$
and
$$\langle x'_{t-k} x_{t+z} \rangle = R_{xx'}(k+z), \tag{7}$$
expression (5) takes the form
$$F(h_0, h_1, \dots, h_T) = \sum_{k=0}^{T} \sum_{j=0}^{T} h_k h_j R_{x'}(k-j) - 2 \sum_{k=0}^{T} h_k R_{xx'}(k+z) \to \min. \tag{8}$$
The function $F(h_0, h_1, \dots, h_T)$ is a quadratic one, and thus it has one minimum, which is described by the conditions
$$\frac{\partial F(h_0, h_1, \dots, h_T)}{\partial h_j} = 0; \quad j = 0, 1, 2, \dots, T. \tag{9}$$
These conditions, with account for the fact that
$$\frac{\partial h_k}{\partial h_j} = \delta_{kj} = \begin{cases} 1, & k = j \\ 0, & k \neq j \end{cases} \tag{10}$$
and for the evenness of the correlation function,
$$R_{x'}(-\tau) = R_{x'}(\tau), \tag{11}$$
finally yield
$$\sum_{k=0}^{T} h_k R_{x'}(k-j) = R_{xx'}(j+z); \quad j = 0, 1, 2, \dots, T, \tag{12}$$
which is a set of linear equations in the unknown coefficients $h_k$. In matrix form, this set may be written as
$$R_{x'} \cdot h = \kappa_{x'}, \tag{13}$$
where
$$R_{x'} = \begin{pmatrix} R_{x'}(0) &amp; R_{x'}(1) &amp; R_{x'}(2) &amp; \cdots &amp; R_{x'}(T) \\ R_{x'}(1) &amp; R_{x'}(0) &amp; R_{x'}(1) &amp; \cdots &amp; R_{x'}(T-1) \\ R_{x'}(2) &amp; R_{x'}(1) &amp; R_{x'}(0) &amp; \cdots &amp; R_{x'}(T-2) \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots \\ R_{x'}(T) &amp; R_{x'}(T-1) &amp; R_{x'}(T-2) &amp; \cdots &amp; R_{x'}(0) \end{pmatrix}, \quad h = \begin{pmatrix} h_0 \\ h_1 \\ h_2 \\ \vdots \\ h_T \end{pmatrix}, \quad \kappa_{x'} = \begin{pmatrix} R_{xx'}(z) \\ R_{xx'}(z+1) \\ R_{xx'}(z+2) \\ \vdots \\ R_{xx'}(z+T) \end{pmatrix}; \tag{14}$$
here $R_{x'}$ is the correlation matrix [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], $h$ is the vector column of the unknown weight coefficients, and $\kappa_{x'}$ is the vector column of the free terms. So, the vector column $h$ may be found as
$$h = R_{x'}^{-1} \cdot \kappa_{x'}. \tag{15}$$</p>
      <p>Then the filter output may be obtained by formula (3).</p>
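      <p>The weight computation above can be sketched in a few lines of linear algebra. The correlation function below is the fractional-Gaussian-noise form discussed later in the paper; the function names and the default parameter values are our own illustrative assumptions, not the paper's code.</p>
      <p>
```python
import numpy as np

def fgn_corr(tau, H=0.8, sigma0_sq=1.0):
    """Fractional-Gaussian-noise correlation function (assumed model)."""
    tau = np.abs(np.asarray(tau, dtype=float))
    return 0.5 * sigma0_sq * ((tau + 1) ** (2 * H)
                              + np.abs(tau - 1) ** (2 * H)
                              - 2 * tau ** (2 * H))

def kw_weights(T=100, z=1, corr=fgn_corr):
    """Solve the linear set (12) for the weights h in the non-noisy case."""
    taus = np.arange(T + 1)
    R = corr(taus[:, None] - taus[None, :])  # Toeplitz correlation matrix (14)
    kappa = corr(taus + z)                   # free-term column R(z) .. R(z+T)
    return np.linalg.solve(R, kappa)         # h = R^{-1} kappa, formula (15)

h = kw_weights()
print(h.shape)  # one weight per input point: (101,)
```
      </p>
      <p>For large $T$ a dedicated Toeplitz solver (e.g. a Levinson-type algorithm) would be a natural replacement for the general-purpose solve.</p>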
      <p>
        It should be noticed that all the above-mentioned calculations are described in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The Kolmogorov–Wiener filter may be used both for the extraction of a signal from the sum of a signal and a noise and for the signal prediction. In the case where the input signal is non-noisy, the Kolmogorov–Wiener filter may be used for the prediction of the stationary process given at the filter input. In the non-noisy case $R_{xx'}(\tau) = R_{x'}(\tau)$, so the filter weight coefficients are given by formula (15) with account for the fact that
$$\kappa_{x'} = \begin{pmatrix} R_{x'}(z) &amp; R_{x'}(z+1) &amp; R_{x'}(z+2) &amp; \cdots &amp; R_{x'}(z+T) \end{pmatrix}^{\mathrm T}. \tag{16}$$
      </p>
      <p>
        Now let us describe the method of the generation of heavy-tail simulated data which is used in
the paper. We use the symmetric moving average approach, which is described in detail
in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Such an approach was chosen because of its simplicity. Let $w_t$ be a stationary white noise process with an average value equal to zero and a variance equal to 1. Then a heavy-tail process $x_t$ similar to the fractional Gaussian noise may be generated as follows [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]:
$$x_t = \sum_{j=-q}^{q} a_{|j|} w_{t+j}; \tag{17}$$
theoretically, $q$ should be infinite; in practical calculations it may be a rather large, but finite number. The number $q$ may be very large, it is estimated as follows [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]:
$$q \ge \max\left( m, \ \varepsilon^{1/(H-1.5)} \right), \tag{18}$$
where $m$ is the number of correlation function points of the process $x_t$ which should be obtained, and the small number $\varepsilon$ is in fact the given accuracy of the coefficients $a_j$ in (17). The coefficients $a_j$ are as follows:
$$a_j = \frac{\sqrt{(2-2H)\sigma_0^2}}{3-2H} \left( (j+1)^{H+0.5} + |j-1|^{H+0.5} - 2 j^{H+0.5} \right), \tag{19}$$
here $\sigma_0^2$ is the variance and $H$ is the Hurst exponent of the process $x_t$; the values $a_j$ beyond the bound $q$ should be small:
$$a_j \le \varepsilon a_0, \quad j &gt; q. \tag{20}$$
The accuracy of this method depends on $q$, and the method is not exact even in the case where $q \to \infty$. However, for a rather large $q$ the method may lead to good practical results [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>3. The generation of non-smooth and smooth heavy-tail simulated data</p>
      <p>
        $10^6$ points of the white noise process $w_t$ with an average value equal to 0 and a variance equal to 1 are generated on the basis of the generator built in the Wolfram Mathematica package. The following parameters were chosen:
$$H = 0.8, \quad \sigma_0^2 = 1. \tag{21}$$
The corresponding number $q = 3 \cdot 10^5$ is chosen. In fact, the requirement (18) holds even for $q = 10^5$; the value $q = 3 \cdot 10^5$ was chosen for a higher accuracy. On the basis of the idea (17)–(19), $10^5$ points of the process $x_t$ were generated as follows:
$$x_t = \sum_{j=-q}^{q} a_{|j|} w_{t+j+q}, \quad t = 1, 2, \dots, 10^5; \tag{22}$$
here we take into account the fact that the number of points of the generated array $x_t$ is equal to $10^5$. In fact, the statistical properties of the result are the same no matter whether formula (17) or formula (22) is used, because $w_t$ is the white noise; formula (22) is chosen in order to avoid indices beyond the bounds of the array $w_t$. The coefficients $a_j$ are calculated on the basis of (19).
      </p>
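      <p>The generation step (17)–(19) can be sketched as follows; the coefficient formula is the approximation from the cited symmetric moving average scheme as reconstructed above, and the sizes n and q are scaled down for illustration.</p>
      <p>
```python
import numpy as np

def sma_coeffs(q, H=0.8, sigma0_sq=1.0):
    """Symmetric-moving-average coefficients a_j, j = 0..q, formula (19)."""
    j = np.arange(q + 1, dtype=float)
    bracket = ((j + 1) ** (H + 0.5) + np.abs(j - 1) ** (H + 0.5)
               - 2 * j ** (H + 0.5))
    return np.sqrt((2 - 2 * H) * sigma0_sq) / (3 - 2 * H) * bracket

def sma_generate(n, q, H=0.8, seed=0):
    """Generate n points of a heavy-tail process from white noise, as in (22)."""
    rng = np.random.default_rng(seed)
    a = sma_coeffs(q, H)
    kernel = np.concatenate([a[::-1], a[1:]])    # a_{|j|} for j = -q .. q
    w = rng.standard_normal(n + 2 * q)           # white noise, mean 0, variance 1
    return np.convolve(w, kernel, mode="valid")  # x_t = sum_j a_{|j|} w_{t+j+q}

x = sma_generate(20000, 500)
print(x.shape)  # (20000,)
```
      </p>
      <p>The convolution in "valid" mode reproduces the index shift of (22): every output point uses exactly $2q+1$ white-noise points inside the generated array.</p>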
      <p>
        The average value of $x_t$ is close to zero. We have to construct simulated data that may describe telecommunication traffic, which is obviously non-negative. So we build the array $y_t$ as follows:
$$y_t = x_t + |\min(x)| + 10^{-3}; \tag{23}$$
a small summand $10^{-3}$ is added in order to avoid obtaining an infinite value of the prediction mean average percentage error (MAPE). The process $y_t$ is a non-negative random stationary heavy-tail process; its graph is given in Fig. 1.
      </p>
      <p>
        Let us make sure that the generated process $y_t$ is a heavy-tail one. Let us describe the corresponding centralized process:
$$\mathring{y}_t = y_t - \langle y \rangle, \tag{24}$$
where the average value $\langle y \rangle$ is
$$\langle y \rangle = \frac{1}{10^5} \sum_{t=1}^{10^5} y_t. \tag{25}$$
The correlation function of the process $y_t$ is built as follows:
$$R_y(\tau) = \langle \mathring{y}_t \mathring{y}_{t+\tau} \rangle = \frac{1}{10^5} \sum_{t} \mathring{y}_t \mathring{y}_{t+\tau}. \tag{26}$$
The corresponding correlation function and its least-square fit are given in Fig. 2. The least-square fit is sought in a power-law form,
$$R_{\mathrm{fit}}(\tau) = c \, \tau^{-\gamma}; \tag{27}$$
the obtained numerical coefficients are rounded off to two significant digits. On the basis of the fit and Fig. 2 one can conclude that the correlation function exhibits a power-law decay rather than an exponential one. So, indeed, the generated process is a heavy-tail one.
      </p>
      <p>
        It should also be noticed that according to [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] the following property should be valid for $\tau \ge 1$:
$$R_x(\tau) = \frac{\sigma_0^2}{2} \left( (\tau+1)^{2H} + (\tau-1)^{2H} - 2 \tau^{2H} \right), \tag{30}$$
so for large $\tau$ the correlation function should decay as a power law with the exponent $2H - 2$. According to the least-square fit, the obtained exponent leads to a Hurst exponent value which is very close to the value $H = 0.8$, see (21), and the variance of the process is rather close to the value $\sigma_0^2 = 1$, see (21). So one can conclude that the generated process is close to the fractional Gaussian noise with the given variance and Hurst exponent.
      </p>
      <p>
        The generated process is non-smooth, i.e. it is really highly fluctuating, so it is rather difficult to
predict it. So it is reasonable to investigate smooth heavy-tail processes. In order to obtain smoother
processes, we use a very simple smoothing algorithm [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]:
$$\tilde{x}_t = \frac{1}{2s+1} \sum_{j=-s}^{s} x_{t+j}, \tag{34}$$
where $\tilde{x}_t$ are the values of a smooth process; expression (34) is valid for every point except for the first $s$ and the last $s$ ones. The first $s$ and the last $s$ points of the process $\tilde{x}_t$ may be obtained as the corresponding linear least-square fit of the first $s$ and the last $s$ points of the process $x_t$, respectively. The corresponding non-negative process may be expressed similarly to (23):
$$\tilde{y}_t = \tilde{x}_t + |\min(\tilde{x})| + 10^{-3}. \tag{35}$$
      </p>
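      <p>The smoothing step (34), together with the linear least-square treatment of the edges described above, can be sketched as follows; the parameter name s and the test data are illustrative.</p>
      <p>
```python
import numpy as np

def smooth(x, s=3):
    """Centered moving average (34) over 2*s + 1 points; the first and last
    s points are replaced by a linear least-squares fit of the first and
    last s points, as described in the text."""
    x = np.asarray(x, dtype=float)
    y = np.convolve(x, np.ones(2 * s + 1) / (2 * s + 1), mode="same")
    for sl in (slice(0, s), slice(len(x) - s, len(x))):
        t = np.arange(len(x), dtype=float)[sl]
        k, b = np.polyfit(t, x[sl], 1)   # linear fit of the edge points
        y[sl] = k * t + b
    return y

# A smooth input should pass through unchanged: a linear ramp is reproduced
# exactly both in the interior and at the fitted edges.
print(np.allclose(smooth(np.arange(20.0)), np.arange(20.0)))  # True
```
      </p>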
      <p>
        The corresponding centralized process is
$$\mathring{\tilde{y}}_t = \tilde{y}_t - \langle \tilde{y} \rangle, \tag{36}$$
where the average value is
$$\langle \tilde{y} \rangle = \frac{1}{10^5} \sum_{t=1}^{10^5} \tilde{y}_t. \tag{37}$$
The simulated data for the process $\tilde{y}_t$ for $s = 3$ are given in Fig. 3.
      </p>
      <p>It should be stressed that the obtained smooth process $\tilde{y}_t$ is also a heavy-tail one. Let us consider the corresponding correlation function:
$$R_{\tilde{y}}(\tau) = \langle \mathring{\tilde{y}}_t \mathring{\tilde{y}}_{t+\tau} \rangle = \frac{1}{10^5} \sum_{t} \mathring{\tilde{y}}_t \mathring{\tilde{y}}_{t+\tau}. \tag{38}$$
For example, for $s = 3$ the corresponding correlation function and its least-square fit are given in Fig. 4. The least-square fit is sought in the form (27); the obtained numerical coefficients are rounded off to two significant digits. As can be seen from Fig. 4, the correlation function of the smooth process is also well described by a power-law function, so the obtained smooth process $\tilde{y}_t$ is also a heavy-tail one, and, in fact, this process may also be roughly considered as fractional Gaussian noise.</p>
      <p>4. Prediction on the basis of the Kolmogorov–Wiener filter</p>
      <p>The prediction for non-smooth data is built as follows. In fact, the prediction for the centralized
process is used. The filter weight coefficients are built on the basis of (13)–(16); the corresponding
correlation function is taken in the form (26).</p>
      <p>First of all, the points $\mathring{y}_1, \mathring{y}_2, \dots, \mathring{y}_{T+1}$ of the simulated process are taken as the filter input, and the points $\mathring{y}_{T+2}, \dots, \mathring{y}_{T+z+1}$ are predicted. Then the points $\mathring{y}_2, \mathring{y}_3, \dots, \mathring{y}_{T+2}$ are taken from the simulated data, and the points $\mathring{y}_{T+3}, \dots, \mathring{y}_{T+z+2}$ are predicted, and so on.</p>
      <p>
        At the $i$-th iteration of the algorithm the prediction is calculated as follows. The filter input data are
$$x'_0 = \mathring{y}_i, \quad x'_1 = \mathring{y}_{i+1}, \quad \dots, \quad x'_T = \mathring{y}_{i+T}, \tag{41}$$
and the filter output is the predicted value for $x'_{T+z}$ (the non-noisy case is investigated). According to (3) we have
$$y_i^{\mathrm{out}} = \sum_{k=0}^{T} h_k x'_{T-k}, \tag{42}$$
so
$$\hat{\mathring{y}}_{i+T+z} = \sum_{k=0}^{T} h_k \mathring{y}_{i+T-k}, \tag{43}$$
where $\hat{\mathring{y}}_{i+T+z}$ is the predicted value of $\mathring{y}_{i+T+z}$; here, where necessary, the upper bound of summation is changed in order to avoid obtaining indices beyond the array of the input data. Such a change of the bound does not lead to a significant error for the prediction under consideration. Obviously, the prediction is made only for the values $i + T + z = T + 1 + z, \ T + 2 + z, \dots$ We should also remember that we should make the prediction for the non-negative simulated data. So, on the basis of (41)–(43), the predicted non-negative data may be expressed as
$$\hat{y}_{i+T+z} = \hat{\mathring{y}}_{i+T+z} + \langle y \rangle = \langle y \rangle + \sum_{k=0}^{T} h_k \mathring{y}_{i+T-k}. \tag{44}$$
The corresponding prediction errors are calculated at each iteration; the MAPE and MAE errors for the corresponding prediction are calculated as
$$\mathrm{MAPE}_i = \left| \frac{y_{i+T+z} - \hat{y}_{i+T+z}}{y_{i+T+z}} \right| \cdot 100\% \tag{45}$$
and
$$\mathrm{MAE}_i = \left| y_{i+T+z} - \hat{y}_{i+T+z} \right|. \tag{46}$$
      </p>
      <p>Let us say a few words about why the above-mentioned change of the upper bound of summation has no significant effect on the result. In order to make the prediction for $\hat{\mathring{y}}_{T+1+z}$, one should calculate the sum of $T + 2 - z$ summands, in order to make the prediction for $\hat{\mathring{y}}_{T+2+z}$ one should calculate the sum of $T + 3 - z$ summands, and so on. We obviously deal with the case where $T \gg z$, so the value $T + 1 - z$ is rather close to $T + 1$, and the above-mentioned change of the upper bound is not significant for the calculations.</p>
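      <p>The iteration scheme (41)–(46) can be sketched end to end. To keep the sketch self-contained and fast, a simple AR(1) series with a known correlation function replaces the paper's simulated heavy-tail traffic; all parameter values here are illustrative.</p>
      <p>
```python
import numpy as np

rng = np.random.default_rng(1)
n, T, z, phi = 5000, 100, 1, 0.9

# AR(1) test series, shifted upward to be positive (stand-in for traffic data)
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
x += 10.0

corr = lambda tau: phi ** np.abs(tau) / (1 - phi ** 2)  # AR(1) correlation
taus = np.arange(T + 1)
h = np.linalg.solve(corr(taus[:, None] - taus[None, :]),  # weights, (13)-(16)
                    corr(taus + z))

mean = x.mean()
xc = x - mean                       # centralized process, cf. (24)
pred, actual = [], []
for i in range(n - T - z):          # i-th iteration, formulas (41)-(44)
    pred.append(mean + h @ xc[i:i + T + 1][::-1])
    actual.append(x[i + T + z])
pred, actual = np.array(pred), np.array(actual)

mape = 100.0 * np.mean(np.abs((actual - pred) / actual))  # (45), averaged
mae = float(np.mean(np.abs(actual - pred)))               # (46), averaged
print(f"average MAPE {mape:.1f}%, average MAE {mae:.2f}")
```
      </p>
      <p>For the AR(1) model the solved weights reduce to $h = (\varphi, 0, \dots, 0)$, so the sketch also serves as a sanity check of the weight computation.</p>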
      <p>Similarly, the prediction for the smooth heavy-tail process is made as follows. At the $i$-th iteration of the algorithm the prediction is calculated as
$$\hat{\tilde{y}}_{i+T+z} = \langle \tilde{y} \rangle + \sum_{k=0}^{T} h_k \mathring{\tilde{y}}_{i+T-k}, \tag{48}$$
and the corresponding MAPE and MAE errors are calculated similarly to (45) and (46).</p>
      <p>The MAPE and MAE are calculated for each above-mentioned iteration, so $10^5 - T - z$ MAPE and MAE values both for the smooth and for the non-smooth processes are obtained. The following parameters are chosen: $T = 100$, $z = 1$.</p>
      <p>Table 1. The prediction results for the smooth process; $s$ is the smoothing parameter in (34).
$s$: 1; 2; 3; 4; 5; 6; 7.
$\langle \tilde{y} \rangle$: 2.98; 2.52; 2.34; 2.31; 2.22; 2.11; 2.04.
Average MAPE (%): 9.11; 6.26; 4.85; 3.92; 3.37; 2.98; …
      </p>
      <p>The following results are obtained. The MAPE and MAE histograms in the case of the non-smooth
process are shown in Fig.5. The y-axes of the histograms indicate the number of MAPE and MAE
values that belong to the corresponding intervals. For the non-smooth process the average MAPE is
24.7%, and the average MAE is 0.70 (the average value of the process is $\langle y \rangle = 3.88$). It should also
be stressed that for some points the MAPE is more than 100%. So one can conclude that the
prediction accuracy is not high in the case of the non-smooth process. So, if the process is a highly
fluctuating one, then the prediction based on the Kolmogorov–Wiener filter may not lead to good
results.</p>
      <p>But if the process is rather smooth, the prediction results are much better. The corresponding
results are given in Table 1. In Table 1 $s$ is the parameter used in (34), i.e. $2s + 1$ is the number of
smoothing points. As can be seen, the smoother the process is, the better the prediction results are, and
the prediction accuracy increases with $s$. The corresponding histograms for $s = 3$ are given in Fig. 6.
The predictions for $s \ge 6$ have an average MAPE value less than 3%.</p>
      <p>For example, for  = 3 the average MAPE is less than 5%. As can be seen from the corresponding
histogram, the MAPE for the overwhelming majority of points is less than 10%. For some very rare
points the MAPE may be rather high (up to 40%), but in our opinion this may be explained as
follows. As can be seen from Fig. 3, the values for some points of the process  ̃ are rather close to
zero, and the MAPE may not be an adequate characteristic for the prediction of points close to zero.
So, one can conclude that the Kolmogorov–Wiener filter may give good results for the prediction of a
stationary heavy-tail random process if the process is smooth enough.
5. Conclusions and plans for the future</p>
      <p>
        The use of the Kolmogorov–Wiener filter for the prediction of stationary random heavy-tail
processes is considered. The attention is paid to the discrete case. The problem under consideration
may be connected with the telecommunication traffic prediction, which is important, for example, for
cyber security, see [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. There are many rather sophisticated approaches to telecommunication traffic
prediction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For rather simple cases (stationary or smooth traffic) the ARMA or gray model
approaches may be used [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The traffic in telecommunication systems with data packet transfer is
considered to be a self-similar heavy-tail process, see [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Such a simple filter as the Kolmogorov–
Wiener one may be used in the prediction of stationary random processes [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, as far as we
know, the corresponding approach for traffic prediction is not sufficiently developed in the literature.
      </p>
      <p>
        In this paper we generate data for a stationary heavy-tail process on the basis of the symmetric
moving average approach [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The corresponding non-smooth and smooth data are generated. The
prediction for 1 point forward on the basis of the previous 101 points is investigated. It is shown that
the Kolmogorov–Wiener filter is not good for non-smooth processes, but may give a good prediction
for a stationary random heavy-tail process if the process is rather smooth. So, if the traffic is
stationary and rather smooth, the Kolmogorov–Wiener filter may be used for its prediction. The
advantage of the corresponding approach is the simplicity of the method in contrast with, for example,
neural networks or ARIMA models.
      </p>
      <p>
        The plans for the future are as follows. In this paper only the values T = 100 and z = 1 are
investigated. So the prediction investigation for a wider range of parameters may be a plan for the
future. In our recent paper [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] the theoretical approach to the Kolmogorov–Wiener filter construction
in the continuous case is considered. In this paper we generated a large number of data points, which
may allow one to try to investigate the continuous case, so the investigation of the applicability of the
method [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] may be another plan for the future. This paper is based on the generation of simulated
data, so the investigation of real experimental traffic data may be another plan for the future. It should
also be stressed that the use of the Kolmogorov–Wiener filter for the prediction of stationary
processes may be useful not only in telecommunications, but also in other fields of knowledge, for
example, in electrical engineering, see [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
6. References
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Q. H. Do, T. T. H. Doan, T. V. A. Nguyen, N. T. Duong, V. Van Linh, Prediction of Data Traffic in Telecom Networks based on Deep Neural Networks, Journal of Computer Science 16 (2020) 1268-1277. doi:10.3844/jcssp.2020.1268.1277.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] J.-X. Liu, Z.-H. Jia, Telecommunication Traffic Prediction Based on Improved LSSVM, International Journal of Pattern Recognition and Artificial Intelligence 32, No. 3 (2018) 1850007 (16 pages). doi:10.1142/S0218001418500076.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] H. Brugner, Holt-Winters Traffic Prediction on Aggregated Flow Data, Proceedings of the Seminars Future Internet and Innovative Internet Technologies and Mobile Communication, Focal Topic: Advanced Persistent Threats, Summer Semester 2017 (2017) 25-32. doi:10.2313/NET-2017-09-1_04.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] P. Kaushik, S. Singh, P. Yadav, Traffic Prediction in Telecom Systems Using Deep Learning, Proceedings of 7th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), August 29-31, 2018, Noida, India (2018) 302-307. doi:10.1109/ICRITO.2018.8748386.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 5th ed., Springer Nature Switzerland AG, Cham, 2020. doi:10.1007/978-3-030-29057-3.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] T. Bao, J. Duffy, Signal extraction: experimental evidence, Theory and Decision 90 (2021) 219-232. doi:10.1007/s11238-020-09785-x.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] S. G. Pollock, Filters, Waves and Spectra, Econometrics 6 (2018) 35 (33 pages). doi:10.3390/econometrics6030035.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Pollock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mise</surname>
          </string-name>
          ,
          <article-title>A Wiener-Kolmogorov Filter for Seasonal Adjustment and the Cholesky Decomposition of a Toeplitz Matrix</article-title>
          ,
          <source>Computational Economics</source>
          <volume>59</volume>
          (
          <year>2022</year>
          ),
          <fpage>913</fpage>
          -
          <lpage>933</lpage>
          . doi: 10.1007/s10614-020-10087-1
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Pronina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kokkinos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.V.</given-names>
            <surname>Dylov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lefkimmiatis</surname>
          </string-name>
          ,
          <article-title>Microscopy Image Restoration with Deep Wiener-Kolmogorov Filters</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bischof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Frahm</surname>
          </string-name>
          (Eds.),
          <source>Lecture Notes in Computer Science</source>
          , vol
          <volume>12365</volume>
          , Springer, Cham,
          <year>2020</year>
          , pp.
          <fpage>185</fpage>
          -
          <lpage>201</lpage>
          . doi: 10.1007/978-3-030-58565-5_12
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Gorev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gusev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Korniienko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aleksieiev</surname>
          </string-name>
          ,
          <article-title>Kolmogorov-Wiener Filter Weight Function for Stationary Traffic Forecasting: Polynomial and Trigonometric Solutions</article-title>
          , in: P. Vorobiyenko,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ilchenko</surname>
          </string-name>
          , I. Strelkovska (Eds.),
          <source>Lecture Notes in Networks and Systems</source>
          , vol
          <volume>212</volume>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>111</fpage>
          -
          <lpage>129</lpage>
          . doi: 10.1007/978-3-030-76343-5_7
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Loss Analysis for Networks based on Heavy-Tailed and Self-Similar Traffic</article-title>
          ,
          <source>Journal of Physics: Conference Series</source>
          <volume>1584</volume>
          (
          <year>2020</year>
          ),
          <elocation-id>012054</elocation-id>
          (8 pages). doi: 10.1088/1742-6596/1584/1/012054.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Radev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lokshina</surname>
          </string-name>
          ,
          <article-title>Advanced models and algorithms for self-similar IP network traffic simulations and performance analysis</article-title>
          ,
          <source>Journal of Electrical Engineering</source>
          <volume>61</volume>
          , No.
          <issue>6</issue>
          (
          <year>2010</year>
          ),
          <fpage>341</fpage>
          -
          <lpage>349</lpage>
          . doi: 10.2478/v10187-010-0053-0.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Koutsoyiannis</surname>
          </string-name>
          ,
          <article-title>The Hurst phenomenon and fractional Gaussian noise made easy</article-title>
          ,
          <source>Hydrological Sciences Journal</source>
          ,
          <volume>47</volume>
          (
          <year>2002</year>
          ),
          <fpage>573</fpage>
          -
          <lpage>595</lpage>
          . doi: 10.1080/02626660209492961.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Generalized fractional Gaussian noise and its application to traffic modeling</article-title>
          ,
          <source>Physica A</source>
          <volume>579</volume>
          (
          <year>2021</year>
          ),
          <elocation-id>126138</elocation-id>
          (22 pages). doi: 10.1016/j.physa.2021.126138.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.</given-names>
            <surname>Molugaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <source>Statistical Techniques for Transportation Engineering</source>
          , Butterworth-Heinemann (Elsevier), Oxford,
          <year>2017</year>
          , doi: 10.1016/B978-0-12-811555-8.00012-X.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Yu. A.</given-names>
            <surname>Papaika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. H.</given-names>
            <surname>Lysenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ye. V.</given-names>
            <surname>Koshelenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. H.</given-names>
            <surname>Olishevskyi</surname>
          </string-name>
          ,
          <article-title>Mathematical modeling of power supply reliability at low voltage quality</article-title>
          ,
          <source>Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu</source>
          , No.
          <issue>2</issue>
          (
          <year>2021</year>
          ),
          <fpage>97</fpage>
          -
          <lpage>103</lpage>
          . doi: 10.33271/nvngu/2021-2/097.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>