<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Motion Dataset for Evaluating Extreme Quantiles Forecasting Methods</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lyudmyla Kirichenko</string-name>
          <email>lyudmyla.kirichenko@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roman Lavrynenko</string-name>
          <email>roman.lavrynenko.cpe@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nataliya Ryabova</string-name>
          <email>nataliya.ryabova@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>CEUR Workshop Proceedings (CEUR-WS.org)</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Nauky av., 14, Kharkiv, 61166</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>2</volume>
      <fpage>7</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>Machine learning relies on data for training. However, there are instances when the available data are insufficient. To determine the degree of risk of extreme events, it is necessary to predict the values of extreme quantiles, such as those of events that occur once in a hundred years, while having only 30 years of historical data. Such data are clearly insufficient for conventional forecasting methods. The problem becomes even more complicated when the time series has fractal properties and contains long-term dependencies. Developing machine learning methods on real data for such a task often seems impossible, so we present a method for generating a dataset that provides precise values of extreme quantiles for time series that are realizations of fractional Brownian motion. A key feature of this data acquisition is the parallelization of the Hosking method, which is used to simulate fractional Brownian motion.</p>
      </abstract>
      <kwd-group>
        <kwd>extreme quantile regression</kwd>
        <kwd>fBm</kwd>
        <kwd>Kaggle</kwd>
        <kwd>probabilistic forecasting</kwd>
        <kwd>time series</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Fractal time series are a class of time series characterized by self-similarity, that is,
the statistical properties of the series are preserved across different time scales. In recent decades, such time
series have been found in many phenomena of the surrounding world, including weather data, financial
data, biomedical data, etc. Forecasting fractal time series is of practical importance for decision making
in various fields. For example, in economics and finance, fractal time series forecasting can help manage
risk and make decisions about buying and selling stocks or other financial instruments.</p>
      <p>Traditionally, point forecasting has been the primary approach in time series forecasting, where a
single value is predicted as the most likely outcome. However, point forecasting does not capture the
inherent uncertainty present in time series data, which can lead to unreliable and inaccurate predictions.</p>
      <p>
        Probabilistic forecasting [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], on the other hand, provides a range of possible outcomes and their
associated probabilities. Probabilistic forecasting allows decision-makers to understand the uncertainty
in the forecast and make informed decisions based on the range of possible outcomes. Probabilistic
forecasting can also capture important features of the underlying data distribution, such as seasonality,
trend, and volatility.
      </p>
      <p>To evaluate the performance of probabilistic forecasting methods, it is necessary to compare the
model's predicted probability distribution with the actual probability distribution of the ground truth
data. One common way to do this is by calculating the quantiles of the predicted probability distribution
and comparing them with the quantiles of the actual distribution.</p>
      <p>Quantiles are simply points in the probability distribution that divide the data into groups. For
example, the 50th percentile (also known as the median) is the value that divides the data into two equal
groups, with 50% of the data above and 50% below this value. Other commonly used quantiles include
the 10th percentile, the 90th percentile, and the interquartile range (the difference between the 25th and
75th percentiles).</p>
      <p>Although fractal time series have specific properties that can make forecasting difficult, in many
cases it is possible to use the same forecasting methods as for conventional time series. However, the
challenge arises in predicting extreme quantiles of the probability distribution, often referred to as the
"tails" of the distribution. These extreme quantiles represent rare but critical events, such as catastrophic
financial losses, extreme weather events, or catastrophic failures in infrastructure systems.</p>
      <p>
        In fields such as hydrology and climate science [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], the concept of risk is frequently quantified
using the T-year return level, symbolized as QT. This return level is a measure of the magnitude of an
event that is expected, on average, to be exceeded once every T years. Consider Y as a certain variable,
for which we record nY independent observations each year. For example, if we collect daily data, nY
would be 365, representing the number of days in a year. The T-year return level, QT, is then computed
as the quantile Q(1 - 1/(nY * T)).
      </p>
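      <p>The return-level arithmetic above can be wrapped in a small helper (a minimal sketch; the function name is ours):</p>

```python
def return_level_prob(t_years, n_per_year=365):
    """Probability level of the T-year return level quantile: Q(1 - 1/(n_Y * T))."""
    return 1.0 - 1.0 / (n_per_year * t_years)

# T = 100 years of daily observations -> the quantile level 1 - 1/36500
print(return_level_prob(100))  # 0.9999726027397261
```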
      <p>Thus, the T-year return level is a probabilistic measure of the size of an event that is expected to be
exceeded with a frequency of once every T years, based on historical data. It is a critical concept in risk
assessment, particularly in understanding and preparing for extreme events. Predicting these quantiles
accurately, especially in fractal time series, is essential in a number of fields to aid in risk management
and policy planning. A significant challenge in extreme quantile prediction is the limited availability of
data for estimation. For instance, predicting a 100-year return level becomes problematic when we only
have training data from the previous 50 years. This scarcity of data makes the statistical estimation of
extreme quantiles a challenging task, particularly in the context of probabilistic forecasting models.
Consequently, developing robust methods for extreme quantile prediction under such data constraints
is a crucial area of research.</p>
      <p>Given the importance of forecasting extreme quantiles in fractal time series, we propose an approach
for generating datasets of such time series specifically designed to evaluate extreme quantiles
forecasting.</p>
      <p>Contributions:
1. The method provides a way to efficiently compute multiple continuations of a single fractional
Brownian motion (fBm) time series using the Hosking algorithm.</p>
      <p>2. As a result, the dataset with ground truth extreme quantiles of possible continuations can be used
for evaluating machine learning methods designed for probabilistic forecasting.</p>
      <p>The code for generating a file of the dataset for a specific Hurst exponent can be found at the
following link: https://www.kaggle.com/code/unfriendlyai/fbm-extreme-quantile-generator. Our fBm
dataset is available at: https://www.kaggle.com/datasets/unfriendlyai/fbm-extreme-quantiles</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>
        The increasing availability of data demands new processing and analysis methods for effective time
series forecasting. Machine learning methods for time series forecasting are gaining significance,
allowing for automated and faster prediction processes, as well as improved accuracy and quality of
forecasts [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref4">4, 13, 14, 15</xref>
        ]. Reviews have presented the main methods and approaches for time series
forecasting using machine learning [
        <xref ref-type="bibr" rid="ref25 ref31 ref6">6, 25, 31</xref>
        ]. Although fractal time series have a wide application in
scientific and technical fields, the application of machine learning in the field of fractal time series
analysis has mainly concerned classification methods [
        <xref ref-type="bibr" rid="ref16 ref17 ref20 ref21 ref8">8, 16, 17, 20, 21</xref>
        ] and clustering [
        <xref ref-type="bibr" rid="ref18 ref27">18, 27</xref>
        ], as well
as methods for estimating the Hurst exponent by time realizations [
        <xref ref-type="bibr" rid="ref19 ref27 ref3">3, 19, 27</xref>
        ].
      </p>
      <p>
        Datasets of simulated and real time series have been created for these purposes, but in fact there are no
datasets of simulated fractal time series that could be used to validate methods for extreme quantile
prediction. To date, relatively few specialized methods have been developed for forecasting time
series with fractal properties. Most of the existing methods focus on predicting fractional
Brownian motion (fBm) [
        <xref ref-type="bibr" rid="ref28 ref7">7, 28</xref>
        ]. At the same time, the issue of forecasting extreme quantiles of fractal
time series, in particular fBm, remains open.
      </p>
      <p>
        The development of probabilistic prediction is covered in reviews [
        <xref ref-type="bibr" rid="ref30 ref34">30, 34</xref>
        ]. There are two main
approaches to modeling a probability distribution. In the first, the distribution shape is specified beforehand
(e.g., Gaussian or exponential), and during training the model determines 2-4 parameters of this
distribution, depending on the input data [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. For modeling more complex distributions, more flexible
methods may be used, such as Generalized Additive Models for Location, Scale and Shape (GAMLSS)
[
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. The second type is when the model approximates the conditional cumulative distribution
function. This may be achieved with direct quantile or expectile prediction [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] or using a novel
normalizing-flow-based approach [
        <xref ref-type="bibr" rid="ref1 ref33">1, 33</xref>
        ]. Existing models provide information about the
distribution function in different ways [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
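      <p>Direct quantile prediction is typically trained with the pinball (quantile) loss, whose minimizer is the conditional quantile; a minimal sketch (the function name is ours):</p>

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    # penalizes under-prediction with weight q and over-prediction with
    # weight (1 - q), so its expected minimizer is the q-th conditional quantile
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))
```

Training one model per quantile level with this loss yields the direct quantile predictors discussed above.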
      <p>
        The need for probabilistic forecasting is demonstrated in particular by the increasing number of
Kaggle competitions that require predicting time series quantiles in the future [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref22">10, 11, 12, 22</xref>
        ]. For
example, in the “M5 Forecasting – Uncertainty” competition, participants were asked to provide 28-day-ahead
point forecasts for all the series of the competition, as well as the corresponding median and 50%,
67%, 95%, and 99% prediction intervals. In reference [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], modeling extreme quantile regression
and risk assessment were explored, specifically with an application to forecasting flood risk. The study
provided valuable insights into the practical usage of extreme quantile regression models for predicting
rare and extreme events, such as floods.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Method of generating a dataset for the extreme quantiles problem</title>
      <p>Hence, there is a clear need for creating specific datasets of fractal time series, with a particular
emphasis on those designed for probabilistic forecasting of extreme quantiles. The goal of this study is
to generate a dataset containing realizations of fractional Brownian motion (fBm) and the corresponding
true quantiles of their possible continuations.</p>
      <p>Fractional Brownian motion is an extension of classical Brownian motion, which is characterized
by random walks of particles in space. The fBm has properties of self-similarity and scale invariance,
which means that its structure and characteristics remain unchanged when the scale of observation
changes. The increments ΔX(τ) = X(t+τ) – X(t) have a Gaussian distribution.</p>
      <p>Fractional Brownian motion (fBm) X is a type of self-similar stochastic process characterized by its
Hurst exponent H (0 &lt; H &lt; 1) which determines the degree of long-range dependence of the process.
A persistent time series is a series with a Hurst exponent greater than 0.5, indicating that future values
are likely to exhibit a positive autocorrelation with past values. This means that when past values are
higher (lower) than average, future values are also likely to be higher (lower) than average. On the other
hand, an anti-persistent time series has a Hurst exponent less than 0.5, indicating that future values are
likely to exhibit a negative autocorrelation with past values. This means that when past values are higher
(lower) than average, future values are likely to be lower (higher) than average. When the Hurst
exponent is equal to 0.5, the process reduces to ordinary Brownian motion: a random walk whose
increments are white noise, so future increments are independent of past values.</p>
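      <p>This persistence behavior can be made concrete: for unit-variance fBm increments, the correlation between successive increments equals 2^(2H-1) − 1, which is positive for H &gt; 0.5, negative for H &lt; 0.5, and zero at H = 0.5 (a worked illustration; the function name is ours):</p>

```python
def fgn_lag1_correlation(hurst):
    # lag-1 autocovariance of unit-variance fBm increments (fractional
    # Gaussian noise): gamma(1) = 0.5 * (2**(2H) - 2) = 2**(2H - 1) - 1
    return 2 ** (2 * hurst - 1) - 1
```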
      <p>
        There are a number of exact methods to simulate fBm realizations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The Hosking method is
simple, popular, and implemented in Python. It involves simulating the fBm using
the following steps:
      </p>
      <p>1. Generate a sequence of independent and identically distributed (IID) random variables from a
standard normal distribution with zero mean and unit variance.</p>
      <p>2. Compute the autocovariance function of the fBm increments (fractional Gaussian noise),
γ(k) = 0.5(|k+1|^2H − 2|k|^2H + |k−1|^2H), for a range of lags.</p>
      <p>3. Use the autocovariance function to compute the Cholesky decomposition of the covariance
matrix.</p>
      <p>4. Multiply the IID random variables by the Cholesky factor to obtain a sequence of correlated
random variables.</p>
      <p>5. Compute the cumulative sum of the correlated random variables to obtain the simulated fBm.</p>
      <p>6. Repeat steps 1-5 to obtain a sample of fBm realizations.</p>
      <p>The Cholesky decomposition is a technique for decomposing a positive definite matrix into a product
of a lower triangular matrix and its transpose. In the context of simulating fBm, the Cholesky factor is
used to generate a sequence of correlated random variables from a sequence of IID random variables.</p>
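      <p>The steps above can be sketched directly in NumPy (a sketch assuming unit-variance increments; the function name is ours):</p>

```python
import numpy as np

def simulate_fbm(n, hurst, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n)
    # autocovariance of fractional Gaussian noise:
    # gamma(k) = 0.5 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * ((k + 1) ** (2 * hurst)
                   - 2 * k ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]  # Toeplitz covariance matrix
    chol = np.linalg.cholesky(cov)                # step 3
    fgn = chol @ rng.standard_normal(n)           # steps 1 and 4
    return np.cumsum(fgn)                         # step 5
```

For H = 0.5 the covariance matrix reduces to the identity and the simulated path is an ordinary random walk.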
      <p>Another popular exact simulation method, notable for its speed, is the
Davies-Harte method. The Davies-Harte method generates fBm by transforming a Gaussian white noise process
with a discrete Fourier transform, applying a scaling factor based on the desired Hurst exponent, and
then inverse transforming the noise process to produce the fBm. However, this method does not generate
values iteratively and therefore cannot be used to continue time series with predefined values.</p>
      <p>
        To verify that the examples generated for the dataset possess the required characteristics, we
examine their Hurst exponent and the standard deviation of increments. The Whittle method [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] is a
powerful tool for estimating the Hurst exponent of time series. The method has some drawbacks. In
particular, it does not work well with non-Gaussian time series. However, in the case when it is known
in advance that the time series are fBm, the method is one of the most accurate [
        <xref ref-type="bibr" rid="ref24 ref26 ref9">9, 24, 26</xref>
        ].
      </p>
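      <p>A simple sanity check of the generated increments can be sketched with a local Whittle-type estimator (a sketch, not the exact Whittle likelihood of [26]; it uses the low-frequency approximation f(λ) ∝ λ^(1−2H) of the fGn spectral density, and the names are ours):</p>

```python
import numpy as np

def local_whittle_hurst(increments, m=None):
    x = np.asarray(increments, dtype=float)
    n = len(x)
    m = n // 4 if m is None else m
    lam = 2 * np.pi * np.arange(1, m + 1) / n               # Fourier frequencies
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    # concentrated local Whittle objective, minimized over a grid of H values
    hs = np.linspace(0.01, 0.99, 197)
    obj = [np.log(np.mean(periodogram * lam ** (2 * h - 1)))
           - (2 * h - 1) * np.mean(np.log(lam)) for h in hs]
    return hs[int(np.argmin(obj))]
```

On white-noise increments (H = 0.5) the estimate should land near 0.5; deviations flag persistent or antipersistent structure.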
      <p>The problem addressed in this work is the need to evaluate extreme quantile forecasting methods for
time series with long-range dependence, specifically those generated by fractional Brownian motion
processes. To address this problem, we propose a method for creating a dataset of fBm time series and
their ground truth continuations using the Hosking algorithm. Our method involves parallel calculation
of M series, each with a common value of Hurst exponent, and the generation of a matrix of quantiles
for each time step.</p>
      <p>Our method for creating a single instance of the evaluation dataset for a specific Hurst exponent
value, as illustrated in Figure 1, involves the following steps:
• Set the Hurst exponent value.
• Generate a matrix of normally distributed numbers of size M x N, where M is the number of
continuations and N = N1 + N2. Here, N1 is the length of the common beginning of the time
series, and N2 is the length of the continuations.
• To obtain a common beginning of length N1, make all M beginnings of the previous matrix
equal. This is achieved by copying the first row into all other rows.
• Employ the Hosking method to iteratively calculate a set of M time series that are independent
of each other. This method is selected because it allows parallel calculation of the values of the
succeeding time steps from normally distributed random numbers using matrix operations.
• The output of the previous step is M time series in which the first N1 increments are identical
and the following N2 values are independent of each other but dependent on the N1 initial identical
increments.
• Calculate the target true quantiles for the N2 steps using the M variants of continuation.
• The final result is stored as a covariate time series of length N1 and a matrix of ground truth
target quantiles of length N2.</p>
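      <p>This pipeline can be sketched with a Cholesky factor standing in for the iterative Hosking recursion (both are exact; because the factor is lower triangular, the first N1 outputs depend only on the first N1 noise values, giving the same shared beginning; the names and the reduced default M are ours):</p>

```python
import numpy as np

def continuation_quantiles(hurst, n1=128, n2=16, m=2000,
                           qs=(0.05, 0.5, 0.95), seed=0):
    rng = np.random.default_rng(seed)
    n = n1 + n2
    k = np.arange(n)
    # fGn autocovariance and its Toeplitz covariance matrix
    gamma = 0.5 * ((k + 1) ** (2 * hurst)
                   - 2 * k ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    chol = np.linalg.cholesky(gamma[np.abs(k[:, None] - k[None, :])])
    noise = rng.standard_normal((m, n))
    noise[:, :n1] = noise[0, :n1]              # identical beginnings for all m rows
    paths = np.cumsum(noise @ chol.T, axis=1)  # m fBm paths sharing first n1 steps
    covariate = paths[0, :n1]                  # the common beginning
    targets = np.quantile(paths[:, n1:], qs, axis=0)  # ground-truth quantile matrix
    return covariate, targets
```

The returned pair mirrors one dataset record: a covariate series of length N1 and a quantile matrix over the N2 continuation steps.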
    </sec>
    <sec id="sec-4">
      <title>4. Experiment and results</title>
      <p>
        As a result of applying the proposed method, the following dataset of fractal time series
(realizations of fractional Brownian motion) was created. One can freely download or use the
dataset on the Kaggle platform by searching for the dataset named "fBm Extreme Quantiles" or by
using the following link: https://www.kaggle.com/datasets/unfriendlyai/fbm-extreme-quantiles
The dataset was created with the following parameters:
• The set of Hurst exponent values comprises [0.3, 0.35, 0.45, 0.53, 0.6, 0.65, 0.72, 0.85, 0.9,
0.93]. This range is chosen to represent various types of time series behavior: antipersistent (values
less than 0.5), nearly independent (around 0.5), and persistent (greater than 0.5) series. The diverse
selection of Hurst values captures a broad range of potential dynamics in the data, thereby
creating a more robust and comprehensive dataset for model evaluation.
• The number of records per Hurst exponent is 50. Each Hurst exponent value has 50 records in the
evaluation dataset, sufficient for obtaining statistically significant experimental results when
comparing different prediction methods.
• The length N1 of the original time series is 128.
• The length N2 of the continuations of the original series, for which quantiles are provided, is 16.
• Number of continuation examples (M): 3,650,000 continuations are used to
calculate the quantiles. This number was chosen for the convenience of determining the true quantile
of an event that occurs once every hundred years under daily observation [<xref ref-type="bibr" rid="ref35">35</xref>]. The number of such
outcomes for accurate computation is taken as 100 (365 days x 100 years x 100 events = 3,650,000).
• The set of true quantiles includes the median (0.5), usual quantiles (0.05 and 0.95), and quantiles
corresponding to 100-year return levels T=100y (1/36500 and 1-1/36500) and 10-year return levels
T=10y (1/3650 and 1-1/3650). This provides a comprehensive range of quantiles for analysis, from
the most common to the most extreme (Figure 1).
• Time series increments are normalized (divided by the standard deviation of the increments).
• Original time series are presented as cumulative sums of increments. Quantiles are calculated
on their cumulative continuations.
• The training dataset consists of 10,000 examples of length 128 for covariates and the values of the
following 16 time steps for the target. These parameters are similar to those used in [<xref ref-type="bibr" rid="ref36">36</xref>]. In this
case, there is not enough data to calculate extreme quantiles of a hundred-year period, as 10,000
days is only about 27 years.
      </p>
      <p>The code for generating a file of the dataset for a specific Hurst exponent can be found at the
following link: https://www.kaggle.com/code/unfriendlyai/fbm-extreme-quantile-generator</p>
      <p>
        The obtained dataset was tested using three machine learning methods known for their efficiency
and speed, often used in Kaggle competitions, that can predict specific quantiles: LightGBM [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ],
CatBoost, and Statsmodels QuantReg [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ]. The results of the three models for one time series are shown
in Figures 2 and 3. The 0.05, 0.50, and 0.95 quantiles were satisfactorily predicted by all three methods.
The extreme quantiles were predicted unsatisfactorily for persistent fBm time series. For antipersistent
fBm time series with H=0.30, LightGBM and CatBoost showed more satisfactory results for extreme
quantiles, while the predictions from Statsmodels QuantReg remained unsatisfactory.
      </p>
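      <p>The evaluation setup can be sketched with scikit-learn's gradient boosting and a quantile objective standing in for the LightGBM/CatBoost quantile objectives used here (a sketch; the function name and parameters are ours):</p>

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_quantile_models(X, y, quantiles=(0.05, 0.5, 0.95)):
    # one boosted model per target quantile level, mirroring how gradient
    # boosting libraries are trained with a separate quantile objective each
    return {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                         n_estimators=100).fit(X, y)
            for q in quantiles}
```

Each fitted model predicts one conditional quantile, so the 0.05 and 0.95 models should bracket the median model's predictions on average.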
      <p>When the Hurst exponent is close to 0.5, the predicted values do not depend on the previous values
of the series, and these predictions are of little interest. When the Hurst exponent deviates from 0.5, the
predictions for the 0.05, 0.50, and 0.95 quantiles in all cases depended on the previous values of the
series, and all three models captured these dependencies.</p>
      <p>However, for extreme quantiles in this case, it was noted that the predictions often did not depend
(or almost did not depend) on the previous values. Whether for a persistent descending series or an
ascending one, the predictions were quite symmetrical relative to the last value of the series, unlike the
adequately predicted median (Figure 4).</p>
      <p>To verify the correctness of the examples in the dataset, and to establish limits for estimating the Hurst
exponent of predicted time series, the Hurst exponent of each time series of length 128 was
estimated using the Whittle algorithm. The results are shown in Table 1.</p>
      <p>For comparison, we generated the same number of time series for each value of the Hurst
exponent with the Davies-Harte method. The Hurst exponent estimates for the time series of the
dataset and for the fBm generated by the Davies-Harte method show similar scatter.</p>
      <p>The results of the comparison with the Davies-Harte method show that the time series produced by the
Hosking method with parallel calculation of the M continuations of one series are computed correctly
for all target H values (0.30-0.93; see Table 1).</p>
    </sec>
    <sec id="sec-5">
      <title>5. Discussions</title>
      <p>In this study, we successfully employed parallel computation using the Hosking method to obtain
the true values of extreme quantiles for fBm (fractional Brownian motion) time series. This was
achieved by substituting the normally distributed random numbers at the beginning of the series with
identical numbers across all the numerous variants of the series. By using 100 times more series variants than
the number of days in 100 years, we were able to prepare a dataset of time series and their true quantiles
with sufficient accuracy for the evaluation of machine learning methods predicting extreme quantiles.</p>
      <p>
        Although we could generate an unlimited number of training examples, inspired by [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], we limited
the training dataset to the same size, specifically 7000 training examples, and an additional 3000 for
validation used for hyperparameter selection (in our case, early stopping points). This corresponds to
approximately 19 years of daily observations. At the same time, the task was to predict an event with a
frequency of once in 100 years of daily observations (quantile 1/36500).
      </p>
      <p>Given such a limited volume of training data, conventional machine learning models were unable to
predict extreme quantiles. However, there were no issues with predicting the median and 0.95 quantile.</p>
      <p>We also noted that the presence of long-term dependencies in the time series, arising because the
time series is a realization of fractional Brownian motion, was essential. In the absence of long-term
dependencies (Hurst exponent close to 0.5), it was impossible to compare the effectiveness of prediction
methods. The prediction of extreme quantiles of persistent time series, compared to antipersistent ones,
proved to be a significant challenge and deserves special attention.</p>
      <p>Some models predicted extreme quantiles independently of the previous values (increments) of the
time series. This was noticeable by the symmetrical distribution of quantiles 1/36500 and 1-1/36500
relative to the last value of the time series for strongly persistent series (consistently increasing or
decreasing). Although in this case it sometimes seems that one of the extreme quantiles is predicted,
this is refuted by the symmetrical opposite quantile, which clearly does not depend on the input data.
This feature also needs to be taken into account when calculating the effectiveness of methods.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>This study presents a novel method for generating a dataset to evaluate the prediction of extreme
quantiles in fractional Brownian motion time series. Despite the challenges posed by limited training
data and long-term dependencies, our approach provides a foundation for further research into refining
existing prediction methods and exploring new machine learning approaches for this task.</p>
      <p>
        These findings highlight the challenges and potential avenues for improving the prediction of
extreme quantiles in fractal time series data, a task of significant relevance in risk assessment and other
fields. It's important to note that while we have proposed a method for creating a dataset for evaluation,
we have not proposed the prediction methods themselves. Further research is needed to refine these
methods, such as those proposed in [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], and to explore other machine learning approaches for this task.
This work lays the groundwork for such future investigations.
      </p>
    </sec>
    <sec id="sec-7">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Arpogaus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Voss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nigge-Uricher</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Dürr</surname>
          </string-name>
          , '
          <article-title>Probabilistic Short-Term Low-Voltage Load Forecasting using Bernstein-Polynomial Normalizing Flows'</article-title>
          ,
          <source>in ICML 2021, Workshop Tackling Climate Change with Machine Learning</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Banna</surname>
          </string-name>
          , Fractional Brownian Motion - Approximations and Projections. London, England: ISTE,
          <year>2019</year>
          . doi: 10.1002/9781119476771.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Eichhorn</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Volpe</surname>
          </string-name>
          , '
          <article-title>Measurement of anomalous diffusion using recurrent neural networks'</article-title>
          ,
          <source>Phys. Rev. E</source>
          , vol.
          <volume>100</volume>
          , no.
          <issue>1-1</issue>
          , p.
          <fpage>010102</fpage>
          , Jul.
          <year>2019</year>
          . doi: 10.1103/PhysRevE.100.010102.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bontempi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Ben</given-names>
            <surname>Taieb</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.-A.</given-names>
            <surname>Le Borgne</surname>
          </string-name>
          , '
          <article-title>Machine learning strategies for time series forecasting'</article-title>
          ,
          <source>in Business Intelligence</source>
          , Berlin, Heidelberg: Springer Berlin Heidelberg,
          <year>2013</year>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>77</lpage>
          . doi: 10.1007/978-3-642-36318-4_3.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Avati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Basu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Schuler</surname>
          </string-name>
          , '
          <article-title>NGBoost: Natural Gradient Boosting for Probabilistic Prediction</article-title>
          ', in
          <source>International Conference on Machine Learning</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y. P.</given-names>
            <surname>Faniband</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ishak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Sait</surname>
          </string-name>
          , '
          <article-title>A review of open source software tools for Time Series Analysis</article-title>
          ',
          <year>2022</year>
          . doi: 10.48550/arXiv.2203.05195.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Garcin</surname>
          </string-name>
          , '
          <article-title>Forecasting with fractional Brownian motion: a financial perspective</article-title>
          ',
          <year>2021</year>
          . doi: 10.48550/arXiv.2105.09140.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Granik</surname>
          </string-name>
          et al., '
          <article-title>Single-particle diffusion characterization by deep learning</article-title>
          ',
          <source>Biophys. J.</source>
          , vol.
          <volume>117</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>185</fpage>
          -
          <lpage>192</lpage>
          , Jul.
          <year>2019</year>
          . doi: 10.1016/j.bpj.2019.06.015.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Hamza</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. Y.</given-names>
            <surname>Hmood</surname>
          </string-name>
          , '
          <article-title>Comparison of Hurst exponent estimation methods</article-title>
          ',
          <source>JEAS</source>
          , vol.
          <volume>27</volume>
          , no.
          <issue>128</issue>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>183</lpage>
          , Jun.
          <year>2021</year>
          . doi: 10.33095/jeas.v27i128.2162.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Addison</surname>
            <given-names>Howard</given-names>
          </string-name>
          , Jay Evan Reid, Michael Lopez, and Will Cukierski. (
          <year>2019</year>
          ).
          <article-title>NFL Big Data Bowl</article-title>
          . Kaggle. https://kaggle.com/competitions/nfl-big-data-bowl-2020.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Addison</surname>
            <given-names>Howard</given-names>
          </string-name>
          , inversion, Spyros Makridakis, Vangelis. (
          <year>2020</year>
          ).
          <article-title>M5 Forecasting - Uncertainty</article-title>
          . Kaggle. https://kaggle.com/competitions/m5-forecasting-uncertainty
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Addison</surname>
            <given-names>Howard</given-names>
          </string-name>
          , inversion. (
          <year>2020</year>
          ).
          <article-title>COVID19 Global Forecasting (Week 5)</article-title>
          . Kaggle. https://kaggle.com/competitions/covid19-global-forecasting-week-5.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>X.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Su</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kong</surname>
          </string-name>
          , '
          <article-title>Prediction for time series with CNN and LSTM</article-title>
          ', in
          <source>Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019)</source>
          , Singapore: Springer Singapore,
          <year>2020</year>
          , pp.
          <fpage>631</fpage>
          -
          <lpage>641</lpage>
          . doi: 10.1007/978-981-15-0474-7_59.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Khlamov</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Savanevych</surname>
          </string-name>
          , '
          <article-title>Big astronomical datasets and discovery of new celestial bodies in the solar system in automated mode by the CoLiTec software</article-title>
          ', in
          <source>Knowledge Discovery in Big Data from Astronomy and Earth Observation</source>
          , Elsevier,
          <year>2020</year>
          , pp.
          <fpage>331</fpage>
          -
          <lpage>345</lpage>
          . doi: 10.1016/B978-0-12-819154-5.00030-8.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Radivilova</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Zinkevich</surname>
          </string-name>
          , '
          <article-title>Forecasting weakly correlated time series in tasks of electronic commerce</article-title>
          ', in
          <source>2017 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</source>
          , Lviv, Ukraine,
          <year>2017</year>
          . doi: 10.1109/STCCSIT.2017.8098793.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bulakh</surname>
          </string-name>
          , and T. Radivilova, '
          <article-title>Machine learning classification of multifractional Brownian motion realizations</article-title>
          ',
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2608</volume>
          , pp.
          <fpage>980</fpage>
          -
          <lpage>989</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zinchenko</surname>
          </string-name>
          , and T. Radivilova, '
          <article-title>Classification of time realizations using machine learning recognition of recurrence plots</article-title>
          ', in
          <source>Advances in Intelligent Systems and Computing</source>
          , Cham: Springer International Publishing,
          <year>2021</year>
          , pp.
          <fpage>687</fpage>
          -
          <lpage>696</lpage>
          . doi: 10.1007/978-3-030-54215-3_44.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Pichugina</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Zinchenko</surname>
          </string-name>
          , '
          <article-title>Clustering time series of complex dynamics by features'</article-title>
          ',
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3132</volume>
          , pp.
          <fpage>83</fpage>
          -
          <lpage>93</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Pavlenko</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Khatsko</surname>
          </string-name>
          , '
          <article-title>Wavelet-based estimation of Hurst exponent using neural network'</article-title>
          ,
          <source>in 2022 IEEE 17th International Conference on Computer Sciences and Information Technologies (CSIT)</source>
          , Lviv, Ukraine,
          <year>2022</year>
          . doi: 10.1109/CSIT56902.2022.10000906.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>P.</given-names>
            <surname>Kowalek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Loch-Olszewska</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Szwabiński</surname>
          </string-name>
          , '
          <article-title>Classification of diffusion modes in single-particle tracking data: Feature-based versus deep-learning approach</article-title>
          ',
          <source>Phys. Rev. E</source>
          , vol.
          <volume>100</volume>
          , no.
          <issue>3-1</issue>
          , p.
          <fpage>032410</fpage>
          , Sep.
          <year>2019</year>
          . doi: 10.1103/PhysRevE.100.032410.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xu</surname>
          </string-name>
          , and G. Zhang, '
          <article-title>Time series classification with deep neural networks based on Hurst exponent analysis</article-title>
          ', in
          <source>Neural Information Processing</source>
          , Cham: Springer International Publishing,
          <year>2017</year>
          , pp.
          <fpage>194</fpage>
          -
          <lpage>204</lpage>
          . doi: 10.1007/978-3-319-70087-8_21.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S.</given-names>
            <surname>Makridakis</surname>
          </string-name>
          et al., '
          <article-title>The M5 uncertainty competition: Results, findings and conclusions</article-title>
          ',
          <source>Int. J. Forecast.</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>1365</fpage>
          -
          <lpage>1385</lpage>
          , Oct.
          <year>2022</year>
          . doi: 10.1016/j.ijforecast.2021.10.009.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>A.</given-names>
            <surname>März</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Kneib</surname>
          </string-name>
          , '
          <article-title>Distributional Gradient Boosting Machines</article-title>
          ',
          <year>2022</year>
          . doi: 10.48550/arXiv.2204.00778.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>G.</given-names>
            <surname>Millán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Osorio-Comparán</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Lefranc</surname>
          </string-name>
          , '
          <article-title>Preliminaries on the Accurate Estimation of the Hurst Exponent Using Time Series</article-title>
          ', in
          <source>2021 IEEE International Conference on Automation/XXIV Congress of the Chilean Association of Automatic Control (ICA-ACCA)</source>
          , Valparaíso, Chile,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi: 10.1109/ICAACCA51523.2021.9465274.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ramadevi</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Bingi</surname>
          </string-name>
          , '
          <article-title>Chaotic time series forecasting approaches using machine learning techniques: A review'</article-title>
          ,
          <source>Symmetry (Basel)</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>5</issue>
          , p.
          <fpage>955</fpage>
          , May
          <year>2022</year>
          . doi: 10.3390/sym14050955.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shang</surname>
          </string-name>
          , '
          <article-title>A comparison of Hurst exponent estimators in long-range dependent curve time series'</article-title>
          ,
          <source>J. Time Ser. Econom.</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>1</issue>
          , Jun.
          <year>2020</year>
          . doi: 10.1515/jtse-2019-0009.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Sidhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ibrahim Ali Metwaly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Bhattacharyya</surname>
          </string-name>
          , '
          <article-title>Short term trading models using Hurst exponent and machine learning</article-title>
          ',
          <source>SSRN Electron. J.</source>
          ,
          <year>2021</year>
          . doi: 10.2139/ssrn.3824032.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>W.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cattani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Chi</surname>
          </string-name>
          , '
          <article-title>Fractional Brownian motion: Difference iterative forecasting models'</article-title>
          ,
          <source>Chaos Solitons Fractals</source>
          , vol.
          <volume>123</volume>
          , pp.
          <fpage>347</fpage>
          -
          <lpage>355</lpage>
          , Jun.
          <year>2019</year>
          . doi: 10.1016/j.chaos.2019.04.021.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Stasinopoulos</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Rigby</surname>
          </string-name>
          , '
          <article-title>Generalized Additive Models for Location Scale and Shape (GAMLSS) in R</article-title>
          ',
          <source>J. Stat. Soft.</source>
          , vol.
          <volume>23</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>46</lpage>
          , Dec.
          <year>2007</year>
          . doi: 10.18637/jss.v023.i07.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>H.</given-names>
            <surname>Tyralis</surname>
          </string-name>
          and G. Papacharalampous, '
          <article-title>A review of probabilistic forecasting and prediction with machine learning</article-title>
          ',
          <year>2022</year>
          . doi: 10.48550/arXiv.2209.08307.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Torres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hadjout</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sebaa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Martínez-Álvarez</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Troncoso</surname>
          </string-name>
          , '
          <article-title>Deep learning for time series forecasting: A survey</article-title>
          ',
          <source>Big Data</source>
          , vol.
          <volume>9</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>21</lpage>
          , Feb.
          <year>2021</year>
          . doi: 10.1089/big.2020.0159.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Song</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z. Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          , '
          <article-title>Direct quantile regression for nonparametric probabilistic forecasting of wind power generation'</article-title>
          ,
          <source>IEEE Trans. Power Syst.</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>2767</fpage>
          -
          <lpage>2778</lpage>
          , Jul.
          <year>2017</year>
          . doi: 10.1109/TPWRS.2016.2625101.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>P.</given-names>
            <surname>Wielopolski</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Zięba</surname>
          </string-name>
          , '
          <article-title>TreeFlow: Going beyond tree-based Gaussian probabilistic regression</article-title>
          ',
          <year>2022</year>
          . doi: 10.48550/arXiv.2206.04140.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pourpanah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zeng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          , '
          <article-title>A survey on epistemic (model) uncertainty in supervised learning: Recent advances and applications</article-title>
          ',
          <source>Neurocomputing</source>
          , vol.
          <volume>489</volume>
          , pp.
          <fpage>449</fpage>
          -
          <lpage>465</lpage>
          , Jun.
          <year>2022</year>
          . doi: 10.1016/j.neucom.2021.10.119.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>O. C.</given-names>
            <surname>Pasche</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Engelke</surname>
          </string-name>
          , '
          <article-title>Neural networks for extreme quantile regression with an application to forecasting of flood risk</article-title>
          ',
          <year>2022</year>
          . doi: 10.48550/arXiv.2208.07590.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>G.</given-names>
            <surname>Ke</surname>
          </string-name>
          et al., '
          <article-title>LightGBM: A Highly Efficient Gradient Boosting Decision Tree</article-title>
          ', in
          <source>Advances in Neural Information Processing Systems</source>
          ,
          <year>2017</year>
          , vol.
          <volume>30</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>R.</given-names>
            <surname>Koenker</surname>
          </string-name>
          and
          <string-name>
            <given-names>K. F.</given-names>
            <surname>Hallock</surname>
          </string-name>
          , '
          <article-title>Quantile regression</article-title>
          ',
          <source>J. Econ. Perspect.</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>156</lpage>
          , Nov.
          <year>2001</year>
          . doi: 10.1257/jep.15.4.143.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>