<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Time series classification using F-transform</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Przemyslaw Grzegorzewski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antoni Kędzierski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Mathematics and Information Science, Warsaw University of Technology</institution>
          ,
          <addr-line>Koszykowa 75, 00-662 Warsaw</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Systems Research Institute, Polish Academy of Sciences</institution>
          ,
          <addr-line>Newelska 6, 01-447 Warsaw</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we propose a new methodology for time series classification. It employs two techniques: the fuzzy transform (F-transform) and the well-known decision tree classifier. The combination of these two tools yields a new classification method that shows good statistical properties and could be a noteworthy alternative to the 1NN classifier, commonly considered the best for time series.</p>
      </abstract>
      <kwd-group>
        <kwd>Fuzzy transform</kwd>
        <kwd>classification</kwd>
        <kwd>decision tree</kwd>
        <kwd>distances</kwd>
        <kwd>time series</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Time series data form an increasing proportion of the world’s data supply. The omnipresence of
time series and the exponentially growing size of databases resulted in an explosion of interest
in Data Mining methods adapted to the specificity of time series. In typical time series mining
tasks, such as indexing, grouping, classification, forecasting, segmentation, summarization, and
anomaly detection, the analysis of the similarity between the series plays an important role
(see, e.g. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]). The appropriate selection of such a similarity measure may be of significant
importance for the quality and effectiveness of statistical inference [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>
        The so-called fuzzy transform (or F-transform, for short), introduced by Perfilieva [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], is a
special technique that can be used to obtain a simple approximate representation of functions
that captures their essential features. The theory of F-transform was developed extensively in
recent years and brought many successful applications in image processing, data analysis, and
signal processing. It also seems to have interesting potential in other fields, like ordinary and
partial differential equations with fuzzy initial conditions. Thus, it should come as no surprise
that the F-transform found an application in time series analysis, especially for time series
forecasting (see, e.g., [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ]).
      </p>
      <p>The main goal of this contribution is to compare the best distance measures used in the most
popular 1NN method with a new classification method: a random forest based on the F-transform.
We want to test the usefulness of the F-transform in time series classification.</p>
      <p>The paper is organized as follows: in Section 2 we recall basic information on the F-transform.
Next, in Section 3, we make a short introduction to time series classification and propose how to
apply the F-transform for this task. Then, in Section 4, we describe the investigation conducted
to examine the proposed classification method and discuss some of the experimental results.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Fuzzy transform</title>
      <p>
        In this section, we provide only the definition and basic concepts related to the F-transform.
For more details we refer, e.g., to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>Let us consider a continuous real function f : [a, b] → R. The F-transform is defined
with respect to a so-called fuzzy partition of the domain [a, b] by a finite number of fuzzy sets
satisfying the axioms specified in the following definition.</p>
      <p>Definition 1. Let x_1 &lt; . . . &lt; x_n denote fixed nodes within [a, b], such that x_1 = a and x_n = b
and n ⩾ 3. We say that fuzzy sets A_1, . . . , A_n form a fuzzy partition of [a, b] if they satisfy the
following conditions for k = 1, . . . , n:
1. A_k(x_k) = 1;
2. A_k(x) = 0 for x ∉ (x_{k−1}, x_{k+1}), where for uniformity of notation, we put x_0 = x_1 = a and
x_{n+1} = x_n = b;
3. A_k is continuous;
4. A_k is strictly increasing in [x_{k−1}, x_k] and strictly decreasing in [x_k, x_{k+1}];
5. ∑_{k=1}^{n} A_k(x) = 1 for each x ∈ [a, b].</p>
      <p>
        The last axiom is known as orthogonality or the Ruspini condition. The membership functions
of fuzzy sets A_1, . . . , A_n forming a fuzzy partition are called basic functions. It is worth noting
that the shapes of the basic functions are not predetermined and can be selected to meet some
additional specific properties. A fuzzy partition is called uniform if the nodes are equidistant
(i.e. x_k = x_{k−1} + h for k = 2, . . . , n and some fixed h &gt; 0) and the fuzzy sets A_2, . . . , A_{n−1} are shifted
copies of the symmetrized A_1 (or A_n; for details see [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]). Once a fuzzy partition is selected we can
define the F-transform.
      </p>
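      <p>For illustration, a uniform fuzzy partition with triangular basic functions, satisfying all five conditions of Definition 1, may be sketched as follows. This is a minimal numerical example; the helper names and the use of Python/NumPy are our own choices, not part of the original method:</p>

```python
import numpy as np

def uniform_triangular_partition(a, b, n):
    """Nodes and triangular basic functions A_1, ..., A_n of a
    uniform fuzzy partition of [a, b] (n >= 3)."""
    nodes = np.linspace(a, b, n)      # x_1 = a, ..., x_n = b, equidistant
    h = nodes[1] - nodes[0]           # x_k = x_{k-1} + h

    def basic(k):
        # A_k peaks at x_k and vanishes outside (x_{k-1}, x_{k+1})
        return lambda x: np.maximum(0.0, 1.0 - np.abs(np.asarray(x, float) - nodes[k]) / h)

    return nodes, [basic(k) for k in range(n)]

nodes, A = uniform_triangular_partition(0.0, 1.0, 5)
x = np.linspace(0.0, 1.0, 101)
total = sum(A_k(x) for A_k in A)      # Ruspini condition: sums to 1 on [a, b]
```

      <p>Checking that the pointwise sum of the basic functions equals 1 on a grid verifies the orthogonality (Ruspini) condition numerically.</p>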
      <p>Definition 2. Let A_1, . . . , A_n denote a fuzzy partition of [a, b] and let f : [a, b] → R be a
continuous function. The n-tuple (F_1, . . . , F_n) of real numbers given by</p>
      <p>F_k = ∫_a^b f(x) A_k(x) dx / ∫_a^b A_k(x) dx,   k = 1, . . . , n,   (1)</p>
      <p>is a (direct) fuzzy transform (F-transform) of f with respect to the given fuzzy partition.</p>
      <p>In practical applications, f is usually not given analytically. Instead, we are provided with
some data points obtained from observations or measurements. Thus Def. 2 can be modified by
replacing the integrals in (1) with finite sums.</p>
      <p>More precisely, let A_1, . . . , A_n denote a fuzzy partition of [a, b] and let the function
f : [a, b] → R be given at fixed points p_1, . . . , p_m ∈ [a, b], where m &gt; n. We say that the set
of points {p_1, . . . , p_m} is sufficiently dense with respect to the fuzzy partition A_1, . . . , A_n if for
every k ∈ {1, . . . , n} there exists j ∈ {1, . . . , m} such that A_k(p_j) &gt; 0. Now we are able to
define the so-called discrete F-transform.</p>
      <p>Definition 3. Let A_1, . . . , A_n denote a fuzzy partition of [a, b] and let f : [a, b] → R be a
function known at points p_1, . . . , p_m ∈ [a, b]. Moreover, let us assume that the set {p_1, . . . , p_m} is
sufficiently dense with respect to the fuzzy partition A_1, . . . , A_n. Then the n-tuple (F_1, . . . , F_n)
of real numbers given by</p>
      <p>F_k = ∑_{j=1}^{m} f(p_j) A_k(p_j) / ∑_{j=1}^{m} A_k(p_j),   k = 1, . . . , n,   (2)</p>
      <p>is a (direct) discrete fuzzy transform (discrete F-transform) of f with respect to the given fuzzy
partition.</p>
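      <p>With a uniform triangular partition, formula (2) amounts to a few lines of linear algebra. The sketch below uses our own naming and assumes only NumPy:</p>

```python
import numpy as np

def discrete_f_transform(p, f_vals, nodes):
    """Discrete F-transform (F_1, ..., F_n) of the data f_vals observed
    at points p, w.r.t. a uniform triangular partition with the given nodes."""
    h = nodes[1] - nodes[0]
    # membership matrix: A[k, j] = A_k(p_j) for triangular basic functions
    A = np.maximum(0.0, 1.0 - np.abs(nodes[:, None] - p[None, :]) / h)
    weights = A.sum(axis=1)
    if np.any(weights == 0.0):
        raise ValueError("the points are not sufficiently dense for this partition")
    # F_k = sum_j f(p_j) A_k(p_j) / sum_j A_k(p_j), i.e. formula (2)
    return (A @ f_vals) / weights

p = np.linspace(0.0, 1.0, 200)       # p_1, ..., p_m with m = 200
f_vals = p ** 2                      # f known only at these points
nodes = np.linspace(0.0, 1.0, 7)     # x_1 = a, ..., x_n = b with n = 7
F = discrete_f_transform(p, f_vals, nodes)
```

      <p>Each component F_k is a weighted mean of the data around the node x_k, so here the components grow monotonically from near f(a) to near f(b).</p>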
      <p>In applications we usually refer to the F-transform without specifying explicitly whether it is
the continuous or the discrete one, i.e. according to Def. 2 or Def. 3, assuming that the reader
can recognize the actual underlying concept from the context.</p>
      <p>The F-transform (as well as the discrete F-transform) of f will be denoted by F[f] =
(F_1, . . . , F_n), where the number F_k is called the k-th component of the F-transform. One can
see that the components of the F-transform are just weighted mean values of the original
function f, where the weights are determined by the basic functions A_1, . . . , A_n.</p>
      <p>The F-transform itself would not be interesting enough without its inverse formula, which
allows us to reconstruct f from F[f].</p>
      <p>Definition 4. Let F[f] = (F_1, . . . , F_n) be the direct F-transform of f with respect to a fuzzy
partition A_1, . . . , A_n of [a, b]. Then the function f_{F,n} : [a, b] → R given by</p>
      <p>f_{F,n}(x) = ∑_{k=1}^{n} F_k · A_k(x),   x ∈ [a, b],   (3)</p>
      <p>is called the inverse F-transform of f.</p>
      <p>
        It is seen that the inverse F-transform is a continuous function on [a, b]. What is more,
it can be shown that the sequence of inverse F-transforms {f_{F,n}}_{n=3}^{∞} converges uniformly
to the initial function f as n → ∞. Moreover, this result is valid both when the fuzzy partition is
uniform [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] or non-uniform [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Thus, to conclude, while the direct F-transform F[f] may serve as a discrete approximate
representation of a function f : [a, b] → R, the inverse F-transform f_{F,n} is a suitable continuous
approximation of f.</p>
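      <p>This approximation property can be checked numerically: applying the direct discrete F-transform and then the inverse F-transform of Definition 4 yields a reconstruction whose error shrinks as the uniform partition is refined. The sketch below uses our own helper names and assumes NumPy:</p>

```python
import numpy as np

def triangular_memberships(nodes, x):
    """Matrix A[k, j] = A_k(x_j) for a uniform triangular fuzzy partition."""
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(nodes[:, None] - np.asarray(x)[None, :]) / h)

def reconstruction_error(f, a, b, n, m=1000):
    """Max |f - f_{F,n}| on a grid: direct discrete F-transform of f,
    followed by the inverse F-transform f_{F,n}(x) = sum_k F_k A_k(x)."""
    x = np.linspace(a, b, m)
    A = triangular_memberships(np.linspace(a, b, n), x)
    F = (A @ f(x)) / A.sum(axis=1)   # direct discrete F-transform, formula (2)
    f_rec = F @ A                    # inverse F-transform evaluated at x
    return float(np.max(np.abs(f(x) - f_rec)))

# uniform convergence in practice: the error decreases as n grows
errs = [reconstruction_error(np.sin, 0.0, 2 * np.pi, n) for n in (5, 9, 17, 33)]
```

      <p>For a smooth function such as the sine, the maximum reconstruction error drops steadily with the number of partition nodes, in line with the uniform convergence result quoted above.</p>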
      <p>
        Both of these statements make the F-transform an extremely useful tool in various fields
and applications. One of them is time series analysis. Until now, it was used there mainly and
successfully for prediction (see, e.g., [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ]). Our goal is to test its suitability for other tasks
related to time series analysis, in particular, the classification of time series.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Time series classification</title>
      <p>
        Time series classification is an important problem in data analysis. Although many algorithms
have been proposed, the nearest neighbor (NN) classifier still seems to be the most appreciated
one, mostly because of its simplicity and noticeably good performance in many situations.
For time series, 1NN classifiers are mainly used due to the high dimensionality of the data. Their
high performance, especially with dynamic time warping (DTW) and its modified versions used
as the distance measure, has been confirmed by many experiments (see, e.g., [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]).
      </p>
      <p>In our study, we set ourselves the goal of applying renowned classification methods, such
as the random forest classifier, in the field of time series. The reason we use the F-transform
is to reduce the high dimensionality of time series data. Indeed, using the F-transform we can
compress time series into significantly smaller vectors and then perform the classification task on
them.</p>
      <p>Let X = {x_t : t ∈ T} be a given time series. Obviously, X might be viewed as a function
f(t) = x_t defined on a fixed time interval which is not given analytically, but instead some
measurements x_t at points t ∈ T are available. Assuming {t : t ∈ T} is sufficiently dense
with respect to the fuzzy partition A_1, . . . , A_n and following (2), the F-transform of X is given
by F[X] = (F_1, . . . , F_n), where</p>
      <p>F_k = ∑_{t∈T} x_t A_k(t) / ∑_{t∈T} A_k(t),   k = 1, . . . , n.   (4)</p>
      <p>Looking at the time series representation given by (4), it becomes clear that the use of the
F-transform makes it possible to significantly reduce the dimensionality of the data.</p>
      <p>Further on we will conduct tests to check whether our approach combining decision trees
and the F-transform can compete with other widespread classification tools.</p>
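      <p>The whole pipeline, compression of each series via (4) followed by a standard feature-based classifier, can be sketched as follows. The synthetic two-class data, the number of components n = 15, and the forest settings are our own illustrative choices (scikit-learn assumed); the paper does not prescribe these values:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def f_transform_features(series, n):
    """Compress a time series of length m to its n F-transform components,
    using a uniform triangular fuzzy partition of the time axis (formula (4))."""
    m = len(series)
    t = np.arange(m, dtype=float)
    nodes = np.linspace(0.0, m - 1.0, n)
    h = nodes[1] - nodes[0]
    A = np.maximum(0.0, 1.0 - np.abs(nodes[:, None] - t[None, :]) / h)
    return (A @ series) / A.sum(axis=1)

# toy two-class problem: noisy sine vs. noisy ramp trajectories
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 300)
labels = rng.integers(0, 2, size=200)
X_raw = np.stack([(np.sin(2 * np.pi * t) if y == 1 else t)
                  + 0.3 * rng.standard_normal(300) for y in labels])

X = np.stack([f_transform_features(s, 15) for s in X_raw])   # 300 -> 15 features
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], labels[:150])
acc = clf.score(X[150:], labels[150:])
```

      <p>After the compression step, any feature-based classifier (e.g. logistic regression) could be plugged in the same way.</p>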
    </sec>
    <sec id="sec-4">
      <title>4. Experimental results</title>
      <p>
        To investigate how the proposed F-transform-based random forest classifier behaves and to
compare it with classifiers utilizing various distances and similarity measures we conducted
an extensive experimental study. It was performed on benchmark datasets from the UCR
Time Series Classification Archive [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. The UCR time series repository contains 128
datasets originating from different domains and such sources as electrocardiograms, power
measurements, sensor readings, spectroscopy, traffic data, simulated data, etc. Within the data,
one can find cases with a highly diversified number of classes, with various numbers of time series per
dataset, and time series of different lengths.
      </p>
      <p>
        In our experiment, we analyzed 29 datasets included in the UCR repository (i.e. PowerCons,
Coffee, BME, SmoothSubspace, Wafer, Plane, Strawberry, ItalyPowerDemand, Meat, GunPoint,
CBF, UMD, BeetleFly, Symbols, MoteStrain, ECG200, Trace, SwedishLeaf, FaceFour, Yoga, Beef,
Wine, Fish, Fungi, ShapesAll, Ham, FiftyWords, ElectricDevices, BirdChicken). They have been
specially selected to cover a wide range of possible problems that can be encountered when
analyzing time series, since they differ in the number of observations, the sizes of the training
and test sets, the shape of their trajectories, etc. For instance, PowerCons contains data on
the energy consumption of French households over two seasons: heating and summer. The
collected trajectories contain clear outliers related to increased energy
consumption. The Coffee dataset contains spectrograms of two different types of coffee: Robusta
and Arabica. The BME set is artificially generated data representing three types of trajectories:
those containing a local maximum at the beginning of the series, those containing a local maximum at the
end of the series, and those having no such peak. Wafer contains trajectories corresponding to records
from several sensors monitoring the production process of so-called silicon wafers, where the
observation labels say whether the record describes a normal or a disturbed process. The Plane
set contains the outlines of seven different airplane models converted into one-dimensional
time series. What characterizes contour data is often high variability over
a small period, corresponding to small indentations in the shape. Strawberry contains
spectrograms of fruit mousses made from strawberries or from strawberries with the addition
of other fruits. For a more detailed description of the datasets, we refer to [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ] and the
bibliography cited there.
      </p>
      <p>
        The suggested F-transform-based random forest classifier was compared with the most
effective variants of the 1NN algorithm equipped with 55 metrics and similarity measures described
in [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. These distances/similarity measures could be grouped into four categories: shape-based
measures, edit-based measures, feature-based measures and structure-based measures [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        Figure 1 presents a comparison of the considered classification methods. The box plots given
there show the distribution of their accuracy obtained for all 29 benchmarks. As was to be expected,
there is no definite winner, but the most efficient classifiers are based on distances related to
DTW and its modifications. This result is in line with the conclusions of the research conducted
by Górecki and Piasecki [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. However, it should be underlined that the method based on
the F-transform appears among the winning approaches. And although it ranks eighth in
terms of the median, it is worth noting that the classifier utilizing the F-transform is the most
stable method on the list. Indeed, it did not work badly on any of the 29 analyzed datasets (there are no
outliers on its box plot) and, at the same time, it has quite a small dispersion (both the range and the
interquartile range).
      </p>
      <p>The good properties of the method utilizing the F-transform are confirmed by Figure 2,
where each considered method is placed on a map described by two features: the mean
error and the standard deviation (more precisely, the position of a method corresponds to
the upper left corner of the rectangle representing it). As is easily seen, the
proposed classification method based on the F-transform reveals the smallest standard deviation
of the error and one of the smallest mean errors.</p>
      <p>Additionally, all considered methods were grouped into clusters, with the mean
errors and standard deviations used as features. Figure 2 shows the results of k-means
clustering, where the optimal number of clusters turned out to be 3. Hence we
obtained 3 disjoint groups: weak and unstable (yellow), moderate (green), and highly efficient
and stable (blue). Our classification method based on the F-transform belongs to the “blue”
cluster, but we marked it in red to make it easier to identify in the drawing.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Our research confirmed that no classification method, whether based on any of the distance
measures or on our F-transform classifier, is the best for all available datasets. However, there is
a group of methods performing significantly better than the others. This group includes the
proposed classifier based on the F-transform. The limited space does not allow the presentation
and discussion of all the results obtained but, as we believe, the reader may be convinced
of the high quality of the proposed method based on what is presented in this contribution.
Anyway, classifiers obtained with the F-transform challenge the methods considered to be the
best at present, i.e. the 1NN algorithm with an appropriate metric.</p>
      <p>Moreover, the F-transform enables the use of classification methods other than distance-based
ones (e.g. decision trees, logistic regression), as it significantly reduces the dimensionality of the
data. Thanks to the F-transform, we can move to a completely new level of classification and use
completely different methods which, as it turns out, work well. We hope that our study will
also contribute to increasing interest in the F-transform, showing its usefulness not only in
forecasting (as demonstrated earlier), but also in other time series analysis tasks.</p>
      <p>Many questions and problems are still open. In particular, we want to examine whether there is a
significant relationship between the fuzzy partition selection and the resulting classification.
We have observed that the F-transform works effectively for the classification of spectrographic
data. Hence, we want to identify a class of time series for which the F-transform method reveals
the best properties as a classifier. Finally, in the near future, we plan to examine how to
apply the F-transform in other time series analysis problems, such as cluster analysis or anomaly
detection.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Ratanamahatana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunopulos</surname>
          </string-name>
          , E. Keogh,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vlachos</surname>
          </string-name>
          , G. Das,
          <article-title>Mining time series data</article-title>
          , in: O.
          <string-name>
            <surname>Maimon</surname>
          </string-name>
          , L. Rokach (Eds.),
          <source>Data Mining and Knowledge Discovery Handbook</source>
          , Springer,
          <year>2010</year>
          , pp.
          <fpage>1049</fpage>
          -
          <lpage>1077</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-0-
          <fpage>387</fpage>
          -09823-4_
          <fpage>56</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Górecki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Piasecki</surname>
          </string-name>
          ,
          <article-title>An experimental evaluation of time series classification using various distance measures</article-title>
          ,
          <source>Archives of Data Science, Series A</source>
          <volume>5</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . doi:
          <volume>10</volume>
          .5445/ KSP/1000087327/07.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Górecki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Piasecki</surname>
          </string-name>
          ,
          <article-title>A comprehensive comparison of distance measures for time series classification</article-title>
          , in: A.
          <string-name>
            <surname>Steland</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Rafajłowicz</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          Szajowski (Eds.),
          <source>Stochastic Models, Statistics and Their Applications</source>
          , volume
          <volume>294</volume>
          of Springer Proceedings in Mathematics &amp; Statistics, Springer Nature,
          <year>2019</year>
          , pp.
          <fpage>409</fpage>
          -
          <lpage>428</lpage>
          . URL: https://doi.org/10.1007/978-3-
          <fpage>030</fpage>
          -28665-1_
          <fpage>31</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>I. Perfilieva</surname>
          </string-name>
          ,
          <article-title>Fuzzy transforms: theory and applications</article-title>
          ,
          <source>Fuzzy Sets and Systems</source>
          <volume>157</volume>
          (
          <year>2006</year>
          )
          <fpage>993</fpage>
          -
          <lpage>1023</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.fss.
          <year>2005</year>
          .
          <volume>11</volume>
          .012.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stepnicka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dvorak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pavliska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vavrickova</surname>
          </string-name>
          ,
          <article-title>A linguistic approach to time series modeling with the help of f-transform</article-title>
          ,
          <source>Fuzzy Sets and Systems</source>
          <volume>180</volume>
          (
          <year>2011</year>
          )
          <fpage>164</fpage>
          -
          <lpage>184</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.fss.
          <year>2011</year>
          .
          <volume>02</volume>
          .017.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stepnicka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cortez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Donate</surname>
          </string-name>
          , L. Stepnickova,
          <article-title>Forecasting seasonal time series with computational intelligence: On recent methods and the potential of their combinations</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>40</volume>
          (
          <year>2013</year>
          )
          <fpage>1981</fpage>
          -
          <lpage>1992</lpage>
          . URL: http://dx.doi.org/10.1016/j. eswa.
          <year>2012</year>
          .
          <volume>10</volume>
          .001.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mirshahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Novak</surname>
          </string-name>
          ,
          <article-title>A fuzzy method for evaluating similar behavior between assets</article-title>
          ,
          <source>Soft Computing</source>
          <volume>25</volume>
          (
          <year>2021</year>
          )
          <fpage>7813</fpage>
          --
          <lpage>7823</lpage>
          . URL: https://doi.org/10.1007/s00500-021-05639-y.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Novak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mirshahi</surname>
          </string-name>
          ,
          <article-title>On the similarity and dependence of time series</article-title>
          ,
          <source>Mathematics</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <article-title>550</article-title>
          . URL: https://doi.org/10.3390/math9050550.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Stepnicka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Polakovic</surname>
          </string-name>
          ,
          <article-title>A neural network approach to the fuzzy transform</article-title>
          ,
          <source>Fuzzy Sets and Systems</source>
          <volume>160</volume>
          (
          <year>2009</year>
          )
          <fpage>1037</fpage>
          -
          <lpage>1047</lpage>
          . doi:doi:10.1016/j.fss.
          <year>2008</year>
          .
          <volume>11</volume>
          .029.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Dau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Keogh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kamgar</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.-C. M. Yeh</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Gharghabi</surname>
            ,
            <given-names>C. A.</given-names>
          </string-name>
          <string-name>
            <surname>Ratanamahatana</surname>
            , Yanping,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Begum</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Bagnall</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Mueen</surname>
            , G. Batista,
            <given-names>Hexagon-ML</given-names>
          </string-name>
          ,
          <article-title>The ucr time series classification archive</article-title>
          ,
          <year>2018</year>
          . https://www.cs.ucr.edu/~eamonn/time_series_data_
          <year>2018</year>
          /.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Dau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bagnall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kamgar</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.-C. M. Yeh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Gharghabi</surname>
            ,
            <given-names>C. A.</given-names>
          </string-name>
          <string-name>
            <surname>Ratanamahatana</surname>
          </string-name>
          , E. Keogh,
          <source>The ucr time series archive</source>
          ,
          <year>2019</year>
          . https://doi.org/10.48550/arXiv.
          <year>1810</year>
          .
          <volume>07758</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Esling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Agon</surname>
          </string-name>
          ,
          <article-title>Time-series data mining, ACM Computing Surveys (CSUR) 45 (</article-title>
          <year>2012</year>
          ). doi:
          <volume>10</volume>
          .1145/2379776.2379788.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>