Proceedings of the 1st International Workshop on Advanced Analytics and Learning on Temporal Data (AALTD 2015)




      Temporal and Frequential Metric Learning
         for Time Series kNN Classification
    Cao-Tri Do¹²³, Ahlame Douzal-Chouakria², Sylvain Marié¹, and Michèle Rombaut³
                    ¹ Schneider Electric, France
                    ² LIG, University of Grenoble Alpes, France
                    ³ GIPSA-Lab, University of Grenoble Alpes, France


       Abstract. This work proposes a temporal and frequential metric learning
       framework for time series nearest neighbor classification. For that,
       time series are embedded into a pairwise space where a combination
       function is learned based on a maximum margin optimization process.
       A wide range of experiments is conducted to evaluate the ability of the
       learned metric on time series kNN classification.


Keywords: Metric learning, Time series, kNN, Classification, Spectral metrics.

1    Introduction
Due to their temporal and frequential nature, time series constitute complex data to analyze with standard machine learning approaches [1]. To classify such challenging data, the metric should bring time series of identical classes closer together and separate those of different classes. Temporal data may be compared on their values. The most frequently used value-based metrics are the Euclidean distance and Dynamic Time Warping (dtw), the latter to cope with delays [2, 3]. Time series can also be compared on their dynamics and frequential characteristics [4, 5]. Promising approaches aim to learn a Mahalanobis distance or a kernel function for a specific classifier [6, 7]. Other works investigate the representation paradigm by representing objects in a dissimilarity space, in which dissimilarity combinations and metric learning are studied [8, 9]. The idea in this paper is to combine basic metrics into a discriminative one for a kNN classifier. Following the metric learning approach driven by nearest neighbors of Weinberger & Saul [6], we extend the work of Do et al. [10] to temporal and frequential characteristics. The main idea is to embed pairs of time series in a space whose dimensions are basic temporal and frequential metrics, and to learn a combination function in that space based on a large margin optimization process.
    The main contributions of the paper are: a) to propose a new temporal and frequential metric learning framework for time series nearest neighbor classification, b) to learn a combined metric involving amplitude, behavior and frequential characteristics, and c) to conduct extensive experiments to study the ability of the learned metric. The rest of the paper is organized as follows. Section 2 briefly recalls the major metrics for time series. In Section 3, we present the proposed


Copyright © 2015 for this paper by its authors. Copying permitted for private and academic purposes.

metric learning approach. Finally, Section 4 presents the experiments conducted
and discusses the results obtained.

2    Time series metrics
Let xi = (xi1, ..., xiT) and xj = (xj1, ..., xjT) be two time series of length T. Time series metrics fall into at least three main categories. The first one concerns value-based metrics, where time series are compared according to their values regardless of their behaviors. Among these metrics are the Euclidean distance (dE), the Minkowski distance and the Mahalanobis distance [3]. We recall the formula of dE:
    d_E(x_i, x_j) = \sqrt{\sum_{t=1}^{T} (x_{it} - x_{jt})^2}    (1)

    The second category relies on metrics in spectral representations. In some applications, time series may be similar because they share the same frequency characteristics. For that, each time series xi is first transformed into its Fourier representation x̃i = [x̃i1, ..., x̃iF], where x̃if is a complex number (i.e., a Fourier component), with F = T/2 + 1 [5]. Then, one may use the Euclidean distance (dFFT) between the moduli of the complex numbers x̃if, noted |x̃if|:
    d_{FFT}(x_i, x_j) = \sqrt{\sum_{f=1}^{F} (|\tilde{x}_{if}| - |\tilde{x}_{jf}|)^2}    (2)

Note that time series with similar frequential characteristics may have distinctive global behaviors. Thus, to compare time series based on their behavior, a third category of metrics is used. Many applications refer to the Pearson correlation or its generalization, the temporal correlation coefficient [4], defined as:
    Cort_r(x_i, x_j) = \frac{\sum_{t,t'} (x_{it} - x_{it'})(x_{jt} - x_{jt'})}{\sqrt{\sum_{t,t'} (x_{it} - x_{it'})^2} \, \sqrt{\sum_{t,t'} (x_{jt} - x_{jt'})^2}}    (3)

where |t − t'| ≤ r, with r ∈ [1, ..., T − 1] a parameter that can be learned or fixed a priori; the optimal value of r depends on the noise level. For r = T − 1, Eq. 3 leads to the Pearson correlation. As Cortr is a similarity measure, it is transformed into a dissimilarity measure: dCortr(xi, xj) = (1/2)(1 − Cortr(xi, xj)).
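
To make these definitions concrete, a minimal Python sketch of the three basic metrics is given below. This is our illustration rather than the authors' code; it assumes same-length univariate series stored as NumPy arrays, and relies on np.fft.rfft, which returns the F = T/2 + 1 Fourier components of a real series of even length T:

```python
import numpy as np

def d_E(x, y):
    """Euclidean distance between two same-length series (Eq. 1)."""
    return np.sqrt(np.sum((x - y) ** 2))

def d_FFT(x, y):
    """Euclidean distance between the moduli of the Fourier components (Eq. 2)."""
    return d_E(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y)))

def d_cort(x, y, r):
    """Dissimilarity derived from the temporal correlation coefficient (Eq. 3),
    with the sums restricted to pairs (t, t') such that |t - t'| <= r."""
    T = len(x)
    num = den_x = den_y = 0.0
    for t in range(T):
        for tp in range(max(0, t - r), min(T, t + r + 1)):
            dx, dy = x[t] - x[tp], y[t] - y[tp]
            num += dx * dy
            den_x += dx ** 2
            den_y += dy ** 2
    cort = num / np.sqrt(den_x * den_y)
    return 0.5 * (1.0 - cort)
```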



3    Temporal and frequential metric learning for a large margin kNN
Let X = {xi, yi}_{i=1}^{N} be a set of N static vector samples, xi ∈ R^p, p being the number of descriptive features and yi the class labels. Weinberger & Saul proposed in [6] an approach to learn a dissimilarity metric D for a large margin kNN.
It is based on two intuitions: first, each training sample xi should have the same label yi as its k nearest neighbors; second, training samples with different labels should be widely separated. For this, they introduced the concept of targets for each training sample xi. The target neighbors of xi, noted j ⇝ i, are the k closest xj of the same class (yj = yi). The target neighborhood is defined with respect to an initial metric. The aim is to learn a metric D that pulls the targets closer and pushes away samples of different classes.

    Let d1, ..., dh, ..., dp be p given dissimilarity metrics that allow samples to be compared. The computation of a metric always involves a pair of samples. Therefore, we use the pairwise representation introduced in Do et al. [10]. In this space, a vector xij represents a pair of samples (xi, xj) described by the p basic metrics dh: xij = [d1(xi, xj), ..., dp(xi, xj)]^T. If xij = 0 then xj is identical to xi according to all metrics dh. A combination function D of the metrics dh can be seen as a function in this space. We propose in the following to use a linear combination of the dh: Dw(xi, xj) = \sum_h wh · dh(xi, xj), whose pairwise notation is Dw(xij) = w^T xij. To ensure that Dw is a valid metric, we set wh ≥ 0 for all h = 1, ..., p. The main steps of the proposed approach to learn the metric, detailed hereafter, can be summarized as follows (a sketch of the embedding step is given after the list):
 1. Embed each pair (xi, xj) into the pairwise space R^p.
 2. Scale the data within the pairwise space.
 3. Define for each xi its targets.
 4. Scale the neighborhood of each xi.
 5. Learn the combined metric Dw.
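
As an illustration of step 1, each pair (xi, xj) is mapped to the vector of its p basic dissimilarities. A minimal sketch, with function name and data layout of our own choosing; `metrics` would hold, e.g., d_E, d_FFT and a fixed-r d_cort:

```python
import numpy as np

def pairwise_embedding(X, metrics):
    """Step 1: embed every pair (x_i, x_j) into R^p, where the h-th
    coordinate is the basic metric d_h(x_i, x_j)."""
    N, p = len(X), len(metrics)
    X_pair = np.zeros((N, N, p))
    for i in range(N):
        for j in range(N):
            X_pair[i, j] = [d(X[i], X[j]) for d in metrics]
    return X_pair
```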


Data scaling. This operation is performed to scale the data within the pairwise space and ensure comparable ranges for the p basic metrics dh. In our experiments, we use dissimilarity measures with values in [0; +∞[. Therefore, we propose to Z-normalize their log distributions.
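
A sketch of this step is given below; the small epsilon added before taking logs (to guard against zero distances, e.g., on the diagonal pairs) is our implementation choice, not part of the paper:

```python
import numpy as np

def scale_pairwise(X_pair, eps=1e-12):
    """Step 2: Z-normalize the log distribution of each basic metric
    over all pairs so that the p dimensions have comparable ranges."""
    logd = np.log(X_pair + eps)      # metrics take values in [0; +inf[
    mu = logd.mean(axis=(0, 1))      # per-metric mean of the log values
    sigma = logd.std(axis=(0, 1))    # per-metric standard deviation
    return (logd - mu) / sigma
```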

Target set. For each xi, we define its target neighbors as the k nearest neighbors xj (j ⇝ i) of the same class according to an initial metric. In this paper, we choose the L2-norm of the pairwise space as the initial metric (\sqrt{\sum_h d_h^2}). Other metrics could be chosen. We emphasize that target neighbors are fixed a priori (at the first step) and do not change during the learning process.

Neighborhood scaling. In real datasets, local neighborhoods can have very dif-
ferent scales. To make the target neighborhood spreads comparable, we propose
for each xi to scale its neighborhood vectors xij such that the L2-norm of the
farthest target is 1.
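
Steps 3 and 4 can be sketched as follows (again our illustration; `X_pair` is the scaled pairwise representation and `k` the number of targets):

```python
import numpy as np

def target_sets(X_pair, y, k):
    """Step 3: the targets of x_i are its k nearest same-class neighbors
    under the initial L2-norm of the pairwise space, fixed once for all."""
    init_dist = np.linalg.norm(X_pair, axis=2)   # sqrt(sum_h d_h^2)
    return [[j for j in np.argsort(init_dist[i]) if j != i and y[j] == y[i]][:k]
            for i in range(len(y))]

def scale_neighborhoods(X_pair, targets):
    """Step 4: rescale each x_i's pairwise vectors so that its farthest
    target has L2-norm 1, making neighborhood spreads comparable."""
    X_scaled = X_pair.copy()
    for i, tgt in enumerate(targets):
        radius = max(np.linalg.norm(X_pair[i, j]) for j in tgt)
        X_scaled[i] /= radius
    return X_scaled
```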

Learning the combined metric Dw. Let {xij, yij}_{i,j=1}^{N} be the training set with yij = −1 if yj = yi and +1 otherwise. Learning Dw for a large margin kNN classifier can be formalized as the following optimization problem:

    \min_{w,\xi} \;\; \underbrace{\sum_{i,\, j \leadsto i} D_w(x_{ij})}_{\text{pull}} \; + \; C \underbrace{\sum_{i,\, j \leadsto i,\, l} \frac{1 + y_{il}}{2}\, \xi_{ijl}}_{\text{push}}

    s.t. \;\; \forall\, j \leadsto i,\; y_l \neq y_i:                                    (4)
         D_w(x_{il}) - D_w(x_{ij}) \ge 1 - \xi_{ijl}
         \xi_{ijl} \ge 0
         w_h \ge 0 \quad \forall\, h = 1, ..., p
                                     ∑                         ∑
Note that the "pull" term \sum_{j \leadsto i} D_w(x_{ij}) = \sum_{j \leadsto i} w^T x_{ij} = N · k · w^T x̄_{ij} is an L1-Mahalanobis norm weighted by the average target sample. Therefore, it behaves like an L1-norm in the optimization problem. The problem is very similar to a C-SVM classification problem. When C is infinite, we have a "strict" problem: the solver will try to find a direction in the pairwise space for which only targets are in the close neighborhood of each xi, with a maximum margin of 1/||w||_2.

Let xtest be a new sample to classify and xtest,i (i = 1, ..., N) the corresponding vectors in the pairwise embedding space. After normalizing the xtest,i according to the Data scaling step, xtest is classified by a standard kNN using Dw.
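
Eq. 4 is a linear program in (w, ξ): the objective and all constraints are linear, and only pairs (i, l) with yl ≠ yi contribute slack variables. A minimal sketch with scipy.optimize.linprog is given below; the paper uses the Cplex library [15], and the variable layout and function names here are our own:

```python
import numpy as np
from scipy.optimize import linprog

def learn_Dw(X_pair, y, targets, C=1.0):
    """Solve Eq. 4 for w >= 0 with variables z = [w, xi_1, ..., xi_m],
    one slack xi_t per triplet (i, j target of i, l impostor)."""
    N, _, p = X_pair.shape
    triplets = [(i, j, l) for i in range(N) for j in targets[i]
                for l in range(N) if y[l] != y[i]]
    m = len(triplets)
    # pull term: sum over targets of w.x_ij ; push term: C * sum of slacks
    c = np.concatenate([sum(X_pair[i, j] for i in range(N) for j in targets[i]),
                        C * np.ones(m)])
    # margin constraints w.(x_il - x_ij) >= 1 - xi_t, rewritten as A z <= b
    A = np.zeros((m, p + m))
    for t, (i, j, l) in enumerate(triplets):
        A[t, :p] = X_pair[i, j] - X_pair[i, l]
        A[t, p + t] = -1.0
    b = -np.ones(m)
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
    return res.x[:p]                 # the learned weights w
```

A new series xtest would then be classified by a standard kNN vote using Dw(xtest,i) = w^T xtest,i over its normalized pairwise vectors.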


4    Experiments
In this section, we compare kNN classifier performances for several metrics on reference time series datasets [11-14] described in Table 1. To compare with the reference results in [3, 11], the experiments are conducted with the same protocols as in Do et al. [10]: k is set to 1; train and test sets are given a priori. Due to the current format used to store the data, small datasets with short time series were retained, and the experiments are conducted in a single run.

    In this experiment, we consider the basic metrics dE, dFFT and dCortr; then we learn a combined metric Dw according to the procedure described in Section 3. First, two basic temporal metrics are considered in D2 (dE and dCortr), as in Do et al. [10]. Second, we consider a combination of temporal and frequential metrics in D3 (dE, dCortr and dFFT). The Cplex library [15] has been used to solve the optimization problem in Eq. 4. We learn the optimal parameter values of these metrics by minimizing a leave-one-out cross-validation criterion.

As the training dataset sizes are small, we propose a hierarchical error criterion:
 1. Minimize the kNN error rate.
 2. Minimize d_intra/d_inter if several parameter values obtain the minimum kNN error,
where d_intra and d_inter stand respectively for the mean of all intraclass and interclass distances according to the metric at hand. Table 2 gives the ranges of the grid search considered for the parameters. In the following, we consider only the raw series and do not align them beforehand (e.g., using a dtw algorithm). For all reported results (Table 3), the best one is indexed with a star and the ones not significantly different from the best one (Z-test at 1% risk) are in bold [16].
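
For instance, selecting r for dCortr can be sketched as follows; the helpers `loo_knn_error` and `dist_matrix`, standing for the leave-one-out error and the train distance matrix under a given r, are hypothetical:

```python
import numpy as np

def hierarchical_score(err, D, y):
    """Rank a parameter value: first by the kNN error `err`, then by
    d_intra / d_inter (tie-breaker), computed from distance matrix D."""
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    diff = y[:, None] != y[None, :]
    return (err, D[same].mean() / D[diff].mean())   # compared lexicographically

# best_r = min(grid_r, key=lambda r: hierarchical_score(
#     loo_knn_error(r), dist_matrix(r), y))
```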


              Dataset             Nb. Class Nb. Train Nb. Test TS length
              SonyAIBO                2        20       601       70
              MoteStrain              2        20      1252       84
              GunPoint                2        50       150      150
              PowerCons               2        73       292      144
              ECG5Days                2        23       861      136
              SonyAIBOII              2        27       953       65
              Coffee                  2        28        28      286
              BME                     3        48       102      128
              UMD                     3        46        92      150
              ECG200                  2       100       100       96
              Beef                    5        30        30      470
              DiatomSizeReduction     4        16       306      345
              FaceFour                4        24        88      350
              Lighting-2              2        60        61      637
              Lighting-7              7        70        73      319
              OliveOil                4        30        30      570

Table 1. Dataset description giving the number of classes (Nb. Class), the number of
time series for the training (Nb. Train) and the testing (Nb. Test) sets, and the length
of each time series (TS length).




                  Method    Parameter  Parameter range
                  dCortr    r          [1, 2, 3, ..., T]
                  D2, D3    C          [10^-3, 0.5, 1, 5, 10, 20, 30, ..., 150]
                           Table 2. Parameter ranges



    From Table 3, we can see that neither of the temporal metrics dE and dCortr dominates the other: which one performs better depends on the dataset. Using a frequential metric alone such as dFFT brings significant improvements on some datasets (SonyAIBO, GunPoint, PowerCons, ECG5Days). It can also be observed that one basic metric is sufficient on some datasets (MoteStrain, GunPoint, PowerCons, ECG5Days). In the other cases, learning a combination of these basic metrics reaches the same performance on most datasets or achieves better results (UMD). The new approach extends combination functions to many metrics without adding parameters to the grid search and without having to test every basic metric alone to retain the best one. It also extends the work done in [6] from a single distance to multiple distances. Adding metrics such as dFFT improves the performance on some datasets (SonyAIBO, GunPoint, UMD, FaceFour, Lighting-2, Lighting-7) compared to considering only the temporal metrics

                                               Metrics
             Dataset                    Basic         Learned combined
                                   dE   dCortr dF F T   D2       D3
             SonyAIBO             0.305 0.308 0.258* 0.308     0.259
             MoteStrain          0.121* 0.264 0.278 0.210       0.277
             GunPoint             0.087 0.113 0.027* 0.113      0.073
             PowerCons           0.370 0.445 0.315* 0.384       0.410
             ECG5Days             0.203 0.153 0.006* 0.153      0.156
             SonyAIBOII          0.141 0.142 0.128* 0.142        0.142
             Coffee               0.250   0*   0.357   0*         0*
             BME                  0.128 0.059* 0.412 0.059*      0.078
             UMD                 0.185* 0.207 0.315 0.207       0.185*
             ECG200               0.120 0.070* 0.166 0.070*     0.070*
             Beef                0.467 0.300* 0.500 0.300*       0.367
             DiatomSizeReduction 0.065* 0.075 0.069 0.075        0.075
             FaceFour            0.216 0.216 0.239 0.216        0.205*
             Lighting-2          0.246 0.246 0.148* 0.246        0.213
             Lighting-7          0.425 0.411 0.315 0.411        0.288*
             OliveOil            0.133* 0.133* 0.200 0.133*     0.133*
Table 3. Error rate of the 1NN classifier for different metrics. D2 is computed using dE and dCortr; D3 uses the 3 basic metrics. The metric with the best performance for each dataset is indicated by a star (*) and the ones with equivalent performances are in bold.




(dE, dCortr). However, it does not always improve the results (GunPoint, PowerCons, ECG5Days). This might be caused by the sensitivity of our framework to the choice of the initial metric (the L2-norm), or some steps of the algorithm may need to be improved to make the combination better.




5   Conclusion

For nearest neighbor time series classification, we propose to learn a metric as a combination of temporal and frequential metrics based on a large margin optimization process. The learned metric shows good performance in the conducted experiments. For future work, we are considering several improvements. First, the choice of the initial metric is crucial: it has been set here to the L2-norm of the pairwise space, but a different metric could provide better target sets. Alternatively, an iterative procedure (reusing Dw to generate new target sets and learn Dw again) could be another solution. Second, we note that the L1-norm on the "pull" term leads to sparsity. Changing it into an L2-norm would allow non-sparse solutions and also extend the approach to non-linear metric combination functions thanks to the kernel trick. Finally, we could extend this framework to multivariate, regression or clustering problems.

References
 1. T.C. Fu, "A review on time series data mining", Engineering Applications of Artificial Intelligence, 2011.
 2. H. Sakoe and S. Chiba, "Dynamic Programming Algorithm Optimization for Spoken Word Recognition", IEEE Transactions on Acoustics, Speech, and Signal Processing, 1978.
 3. H. Ding, G. Trajcevski, and P. Scheuermann, "Querying and Mining of Time Series Data: Experimental Comparison of Representations and Distance Measures", in VLDB, 2008.
 4. A. Douzal-Chouakria and C. Amblard, "Classification trees for time series", Pattern Recognition, 2011.
 5. S. Lhermitte, J. Verbesselt, W.W. Verstraeten, and P. Coppin, "A comparison of time series similarity measures for classification and change detection of ecosystem dynamics", 2011.
 6. K. Weinberger and L. Saul, "Distance Metric Learning for Large Margin Nearest Neighbor Classification", Journal of Machine Learning Research, 2009.
 7. M. Gönen and E. Alpaydin, "Multiple kernel learning algorithms", The Journal of Machine Learning Research, vol. 12, pp. 2211-2268, 2011.
 8. R. Duin, M. Bicego, M. Orozco-Alzate, S. Kim, and M. Loog, "Metric Learning in Dissimilarity Space for Improved Nearest Neighbor Performance", pp. 183-192.
 9. A. Ibba, R. Duin, and W. Lee, "A study on combining sets of differently measured dissimilarities", in Proceedings - International Conference on Pattern Recognition, 2010, pp. 3360-3363.
10. C. Do, A. Douzal-Chouakria, S. Marié, and M. Rombaut, "Multiple Metric Learning for large margin kNN Classification of time series", EUSIPCO, 2015.
11. E. Keogh, Q. Zhu, B. Hu, Y. Hao, X. Xi, L. Wei, and C.A. Ratanamahatana, "The UCR Time Series Classification/Clustering Homepage" (www.cs.ucr.edu/~eamonn/time_series_data/), 2011.
12. K. Bache and M. Lichman, "UCI Machine Learning Repository" (http://archive.ics.uci.edu/ml), 2013.
13. LIG-AMA Machine Learning Datasets Repository (http://ama.liglab.fr/resourcestools/datasets/), 2014.
14. C. Frambourg, A. Douzal-Chouakria, and E. Gaussier, "Learning Multiple Temporal Matching for Time Series Classification", Advances in Intelligent Data Analysis XII, 2013.
15. IBM Cplex (http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/).
16. T. Dietterich, "Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms", 1997.