<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>2-adic Fuzzy Partitions and Multi-Scale Representation of Time Series</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Irina Perfilieva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Adamczyk</string-name>
          <email>david.adamczyk@osu.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Ostrava</institution>
          ,
          <addr-line>30.dubna, 22</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We focus on a new method of time series analysis that is based on the extraction of representative keypoints. We use a multi-scale theory based on non-traditional kernels derived from the theory of F-transforms. The sequence of kernels corresponds to what are called 2-adic fuzzy partitions. This leads to simplified algorithms and comparable efficiency in the selection of keypoints. We reduce the number of representative keypoints and enhance the robustness of their selection. We also propose a new keypoint descriptor and test it on matching financial time series with high volatility.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>A new method of time series analysis based on the
selection of representative keypoints with subsequent reverse
reconstruction is proposed and tested. Keypoints serve as
indicators of the areas in which a functional object (time
series, image, etc.) has clearly expressed features compared
to other, nearly flat areas. The way of extracting keypoints
and related features is similar to the processing of data by
neural networks. Indeed, the latter are focused on stable
feature extraction that is invariant with respect to various
geometric transformations.</p>
      <p>
        In the related domain of image processing, features are
associated with keypoints and their descriptors. Both are
usually identified with local areas of the image that
correspond to its content (as opposed to the background).
Thus, the processing is computationally demanding and
depends on many environmental conditions:
illumination, position, resolution, etc. In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], it has been shown
that invariant local features (Harris corners and their
rotation invariant descriptors) can contribute to solving general
problems of image recognition. However, the Harris
corner detector is sensitive to changes in image scale. This
drawback was eliminated in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] where the descriptor known in the
literature as SIFT was proposed. The method of SIFT has
inspired many modifications: SURF, PCA-SIFT, GLOH,
Gauss-SIFT, etc., (see [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and references therein), aimed
at improving efficiency in various senses: reliability,
computation time, etc. However, the main stages and their
semantics have been preserved.
      </p>
      <p>
        Our contribution to this topic is as follows: we use the
basic methodology of SIFT and its modifications, but
select non-traditional kernels derived from the theory of
F-transforms [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This allows us to simplify the scaling and
selection of keypoints, as well as to reduce their number and
enhance robustness. We also propose a new keypoint
descriptor and test it on matching financial time series with
high volatility.
      </p>
      <p>
        The main theoretical result that we arrive at here is that
the Gaussian kernel, predominant in scale-space
theory, can be replaced with the same success by a special
symmetric positive semi-definite kernel with local
support. In particular, we show that the generating function of a
triangular-based uniform fuzzy partition of $\mathbb{R}$ can be used
to determine such a kernel. This fact allows us to build
upon the theory of F-transforms and its ability to extract
features (keypoints) with a clear understanding of their
semantic meaning [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 Briefly about the theory of scale-space representations</title>
      <p>
        We start with a brief overview (see in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) of the mentioned
theory because it explains the proposed methods. The
quad-tree methodology is perhaps the first type of multi-scale
representation of image data. It focuses on recursively
dividing an image into smaller areas controlled by the
intensity range. The low-pass pyramid representation then
facilitated multi-scaling in such a way that the image size
decreased exponentially with the scale level.
      </p>
      <p>
        Koenderink [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] emphasized that scaling up and down
the internal scope of observations and handling image
structures at all scales (in accordance with the task)
contribute to a successful image analysis. The challenge is
to understand the image at all relevant scales at the same
time, but not as an unrelated set of derived images at
different levels of blur.
      </p>
      <p>
        The basic idea (in Lindeberg [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) of how to obtain a
multi-scale representation of an object is to embed it into a
one-parameter family of gradually smoothed versions in which
fine-scale details are sequentially suppressed. Under fairly
general conditions, the author showed that the Gaussian kernel
and its derivatives are the only possible smoothing kernels.
These conditions are mainly linearity and shift invariance,
combined with various ways of formalizing the notion that
structures on a coarse scale should correspond to
simplifications of corresponding structures on a fine scale.
      </p>
      <p>
        A scale-space representation differs from a multi-scale
representation in that it uses the same spatial sampling at
all scales and one continuous scale parameter as the
generator. By the construction in Witkin [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a scale-space
representation is a one-parameter family of derived
signals constructed using convolution with a one-parameter
family of Gaussian kernels of increasing width.
      </p>
      <p>Formally, a scale-space family of a continuous signal is
constructed as follows. For a signal $f : \mathbb{R}^N \to \mathbb{R}$, the
scale-space representation $L : \mathbb{R}^N \times \mathbb{R}_+ \to \mathbb{R}$ is defined by
$$L(\cdot; 0) = f(\cdot), \qquad L(\cdot; t) = g(\cdot; t) \ast f, \quad (1)$$
where $t \in \mathbb{R}_+$ is the scale parameter and $g : \mathbb{R}^N \times \mathbb{R}_+ \to \mathbb{R}$
is the Gaussian kernel
$$g(x; t) = \frac{1}{(2\pi t)^{N/2}} \exp\left(-\frac{\sum_{i=1}^{N} x_i^2}{2t}\right).$$
The scale parameter $t$ relates to the standard deviation of
the kernel $g$ and is a natural measure of spatial scale at
level $t$.</p>
      <p>As an important remark, we note that the scale-space
family $L$ can be defined as the solution to the diffusion
(heat) equation
$$\partial_t L = \tfrac{1}{2} \nabla^T \nabla L, \quad (2)$$
with the initial condition $L(\cdot; 0) = f$. The Laplace operator
$\nabla^T \nabla$, or $\Delta$, the divergence of the gradient, is taken in the
spatial variables.</p>
      <p>The solution to (2) in one dimension, in the case
where the spatial domain is $\mathbb{R}$, is known as the convolution
($\ast$) of $f$ (the initial condition) with the fundamental solution
$$L(\cdot; t) = g(\cdot; t) \ast f, \quad (3)$$</p>
      <p>
        $$g(x; t) = \frac{1}{\sqrt{2\pi t}} \exp\left(-\frac{x^2}{2t}\right). \quad (4)$$
The following two questions arise: is this approach the
only reasonable way to perform low-level processing, and
are Gaussian kernels and their derivatives the only
smoothing kernels that can be used? Many authors [
        <xref ref-type="bibr" rid="ref11 ref3 ref4">3, 4, 11</xref>
        ]
answer these questions positively, which leads to the
default choice of Gaussian kernels in most image processing
tasks. In this article, we want to expand the set of
kernels suitable for performing scale-space
representations. In particular, we propose to use kernels arising from
generating functions of fuzzy partitions.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3 Space with a Fuzzy Partition as a Universe with Closeness</title>
      <p>In this section, we introduce a space that plays an important
role in our research. A space with a fuzzy partition is
considered as a space with a proximity (closeness) relation,
which is a weak version of a metric space. As we
indicated at the beginning, our goal is to extend the Laplace
operators to those that take into account the specifics of
spaces with fuzzy partitions. Then, the next goal is to
show that the diffusion (heat conduction) equation in (2)
can be extended to spaces with closeness, where the
concepts of derivatives are adapted to nonlocal cases. Both
things allow us to use the theory of scale-space
representation and propose on its basis a new method for localizing
key points.</p>
      <p>Let us first recall the basic definitions of all related
concepts.</p>
      <sec id="sec-4-1">
        <title>3.1 Fuzzy partition</title>
        <p>Definition 1: Fuzzy sets $A_1, \dots, A_n : [a, b] \to \mathbb{R}$
establish a fuzzy partition of the real interval $[a, b]$ with nodes
$a = x_1 &lt; \dots &lt; x_n = b$ if, for all $k = 1, \dots, n$, the following
conditions are valid (we assume $x_0 = a$, $x_{n+1} = b$):
1. $A_k(x_k) = 1$; $A_k(x) > 0$ if $x \in (a_k, b_k)$, where $a \le a_k &lt; b_k \le b$;
2. $\bigcup_{k=1}^{n} (a_k, b_k) = (a, b)$;
3. $A_k(x) = 0$ if $x \notin [a_k, b_k]$;
4. $A_k(x)$ is continuous on $[a_k, b_k]$.</p>
        <p>
          The membership functions A1; : : : ; An are called basic
functions [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>Definition 2: The fuzzy partition $A_1, \dots, A_n$, where $n \ge 2$,
is $h$-uniform if the nodes $x_1 &lt; \dots &lt; x_n$ are $h$-equidistant,
i.e., for all $k = 1, \dots, n-1$, $x_{k+1} = x_k + h$, where $h = (b-a)/(n-1)$,
and there exists an even function $A_0 : [-1, 1] \to [0, 1]$ such that
$A_0(0) = 1$ and, for all $k = 1, \dots, n$,
$$A_k(x) = A_0\left(\frac{x - x_k}{H}\right), \quad x \in [x_k - H, x_k + H] \cap [a, b],$$
where $H \ge h/2$.</p>
        <p>
          $A_0$ is called a generating function of a uniform fuzzy
partition [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. The generating function $A_0$ is normalized if
$$\int_{-1}^{1} A_0(x)\,dx = 1.$$
        </p>
        <p>Remark 1: The rescaled generating function $A_H(x) = A_0(x/H)$
generates the corresponding kernel $A_H(x - y)$.</p>
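        <p>The definitions above are easy to make concrete. The following minimal sketch (our illustration, not code from the paper) builds an $h$-uniform fuzzy partition generated by the triangular function $A_0(x) = 1 - |x|$ and checks that, for $H = h$, the basic functions form a partition of unity:</p>
        <preformat>
```python
import numpy as np

def A0(x):
    # Triangular generating function: even, A0(0) = 1, support [-1, 1]
    return np.maximum(1.0 - np.abs(x), 0.0)

def basic_functions(a, b, n, H=None):
    # h-uniform fuzzy partition of [a, b]: A_k(x) = A0((x - x_k) / H)
    h = (b - a) / (n - 1)
    H = h if H is None else H   # H = h gives the classical Ruspini case
    nodes = a + h * np.arange(n)
    x = np.linspace(a, b, 1001)
    A = np.array([A0((x - xk) / H) for xk in nodes])
    return x, nodes, A

x, nodes, A = basic_functions(0.0, 10.0, 6)
print(bool(np.allclose(A.sum(axis=0), 1.0)))  # partition of unity on [a, b]
```
        </preformat>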
      </sec>
      <sec id="sec-4-2">
        <title>3.2 Discrete Universe with Closeness</title>
        <p>In this section, we introduce a finite space with a binary
relation of closeness and then show how closeness can be
related to a uniform fuzzy partition.</p>
        <p>The best formal model of a space with closeness is a
weighted graph $G = (V, E, w)$, where $V = \{v_1, \dots, v_\ell\}$ is a
finite set of vertices and $E \subseteq V \times V$ is a set of weighted
edges. The edge $e = (v_i, v_j)$ connects
two vertices $v_i$ and $v_j$, and the weight of $e$ is $w(v_i, v_j)$,
or simply $w_{ij}$. Weights are set by the function $w : V \times V \to \mathbb{R}_+$,
which is symmetric ($w_{ij} = w_{ji}$, $1 \le i, j \le \ell$),
non-negative ($w_{ij} \ge 0$), and such that $w_{ij} = 0$ if $(v_i, v_j) \notin E$. The
notation $v_i \sim v_j$ denotes two adjacent vertices $v_i$ and $v_j$
with an existing edge connecting them. The function $w$ sets
the closeness on $V$.</p>
        <p>
          There are many models of closeness in the literature
with the default options: Gaussian and uniform
distributions [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Below, we propose a new model based on a
uniform fuzzy partition.
        </p>
        <p>We use the graph terminology given above and assume
that the set of vertices $V$ is identified with the set of indices
$V = \{1, \dots, \ell\}$ and that the corresponding interval $[1, \ell]$ is
1-uniformly fuzzy partitioned with normalized basic
functions $A_{1H}, \dots, A_{\ell H}$, so that $A_{kH}(x) = A_H(x - k)$, $k = 1, \dots, \ell$,
$A_H(x) = A_0(x/H)$, $H \ge 1$, and $A_0$ is the generating
function.</p>
        <p>Definition 3: A graph $G_H = (V, E, w_H)$ is fuzzy weighted
if $V = \{1, \dots, \ell\}$ and the weight function $w_H : V \times V \to [0, 1]$
is determined by a 1-uniform fuzzy partition
$A_{1H}, \dots, A_{\ell H}$ of $[1, \ell]$, where $H \ge 1$, so that
$w_H(v_i, v_j) = A_{iH}(j)$, $i, j = 1, \dots, \ell$.</p>
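        <p>A minimal sketch of Definition 3 (our illustration): the weight matrix of a fuzzy weighted graph obtained from the triangular generating function, with $w_H(v_i, v_j) = A_0((j - i)/H)$:</p>
        <preformat>
```python
import numpy as np

def fuzzy_weights(ell, H):
    # w_H(v_i, v_j) = A_{iH}(j) = A0((j - i) / H), A0(x) = max(1 - |x|, 0)
    idx = np.arange(1, ell + 1, dtype=float)
    W = np.maximum(1.0 - np.abs(idx[:, None] - idx[None, :]) / H, 0.0)
    np.fill_diagonal(W, 0.0)  # drop self-loops: w_ij = 0 unless (v_i, v_j) is an edge
    return W

W = fuzzy_weights(ell=8, H=2.0)
# The closeness is symmetric and vanishes outside the kernel support
print(bool(np.allclose(W, W.T)), float(W[0, 1]), float(W[0, 3]))
```
        </preformat>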
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4 Discrete (Non-local) Laplace operator</title>
      <p>
        In this section, we aim to develop elements of functional
analysis on spaces with closeness in order to be able to
introduce an operator with properties similar to the
Laplacian. We recall the definition of (non-local) Laplace
operator as a differential operator given by the divergence of the
gradient of a function (see [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]). On spaces with closeness,
the generalized version known as the Laplace-Beltrami
operator is used. The definition is based on the self-adjointness
property, which leads us to definitions of the
corresponding Hilbert spaces.
      </p>
      <p>Let $G = (V, E, w)$ be a weighted graph model of a space
with closeness, and let $f : V \to \mathbb{R}$ be a real-valued
function. Let $H(V)$ denote the Hilbert space of real-valued
functions on $V$, such that if $f, h \in H(V)$ and $f, h : V \to \mathbb{R}$,
then the inner product is $\langle f, h \rangle_{H(V)} = \sum_{v \in V} f(v) h(v)$.
Similarly, $H(E)$ denotes the space of real-valued functions
defined on the set $E$ of edges of the graph $G$. This space has
the inner product $\langle F, H \rangle_{H(E)} = \sum_{(u,v) \in E} F(u, v) H(u, v) =
\sum_{u \in V} \sum_{v \sim u} F(u, v) H(u, v)$, where $F, H : E \to \mathbb{R}$ are two
functions in $H(E)$.</p>
      <p>The difference operator $d : H(V) \to H(E)$ of $f$ is
defined on $(u, v) \in E$ by
$$(d f)(u, v) = \sqrt{w(u, v)}\, (f(v) - f(u)). \quad (5)$$</p>
      <p>The directional derivative of $f$ at vertex $v \in V$, along
the edge $e = (u, v)$, is defined as
$$\partial_v f(u) = (d f)(u, v). \quad (6)$$</p>
      <p>The adjoint $d^* : H(E) \to H(V)$ of the difference operator
is a linear operator defined by
$$\langle d f, H \rangle_{H(E)} = \langle f, d^* H \rangle_{H(V)} \quad (7)$$
for any function $H \in H(E)$ and function $f \in H(V)$.</p>
      <p>Proposition 1: The adjoint operator $d^*$ can be expressed
at a vertex $u \in V$ by the following formula:
$$(d^* H)(u) = \sum_{v \sim u} \sqrt{w(u, v)}\, (H(v, u) - H(u, v)). \quad (8)$$</p>
      <p>The divergence operator, defined by $d^*$, measures the
network outflow of a function in $H(E)$ at each vertex of
the graph.</p>
      <p>The weighted gradient operator of $f \in H(V)$ at vertex
$u \in V$, for all $(u, v_i) \in E$, is the column vector
$$\nabla_w f(u) = (\partial_v f(u) : v \sim u)^T = (\partial_{v_1} f(u), \dots, \partial_{v_k} f(u))^T.$$</p>
      <p>
        The weighted Laplace operator $\Delta_w : H(V) \to H(V)$ is
defined by
$$\Delta_w f = \tfrac{1}{2}\, d^*(d f). \quad (9)$$
Proposition 2 [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]: The weighted Laplace operator $\Delta_w$ at
$f \in H(V)$ acts as follows:
$$(\Delta_w f)(u) = \sum_{v \sim u} w(u, v) (f(v) - f(u)).$$
This Laplace operator is linear and corresponds to the
graph Laplacian.
      </p>
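      <p>Proposition 2 translates directly into matrix form; a short sketch (our illustration) on a random symmetric weight matrix:</p>
      <preformat>
```python
import numpy as np

def weighted_laplacian(W, f):
    # (Delta_w f)(u) = sum over v ~ u of w(u, v) * (f(v) - f(u))
    return W @ f - W.sum(axis=1) * f

rng = np.random.default_rng(1)
n = 6
W = rng.random((n, n))
W = 0.5 * (W + W.T)        # symmetric closeness
np.fill_diagonal(W, 0.0)   # no self-loops
f = rng.random(n)

# The operator is linear and vanishes on constant functions,
# exactly as the graph Laplacian does
print(bool(np.allclose(weighted_laplacian(W, np.ones(n)), 0.0)))
```
      </preformat>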
      <p>
        Proposition 3 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]: Let $G_H = (V, E, w_H)$ be a fuzzy
weighted graph corresponding to the 1-uniform fuzzy
partition of $V = \{1, \dots, \ell\}$. Then the weighted Laplace
operator $\Delta_H$ at $f \in H(V)$ acts as follows:
$$(\Delta_H f)(i) = \sum_{i \sim j} A_{iH}(j) (f(j) - f(i)) = F_H[f]_i - f(i),$$
where $F_H[f]_i$, $i = 1, \dots, \ell$, is the $i$-th discrete F-transform
component of $f$, cf. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
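      <p>A numerical check of Proposition 3 (a sketch under our reading, with the rows of the membership matrix normalized to sum to one, as for normalized basic functions):</p>
      <preformat>
```python
import numpy as np

def triangular_memberships(ell, H):
    # A_{iH}(j) = A0((j - i) / H), normalized so that every row sums to 1
    idx = np.arange(1, ell + 1, dtype=float)
    A = np.maximum(1.0 - np.abs(idx[:, None] - idx[None, :]) / H, 0.0)
    return A / A.sum(axis=1, keepdims=True)

ell, H = 50, 4.0
A = triangular_memberships(ell, H)
f = np.sin(np.linspace(0.0, 3.0, ell))

# (Delta_H f)(i) = sum_j A_{iH}(j) (f(j) - f(i)) = F_H[f]_i - f(i)
lap_sum = np.array([np.sum(A[i] * (f - f[i])) for i in range(ell)])
lap_ft = A @ f - f   # F-transform components minus the function values
print(bool(np.allclose(lap_sum, lap_ft)))
```
      </preformat>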
    </sec>
    <sec id="sec-6">
      <title>5 Multi-scale Representation in a Space with a Fuzzy Partition</title>
      <p>Taking into account the introduced notation, we propose
the following scheme for the multi-scale representation
$L_{FP}$ of a signal $f : V \to \mathbb{R}$, where $V = \{1, \dots, \ell\}$ and the
subscript "FP" stands for a 1-uniform fuzzy partition
determined by the parameter $H \in \mathbb{N}$, $H \ge 1$:</p>
      <p>
$$L_{FP}(\cdot; 0) = f(\cdot), \qquad L_{FP}(\cdot; t) = F_{2^t H}[f], \quad (10)$$
where $t \in \mathbb{N}$ is the scale parameter and $F_{2^t H}[f]$ is the
complete vector of F-transform components of $f$. The scale
parameter $t$ relates to the length of the support of the
corresponding basic function. As in the case of (1), it is a
natural measure of spatial scale at level $t$. To show the
relationship to the diffusion equation, we formulate the
following general result.</p>
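      <p>The scheme (10) with 2-adic scales can be sketched as follows (our illustration; the discrete F-transform components are computed with row-normalized triangular memberships):</p>
      <preformat>
```python
import numpy as np

def triangular_memberships(ell, H):
    idx = np.arange(1, ell + 1, dtype=float)
    A = np.maximum(1.0 - np.abs(idx[:, None] - idx[None, :]) / H, 0.0)
    return A / A.sum(axis=1, keepdims=True)   # normalized basic functions

def multiscale_fp(f, H=1, T=5):
    # L_FP(.; 0) = f and L_FP(.; t) = F_{2^t H}[f]: the support doubles per level
    f = np.asarray(f, dtype=float)
    levels = [f]
    for t in range(1, T + 1):
        levels.append(triangular_memberships(len(f), (2 ** t) * H) @ f)
    return levels

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=128))   # a volatile random-walk series
levels = multiscale_fp(prices)
print(len(levels))   # the signal itself plus T coarser representations
```
      </preformat>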
      <p>Proposition 4: Assume that $f : [a, b] \to \mathbb{R}$ is a twice
continuously differentiable real function, and that $[a, b]$ is
$h$-uniformly fuzzy partitioned by $A_{1H}, \dots, A_{nH}$ and
$A_{1,2H}, \dots, A_{n,2H}$, where the basic functions $A_{iH}$ ($A_{i,2H}$),
$i = 1, \dots, n$, are generated by $A_0(x) = 1 - |x|$ with the nodes
at $x_i = a + \frac{b-a}{n-1}(i - 1)$. Then,
$$F_{2H}[f]_i - F_H[f]_i \approx \frac{H^2}{4} f''(x_i). \quad (11)$$</p>
      <p>The semantic meaning of this proposition in relation
to the proposed scheme (10) of the multi-scale representation
$L_{FP}$ of $f$ is as follows:
the F-transform (FT)-based Laplacian of $f$ in (11) can
be approximated by the (weighted) difference of two
adjacent convolutions determined by the triangular-shaped
generating function.</p>
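      <p>Proposition 4 can be verified numerically. The sketch below (our illustration) compares $F_{2H}[f]_i - F_H[f]_i$ with $(H^2/4) f''(x_i)$ for $f = \sin$ at an interior node, using the continuous F-transform with the triangular generating function:</p>
      <preformat>
```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule, to keep the sketch self-contained
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def ft_component(f, xk, H, m=4001):
    # Continuous F-transform component with A0(x) = 1 - |x|:
    # F_H[f]_k = int f A0((x - xk)/H) dx / int A0((x - xk)/H) dx
    x = np.linspace(xk - H, xk + H, m)
    w = 1.0 - np.abs((x - xk) / H)
    return trapezoid(f(x) * w, x) / trapezoid(w, x)

H = 0.1
xk = 1.3   # interior node: both supports stay inside the partitioned interval
lhs = ft_component(np.sin, xk, 2.0 * H) - ft_component(np.sin, xk, H)
rhs = (H ** 2 / 4.0) * (-np.sin(xk))   # f''(x) = -sin(x) for f = sin
print(abs(lhs - rhs))                  # only higher-order terms in H remain
```
      </preformat>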
    </sec>
    <sec id="sec-7">
      <title>6 Experiments with Time Series</title>
      <sec id="sec-7-1">
        <title>6.1 Reconstruction from FT-based Laplacians</title>
        <p>To demonstrate the effectiveness of the proposed
representation, we first show that an initial time series can be
reconstructed (with sufficient precision) from a sequence
of FT-based Laplacians. Below, we illustrate this claim
on a financial time series with high volatility. For each
value of $t = 1, 2, \dots$ we obtain the corresponding FT-based
Laplacian as the difference between two adjacent
convolutions (vectors of F-transform components), so that we
obtain the sequence
$$\{L_{FP}(\cdot; t + 1) - L_{FP}(\cdot; t) \mid t = 1, 2, \dots\}.$$
The stopping criterion is the closeness to zero of the current
difference. We then compute the reconstruction by summing all
the elements of the sequence. Figure 1 shows the
step-by-step reconstruction and the final reconstructed time series.
The latter is plotted in the bottom image along with the
original time series to demonstrate an almost perfect fit. The
estimated Euclidean distance is 89.6.</p>
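        <p>The reconstruction claim can be read as a telescoping identity: summing the differences of adjacent levels on top of the coarsest level recovers the original series exactly. A sketch under this reading (our illustration, not the authors' code):</p>
        <preformat>
```python
import numpy as np

def triangular_memberships(ell, H):
    idx = np.arange(1, ell + 1, dtype=float)
    A = np.maximum(1.0 - np.abs(idx[:, None] - idx[None, :]) / H, 0.0)
    return A / A.sum(axis=1, keepdims=True)

def multiscale_fp(f, H=1, T=6):
    f = np.asarray(f, dtype=float)
    levels = [f]
    for t in range(1, T + 1):
        levels.append(triangular_memberships(len(f), (2 ** t) * H) @ f)
    return levels

rng = np.random.default_rng(3)
prices = np.cumsum(rng.normal(size=256))   # volatile "price" series
levels = multiscale_fp(prices)
laplacians = [levels[t] - levels[t + 1] for t in range(len(levels) - 1)]

# Telescoping: coarsest level + all FT-based Laplacians = original series
reconstruction = levels[-1] + np.sum(laplacians, axis=0)
print(float(np.max(np.abs(reconstruction - prices))))
```
      </preformat>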
        <p>In the same Figure 1, we show an MLP reconstruction
of the same time series with the following configuration:
4 hidden layers with 4086 neurons in each layer
(a common setting) and learning rate 0.001. It is evident that
the proposed multi-scale representation and subsequent
reconstruction are computationally cheaper and give results
of better reconstruction quality. To confirm this, we give the
estimated Euclidean distances between the original
time series and its reconstructions: 89.6 (from a sequence of
FT-based Laplacians) against 159.3 (using an MLP).</p>
      </sec>
      <sec id="sec-7-2">
        <title>6.2 Keypoint Localization and Description</title>
        <p>Keypoint localization. The required localization accuracy of
keypoints depends on the problem being solved. When
analyzing time series, the accuracy requirements differ
from those used in computer vision to match or register
images. Time series analysis focuses on comparing the target and
reference series in order to detect similarities and use them
to make a forecast. Therefore, the spatial coordinate is not
as important as the comparative analysis of
local trends and their changes in the time intervals bounded by
adjacent keypoints.</p>
        <p>
          Taking into account the above arguments, we propose
to localize and identify keypoints from the second-to-last
scaled representation of the Laplacian before the latter
meets the stopping criterion. We then follow the technique
suggested in [
          <xref ref-type="bibr" rid="ref5 ref6">6, 5</xref>
          ] and identify the keypoint with the
local extremum point of the Laplacian corresponding to the
selected scale. As in the cited above works, we faced a
number of technical problems related to the stability of
local extrema, sampling frequency in a scale, etc. Due to the
different spatial organization of the analyzed objects (time
series versus images), we found simpler solutions to the
problems raised. For example, in order to exclude extrema
close to each other (and therefore they are very unstable),
we leave only one representative, the value of which gives
the best semantic correlation with the characteristic of this
particular extremum.
        </p>
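        <p>The localization step can be sketched as follows (our illustration; the stability filter keeps, among nearby extrema, the single representative of largest Laplacian magnitude):</p>
        <preformat>
```python
import numpy as np

def local_extrema(lap, min_gap=5):
    # Candidate keypoints are local extrema of the FT-based Laplacian at the
    # chosen scale; among extrema closer than min_gap samples we keep only
    # one representative, the one with the largest magnitude
    d = np.diff(lap)
    cand = [i + 1 for i in range(len(d) - 1) if not d[i] * d[i + 1] > 0.0]
    cand.sort(key=lambda i: -abs(lap[i]))
    kept = []
    for i in cand:
        if all(abs(i - j) >= min_gap for j in kept):
            kept.append(i)
    return sorted(kept)

lap = np.sin(np.linspace(0.0, 20.0, 300))   # stand-in for an FT-based Laplacian
keypoints = local_extrema(lap)
print(len(keypoints))
```
        </preformat>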
        <p>Below, we give illustrations of some of the time series we
processed. They were selected from the site with
historical data at Yahoo Finance. We analyzed the 2016 daily
adjusted closing prices of international stock indices,
namely Prague (PX), Paris (FCHI), Frankfurt (GDAXI),
and Moscow (MOEX). Due to the daily nature of the time
series, they all have high volatility, which provides additional
support for the proposed method. In Figure 2, we show
the time series of the stock index PX (Prague) and its last
three scaled representations of the Laplacian, the last of which
satisfies the stopping criterion. Selected (filtered out)
keypoints are marked with red (blue) dots.</p>
        <p>Keypoint description. Due to the specificity of time
series with high volatility, we propose a keypoint descriptor
as a vector that includes only the Laplacian values at
keypoints from two adjacent scales, within the area bounded by
an interval whose boundaries are set by the adjacent left/right
keypoints from the same scale. In addition, we normalize the
keypoint descriptor coordinates by the Laplacian value of
the principal keypoint. As our experiments with matching
keypoint descriptors of different time series show, the
proposed keypoint descriptor is robust to noise and invariant
with respect to spatial shifts and time series ranges. The
last remark is that the quality of matching is estimated by
the Euclidean distance between keypoint descriptors.</p>
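        <p>A simplified sketch of such a descriptor and of the matching cost (our illustration with hypothetical parameter names: fixed-length resampling of the Laplacian over the bounding interval; the actual descriptor uses keypoints from two adjacent scales):</p>
        <preformat>
```python
import numpy as np

def descriptor(lap_fine, lap_coarse, kp, left, right, m=8):
    # Laplacian values from two adjacent scales, resampled to m points over
    # the interval (left, right) bounded by the neighbouring keypoints, and
    # normalized by the Laplacian value at the principal keypoint
    grid = np.linspace(left, right, m)
    base = np.arange(len(lap_fine), dtype=float)
    vec = np.concatenate([np.interp(grid, base, lap_fine),
                          np.interp(grid, base, lap_coarse)])
    return vec / lap_fine[kp]

def match_cost(d1, d2):
    # Matching quality is the Euclidean distance between descriptors
    return float(np.linalg.norm(d1 - d2))

lap_a = np.sin(np.linspace(0.0, 12.0, 200))
lap_b = 3.0 * lap_a            # same shape, different value range
d_a = descriptor(lap_a, lap_a, kp=50, left=30, right=70)
d_b = descriptor(lap_b, lap_b, kp=50, left=30, right=70)
print(match_cost(d_a, d_b))    # invariance to the series range
```
        </preformat>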
        <p>
          To illustrate the assertion about robustness and
invariance, we show (Figure 3) the results of matches
between principal keypoints of the time series with stock indices
PX (Prague), FCHI (Paris), and GDAXI (Frankfurt). In all
cases, the stock index PX was considered as the tested one and
compared against the reference stock indices FCHI and
GDAXI.
        </p>
      </sec>
    </sec>
    <sec id="sec-concl">
      <title>7 Conclusion</title>
      <p>
        We focused on a new fast and robust algorithm of
image/signal feature extraction in the form of representative
keypoints. We have contributed to this topic by showing
that the use of non-traditional kernels derived from the
theory of F-transforms [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] leads to simplified algorithms and
comparable efficiency in the selection of keypoints.
Moreover, we reduced the number of keypoints and enhanced
the robustness of their selection. This has been shown at the
theoretical and experimental levels. We also proposed a new
keypoint descriptor and tested it on matching financial time
series with high volatility.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgment</title>
      <p>The work was supported by the ERDF/ESF project
"Centre for the development of Artificial Intelligence
Methods for the Automotive Industry of the region" (No.
CZ.02.1.01/0.0/0.0/17-049/0008414).</p>
      <p>Additional support from the grant project
SGS18/PrFMF/2021 (University of Ostrava) is kindly acknowledged.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Belkin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niyogi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Laplacian eigenmaps for dimensionality reduction and data representation</article-title>
          ,
          <source>Neural Computation</source>
          ,
          <volume>15</volume>
          (
          <issue>6</issue>
          ) (
          <year>2003</year>
          )
          <fpage>1373</fpage>
          -
          <lpage>1396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Elmoataz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lezoray</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bougleux</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Discrete regularization on weighted graphs: A framework for image and manifold processing</article-title>
          ,
          <source>IEEE Transactions on Image Processing</source>
          ,
          <volume>17</volume>
          (
          <year>2008</year>
          )
          <fpage>1047</fpage>
          -
          <lpage>1060</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Koenderink</surname>
            ,
            <given-names>J. J.:</given-names>
          </string-name>
          <article-title>The structure of images</article-title>
          ,
          <source>Biological Cybernetics</source>
          ,
          <volume>50</volume>
          (
          <year>1984</year>
          )
          <fpage>363</fpage>
          -
          <lpage>370</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Lindeberg</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Scale-space theory: a basic tool for analyzing structures at different scales</article-title>
          ,
          <source>Journal of Applied Statistics</source>
          ,
          <volume>21</volume>
          (
          <year>1994</year>
          )
          <fpage>225</fpage>
          -
          <lpage>270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Lindeberg</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Image matching using generalized scalespace interest points</article-title>
          ,
          <source>Journ. Mathematical Imaging and Vision</source>
          ,
          <volume>52</volume>
          (
          <issue>1</issue>
          ) (
          <year>2015</year>
          )
          <fpage>3</fpage>
          -
          <lpage>36</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Lowe</surname>
            ,
            <given-names>D. G.</given-names>
          </string-name>
          :
          <article-title>Distinctive image features from scale-invariant key-points</article-title>
          ,
          <source>Int. J. Computer Vision</source>
          ,
          <volume>60</volume>
          (
          <year>2004</year>
          )
          <fpage>91</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Perfilieva</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Fuzzy transforms: Theory and applications</article-title>
          ,
          <source>Fuzzy Sets and Systems</source>
          ,
          <volume>157</volume>
          (
          <issue>8</issue>
          )
          (
          <year>2006</year>
          )
          <fpage>993</fpage>
          -
          <lpage>1023</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Molek</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perfilieva</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Deep Learning and Higher Degree F-transforms: Interpretable Kernels Before and After Learning</article-title>
          ,
          <source>Int. Journ. Computational Intelligence Systems</source>
          ,
          <volume>13</volume>
          (
          <issue>1</issue>
          ) (
          <year>2020</year>
          )
          <fpage>1404</fpage>
          -
          <lpage>1414</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Perfilieva</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vlasanek</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Total variation with nonlocal FT-Laplacian for patch-based inpainting</article-title>
          ,
          <source>Soft Computing</source>
          <volume>23</volume>
          (
          <year>2019</year>
          )
          <fpage>1833</fpage>
          -
          <lpage>1841</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Schmid</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mohr</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Local grayvalue invariants for image retrieval</article-title>
          ,
          <source>IEEE Trans. Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>19</volume>
          (
          <issue>5</issue>
          ), (
          <year>1997</year>
          )
          <fpage>530</fpage>
          -
          <lpage>535</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Witkin</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          :
          <article-title>Scale-space filtering</article-title>
          ,
          <source>Proc. 8th International Joint Conference on Artificial Intelligence, IJCAI'83</source>
          ,
          <issue>2</issue>
          (
          <year>1983</year>
          )
          <fpage>1019</fpage>
          -
          <lpage>1022</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>