<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Experimental Curves Segmentation Using Variable Resolution</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anton Sharypanov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladimir Kalmykov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitaly Vishnevskey</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Mathematical Machines &amp; Systems Problems of National Academy of Sciences of Ukraine (IMMSP)</institution>
          ,
          <addr-line>42 Academician Glushkov Avenue, 03680, Kiev</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>2</volume>
      <fpage>0</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>A new method for segmenting signals distorted by noise is discussed. Unlike other known methods, such as the Canny method, it uses no a priori data about the interference and/or the signal (image). Segmentation of signals and halftone images distorted by interference is one of the oldest problems in computer vision, yet human vision solves this task almost without conscious effort. Neurophysiologists discovered that the sizes of the excitatory zones of visual neurons' receptive fields change during the visual act, which amounts to a dynamic change of the visual system's resolution, i.e., a coarse-to-fine phenomenon in the living organism. We assume that this "coarse-to-fine" phenomenon, i.e., the use of several different resolutions, serves to segment images in human vision. A "coarse-to-fine" algorithm for segmenting experimental graphs was developed. Its main difference from other algorithms is that the decision is made taking into account all partial solutions at all resolutions used, which ensures the stability of the final global solution. Verification results for the algorithm are presented. We expect that the method can be naturally extended to the segmentation of halftone images.</p>
      </abstract>
      <kwd-group>
        <kwd>Experimental curves</kwd>
        <kwd>segmentation</kwd>
        <kwd>coarse-to-fine</kwd>
        <kwd>cardiac signal</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Experimental curves represent the results of measurements, as a rule distorted by interference. The
most basic feature of an experimental curve is its shape, which displays the function that generates the
observed realization of the curve and characterizes the parameters of the displayed object or process. It is
assumed that the measured values represent the realization of some unknown function defined
on a given measurement interval, and that the result of the measurement is a finite sequence of
"reference number-value" pairs. Since different curves that relate to the same object can differ from each
other in scale, interference level, number of measurements, etc., the direct use of neural network methods
or methods based on statistical pattern recognition does not seem possible for comparing the shapes
of graphs or curves. Instead, the unknown functions that describe the
experimental curves must be approximated by functions that are invariant to affine transformations
before subsequent processing and comparison.</p>
      <p>Since images, as well as signals, can be considered as experimental realizations of some unknown
functions, some image processing methods can be used in signal processing, in particular, the variable
resolution method. The aim of our research is to introduce new methods for processing signals and
images, in particular, to develop a new algorithm for segmenting experimental curves suitable for
automated signal processing based on these methods and finally, to demonstrate the results of this
algorithm's application to one-dimensional signals distorted by interference.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Biological and mathematical aspects of variable resolution in relation to experimental curves segmentation</title>
      <p>
        In the 1970s, neurophysiologists discovered the phenomenon of changing sizes of the
receptive fields' excitatory zones in visual system neurons, which was investigated and confirmed
later [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. If at the beginning of the visual act a receptive field consists of the maximum number (tens,
sometimes hundreds) of receptors, by the end of the visual act this amount decreases to the minimum
possible - 1-2 receptors. Thus, we can assume that: 1) the visual system has a variable
resolution that changes during the visual act and is determined at each moment of time by the size of the
excitatory zone of the neuron's receptive field; 2) the receptive field of a neuron is a discrete analogue
of the neighborhood of a point in a continuous 2-dimensional space.
      </p>
      <p>To analyze the continuity of a function in continuous two-dimensional space, the classical ε-δ definition
of continuity is used: f is continuous at a point c if for each ε&gt;0 there exists a δ&gt;0 such that
for any value of the variable x belonging to the δ-neighborhood of the point c, the value of the function f(x) belongs
to the ε-neighborhood of f(c). Pay attention to how the continuity of the function is checked at a
point. Starting from a certain value |x1−c|, the neighborhood of the point c decreases (|x1−c|&gt;|x2−c|,
|x2−c|&gt;|x3−c|, ...), tending to 0. The function f is considered continuous at the point c if the neighborhood of
f(c) also tends to 0 (|f(x1)−f(c)|&gt;|f(x2)−f(c)|, |f(x2)−f(c)|&gt;|f(x3)−f(c)|, ...). Thus, to analyze the continuity
of a function at a point, a changing neighborhood of this point is used.</p>
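      <p>The shrinking-neighborhood test described above can be illustrated with a short numerical sketch (an illustration only, with an arbitrary decay criterion of our own; not part of the visual-system model):</p>

```python
def looks_continuous_at(f, c, radii=(1.0, 0.5, 0.25, 0.125, 0.0625)):
    """Discrete analogue of the epsilon-delta test: shrink the neighborhood
    of c and check that the spread of f around f(c) dies out with the radius."""
    spreads = [max(abs(f(c + r) - f(c)), abs(f(c - r) - f(c))) for r in radii]
    # heuristic: after the radius has shrunk 16-fold, the spread should have
    # shrunk by at least an order of magnitude
    return spreads[-1] < 0.1 * spreads[0] + 1e-12

print(looks_continuous_at(lambda x: x * x, 1.0))                  # True: smooth at 1
print(looks_continuous_at(lambda x: 0.0 if x < 0 else 1.0, 0.0))  # False: jump at 0
```

For the step function the spread stays equal to the jump height no matter how small the radius becomes, which is exactly the discrete signature of a discontinuity.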
      <p>The decrease in the size of the receptive field's excitatory zone can be considered as a decrease in
the proportions of the point neighborhood at the center of the receptive field. The process used in the
analysis of the continuity of a function at a point in classical mathematical analysis is repeated in the visual
system of humans and animals during each visual act. The essential difference between resolution changes in the
visual system and the analysis of continuity of a function at a point is that the elements of the receptive
field are objects of a discrete space. Likewise, the classical definition is unsuitable for analyzing the
continuity of experimental curves, since they are representations of unknown functions and are
given as sequences of values, which in turn are sets of points in some discrete space. However, at
the initial moments of the visual act the excitatory zones of neurons contain many points (receptors), and
as long as the receptor sets in the excitatory zones of the receptive fields are not empty, the definition of
continuity can be applied to the brightness function defined on the discrete space of receptors
without contradicting the classical theory of continuity. Thus, the above phenomenon of resolution
changes in the human visual system can be used to create a new signal processing method based on the
concept of variable resolution.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Review of the use of variable resolution in image processing</title>
      <p>The idea of considering the initial data at variable resolution is used by researchers and developers
spontaneously, most often to solve efficiently problems of large computational complexity that arise
when processing the visual representation of signals. Such an approach makes it possible to exclude
inappropriate objects or non-informative signal sections at the early stages of processing and to apply the
computationally intensive part of the algorithm to a reduced volume of data. A review of methods from
the field of image processing that use the idea of variable resolution to save computational resources
is presented below. In these methods, the original image is considered at several reduced resolutions.</p>
      <p>
        An example given in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] shows the relevance of using a set of resolutions in
image and signal processing. Recognition of arbitrary text by standard means (Figure 1) is used as the
example. The text in Figure 1a can be recognized by both statistical and structural recognition
methods. Recognizing the text in Figure 1b is a more difficult task. If statistical
methods are applied, the computed similarity to the reference image will be distorted by the
presence of grid pixels with the color of the object in the background field. Also, the
relative position of the text and the grid may change after the sampling and quantization operations are
applied to the image. When structural methods are applied to the image in Figure 1b, the contours of the
grid cells will be detected instead of the object contours. Similar results can be expected when the grid
overlaid on the text has the background color (Figure 1c, 1d). In this case, with statistical
recognition methods, the result will again be distorted by the presence of pixels in the
image field that belong to the object but have the background color. And again, the same relative position
of the text and the grid is not guaranteed after the grid is applied to the image and the image is
subjected to sampling and quantization.
      </p>
      <p>If structural recognition methods are applied to the images in Figure 1c, 1d, the same result
will be obtained as for Figure 1b: the contours of the grid cells will be detected. This statement was
verified using the well-known text recognition program FineReader. The text in Figure 1a was
successfully recognized. The result of processing the images in Figure 1b, 1c, 1d is a refusal to recognize
the object in the image due to the inability to locate it. When the resolution of these images is reduced
several times, the resulting images (Figure 2) are recognized satisfactorily because the recognition
program does not detect the grid lines. This example demonstrates the importance of choosing the
right resolution when processing an image or, if this is not possible, of using variable resolution.
where G( ) - Gaussian filter for the value of the standard deviation ,</p>
      <p>g (i, j) - is an element of the "blurred" image.</p>
      <p>In the automated processing of noisy images, preliminary processing of the input image using
different filters is used to eliminate undesirable details.</p>
      <p>
        The very first case of image processing using variable resolution in order to eliminate unwanted
details is an integral part of the widely used [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ] Canny method for determining the boundaries of
objects in an image. The original image V  {v(i, j) | i  1, I ; j  1, J} is blurred using a Gaussian
filter to reduce the level of noise, eliminate unwanted details and image texture elements:
g (i, j)  G( )  v(i, j) ,
a)
c)
a)
c)
b)
d)
b)
d)
(1)
(2)
(3)
(4)
      </p>
      <p>Partial values of the gradients for the horizontal gi (i, j) and vertical g j (i, j) directions in the
blurred image g (i, j) , using, for example, the Sobel operator to obtain the value of the total gradient
M (i, j) and its direction  (i, j) as
are calculated.</p>
      <p>The values of M (i, j) , using the threshold T, which should be chosen so that all contour elements
are selected, while most of the interference is eliminated, are obtained MT (i, j) :</p>
      <p>M (i, j) 
gi2 (i, j)  g 2j (i, j)</p>
      <p> g j (i, j)
 (i, j)  arctg 
</p>
      <p>
gi (i, j) 
M (i, j), if
MT (i, j)  
 0, otherwise</p>
      <p>M (i, j)  T</p>
      <p>To improve the quality of the method, two thresholds T1 and T2 are used, where T1 &lt; T2. If a pixel
v(i, j) with a value T1 &lt; MT(i, j) &lt; T2 has two neighboring pixels in the gradient direction θ(i, j), for
each of which T1 &lt; MT(i, j) &lt; T2, its value is kept as a contour element; otherwise it is set to 0.
All non-zero elements are then combined into a closed contour of the object by a special
algorithm. In the Canny method, variable resolution is used implicitly, since the operator selects
the degree of blurring σ, but does so based on subjective considerations about the nature of the
interference. Disadvantages of the Canny method:</p>
      <p>• the result of the algorithm is object boundaries in the form of pixel sequences, but a pixel is a
two-dimensional entity, while an object boundary is usually represented as a line, in particular a
broken line without thickness;</p>
      <p>• the result of the Canny algorithm depends on the variable parameter of the Gaussian filter, σ,
which is the standard deviation of the normal probability distribution (Figure 3).</p>
      <p>In general, whenever a filter is used to preprocess a noisy image, the result depends on the size
of the filter aperture.</p>
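      <p>The double-threshold step can be sketched as follows; as a simplifying assumption, weak pixels are promoted through their 8-neighborhood rather than strictly along the gradient direction, as in the original rule:</p>

```python
import numpy as np

def double_threshold(M, T1, T2):
    """Keep strong responses (M >= T2) plus weak ones (T1 <= M < T2)
    that are connected to a strong response through the 8-neighborhood."""
    strong = M >= T2
    weak = (M >= T1) & ~strong
    keep = strong.copy()
    changed = True
    while changed:                      # relax until no weak pixel is promoted
        padded = np.pad(keep, 1)
        near = np.zeros_like(keep)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                near |= padded[1 + di:1 + di + M.shape[0],
                               1 + dj:1 + dj + M.shape[1]]
        new = keep | (weak & near)
        changed = bool((new != keep).any())
        keep = new
    return np.where(keep, M, 0.0)

M = np.array([[0.9, 0.5, 0.1],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5]])
out = double_threshold(M, T1=0.3, T2=0.8)
```

Here the weak responses 0.5 survive because they form a chain back to the strong pixel 0.9, while the isolated 0.1 falls below T1 and is suppressed.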
      <p>σ=1
σ=1.7
σ=4</p>
      <p>
        Resolution reduction is widely used to reduce the computational complexity and improve the
performance of existing image processing and recognition algorithms. For example, in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a model of
patterns consisting of separate parts connected by non-rigid links is considered at different
resolutions, and an algorithm for the transition from low to high resolution is defined. The proposed
method is based on the observation that searching for correspondences between a part of the
pattern and the image is the most computationally expensive operation, compared with identifying
significant parts and calculating their optimal configuration. Minimizing the number of
operations that compare parts of the patterns with the image therefore speeds up detection.
Starting from the lowest resolution, the patterns are compared with the image and only the most likely
locations are selected. The locally optimal locations found are then recursively propagated to the parts of
the model with higher resolution. By recursively removing unsuitable locations from the search space,
the set of possible locations is reduced so that, at the maximum resolution, only a few comparisons with
the reference images are required. The proposed method achieves a tenfold speedup of computation
compared to the standard dynamic programming method.
      </p>
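      <p>The pruning strategy described above can be illustrated in one dimension (a toy sketch under our own assumptions: block-average downsampling, sum-of-squared-differences scores, and a fixed number of surviving candidates; not the part-based model of the cited work):</p>

```python
import numpy as np

def downsample(x, factor):
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def match_scores(signal, template):
    # sum of squared differences at every candidate offset (lower = better)
    L = len(template)
    return np.array([np.sum((signal[i:i + L] - template) ** 2)
                     for i in range(len(signal) - L + 1)])

def coarse_to_fine_match(signal, template, factor=4, keep=3):
    # coarse pass: cheap scan at reduced resolution
    coarse = match_scores(downsample(signal, factor),
                          downsample(template, factor))
    candidates = np.argsort(coarse)[:keep] * factor
    # fine pass: full-resolution scoring only around surviving candidates
    best, best_score = None, np.inf
    L = len(template)
    for c in candidates:
        lo, hi = max(0, c - factor), min(len(signal) - L, c + factor)
        for i in range(lo, hi + 1):
            s = np.sum((signal[i:i + L] - template) ** 2)
            if s < best_score:
                best, best_score = i, s
    return best

template = np.sin(np.linspace(0, np.pi, 16))
signal = np.zeros(128)
signal[40:56] = template
```

The coarse pass scores only 29 offsets instead of 113, and the fine pass touches a handful of positions around the survivors, which is the source of the speedup the paragraph describes.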
      <p>
        The algorithm discussed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] uses a similar idea of excluding large regions from the hypothesis
space in the early stages of recognition, but applies a sequence of object detectors at each resolution.
Each detector outputs a quantitative score for the region under consideration. The decision
to apply the next detectors in the sequence to this region is made by comparing the
obtained score with a certain threshold: the region is considered at the next
resolution only if its score from every detector exceeds the corresponding threshold. All thresholds
are set automatically, based on probabilistic estimates. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], the application of the coarse-to-fine
strategy to the problem of clustering vehicle trajectories is considered. The initial trajectories are
combined into "coarse" clusters. Each "coarse" cluster includes trajectories with approximately the
same direction, but with different location characteristics. For further precise clustering, the set of
trajectory points is enumerated using the Euclidean distance as a measure of proximity. In face
recognition, the coarse-to-fine procedure can be implemented by applying different recognition
methods to reduce the number of candidates at each step. In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the decision-making process has
several stages: 1) assessment of membership in one of all possible classes (one-against-all SVM); 2)
determination of each candidate's membership in one of a pair of classes (one-against-one SVM); 3)
the Eigenface algorithm; 4) the RANSAC method. Stages 1 and 2 use characteristics of the entire face
image obtained from the discrete cosine transform. Stage 3 considers projections of face images into
the feature space; the face space is defined by the eigenvectors of the face set, based on
intensity information from the face image. The RANSAC method is applied at the last stage:
spatial information obtained by epipolar geometry methods from the image under
verification is compared with two reference images, and the image with the highest similarity value
and the shortest distance to the corresponding feature points is selected.
      </p>
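      <p>The cascade idea running through these examples — cheap tests first, each with its own threshold, so that later, costlier stages see fewer candidates — can be sketched generically (the detectors and thresholds below are hypothetical toys, not those of the cited works):</p>

```python
def cascade_accepts(region, stages):
    """Apply detectors in order; stop at the first score below its threshold.
    `stages` is a list of (detector, threshold) pairs, cheapest first."""
    for detector, threshold in stages:
        if detector(region) < threshold:
            return False  # region excluded from all later (costlier) stages
    return True

# Hypothetical toy detectors on a region given as a list of pixel values.
mean_brightness = lambda r: sum(r) / len(r)
contrast = lambda r: max(r) - min(r)
stages = [(mean_brightness, 0.2), (contrast, 0.5)]

print(cascade_accepts([0.1, 0.9, 0.4], stages))  # True
print(cascade_accepts([0.0, 0.1, 0.0], stages))  # False (fails stage 1)
```

The early exit is what matters: a region rejected by the first, cheapest detector never reaches the later stages at all.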
      <p>
        The task of establishing a correspondence between the pixels of two images of human faces
(finding a markup) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is effectively solved by building "cascades" of markups. Within one "cascade", the
size of both images is halved and a new markup is built. After that, an initial approximation for the
original markup is determined from the new markup, and the motion field is searched relative to
this initial approximation, but with fewer labels. With one "cascade", the algorithm solves the
problem eight times faster while maintaining the accuracy of the motion field found for the two
images. Although the author describes this method as a mere engineering technique, it should be
noted that it is in fact variable-resolution image processing, since within one "cascade" the face
image is considered at halved resolution, and the markup obtained for the reduced-resolution
images is used as an initial approximation when searching for the markup of the
higher-resolution images.
      </p>
      <p>
        Dynamic programming is often used in tasks such as speech recognition, character recognition,
pattern matching for deformable objects, and road tracking. However, such tasks often lead to state
spaces of enormous size, which can make calculations unfeasible, even with the use of dynamic
programming. To overcome such obstacles, it is proposed in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] to use coarse-to-fine dynamic
programming (CFDP). The main idea of this approach is to form a sequence of coarse approximations
of the original dynamic programming graph by combining graph states into "superstates". For
each coarse approximation, the optimal path is calculated using "optimistic" edge weights
between the superstates. The superstates along this optimal path are then refined, and the process is
repeated until a provably optimal global path is found. In many cases, the global optimum is reached
with significantly less computational effort than with direct dynamic programming. The
proposed algorithm is particularly well suited to problems with a large state space. According to
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], the speed of the CFDP algorithm depends on the structure of the grouping and the nature of
the problem: in the best case, CFDP gives a significant reduction in computation compared with
conventional dynamic programming; in the worst case, it actually runs slower.
      </p>
      <p>
        The purpose of using variable-resolution methods in the cases discussed above is to identify parts
of the original image or parts of the original dataset that contain information that seems useful for
solving the problem at hand. Complex calculations are performed only on these parts. At the same
time, the nature of the resolution change mechanism used in each case is not important. It should be
noted that a large number of image recognition tasks that have NP-complexity or cannot be solved
using traditional methods are solved instantly in the human visual system, and tasks related to video
processing are solved in real time. Therefore, it would be natural to turn to the results of studying the
processes in the human visual system obtained in neurophysiology to create new methods and
algorithms for processing visual information. In previous years, researchers have already tried to
move forward in this direction, using the results of vision neurophysiology that were relevant at the
time. For example, stimulus direction-sensitive cells in the primate visual system show a certain range
of spatial sizes, in particular, if the size of receptive fields is compared between different cortical
areas, such as primary visual cortex and the middle temporal lobe [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. With this in mind, [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
investigated how integrating information about object motion across all spatial scales can help
improve optical flow estimation. An adaptive, multi-scale method was proposed, where the sampling
scale is chosen locally, according to the estimate of the relative velocity error with respect to image
properties. It was shown that the proposed method gives significantly better estimates of the optical
flow than traditional algorithms, with a slight increase in computational costs.
      </p>
      <p>
        According to the authors, this is important given the large number of iterations required by
relaxation algorithms and the surprising speed with which humans can reliably estimate the speed of
motion. Based on this approach, a two-level multiscale adaptive neural network model for calculating
motion parameters in the middle part of the temporal lobe of primates was presented in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. At the
first stage, local velocities are measured at multiple spatial resolutions, after which the optical flow
field is calculated by a network of directionally sensitive neurons at multiple spatial resolutions.
When conflicts arise between signals from cells at different resolutions, a coarse-to-fine branching
scheme is applied, according to which signals from cells at coarser resolutions are prioritized. Further
experiments on modeling the properties of a non-classical receptive field proved to be in full
agreement with the results obtained in neurophysiology. A new explanation for the phenomenon of
motion capture was also proposed using a coarse-to-fine conflict resolution strategy when considering
information from different input channels.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Segmentation Algorithm</title>
      <p>Statement of the problem: there is an unknown function y = f(x) with its domain bounded to [a,
b]. The image of this function is observed on [a, b]. The resolution needed for analyzing the image of
this function is unknown. Under the assumption that the given image represents an
unknown piecewise smooth function, the boundaries of the partial segments a = t0 &lt; t1 &lt; … &lt; tN = b and
their number N+1 should be found. The analytical solution of the segmentation problem stated above
amounts to finding the points of discontinuity of the unknown piecewise smooth function.
The following discontinuities are of interest: jump discontinuities, where the ε-neighborhood of the
function value is empty at a given point, and points where the first-order derivative of the
function does not exist (jump discontinuities of the function gradient). However, only the
image of the unknown piecewise smooth function is observed, so we may consider only the
discrete analog of discontinuities in the form of irregular points on the experimental curve.</p>
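      <p>At a fixed resolution, the discrete analogues of these two kinds of irregular points can be sketched as threshold tests on first and second differences (an illustration under tolerances of our own choosing, not the full algorithm):</p>

```python
def irregular_points(x, jump_tol, corner_tol):
    """Discrete analogues of the two discontinuity types in the text:
    jumps (large first difference) and gradient jumps (large second difference)."""
    jumps, corners = [], []
    for i in range(1, len(x)):
        if abs(x[i] - x[i - 1]) > jump_tol:
            jumps.append(i)
    for i in range(1, len(x) - 1):
        if abs(x[i + 1] - 2 * x[i] + x[i - 1]) > corner_tol:
            corners.append(i)
    return jumps, corners

print(irregular_points([0, 0, 0, 5, 5, 5], 1, 1))      # a jump at sample 3
print(irregular_points([0, 1, 2, 3, 3, 3], 1.5, 0.5))  # a gradient jump at sample 3
```

On noisy data these per-sample tests fire spuriously, which is precisely why the algorithm below cross-checks them across several resolutions.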
      <p>
        The preliminary stage consists of presenting the experimental data as I "reference-value" pairs {i, xi},
i = 1, 2, …, I, which corresponds to the maximum resolution. A coarse-resolution signal is
obtained (as in the visual system) from the source signal of maximum resolution. The partial
answers of segmentation are the sets of breaking points found at each resolution. The
result of segmentation is the sequence of breaking points at the finest resolution
within the longest sublist of partial answers with the same set of breaking points. Further details
of the algorithm are described in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
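      <p>A minimal sketch of this decision scheme, under assumptions of our own (resolutions modeled as block-averaging factors, breakpoints as thresholded first differences, and the coarsest answer as a fallback when no two resolutions agree):</p>

```python
import numpy as np

def breakpoints_at(x, factor, tol):
    """Breakpoints of x viewed at a reduced resolution (block averaging),
    mapped back to full-resolution sample numbers."""
    n = (len(x) // factor) * factor
    coarse = x[:n].reshape(-1, factor).mean(axis=1)
    idx = np.nonzero(np.abs(np.diff(coarse)) > tol)[0] + 1
    return tuple(idx * factor)

def segment(x, factors=(8, 4, 2, 1), tol=0.5):
    answers = [breakpoints_at(np.asarray(x, float), f, tol) for f in factors]
    # longest run of consecutive resolutions that give the same partial answer
    best_start, best_len, start = 0, 0, 0
    for i in range(1, len(answers) + 1):
        if i == len(answers) or answers[i] != answers[start]:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = i
    # final answer: the finest resolution inside the winning run
    return answers[best_start + best_len - 1]
```

On a clean step all resolutions agree on the breakpoint, so the finest answer is returned; a single-sample noise spike is visible only at the fine resolutions, so those answers disagree with one another and the spike is rejected — the stabilizing effect the text attributes to using all partial solutions.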
    </sec>
    <sec id="sec-5">
      <title>5. Experiment</title>
    </sec>
    <sec id="sec-6">
      <title>5.1. Model Signal Segmentation</title>
      <p>The algorithm for segmenting an experimental curve using variable resolution has been
implemented as a computer program in the Matlab 2010b environment (Figure 4). Figure 4 a.1, b.1 shows
the sample numbers of the experimental curve along the abscissa axis and the number of the
resolution at which the experimental curve is investigated along the ordinate axis. The segments in
Figure 4 a.1, b.1 correspond to the intervals around the exact samples at which
breaks in the continuity of the experimental curve are found. Figure 4 b.1 shows that the
information about the jumps in the experimental curve obtained at low resolutions makes it
possible to exclude from consideration regions at the maximum resolution in which jumps are detected
only because of noise.</p>
    </sec>
    <sec id="sec-7">
      <title>5.2. Cardiac Signal Segmentation</title>
      <p>With minor additions, the algorithm was also used in a cardiac signal segmentation application.
It was tested in a two-part experiment: on cardiac signals obtained in a state of rest and during
special patient activity. In the first case, the signal was obtained from patients under
almost ideal conditions, and the distortions in the signal were not related to the R-peak form,
so the goal was only to confirm the ability of the implemented
algorithm to find R-peaks in a cardiac signal distorted by noise. The algorithm was successfully
tested on over 100 samples. The results of segmentation for a 90-second cardiac signal are shown in
Figure 5.</p>
    </sec>
    <sec id="sec-8">
      <title>5.2.1. Experiment Materials and Methods</title>
      <p>39 cardiograms obtained during hypoxic probes from four patients were considered.
Hypoxic probes are used to assess the functional state of a person. They consist of several stages in
which the person breathes calmly, takes deep breaths, or holds their breath for a certain period of
time. All these activities modulate the heart activity and lead to changes in the time intervals between
sequential R-peaks and in the amplitudes of the R-peaks (Figure 6).</p>
      <p>
        Cardiograms were obtained using a mobile cardiac signal recorder from the Solvaig company [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
at a 500 Hz sampling rate. The calculations were conducted on an Intel Core i5-7200U PC with 8 GB RAM, running the
Microsoft Windows 10 Pro operating system. Each cardiogram was marked up by three programs:
Oracul [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], medical diagnostic software for desktop PCs; Cardiolyse [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], medical diagnostic
software on a cloud platform; and the program that implements the algorithm under consideration.
      </p>
      <p>To increase the accuracy of the algorithm under consideration, its parameters were fine-tuned.
The length of the sliding window was selected to contain at least two QRS-complexes. The overlap
of two neighboring windows was chosen to be no less than the length of a QRS-complex, in
order to avoid missing an R-peak when a significant part of the signal is only partially contained in the
current window under consideration. Oracul and Cardiolyse apply noise filtering as the first step of
signal processing; the algorithm under consideration used the cardiac signal "as is", without any
preprocessing and without any a priori information about noise parameters. The cardiogram annotations
from each program were converted to one unified markup file format containing only timestamps,
sample numbers from the start, QRS-complex type and R-peak amplitude (Figure 7). The number of
R-peaks found by the reference programs was compared to the number found by the algorithm under
consideration, and the number of identically segmented cardiograms was calculated.</p>
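      <p>The comparison step can be sketched as follows (a hypothetical matching rule of our own: peaks whose sample numbers differ by no more than a fixed tolerance count as the same peak; the sample numbers below are made up for illustration):</p>

```python
def compare_annotations(reference, candidate, tol=25):
    """Match R-peak sample numbers from two markup files; peaks closer than
    `tol` samples (50 ms at 500 Hz) are counted as the same peak."""
    matched, missed = 0, []
    used = set()
    for r in reference:
        hit = next((c for c in candidate
                    if abs(c - r) <= tol and c not in used), None)
        if hit is None:
            missed.append(r)
        else:
            used.add(hit)
            matched += 1
    return matched, missed

ref = [410, 833, 1260, 1698]   # hypothetical reference R-peak samples
got = [412, 835, 1700]         # hypothetical output: one peak not found
print(compare_annotations(ref, got))  # (3, [1260])
```

Two cardiograms are then "identically segmented" when the missed list is empty in both directions of the comparison.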
    </sec>
    <sec id="sec-9">
      <title>6. Results and Discussion</title>
      <p>To obtain the annotation files, the cardiograms were processed by each program in turn.
The resulting files were placed in folders next to the initial file. The average time to obtain an
annotation file with Oracul was 4 seconds. Since cardiograms were sent to Cardiolyse for processing
with a POST query and the annotation was returned in the response to a subsequent GET query, the average
processing time was not calculated. For the program that implements the algorithm under consideration,
the average time to process a cardiogram and write the markup file was 0.98 seconds. During the
comparison of segmentation results, it turned out that, because of their medical diagnostic
orientation, Oracul and Cardiolyse include only the segmentation results for full cardiac cycles. Furthermore, Oracul could
skip several visually normal QRS-complexes at the beginning and at the end of a cardiogram. At the
same time, the algorithm under consideration searched for R-peaks using a pattern and without further
cardiac cycle analysis. Thus, the markup generated by the algorithm under consideration could contain
several R-peaks from incomplete cardiac cycles at the beginning and at the end of a cardiogram, and that
result was considered valid. Another limitation of the implemented algorithm, due to its "non-diagnostic"
orientation, was also revealed. Both Oracul and Cardiolyse mark QRS-complexes with one of four types:
N for normal QRS-complexes, and Q, V, S for QRS-complexes with some deviation from normal but still
carrying useful information. QRS-complexes with deviations also get into the annotation files, but their form
can differ substantially from normal (Figure 8), so the implemented algorithm failed to find
them.</p>
      <p>Taking all of the above into account, the following results were obtained (Table 1). As
can be seen, the distorted R-peaks that were found by the reference algorithms but skipped by the
implemented algorithm make up only 2% of their total amount. Nevertheless, they occurred in almost
every second cardiogram, which kept the share of identically segmented cardiograms in the
"reference algorithm - implemented algorithm" pairs from exceeding 60 percent.</p>
      <p>An example of primary cardiogram segmentation with the implemented algorithm is presented
in Figure 9. The abscissa axis shows the sample numbers and the ordinate axis shows the signal
amplitude in millivolts. Rhythmograms and amplitudeograms built from the segmentation result of
each algorithm are presented in Figure 10. For the rhythmograms in Figure 10, the abscissa axis
shows the numbers of the intervals between R-peaks and the ordinate axis shows the length of these
intervals in seconds. For the amplitudeograms, the abscissa axis shows the R-peak numbers and the
ordinate axis shows the amplitude of the corresponding R-peak in millivolts. Since Oracul and
Cardiolyse provide R-peak amplitudes for the filtered signal, the amplitudeograms (b) and (c) also
contain amplitudeograms based on the filtered signal, shown as a red line. The differences between
the amplitudeograms in (b) and (c) are presumably due to different filter settings in each system.
One may assume that the filter used in Cardiolyse removes more significant information, distorting
some QRS complexes as in Figure 8; as a result, the amplitude of the S-peak is placed in the
annotation file instead of that of the R-peak. Also, owing to the “diagnostic” orientation of the
reference algorithms, the rhythmograms in Figures 10.b and 10.c contain an extra point near interval
number 150, which means that the cardiogram has a sample resembling a distorted R-peak. The
implemented algorithm skips that sample.</p>
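Building a rhythmogram and an amplitudeogram from an R-peak markup, as described above, amounts to taking consecutive interval lengths and peak amplitudes. A minimal sketch follows; the function names and the sampling-rate parameter are assumptions for illustration:

```python
def rhythmogram(r_peaks, fs):
    """Lengths of consecutive R-R intervals in seconds.

    r_peaks: sample indices of detected R-peaks; fs: sampling rate in Hz."""
    return [(b - a) / fs for a, b in zip(r_peaks, r_peaks[1:])]

def amplitudeogram(signal, r_peaks):
    """Signal amplitude (e.g. in millivolts) at each R-peak position."""
    return [signal[i] for i in r_peaks]
```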
    </sec>
    <sec id="sec-10">
      <title>7. Conclusions</title>
      <p>Thus, the segmentation of an experimental curve can be carried out as a search for the
discontinuity points of the piecewise smooth function that generates it. New methods for segmenting
experimental curves can be constructed using the concept of variable resolution, based on the
classical theory of continuity of functions and on recent advances in the neurophysiology of vision.
In the algorithm under consideration, the processing results for all resolutions used are taken into
account when the segmentation decision is made. The efficiency of the algorithm is confirmed by the
results of processing signals and graphs distorted by interference; no a priori information about the
noise level was used. The experiment on cardiogram segmentation with the algorithm under
discussion, using variable resolution, gave satisfactory results compared to the reference
algorithms. The amplitudeograms and rhythmograms built from the R-peak markup can be used as initial
data in further research on heart rate variability. These solutions will also be used in the
development of new methods for processing halftone images.</p>
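As a simplified illustration of the variable-resolution idea, the sketch below locates a single jump discontinuity by examining the signal smoothed at progressively finer resolutions, where each coarser estimate only narrows the search interval for the next one. The window sizes and interval margins are illustrative assumptions; this is not the published algorithm.

```python
import numpy as np

def refine_breakpoint(signal, windows=(16, 4, 1)):
    """Locate one jump discontinuity coarse-to-fine: at each resolution,
    smooth the current search interval with a moving average, take the
    position of the largest first difference, and shrink the interval
    around that estimate before moving to a finer resolution."""
    lo, hi = 0, len(signal)
    est = None
    for w in windows:                               # coarse-to-fine
        kernel = np.ones(w) / w
        smooth = np.convolve(signal[lo:hi], kernel, mode="valid")
        jump = np.abs(np.diff(smooth))
        est = lo + int(np.argmax(jump)) + w // 2    # centre of smoothing window
        lo = max(0, est - 2 * w)                    # narrow the search interval
        hi = min(len(signal), est + 2 * w)
    return est
```

Because every resolution contributes to narrowing the interval, the final fine-scale answer remains consistent with the coarse-scale evidence, which is the stability property discussed above.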
    </sec>
    <sec id="sec-11">
      <title>8. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ruksenas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bulatov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Heggelund</surname>
          </string-name>
          .
          <article-title>Dynamics of Spatial Resolution of Single Units in the Lateral Geniculate Nucleus of Cat During Brief Visual Stimulation</article-title>
          .
          <source>J Neurophysiol</source>
          <volume>97</volume>
          :
          <fpage>1445</fpage>
          -
          <lpage>1456</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharypanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Antoniouk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kalmykov</surname>
          </string-name>
          .
          <article-title>Joint study of visual perception mechanism and computer vision systems that use coarse-to-fine approach for data processing</article-title>
          .
          <source>International Journal “Information content &amp; processing”</source>
          . Sofia.
          <year>2014</year>
          . Vol.
          <volume>1</volume>
          , N 3. P.
          <fpage>287</fpage>
          -
          <lpage>300</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Grigorescu</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petkov</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Westenberg</surname>
            <given-names>M.A.</given-names>
          </string-name>
          .
          <article-title>Contour Detection Based on Nonclassical Receptive Field Inhibition</article-title>
          .
          <source>IEEE Transactions On Image Processing</source>
          .
          <year>2003</year>
          . Vol.
          <volume>12</volume>
          , N 7. P.
          <fpage>729</fpage>
          -
          <lpage>739</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Canny</surname>
          </string-name>
          , “
          <article-title>A computational approach to edge detection,”</article-title>
          <source>IEEE Trans. Pattern Anal. Machine Intell</source>
          ., vol. PAMI-
          <volume>8</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>679</fpage>
          -
          <lpage>698</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Arbelaez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Fowlkes</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Malik</surname>
          </string-name>
          .
          <article-title>Contour Detection and Hierarchical Image Segmentation</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          Vol.
          <volume>33</volume>
          , No. 5.
          <year>2011</year>
          . P.
          <fpage>898</fpage>
          -
          <lpage>916</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pedersoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>González</surname>
          </string-name>
          .
          <article-title>A Coarse-to-fine approach for fast deformable object detection</article-title>
          .
          <source>CVPR</source>
          .
          <year>2011</year>
          . June. P.
          <fpage>1353</fpage>
          -
          <lpage>1360</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Moreels</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Perona</surname>
          </string-name>
          .
          <article-title>Probabilistic Coarse-To-Fine Object Recognition</article-title>
          .
          <source>Technical report</source>
          . Pasadena: California Institute of Technology.
          <year>2005</year>
          . 49 p.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hu</surname>
          </string-name>
          .
          <article-title>A Coarse-to-Fine Strategy for Vehicle Motion Trajectory Clustering</article-title>
          .
          <source>ICPR'06: proceedings of the 18th International Conference on Pattern Recognition</source>
          .
          <year>2006</year>
          . Vol.
          <volume>1</volume>
          . P.
          <fpage>591</fpage>
          -
          <lpage>594</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.-D.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Kuo</surname>
          </string-name>
          <article-title>A Multi-Stage Classifier for Face Recognition Undertaken by Coarse-to-fine Strategy, State of the Art in Face Recognition</article-title>
          .
          <source>Tech</source>
          .
          <year>2009</year>
          . URL: http://www.intechopen.com/books/state_of_the_art_in_face_recognition/a_multistage_classifier_for_face_recognition_undertaken_by_coarse-to-fine_strategy.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tyshchenko</surname>
          </string-name>
          .
          <article-title>3D Reconstruction of Human Face Based on Single or Several Images</article-title>
          .
          <source>Control Systems and Computers</source>
          . URL: http://usim.org.ua/arch/2011/2/1.pdf
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Raphael</surname>
          </string-name>
          .
          <article-title>Coarse-to-Fine Dynamic Programming</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          .
          <year>2001</year>
          . Vol.
          <volume>23</volume>
          . P.
          <fpage>1379</fpage>
          -
          <lpage>1390</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>O.B.</given-names>
            <surname>Lucena</surname>
          </string-name>
          .
          <article-title>Dynamic Programming, Tree-width and Computation on Graphical Models</article-title>
          .
          <source>PhD thesis: division of Applied Mathematics</source>
          . Providence: Brown University.
          <year>2002</year>
          . 85 p.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.H.R.</given-names>
            <surname>Maunsell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.C.</given-names>
            <surname>Van Essen</surname>
          </string-name>
          .
          <article-title>Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed and orientation</article-title>
          .
          <source>J. Neurophysiol</source>
          .
          <year>1983</year>
          . Vol.
          <volume>49</volume>
          . P.
          <fpage>1127</fpage>
          -
          <lpage>1147</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Battiti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Amaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Koch</surname>
          </string-name>
          .
          <article-title>Computing Optical Flow Across Multiple Scales: An Adaptive Coarse-to-Fine Strategy</article-title>
          .
          <source>International Journal of Computer Vision</source>
          .
          <year>1991</year>
          . Vol.
          <volume>6</volume>
          , N
          <issue>2</issue>
          . P.
          <fpage>133</fpage>
          -
          <lpage>145</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H.T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mathur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Koch</surname>
          </string-name>
          .
          <article-title>A Multiscale Adaptive Network Model of Motion Computation in Primates</article-title>
          .
          <source>Advances in Neural Information Processing Systems</source>
          .
          <year>1990</year>
          . Vol.
          <volume>3</volume>
          . P.
          <fpage>349</fpage>
          -
          <lpage>355</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Kalmykov</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharypanov</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Segmentation of the Experimental Curves as the Implementations of Unknown Piecewise Smooth Functions</article-title>
          .
          <source>Control Systems and Computers</source>
          .
          <year>2018</year>
          . N 2. P.
          <fpage>12</fpage>
          -
          <lpage>18</lpage>
          . doi: 10.15407/usim.2018.02.012.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <article-title>ECG Recorder, Model 06000</article-title>
          . Solvaig Joint Stock Company, Kyiv. Retrieved from: https://solvaig.com/monitoringovaya-sistema-telecardian/holter-ecg-registrator-06000.1-black (last accessed 06.10.
          <year>2023</year>
          ). [In Ukrainian]
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Chaikovsky</surname>
            <given-names>I. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Budnyk</surname>
            <given-names>M. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frolov</surname>
            <given-names>Yu. O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Budnyk</surname>
            <given-names>V. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vasylyev</surname>
            <given-names>V. Ye.</given-names>
          </string-name>
          .
          <article-title>Computer program “Registration and analysis of ECG signals”</article-title>
          .
          <source>Certificate of copyright registration No. 95334</source>
          .
          <year>2020</year>
          . [In Ukrainian]
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          Cardiolyse.
          <article-title>Comprehensive Heart Health Analytics for Greater Longevity</article-title>
          . Retrieved from: https://cardiolyse.com/ (last accessed 06.10.
          <year>2023</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>