<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Experimental Curves Segmentation Using Variable Resolution</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Anton</forename><surname>Sharypanov</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Mathematical Machines &amp; Systems Problems of National Academy of Sciences of Ukraine (IMMSP)</orgName>
								<address>
									<addrLine>42 Academician Glushkov Avenue</addrLine>
									<postCode>03680</postCode>
									<settlement>Kiev</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vladimir</forename><surname>Kalmykov</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Mathematical Machines &amp; Systems Problems of National Academy of Sciences of Ukraine (IMMSP)</orgName>
								<address>
									<addrLine>42 Academician Glushkov Avenue</addrLine>
									<postCode>03680</postCode>
									<settlement>Kiev</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vitaly</forename><surname>Vishnevskey</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Mathematical Machines &amp; Systems Problems of National Academy of Sciences of Ukraine (IMMSP)</orgName>
								<address>
									<addrLine>42 Academician Glushkov Avenue</addrLine>
									<postCode>03680</postCode>
									<settlement>Kiev</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Experimental Curves Segmentation Using Variable Resolution</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">574448F74F284DD90041218FA9A77149</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Experimental curves</term>
					<term>segmentation</term>
					<term>coarse-to-fine</term>
					<term>cardiac signal</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A new method for segmenting signals distorted by noise is discussed. Unlike other known methods, for example the Canny method, it uses no a priori data about the interference and/or the signal (image). Segmentation of signals and halftone images distorted by interference is one of the oldest problems in computer vision, yet human vision solves this task almost independently of our consciousness. It was discovered that the sizes of the excitatory zones of visual neurons' receptive fields change during the visual act, which amounts to a dynamic change in the resolution of the visual system, i.e. a coarse-to-fine phenomenon in the living organism. We assumed that this "coarse-to-fine" phenomenon, i.e. the use of several different resolutions, underlies image segmentation in human vision. A "coarse-to-fine" algorithm for the segmentation of experimental graphs was developed. Its main difference from other algorithms is that the decision is made taking into account all partial solutions for all resolutions used, which ensures the stability of the final global solution. Verification results for the algorithm are presented. It is expected that the method can be extended naturally to the segmentation of halftone images.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Experimental curves represent the results of measurements, as a rule distorted by interference. The most basic feature of an experimental curve is its shape, which reflects the function that generates the observed realization of the curve and characterizes the parameters of the displayed object or process. It is assumed that the measured values represent the realization of some unknown function defined on a given measurement interval, and that the result of the measurement is a finite sequence of "reference number-value" pairs. Since different curves that relate to the same object can differ from each other in scale, interference level, number of measurements, etc., the direct use of neural network methods, or of methods that rest on statistical pattern recognition, for comparing the shapes of graphs or curves does not seem possible. In this case, the unknown functions that describe experimental curves must be approximated by functions that are invariant to affine transformations for their subsequent processing and comparison.</p><p>Since images, as well as signals, can be considered experimental realizations of some unknown functions, certain image processing methods can be used in signal processing, in particular the variable resolution method. The aim of our research is to introduce new methods for processing signals and images; in particular, to develop on their basis a new algorithm for segmenting experimental curves suitable for automated signal processing; and, finally, to demonstrate the results of applying this algorithm to one-dimensional signals distorted by interference.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Biological and mathematical aspects of variable resolution in relation to experimental curves segmentation</head><note place="foot">Information Technology and Implementation (IT&amp;I-2023), November 20-21, 2023, Kyiv, Ukraine. EMAIL: anton.sha.ua@gmail.com (A. 1); vl.kalmykov@gmail.com (A. 2); vit.vizual@gmail.com (A. 3). ORCID: 0000-0001-6804-0533 (A. 1); 0000-0001-8928-182X (A. 2); 0000-0003-2204-0487 (A. 3).</note><p>In the 1970s, neurophysiologists discovered the phenomenon of changes in the sizes of the excitatory zones of receptive fields of neurons in the visual system, which was investigated and confirmed later <ref type="bibr" target="#b0">[1]</ref>. If at the beginning of the visual act the receptive field consists of a maximum number (tens, sometimes hundreds) of receptors, then by the end of the visual act this number decreases to the minimum possible amount of 1-2 receptors. Thus, we can assume that: 1) for the visual system, there exists a variable resolution that changes during the visual act and is determined at each moment of time by the size of the excitatory zone of the neuron's receptive field; 2) the receptive field of a neuron is a discrete analogue of the neighborhood of a point in a continuous 2-dimensional space.</p><p>To analyze the continuity of a function in continuous two-dimensional space, the classical definition of continuity of a function in ε-δ form is successfully used: f is continuous at c if for each ε&gt;0 there exists δ&gt;0 such that, for any value of the variable x belonging to the δ-neighborhood of the point c, the value of the function f(x) belongs to the ε-neighborhood of f(c). Note how the continuity of the function is checked at a point. Starting with a certain value |x1−c|, the neighborhood of the point c decreases (|x1−c|&gt;|x2−c|, |x2−c|&gt;|x3−c|, ...), tending to 0. 
Here f(x) is assumed to be continuous at the point c if the neighborhood of f(c) also tends to 0</p><formula xml:id="formula_0">(|f(x1)-f(c)|&gt;|f(x2)-f(c)|, |f(x2)-f(c)|&gt;|f(x3)-f(c)|, ...)</formula><p>. Thereby, to analyze the continuity of a function at a point, a changing neighborhood of this point is used.</p><p>The decrease in the size of the excitatory zone of the receptive field can be considered a decrease in the proportions of the neighborhood of the point at the center of the receptive field. The process used in the analysis of the continuity of a function at a point in classical mathematical analysis is repeated in the visual system of humans and animals during each visual act. The essential difference between resolution changes in the visual system and the analysis of the continuity of a function at a point is that the elements of the receptive field are objects of a discrete space. Similarly, the classical definition is unsuitable for analyzing the continuity of experimental curves, since they are representations of unknown functions and are given as sequences of values, which in turn are sets of points in some discrete space. However, at the initial moments of the visual act, the excitatory zones of neurons contain many points (receptors), and as long as the receptor sets in the excitatory zones of the receptive fields are not empty, the definition of continuity can be applied to the brightness function determined in the discrete space of receptors without contradicting the classical theory of continuity. Thus, the above phenomenon of resolution changes in the human visual system can be used to create a new method of signal processing based on the concept of variable resolution.</p></div>
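As an illustration of the discrete ε-δ analogue described above, the following sketch checks whether the spread of sampled values shrinks together with the neighborhood of a sample. The function names, the fixed sequence of neighborhood sizes, and the tolerance `eps` are our own illustrative choices, not part of the paper's algorithm:

```python
import numpy as np

def neighborhood_spread(x, c, delta):
    """Spread of the sampled values inside the delta-neighborhood of sample c."""
    lo, hi = max(0, c - delta), min(len(x), c + delta + 1)
    window = x[lo:hi]
    return float(window.max() - window.min())

def looks_continuous(x, c, deltas=(8, 4, 2, 1), eps=0.1):
    """Discrete analogue of the epsilon-delta test: as the neighborhood of
    sample c shrinks, the spread of values around x[c] should shrink towards 0.
    `deltas` and `eps` are illustrative values, not values from the paper."""
    spreads = [neighborhood_spread(x, c, d) for d in deltas]
    shrinking = all(s2 <= s1 for s1, s2 in zip(spreads, spreads[1:]))
    return shrinking and spreads[-1] <= eps
```

At a jump the spread stays at the jump height however small the neighborhood becomes, so the test fails there, while it passes at points where the sampled function varies smoothly.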
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Review of the use of variable resolution for image processing</head><p>The idea of considering the initial data at variable resolution is used by researchers and developers spontaneously, most often to solve efficiently problems of large computational complexity that arise when processing the visual representation of signals. Such an approach makes it possible to exclude inappropriate objects or non-informative signal sections at the early stages of processing and to apply the computationally intensive part of the algorithm to a reduced volume of data. A review of methods from the field of image processing that use the idea of variable resolution to save computational resources is presented; in each of them, the original image is considered at several reduced resolutions.</p><p>An example is given in <ref type="bibr" target="#b1">[2]</ref> which shows the relevance of using a set of resolutions in image and signal processing. Recognition of arbitrary text by standard means in Figure <ref type="figure" target="#fig_0">1</ref> is used as an example. The text in Figure <ref type="figure" target="#fig_0">1a</ref> can be recognized by both statistical and structural recognition methods. Recognizing the text in Figure <ref type="figure" target="#fig_0">1b</ref> is a more difficult task. If you try to apply statistical methods, the result of calculating the similarity with the etalon image will be distorted by the presence of grid pixels with the color of the object in the background field. Also, the relative position of the text and the grid may change after sampling and quantization operations are applied to the image. When applying structural methods to the image in Figure <ref type="figure" target="#fig_0">1b</ref>, the contours of grid cells will be detected instead of object contours. 
Similar results can be expected when the grid overlaid on the text has the background color (Figure <ref type="figure" target="#fig_0">1c, 1d</ref>). In this case, when statistical recognition methods are applied, the recognition result will also be distorted due to the presence of pixels in the image field that belong to the object but have the background color. Again, the same relative position of the text and the grid is not guaranteed after the grid is applied to the image and the image is subjected to sampling and quantization operations.</p><p>If you try to apply structural recognition methods to the images in Figure <ref type="figure" target="#fig_0">1c</ref>, 1d, the same results will be obtained as for Figure <ref type="figure" target="#fig_0">1b</ref>: the contours of the grid cells will be detected. This statement was verified using the well-known text recognition program FineReader. The text in Figure <ref type="figure" target="#fig_0">1a</ref> was successfully recognized. The result of processing the images in Figure <ref type="figure" target="#fig_0">1b</ref>, 1c, 1d is a refusal to recognize the object in the image due to the inability to locate it. When the resolution of these images is reduced several times, the resulting images (Figure <ref type="figure" target="#fig_1">2</ref>) are recognized satisfactorily, because the recognition program does not detect the grid lines. This example demonstrates the importance of choosing the right resolution when processing an image or, if this is not possible, of using variable resolution. 
In the automated processing of noisy images, the input image is preliminarily processed with various filters to eliminate undesirable details.</p><p>The earliest case of image processing using variable resolution in order to eliminate unwanted details is an integral part of the widely used <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref> Canny method for determining the boundaries of objects in an image. The original image</p><formula xml:id="formula_1">V = {v(i, j) | i = 1, …, I; j = 1, …, J}</formula><p>is blurred using a Gaussian filter to reduce the level of noise and to eliminate unwanted details and image texture elements:</p><formula xml:id="formula_2">g(i, j) = G(σ) ∗ v(i, j),<label>(1)</label></formula><p>where G(σ) is the Gaussian filter for the value σ of the standard deviation and g(i, j) is an element of the "blurred" image. The gradients of g(i, j) are then calculated, using, for example, the Sobel operator, to obtain the value of the total gradient M(i, j) and its direction θ(i, j):</p><formula xml:id="formula_3">M(i, j) = √(gi(i, j)² + gj(i, j)²), (2) θ(i, j) = arctg(gj(i, j) / gi(i, j)). (3)</formula><p>The values M(i, j) are then thresholded with a threshold T, which should be chosen so that all contour elements are selected while most of the interference is eliminated, giving MT(i, j):</p><formula xml:id="formula_4">MT(i, j) = M(i, j), if M(i, j) ≥ T; 0, otherwise. (4)</formula><p>To improve the quality of the method, two thresholds T1 and T2 are used, where T1 &lt; T2. If a pixel v(i, j) with a value T1 &lt; MT(i, j) &lt; T2 has two neighboring pixels in the gradient direction θ(i, j), for each of which T1 &lt; MT(i, j) &lt; T2, its value is kept as a contour element; if not, it is set to 0.</p><p>All non-zero elements are combined to create a closed contour of the object, using a special algorithm. 
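Steps (1)-(4) above can be sketched with SciPy's standard Gaussian and Sobel filters. This is only a sketch of the stages named in the text (the hysteresis linking and contour closing are omitted), and the function name and test image are ours:

```python
import numpy as np
from scipy import ndimage

def gradient_map(v, sigma, T):
    """Steps (1)-(4) described above: Gaussian blur, Sobel gradients,
    total gradient magnitude and direction, single-threshold suppression.
    Not a complete Canny detector (no hysteresis, no edge linking)."""
    g = ndimage.gaussian_filter(v.astype(float), sigma)   # (1) g = G(sigma) * v
    gi = ndimage.sobel(g, axis=0)                         # derivative along i
    gj = ndimage.sobel(g, axis=1)                         # derivative along j
    M = np.hypot(gi, gj)                                  # (2) total gradient
    theta = np.arctan2(gj, gi)                            # (3) direction
    MT = np.where(M >= T, M, 0.0)                         # (4) thresholding
    return MT, theta

# A vertical step edge: the thresholded response concentrates along the edge.
v = np.zeros((32, 32))
v[:, 16:] = 1.0
MT, theta = gradient_map(v, sigma=1.0, T=0.5)
```

As the text notes, the output depends directly on the choice of sigma and T; rerunning with a larger sigma widens and weakens the response along the edge.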
In the Canny method, the variable resolution is used implicitly, since the operator selects the degree of blurring σ, but this is done based on subjective considerations about the nature of the interference. Disadvantages of the Canny method:</p><p>• object boundaries in the form of pixel sequences are the result of the algorithm, but a pixel is a two-dimensional entity, while an object boundary is usually represented as a line, in particular a broken line without thickness;</p><p>• the result of the Canny algorithm depends on the variable parameter σ of the Gaussian filter, which has the meaning of the standard deviation of the normal probability distribution law (Figure <ref type="figure" target="#fig_2">3</ref>).</p><p>Thus the result depends on the unknown "blur" parameter σ. In general, if a filter is used to preprocess a noisy image, the result will depend on the size of the filter aperture. Resolution reduction is widely used to reduce the computational complexity and improve the performance of existing image processing and recognition algorithms. For example, in <ref type="bibr" target="#b5">[6]</ref>, a model of patterns consisting of separate parts connected by non-rigid connections is considered at different resolutions, and an algorithm for the transition from low to high resolution is defined. The proposed processing method is based on the observation that the search for correspondences between a part of the image and the reference is the most computationally expensive operation compared to the identification of significant parts and the calculation of their optimal configuration. Minimizing the number of operations of comparing parts of the patterns with the image leads to a faster detection operation. Starting from the lowest resolution, the patterns are compared with the image. Only the most likely locations are selected. 
Then the locally optimal locations found are recursively propagated to parts of the model with higher resolution. By recursively removing unsuitable locations from the search space, the set of possible locations is reduced, so that at the maximum resolution only a few comparisons with the reference images are required. The proposed method allows for a tenfold speedup of computation compared to the standard dynamic programming method.</p><p>The algorithm discussed in <ref type="bibr" target="#b6">[7]</ref> uses a similar idea of excluding large regions from the hypothesis space in the early stages of recognition, but a sequence of object detectors is used for each resolution. The result of a detector is a quantitative assessment of the region under consideration. The decision to apply the next detectors in the sequence to this region is made by comparing the obtained quantitative assessment with a certain threshold. The region will be considered at the next resolution if its assessment from each detector exceeds the corresponding threshold. All thresholds are set automatically, based on probabilistic estimates. In <ref type="bibr" target="#b7">[8]</ref>, the application of the coarse-to-fine strategy to the problem of clustering vehicle trajectories is considered. The initial trajectories are combined into "coarse" clusters. Each "coarse" cluster includes trajectories with approximately the same direction but with different location characteristics. For further precise clustering, the set of trajectory points is enumerated using the Euclidean distance as a measure of proximity. In face recognition, the coarse-to-fine procedure can be implemented by applying different recognition methods to reduce the number of candidates at each step. 
In <ref type="bibr" target="#b8">[9]</ref>, the decision-making process has several stages: 1) assessment of belonging to one of all possible classes (one-against-all SVM); 2) determination of each candidate's belonging to one of a pair of classes (one-against-one SVM); 3) the Eigenface algorithm; 4) the RANSAC method. Stages 1) and 2) use the characteristics of the entire face image obtained from the discrete cosine transform. Stage 3 considers projections of face images into the feature space. The face space is defined by the eigenvectors of the face set and is based on information about the intensity of the face image. The RANSAC method is applied at the last stage, where the spatial information obtained using epipolar geometry methods from the image under verification is compared with two reference images, and the image with the highest similarity value and the shortest distance to the corresponding feature points is selected.</p><p>The task of establishing a correspondence between the pixels of two images of human faces (finding a markup) <ref type="bibr" target="#b9">[10]</ref> is effectively solved by building "cascades" of markups. In one "cascade", the size of all images is halved and a new markup is built. After that, an initial approximation for the original markup is determined based on the new markup, and the motion field is searched relative to this initial approximation, but with fewer labels. With one "cascade", the algorithm solves the problem eight times faster, while maintaining the accuracy of finding the motion field for the two images. 
Although the author describes this method as a certain engineering technique, it should be noted that it in fact uses variable resolution image processing, since within one "cascade" the face image is considered at a halved resolution, and the markup obtained for images at a reduced resolution is used as an initial approximation when searching for the markup of images at an increased resolution.</p><p>Dynamic programming is often used in tasks such as speech recognition, character recognition, pattern matching for deformable objects, and road tracking. However, such tasks often lead to state spaces of enormous size, which can make the calculations unfeasible even with the use of dynamic programming. To overcome such obstacles, it is proposed in <ref type="bibr" target="#b10">[11]</ref> to use coarse-to-fine dynamic programming (CFDP). The main idea of this approach is to form a sequence of coarse approximations of the original dynamic programming graph by combining the graph states into "superstates". For each coarse approximation, the optimal path is calculated with "optimistic" (lower-bound) weights between the superstates. The superstates along this optimal path are refined, and the process is repeated until a provably optimal global path is found. In many cases, the global optimum is achieved with significantly less computational effort than when using dynamic programming directly. The proposed algorithm is particularly well suited for problems with a large state space. According to <ref type="bibr" target="#b11">[12]</ref>, the speed of the CFDP algorithm depends on the structure of the state aggregation and the nature of the problem. 
In the best case, CFDP allows for a significant reduction in computations compared to the conventional dynamic programming method; in the worst case, it will actually run slower.</p><p>The purpose of using variable-resolution methods in the cases discussed above is to identify the parts of the original image or dataset that contain information useful for solving the problem at hand. Complex calculations are performed only on these parts. At the same time, the nature of the resolution change mechanism used in each case is not important. It should be noted that a large number of image recognition tasks that have NP-complexity or cannot be solved using traditional methods are solved instantly in the human visual system, and tasks related to video processing are solved in real time. Therefore, it is natural to turn to the results of studying the processes in the human visual system obtained in neurophysiology in order to create new methods and algorithms for processing visual information. In previous years, researchers have already tried to move forward in this direction, using the results of vision neurophysiology that were relevant at the time. For example, stimulus direction-sensitive cells in the primate visual system show a certain range of spatial sizes, in particular when the size of receptive fields is compared between different cortical areas, such as the primary visual cortex and the middle temporal area <ref type="bibr" target="#b12">[13]</ref>. With this in mind, <ref type="bibr" target="#b13">[14]</ref> investigated how integrating information about object motion across all spatial scales can help improve optical flow estimation. An adaptive, multi-scale method was proposed, where the sampling scale is chosen locally, according to the estimate of the relative velocity error with respect to image properties. 
It was shown that the proposed method gives significantly better estimates of the optical flow than traditional algorithms, with a slight increase in computational costs.</p><p>According to the authors, this is important given the large number of iterations required by relaxation algorithms and the surprising speed with which humans can reliably estimate the speed of motion. Based on this approach, a two-level multiscale adaptive neural network model for calculating motion parameters in the middle part of the temporal lobe of primates was presented in <ref type="bibr" target="#b14">[15]</ref>. At the first stage, local velocities are measured at multiple spatial resolutions, after which the optical flow field is calculated by a network of directionally sensitive neurons at multiple spatial resolutions. When conflicts arise between signals from cells at different resolutions, a coarse-to-fine branching scheme is applied, according to which signals from cells at coarser resolutions are prioritized. Further experiments on modeling the properties of a non-classical receptive field proved to be in full agreement with the results obtained in neurophysiology. A new explanation for the phenomenon of motion capture was also proposed using a coarse-to-fine conflict resolution strategy when considering information from different input channels.</p></div>
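The coarse-to-fine pruning strategy that recurs in the methods reviewed above can be sketched for one-dimensional template localization: score every position at a reduced resolution, keep only the best coarse candidates, and rescore just their neighborhoods at full resolution. The downsampling factor, the number of retained candidates, and the sum-of-squared-differences score are our illustrative choices, not an algorithm from any of the cited papers:

```python
import numpy as np

def sse(a, b):
    """Sum of squared differences; lower means a better match."""
    return float(np.sum((a - b) ** 2))

def coarse_to_fine_match(signal, template, factor=4, keep=3):
    """Locate `template` in `signal` coarse-to-fine: exhaustive scoring
    happens only at the reduced resolution; the expensive full-resolution
    comparison is restricted to the neighborhoods of the best candidates."""
    s_lo = signal[::factor]
    t_lo = template[::factor]
    n_lo = len(s_lo) - len(t_lo) + 1
    coarse = [sse(s_lo[p:p + len(t_lo)], t_lo) for p in range(n_lo)]
    candidates = np.argsort(coarse)[:keep] * factor   # promote to fine scale
    best, best_score = None, np.inf
    for c in candidates:                              # refine locally only
        lo = max(0, c - factor)
        hi = min(len(signal) - len(template), c + factor)
        for p in range(lo, hi + 1):
            score = sse(signal[p:p + len(template)], template)
            if score < best_score:
                best, best_score = p, score
    return best
```

With a downsampling factor of 4 and 3 retained candidates, the full-resolution score is evaluated at roughly `3 * (2 * factor + 1)` positions instead of every position, which is the source of the speedups reported in the reviewed papers.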
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Segmentation Algorithm</head><p>Statement of the problem: there exists an unknown function y = f(x) whose domain is bounded to [a, b]. The image of this function is observed on [a, b]. The resolution needed for analyzing the image of this function is unknown. Under the assumption that the given image represents an unknown piecewise smooth function, the boundaries of the partial segments a = t0 &lt; t1 &lt; … &lt; tN = b and their number N+1 should be found. The analytical solution of the segmentation problem stated above amounts to finding the points of discontinuity of the unknown piecewise smooth function.</p><p>The following discontinuities are of interest: jump discontinuities, when the ε-neighborhood of the function value is empty at a given point, and removable discontinuities, when the first-order derivative of the function does not exist at the given point (a jump discontinuity of the function gradient). However, only the image of the unknown piecewise smooth function is observed, so we may consider only the discrete analogue of discontinuities in the form of irregular points on the experimental curve.</p><p>The preliminary stage consists of presenting the experimental data as I "reference-value" pairs {i, xi}, i = 1, 2, …, I, which corresponds to the maximum resolution. A coarse-resolution signal is acquired (as in the visual system) from the source signal at maximum resolution. The partial answers of segmentation are the sets of breaking points found at each resolution. The result of segmentation is the sequence of breaking points at the finest resolution taken from the longest sublist of partial answers with the same set of breaking points. Further details of the algorithm are described in <ref type="bibr" target="#b15">[16]</ref>.</p></div>
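Under simplifying assumptions, the decision rule described above can be sketched as follows. The outlier-based jump detector, the dyadic downsampling, and the tolerance used to compare partial answers are crude stand-ins for the actual procedure detailed in [16]:

```python
import numpy as np

def breakpoints(x, k=3.0):
    """Flag samples whose first difference is an outlier -- a crude
    stand-in for the continuity analysis at a single resolution."""
    d = np.abs(np.diff(x))
    thr = d.mean() + k * d.std()
    return set(np.flatnonzero(d > thr) + 1)

def same_answer(a, b, tol):
    """Two partial answers agree if they have the same number of breaking
    points and each point pairs up within `tol` samples."""
    if len(a) != len(b):
        return False
    return all(min(abs(p - q) for q in b) <= tol for p in a)

def segment_variable_resolution(x, levels=4):
    """Sketch of the decision rule described above: collect partial answers
    at several resolutions, coarse to fine, and return the finest-resolution
    answer from the longest run of consecutive resolutions that agree on
    the same set of breaking points (mapped back to indices in x)."""
    answers, steps = [], []
    for level in range(levels - 1, -1, -1):          # coarse -> fine
        step = 2 ** level
        answers.append(sorted(i * step for i in breakpoints(x[::step])))
        steps.append(step)
    best_start, best_len, start = 0, 1, 0
    for i in range(1, len(answers)):
        if same_answer(answers[i - 1], answers[i], steps[i - 1]):
            if i - start + 1 > best_len:
                best_start, best_len = start, i - start + 1
        else:
            start = i
    return answers[best_start + best_len - 1]
```

The point of the consensus step is the one stated in the text: a spurious jump produced by noise at the finest resolution does not survive across the coarser resolutions, so it is excluded from the final answer.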
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experiment 5.1. Model Signal Segmentation</head><p>The algorithm for segmenting an experimental curve using variable resolution has been implemented as a computer program in the Matlab 2010b environment (Figure <ref type="figure" target="#fig_3">4</ref>). Figure <ref type="figure" target="#fig_3">4</ref> a.1, b.1 shows the sample numbers of the experimental curve along the abscissa axis and the number of the resolution at which the experimental curve is investigated along the ordinate axis. The segments in Figure <ref type="figure" target="#fig_3">4</ref> a.1, b.1 correspond to the intervals, in the region of the exact samples, on which discontinuities of the experimental curve are found. Figure <ref type="figure" target="#fig_3">4</ref> b.1 shows that information about the jumps in the experimental curve obtained at low resolutions makes it possible to exclude from consideration regions at the maximum resolution in which jumps are detected due to the presence of noise.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Cardiac Signal Segmentation</head><p>With minor additions, the algorithm was also used in a cardiac signal segmentation application. It was tested in a two-part experiment: on cardiac signals obtained in a state of rest and during special patient activity. Although in the first case the signal was obtained from patients, the conditions under which this experiment was conducted were almost ideal, and the distortions in the signal were not related to the R-peak form. So the goal in this case was only to confirm the ability of the implemented algorithm to find R-peaks in a cardiac signal distorted by noise. The algorithm was successfully tested on over 100 samples. The results of segmentation for a 90-second cardiac signal are shown in Figure <ref type="figure" target="#fig_4">5</ref>. The automatic separation of the cardiac signal into cardiac cycles usually occurs along the R-wave, whose amplitude is usually much greater than the amplitude of the other components of the cycle. This assumption cannot be applied to the signal in Figure <ref type="figure" target="#fig_4">5</ref> due to the presence of interference, which causes the baseline to drift from cycle to cycle. The implemented algorithm, based on the concept of variable resolution, made it possible to perform segmentation in this case as well.</p><p>The goal of the second part of the experiment was to compare the segmentation results of the implemented algorithm with the results of two other well-known reference algorithms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.1.">Experiment Materials and Methods</head><p>39 cardiograms obtained during hypoxic probes from four patients were considered. Hypoxic probes are used to assess the functional state of a person. They consist of several stages in which the person breathes calmly, takes deep breaths, or holds the breath for a certain period of time. All these activities modulate the heart activity and lead to changes in the time intervals between sequential R-peaks and in the amplitude of R-peaks (Figure <ref type="figure" target="#fig_6">6</ref>).</p><p>The cardiograms were obtained using a mobile cardiac signal recorder from the Solvaig company <ref type="bibr" target="#b16">[17]</ref> at a 500 Hz sampling rate. The calculations were conducted on an Intel Core i5-7200U PC with 8 GB RAM running the Microsoft Windows 10 Pro operating system. Each cardiogram was marked up using three programs: the Oracul <ref type="bibr" target="#b17">[18]</ref> medical diagnostic software for desktop PCs, the Cardiolyse <ref type="bibr" target="#b18">[19]</ref> medical diagnostic software on a cloud platform, and the program that implements the algorithm under consideration.</p><p>In order to increase the accuracy of the algorithm under consideration, its parameters were fine-tuned. The length of the sliding window was selected so that it contains at least two QRS-complexes. The overlap of two neighboring windows was chosen to be no less than the length of a QRS-complex, in order to exclude the possible missing of an R-peak due to incomplete placement of a significant signal part in the current window under consideration. Oracul and Cardiolyse applied noise filtering as the first step of signal processing. The algorithm under consideration used the cardiac signal "as is", without any preprocessing and without any a priori information about noise parameters. 
The cardiogram annotations from each program were converted to a unified markup file format containing only timestamps, sample numbers from the start, the QRS-complex type, and the R-peak amplitude (Figure <ref type="figure">7</ref>). The numbers of R-peaks found by the reference programs were compared to the number found by the algorithm under consideration, and the number of identically segmented cardiograms was calculated. </p></div>
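The unified markup record and the comparison criterion described above can be mirrored by a small helper. The field names and the CSV layout are our assumptions, since the exact Figure 7 format is not reproduced here:

```python
import csv
from dataclasses import dataclass

@dataclass
class RPeak:
    """One row of the unified markup (field names are our assumption)."""
    timestamp: float    # seconds from the start of the record
    sample: int         # sample number from the start (500 Hz rate)
    qrs_type: str       # N, Q, V or S
    amplitude: float    # R-peak amplitude

def write_markup(path, peaks):
    """Write one annotation to a unified CSV markup file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for p in peaks:
            writer.writerow([p.timestamp, p.sample, p.qrs_type, p.amplitude])

def identically_segmented(a, b):
    """The comparison criterion used above: two annotations of the same
    cardiogram report the same number of R-peaks."""
    return len(a) == len(b)
```

Comparing only R-peak counts (rather than exact sample positions) matches the experiment's tolerance for the small positional differences that different detectors produce on the same cardiogram.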
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Results and Discussion</head><p>To obtain the annotation files, the cardiograms were processed by each program in turn. The resulting files were placed in folders next to the initial file. The average time to obtain an annotation file with Oracul was 4 seconds. Since cardiograms were sent to Cardiolyse with a POST request and the annotation was returned in the response to a subsequent GET request, the average processing time was not calculated for it. For the program that implements the algorithm under consideration, the average time to process a cardiogram and write the markup file was 0.98 seconds. When comparing the segmentation results it turned out that, because of their medical diagnostic orientation, Oracul and Cardiolyse include only segmentation results for full cardiac cycles. Furthermore, Oracul could skip several visually normal QRS-complexes at the beginning and at the end of a cardiogram. The algorithm under consideration, by contrast, searched for R-peaks using a pattern and without further cardiac cycle analysis. Thus the markup generated by the algorithm under consideration could contain several R-peaks from incomplete cardiac cycles at the beginning and at the end of a cardiogram, and that result was considered valid. Another limitation of the implemented algorithm, due to its "non-diagnostic" orientation, was also revealed. Both Oracul and Cardiolyse mark each QRS-complex with one of four types: N for normal QRS-complexes, and Q, V, or S for QRS-complexes with some deviation from normal that still carry useful information. QRS-complexes with deviations also got into the annotation files, but their form could differ substantially from normal (Figure <ref type="figure">8</ref>), so the implemented algorithm failed to find them.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>Thus, the segmentation of an experimental curve can be carried out as a search for the points of discontinuity of the piecewise smooth function that generates it. New methods for segmenting experimental curves using the concept of variable resolution can be constructed on the basis of the classical theory of continuity of functions and recent advances in the neurophysiology of vision. In the algorithm under consideration, the processing results for all resolutions used are taken into account when making the segmentation decision. The efficiency of the algorithm is confirmed by the results of processing signals and graphs distorted by interference. In this case no a priori information about the noise level was used. The experiment on cardiogram segmentation with the algorithm being discussed, which uses variable resolution, provided satisfactory results compared to the reference algorithms. Amplitudeograms and rhythmograms built from the R-peak markup could be used as initial data in further research on heart rate variability. These solutions will also be used in the development of new methods for processing halftone images.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Examples of images containing arbitrary text: a) a uniform background; b) an arbitrary text color grid superimposed on the background; c) an arbitrary background color grid superimposed on the text image; d) the grid lines on the text have a different thickness than the lines on the text in c)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Images from Figures 1a, 1b, 1c, 1d at 6 times lower resolution</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Applying the Canny method to highlight contours in an image [5] at different values of the parameter </figDesc><graphic coords="4,72.00,284.70,121.58,102.99" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Segmentation example of normal (a) and noisy (b) model experimental curves based on variable resolution concept</figDesc><graphic coords="7,72.00,421.95,449.96,173.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Segmentation of distorted cardiac signals</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Rhythmogram (b) and amplitudeogram (c) obtained from primary segmentation (a)</figDesc><graphic coords="8,295.00,355.15,224.00,124.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head></head><label></label><figDesc>The differences in (b) and (c) are presumably due to different filter settings in each system. We can assume that the filter used in Cardiolyse removes more significant information, distorting some QRS-complexes as in Figure 8. Because of that, the amplitude of the S-peak is placed into the annotation file instead of the R-peak. Also, due to the "diagnostic" orientation of the reference algorithms, the rhythmograms in Figures 10b and 10c contain an extra point near interval number 150. That means the cardiogram contains a sample resembling a distorted R-peak; the implemented algorithm skips that sample.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 9 :Figure 10 :</head><label>910</label><figDesc>Figure 9: Cardiogram registered during hypoxic probe (a) and segmentation example of it with implemented algorithm (b)</figDesc><graphic coords="10,72.00,164.12,464.30,176.99" type="bitmap" /></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Taking everything aforementioned into account, the following results were obtained (Table <ref type="table">1</ref>). As can be seen, the distorted R-peaks that were found by the reference algorithms but skipped by the implemented algorithm amount to only 2% of the total. Nevertheless, they were found in almost every second cardiogram. That prevented the share of identically segmented cardiograms in the "reference algorithm vs implemented algorithm" pairs from exceeding 60 percent. An example of primary cardiogram segmentation with the implemented algorithm is presented in Figure <ref type="figure">9</ref>. The abscissa axis shows the sample numbers and the ordinate axis shows the amplitude of the signal in millivolts. Rhythmograms and amplitudeograms built from the segmentation result of each algorithm are presented in Figure <ref type="figure">10</ref>. In Figure <ref type="figure">10</ref>, for the rhythmograms the abscissa axis shows the numbers of the intervals between R-peaks and the ordinate axis shows the lengths of these intervals in seconds. For the amplitudeograms the abscissa axis shows the R-peak numbers and the ordinate axis shows the amplitude of the corresponding R-peak in millivolts. Since Oracul and Cardiolyse provide the R-peak amplitudes for the filtered signal, amplitudeograms (b) and (c) also contain amplitudeograms based on the signal after filtering, denoted by a red line.</p></div>			</div>
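The two summary figures quoted above can be sketched as a small computation over per-cardiogram R-peak counts (the counts below are hypothetical, not the experimental data):

```python
# Sketch of the two summary figures discussed above, computed from
# hypothetical per-cardiogram counts (not the experimental data): the
# share of R-peaks skipped by the implemented algorithm, and the fraction
# of cardiograms segmented identically to the reference program.

def summarize(pairs):
    """pairs: (reference_count, implemented_count) per cardiogram."""
    total_ref = sum(r for r, _ in pairs)
    skipped = sum(max(r - i, 0) for r, i in pairs)
    identical = sum(1 for r, i in pairs if r == i)
    return skipped / total_ref, identical / len(pairs)

# four made-up cardiograms: two match exactly, two miss a few peaks
skipped_share, identical_share = summarize(
    [(100, 100), (100, 98), (100, 100), (100, 99)])
```

Even a small per-cardiogram miss rate can leave the identically-segmented fraction well below 100%, which matches the pattern reported in Table 1.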
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Dynamics of Spatial Resolution of Single Units in the Lateral Geniculate Nucleus of Cat During Brief Visual Stimulation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Ruksenas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bulatov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Heggelund</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J Neurophysiol</title>
		<imprint>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="1445" to="1456" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Joint study of visual perception mechanism and computer vision systems that use coarse-to-fine approach for data processing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sharypanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Antoniouk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kalmykov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Information content &amp; processing</title>
				<meeting><address><addrLine>Sofia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="287" to="300" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Contour Detection Based on Nonclassical Receptive Field Inhibition</title>
		<author>
			<persName><forename type="first">C</forename><surname>Grigorescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Petkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Westenberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions On Image Processing</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="729" to="739" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A computational approach to edge detection</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Canny</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Pattern Anal. Machine Intell</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="679" to="698" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
	<note>PAMI-8</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Contour Detection and Hierarchical Image Segmentation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Arbelaez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Fowlkes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Malik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="898" to="916" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A coarse-to-fine approach for fast deformable object detection</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pedersoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vedaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzàlez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CVPR</title>
		<imprint>
			<biblScope unit="page" from="1353" to="1360" />
			<date type="published" when="2011-06">2011. June</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Probabilistic Coarse-To-Fine Object Recognition</title>
		<author>
			<persName><forename type="first">P</forename><surname>Moreels</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Perona</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page">49</biblScope>
			<pubPlace>Pasadena</pubPlace>
		</imprint>
		<respStmt>
			<orgName>California Institute of Technology</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Coarse-to-Fine Strategy for Vehicle Motion Trajectory Clustering</title>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICPR&apos;06: proceedings of the 18th International Conference on Pattern Recognition</title>
				<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="591" to="594" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">A Multi-Stage Classifier for Face Recognition Undertaken by Coarse-tofine Strategy, State of the Art in Face Recognition</title>
		<author>
			<persName><forename type="first">J.-D</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-H</forename><surname>Kuo</surname></persName>
		</author>
		<ptr target="http://www.intechopen.com/books/state_of_the_art_in_face_recognition/a_multistage_classifier_for_face_recognition_undertaken_by_coarse-to-fine_strategy" />
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
	<note type="report_type">Tech</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">3D Reconstruction of Human Face Based on Single or Several Images</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tyshchenko</surname></persName>
		</author>
		<ptr target="http://usim.org.ua/arch/2011/2/1.pdf" />
	</analytic>
	<monogr>
		<title level="m">Control Systems and Computers</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Coarse-to-Fine Dynamic Programming</title>
		<author>
			<persName><forename type="first">C</forename><surname>Raphael</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="1379" to="1390" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Dynamic Programming, Tree-width and Computation on Graphical Models</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">B</forename><surname>Lucena</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page">85</biblScope>
			<pubPlace>Providence</pubPlace>
		</imprint>
		<respStmt>
			<orgName>division of Applied Mathematics ; Brown University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed and orientation</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H R</forename><surname>Maunsell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Van Essen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Neurophysiol</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="1127" to="1147" />
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Computing Optical Flow Across Multiple Scales: An Adaptive Coarse-to-Fine Strategy</title>
		<author>
			<persName><forename type="first">R</forename><surname>Battiti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Amaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Computer Vision</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="133" to="145" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A Multiscale Adaptive Network Model of Motion Computation in Primates</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mathur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Koch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="349" to="355" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Segmentation of the Experimental Curves as the Implementations of Unknown Piecewise Smooth Functions</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kalmykov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sharypanov</surname></persName>
		</author>
		<idno type="DOI">10.15407/usim.2018.02.012</idno>
	</analytic>
	<monogr>
		<title level="j">Control Systems and Computers</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="12" to="18" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">EKS reyestrator, modelʹ 06000 [ECG recorder, model 06000]</title>
		<ptr target="https://solvaig.com/monitoringovaya-sistema-telecardian/holter-ecg-registrator-06000.1-black" />
				<imprint>
			<publisher>Aktsionerne tovarystvo «Solʹveyh» [Solvaig JSC]</publisher>
			<date type="accessed" when="2023-10-06">last access 06.10.2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Kompʺyuterna prohrama «Reyestratsiya ta analiz EKH syhnaliv» [Computer program "Registration and analysis of ECG signals"]</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Chaikovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Budnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">O</forename><surname>Frolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Budnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">Ye</forename><surname>Vasylʹyev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Svidotstvo pro reyestratsiyu avtorsʹkoho prava na tvir [Copyright registration certificate]</title>
		<imprint>
			<biblScope unit="page">95334</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><surname>Cardiolyse</surname></persName>
		</author>
		<ptr target="https://cardiolyse.com/" />
		<title level="m">Comprehensive Heart Health Analytics for Greater Longevity</title>
				<imprint>
			<date type="accessed" when="2023-10-06">last access 06.10.2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
