<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Deformation Field Estimate for Image Sequence by Applying Stochastic Adaptation in the Block Method</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roman Kovalenko</string-name>
          <email>r.kovalenko.o@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Radik Ibragimov</string-name>
          <email>ibragimow.it@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavel Smirnov</string-name>
          <email>rtcis@mail.ru</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexander Tashlinskiy</string-name>
          <email>tag@ulstu.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Radio Engineering Department, Ulyanovsk State Technical University</institution>
          ,
          <addr-line>Ulyanovsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ventra</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>on the size of objects whose movement needs to be detected.</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>145</fpage>
      <lpage>148</lpage>
      <abstract>
        <p>The paper investigates a block method based on stochastic adaptation, used to estimate the deformation field of an image sequence. The similarity model was selected as the deformation model. The method was implemented for two target functions: the mean square inter-frame difference and the inter-frame correlation coefficient. The results of the proposed method were compared with the Motion Vector Field Adaptive Search Technique. The proposed method has high noise resistance and allows one to reduce the influence of global inter-frame geometric changes.</p>
      </abstract>
      <kwd-group>
        <kwd>stochastic adaptation</kwd>
        <kwd>mean square difference</kwd>
        <kwd>correlation coefficient</kwd>
        <kwd>image sequence</kwd>
        <kwd>block method</kwd>
        <kwd>deformation field</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Detection of the area of a moving object is usually used
in machine vision systems for highlighting the areas of
interest in images and subsequent analysis improvement. The
task of detecting a moving object for complex cases has not
yet received a general solution. The complexity of this task is
caused by the possibility of various dynamic changes in the
scene (smooth, sharp or local changes in lighting conditions,
weather changes, repetitive movement, etc.). A more
complex case can be observed when the background is
similar to a moving object. Therefore, the development of
algorithms analyzing scene movement in difficult conditions
remains a relevant subject.</p>
      <p>The task of detecting the area of a moving object is
considered as the task of dividing image pixels into two
groups: background and foreground, where the foreground is
the moving object. The foreground may consist of one or
several objects. In both cases, the foreground objects must be
detected, and if there are several objects, the moving objects
must also be separated from each other.</p>
      <p>As with many other image processing tasks, moving
object detection can be implemented in both spatial and
frequency domains.</p>
      <p>
        In the frequency domain, most of the moving object
detection methods are based on wavelet transformations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
and low order fractional statistics [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Background changes
have less effect on the result of the moving object area
detection in the frequency domain than in the spatial domain.
But with this approach, problems with shadows appear [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In the spatial domain, block methods are widely used: the reference frame is divided into blocks B_{i,j} with given block center coordinates. The size of blocks is selected based on the size of objects whose movement needs to be detected. The block method searches for the shift h_{i,j} of each block B_{i,j} on frame Z^{t-1}:
      </p>
      <p>
        h_{i,j} = arg extremum_{v_{i,j} ∈ O} Q(i, j, v_{i,j}),
(1)
where O – the search area; Q(i, j, v_{i,j}) – the target
function of matching blocks of the current and the previous
frames. By assigning the shift h_{i,j} to the nodes of the
reference grid included in block B_{i,j}, we obtain the
deformation field H = {h_{i,j}} for the deformed image and the
reference image. This approach provides high efficiency at a
relatively low computational complexity [
        <xref ref-type="bibr" rid="ref11 ref8">8, 11</xref>
        ].
      </p>
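      <p>As an illustration, the block search of (1) can be sketched as an exhaustive search over the area O, with the mean square block difference taken as the target function Q; the function name, array layout, and search strategy here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_match(ref_block, prev_frame, top, left, search_radius):
    """Exhaustive search over the area O of candidate shifts v for the
    extremum (here: minimum) of a matching target function Q, taken as
    the mean square block difference for illustration."""
    h, w = ref_block.shape
    best_q, best_shift = None, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                continue  # candidate block falls outside the previous frame
            q = np.mean((ref_block - prev_frame[y:y + h, x:x + w]) ** 2)
            if best_q is None or q < best_q:
                best_q, best_shift = q, (dx, dy)
    return best_shift
```

Applied to every block B_{i,j}, the returned shifts form the field H = {h_{i,j}}.</p>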
      <p>
        Block methods assume a static background on which
moving objects are to be detected. In practice, consecutive
frames can have global mutual spatial deformations, e.g. due
to camera movement. In this case an algorithm based on
the block method will detect motion in almost the entire
frame. To solve this problem, a more complex model for
determining the location of blocks B_{i,j}, such as the similarity
model [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], can be chosen. This model includes the following
parameters α^{t,t-1} = (h, φ, κ)^T: the shift along the basic axes
h = (h_x, h_y)^T, the rotation angle φ and the scale κ. The paper
proposes to estimate the location of blocks B_{i,j} by a stochastic
adaptation procedure [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that finds the parameters α^{t,t-1}.
The algorithm is resistant to impulse noise and requires a small
computational cost which is virtually independent of block
size. Block sizes are usually significantly smaller than the
size of the object to be detected.
      </p>
    </sec>
    <sec id="sec-2">
      <title>III. ALGORITHM DESCRIPTION</title>
      <p>
        For each block B_{i,j} of the reference frame, the stochastic
block method proposes a recurrent estimation of the parameter
vector α_{i,j}^{t,t-1} (the block position on the deformed frame) in
accordance with the procedure [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]:
α̂_{i,j,n}^{t,t-1} = α̂_{i,j,n-1}^{t,t-1} − Λ_n β_n(J(α̂_{i,j,n-1}^{t,t-1}, Z_n)),
(2)
where β – the stochastic gradient of the target function J;
Λ_n – the matrix of learning rates; Z_n – a local sample used
to find β at iteration n = 0, ..., N−1; N – the number of
iterations. Note that the local sample Z_n is independently
selected for each estimation iteration.
      </p>
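      <p>A minimal one-dimensional sketch of the recurrence (2), assuming the mean square inter-frame difference as the target function, a single scalar shift parameter, linear interpolation for resampling, a constant scalar learning rate, and a local sample of μ = 16 points; the real procedure estimates the full vector α̂ with a matrix Λ_n, so this is only an illustration of the form of the update.

```python
import numpy as np

def estimate_shift(ref, cur, n_iters=300, lam=0.5, mu=16, seed=1):
    """One-parameter version of recurrence (2): at each iteration a local
    sample Z_n of mu points is drawn independently, the stochastic gradient
    of the mean square difference is computed on it, and the shift estimate
    is updated against the gradient direction."""
    rng = np.random.default_rng(seed)
    x = np.arange(len(ref), dtype=float)
    h = 0.0
    for _ in range(n_iters):
        idx = rng.integers(1, len(ref) - 1, size=mu)        # local sample Z_n
        z = np.interp(idx + h, x, cur)                      # resampled brightness
        dz = (np.interp(idx + h + 1, x, cur)                # finite-difference
              - np.interp(idx + h - 1, x, cur)) / 2         # derivative estimate
        grad = np.mean(2.0 * (z - ref[idx]) * dz)           # stochastic gradient
        h -= lam * grad                                     # update step
    return h
```

Because each iteration touches only μ sample points, the cost per iteration is small and virtually independent of block size, which mirrors the property claimed for the method.</p>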
      <p>
        The method was implemented for two most common
target functions: the mean square inter-frame difference
(MSID) and the inter-frame correlation coefficient [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
When using MSID for the stochastic gradient at the n-th
iteration, we obtain [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]:
β_i^n = (1/(2Δ_x)) Σ_{l=1}^{μ} (z̃^t_{x_l+Δ_x,y_l} − z̃^t_{x_l−Δ_x,y_l})(z̃^t_{x_l+Δ_x,y_l} + z̃^t_{x_l−Δ_x,y_l} − 2 z^{t−1}_{i_l,j_l}) ∂x_l/∂α_i
+ (1/(2Δ_y)) Σ_{l=1}^{μ} (z̃^t_{x_l,y_l+Δ_y} − z̃^t_{x_l,y_l−Δ_y})(z̃^t_{x_l,y_l+Δ_y} + z̃^t_{x_l,y_l−Δ_y} − 2 z^{t−1}_{i_l,j_l}) ∂y_l/∂α_i,
(3)
where (x_l, y_l) – coordinates on image Z^t; (i_l, j_l) –
coordinates on image Z^{t−1}; z̃^t_{x_l,y_l} – the brightness of the
oversampled image Z^t taking into account the estimates
α̂_{i,j,n−1}^{t,t−1} obtained at the previous iteration; Δ_x, Δ_y – the steps
for finding the derivatives ∂z̃^t_{x_l,y_l}/∂x and ∂z̃^t_{x_l,y_l}/∂y by
finite differences [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]; μ – the size of the local sample Z_n. The partial
derivatives ∂x/∂α_i and ∂y/∂α_i are found analytically.
      </p>
      <p>When the inter-frame correlation coefficient is used as the
target function, the expression for the stochastic gradient
at the n-th iteration takes the form:
β_i^n = (1/(2μσ̂^{t−1})) Σ_{l=1}^{μ} (z^{t−1}_{i_l,j_l} − z̄^{t−1}_m) ((z̃^t_{x_l+Δ_x,y_l} − z̃^t_{x_l−Δ_x,y_l})/Δ_x · ∂x_l/∂α_i + (z̃^t_{x_l,y_l+Δ_y} − z̃^t_{x_l,y_l−Δ_y})/Δ_y · ∂y_l/∂α_i),
(4)
where z̄^{t−1}_m – the mean value of the brightness z^{t−1}_{i_l,j_l} over the
local sample and σ̂^{t−1} – the estimate of its standard deviation.</p>
      <p>The method based on MSID requires lower computational
costs and can work even with a local sample size μ = 1,
which allows it to be implemented in pixel-by-pixel
processing. Therefore, in the proposed method, the choice of
MSID as the main target function is appropriate.</p>
      <p>If the similarity model is used as the model for geometric
deformations of the reference and deformed frames, then the
derivatives ∂x/∂α_i and ∂y/∂α_i are defined by the
expressions:
∂x/∂κ = (a_l − x_o) cos φ − (b_l − y_o) sin φ,
∂x/∂φ = −κ ((a_l − x_o) sin φ + (b_l − y_o) cos φ),
∂x/∂h_x = 1,
∂x/∂h_y = 0,
∂y/∂h_x = 0,
∂y/∂h_y = 1,
∂y/∂κ = (a_l − x_o) sin φ + (b_l − y_o) cos φ,
∂y/∂φ = κ ((a_l − x_o) cos φ − (b_l − y_o) sin φ),
where (x_o, y_o) – coordinates of the rotation center and
(a_l, b_l) – coordinates of the l-th point on the reference frame.</p>
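      <p>These analytic derivatives can be sketched as follows, assuming the similarity transform x = κ((a_l − x_o)cos φ − (b_l − y_o)sin φ) + x_o + h_x, y = κ((a_l − x_o)sin φ + (b_l − y_o)cos φ) + y_o + h_y; the function and key names are illustrative.

```python
import math

def similarity_jacobian(a, b, xo, yo, phi, kappa):
    """Partial derivatives of the similarity-model coordinates (x, y)
    with respect to the parameters (h_x, h_y, phi, kappa) at a reference
    point (a, b), with rotation center (xo, yo)."""
    u, v = a - xo, b - yo
    return {
        "dx/dhx": 1.0, "dx/dhy": 0.0,            # shift acts directly on x
        "dy/dhx": 0.0, "dy/dhy": 1.0,            # and on y
        "dx/dkappa": u * math.cos(phi) - v * math.sin(phi),
        "dy/dkappa": u * math.sin(phi) + v * math.cos(phi),
        "dx/dphi": -kappa * (u * math.sin(phi) + v * math.cos(phi)),
        "dy/dphi": kappa * (u * math.cos(phi) - v * math.sin(phi)),
    }
```

These values supply the factors ∂x/∂α_i and ∂y/∂α_i that appear in the gradient expressions (3) and (4).</p>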
      <p>
        Usually, to represent the deformation field, every reference
pixel with coordinates (x, y) is put in correspondence with a shift
vector h = (h_x, h_y)^T. To obtain such a deformation field
representation, the estimates of the deformation parameters
α̂_{i,j} must be recalculated using the accepted deformation
model. In particular, for the similarity model, we get:
ĥ_{i,j,x} = x_o + κ̂_{N−1} ((i − x_o) cos φ̂_{N−1} − (j − y_o) sin φ̂_{N−1}) + ĥ_{x,N−1} − i, (5)
ĥ_{i,j,y} = y_o + κ̂_{N−1} ((i − x_o) sin φ̂_{N−1} + (j − y_o) cos φ̂_{N−1}) + ĥ_{y,N−1} − j. (6)
The algorithm can be described in a simplified way as
follows. For neighboring frames that do not have mutual
global inter-frame geometric changes (IGC), the parameter estimates of blocks without motion
will remain close to zero, in contrast to blocks with motion,
whose parameter estimates will converge to some nonzero
values (for the scale, to a value different from 1). This rule is the criterion for
assigning a block to motion. If neighboring frames have
mutual global IGC, then the estimates of the deformation
parameters of all blocks will be different from zero. In this
case, the blocks corresponding to the moving object will
form compact clusters, whereas blocks with global deformations are
located throughout the frame, which is used as the criterion for
detecting global deformations [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. The deformation
parameters of moving objects are determined by subtracting
the global deformations.
      </p>
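      <p>The recalculation of similarity-model estimates into a per-node shift field, in the spirit of (5)-(6), can be sketched as below; the shift is taken as the transformed node position minus the original one, and the function name is illustrative.

```python
import math

def shift_field(hx, hy, phi, kappa, nodes, xo, yo):
    """Recalculate similarity-model parameter estimates (h_x, h_y, phi,
    kappa) into shift vectors h = (h_x, h_y) for every grid node (i, j),
    following the form of (5)-(6)."""
    field = {}
    for i, j in nodes:
        u, v = i - xo, j - yo
        # new node position under the similarity transform
        x_new = xo + kappa * (u * math.cos(phi) - v * math.sin(phi)) + hx
        y_new = yo + kappa * (u * math.sin(phi) + v * math.cos(phi)) + hy
        # shift vector = transformed position minus original position
        field[(i, j)] = (x_new - i, y_new - j)
    return field
```

For a pure shift (φ = 0, κ = 1) every node receives the same vector, which is the near-zero-parameter case that marks a motionless block.</p>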
    </sec>
    <sec id="sec-3">
      <title>IV. EXPERIMENTAL RESULTS</title>
      <p>h  (3 .6 , 2 .1)T ,   3  ,   1 . And the second organism is
almost motionless. At the same time frames have global IGC
with parameters: h  (1, 2 .2 )T ,    1 ,   1 .01 . Also for
a complex case, unbiased additive Gaussian noise with a
signal/noise ratio of 14 dB was added to the images.
Fig. 1. An example of an image sequence.</p>
      <p>Fig. 2 shows the comparative results of the inter-frame
difference algorithm (Fig. 2(a)), background subtraction
(Fig. 2(b)) and the proposed stochastic block method (Fig. 2(c)).
For ease of comparison, the organism contour is drawn on each image.</p>
      <p>Fig. 2 shows that the inter-frame difference and
background subtraction algorithms mark the second
organism as moving, due to the global geometric changes in
consecutive frames. These two algorithms detect the area of a
moving object with a large number of gaps, especially in
low-contrast places where the gradient of image
brightness is small. The proposed stochastic block method highlights
the region of motion with almost no gaps. Gaps can only
correspond to blocks in which most of the pixels belong to the
background and only some of them belong to the moving object.</p>
      <p>As already noted, the proposed method also works for
pixel-by-pixel estimation of the deformation field. In this
case, each element of the deformation field contains
information about the direction and magnitude of the pixel
shift in the reference image relative to its position on the
deformed image. For example, Fig. 3 shows two consecutive
frames of an image sequence in which the car in the center is
moving and the car on the right is stationary. The
images of the moving car have the following parameters
of inter-frame spatial shift: h_x = 3, h_y = 2.95.</p>
      <p>
        The results of estimating the deformation field using the
proposed method in comparison with the results obtained
using a well-known blocks method named Motion Vector
Field Adaptive Search Technique (MVFAST) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] are
shown below. MVFAST also allows a pixel-by-pixel estimate
of the deformation field. In this case, the estimates ĥ_{i,j,x} and
ĥ_{i,j,y} are recalculated into the vector magnitude and its angle:
      </p>
      <p>|h| = (ĥ_{i,j,x}² + ĥ_{i,j,y}²)^{1/2}, (7)
θ(h) = arctg(ĥ_{i,j,x} / ĥ_{i,j,y}). (8)</p>
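      <p>The recalculation (7)-(8) can be sketched as follows; atan2 is used here instead of a plain arctg so the angle stays defined when ĥ_{i,j,y} = 0 (an implementation choice, not from the paper).

```python
import math

def to_polar(hx, hy):
    """Convert a shift estimate (h_x, h_y) into the vector magnitude (7)
    and its angle (8)."""
    magnitude = math.hypot(hx, hy)   # sqrt(h_x^2 + h_y^2)
    angle = math.atan2(hx, hy)       # arctg(h_x / h_y), quadrant-aware
    return magnitude, angle
```

Applying this to every node yields the magnitude/angle representation of the deformation field used for the comparison with MVFAST.</p>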
      <p>Fig. 4 shows typical shift estimations of image pixels
corresponding to the nodes for one row of the reference
image. Here Fig. 4(a) corresponds to the application of
the MVFAST method, Fig. 4(b) to the proposed method. For
the MVFAST method, in contrast to the proposed one, errors can be
seen on the borders of the object image and in
areas inside it. Gaps inside the object occur in low-contrast
areas. The proposed method, due to the inertia of changes in
the estimates, does not have this disadvantage.
The table below shows the estimates of the expected value m̂
and variance D̂ for both one row and the entire image. The
table shows that the estimated expected value for the
MVFAST method in the motion area is several times
greater (about 5 times for a row, 8 times for an image) than
for the proposed method. The variance estimate for the
motion area in the MVFAST method is many times greater
than the variance of the proposed method. For a motionless
area, the MVFAST method shows slightly better results for
the entire image in the absence of noise. Deformation field
estimates for the entire image are shown in Fig. 5: Fig. 5(a)
when using the MVFAST method, Fig. 5(b) the proposed
method. Fig. 5 shows significant errors of the MVFAST
method at the boundaries of the object, as well as in
low-contrast areas within the object.</p>
      <p>Algorithm: m̂, D̂
One row processing results:
Proposed algorithm: m̂ = 0.01, D̂ = 26
MVFAST: m̂ = 0.05, D̂ = 2530
Average results for the entire image:
Proposed algorithm: m̂ = 0.01, D̂ = 140
MVFAST: m̂ = 0.08, D̂ = 1860</p>
      <p>The developed method, based on identification-free
stochastic adaptation, has high noise immunity and allows
one to get rid of the influence of global IGC, as well as to
remove small moving objects that are not of interest. In this
paper, such objects were small organisms and particles; in
other situations these can be rain, snow, falling leaves, etc. The
detection of small objects is realized by reducing the size of
blocks, down to one pixel.</p>
    </sec>
    <sec id="sec-4">
      <title>ACKNOWLEDGMENT</title>
      <p>The work was supported by RFBR and the Government of
Ulyanovsk Region according to the research projects №
18-41-730011 and 19-29-09048.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Antic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Crnojevic</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Culibrk</surname>
          </string-name>
          , “
          <article-title>Efficient wavelet based detection of moving objects</article-title>
          ,
          <source>” 16th International Conference on Digital Signal Processing</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.M.</given-names>
            <surname>Bagci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yardimci</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Çetin</surname>
          </string-name>
          , “
          <article-title>Moving object detection using adaptive subband decomposition and fractional lower-order statistics in video sequences</article-title>
          ,
          <source>” Signal Processing</source>
          , vol.
          <volume>82</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>1941</fpage>
          -
          <lpage>1947</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.U.</given-names>
            <surname>Töreyin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.E.</given-names>
            <surname>Çetin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aksay</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.B.</given-names>
            <surname>Akhan</surname>
          </string-name>
          , “
          <article-title>Moving object detection in wavelet compressed video</article-title>
          ,
          <source>” Signal Processing: Image Communication</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>255</fpage>
          -
          <lpage>264</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>El-Sayed</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Elhabian</surname>
          </string-name>
          , “
          <article-title>Moving object detection in spatial domain using background removal techniques</article-title>
          ,
          <source>” Recent Patents on Computer Science</source>
          , vol.
          <volume>1</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>32</fpage>
          -
          <lpage>54</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Karasulu</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Korukoglu</surname>
          </string-name>
          , “
          <article-title>Moving object detection and tracking in videos</article-title>
          ,”
          <source>Performance Evaluation Software, SpringerBriefs in Computer Science</source>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>30</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Yung</surname>
          </string-name>
          , “
          <article-title>Extraction of moving objects from their background based on multiple adaptive thresholds and boundary evaluation,”</article-title>
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>40</fpage>
          -
          <lpage>51</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.V.</given-names>
            <surname>Kutsov</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.P.</given-names>
            <surname>Trifonov</surname>
          </string-name>
          , “
          <article-title>Detection of a moving object in the image</article-title>
          ,
          <source>” Journal of Computer and Systems Sciences International</source>
          , vol.
          <volume>45</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>459</fpage>
          -
          <lpage>468</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.V.</given-names>
            <surname>Grishin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.S.</given-names>
            <surname>Vatolin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.S.</given-names>
            <surname>Lukin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.Iu.</given-names>
            <surname>Putilin</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.N.</given-names>
            <surname>Strelnikov</surname>
          </string-name>
          , “
          <article-title>A review of block-based methods for estimating motion in digital video signals,” Software systems and tools: Thematic collection</article-title>
          , vol.
          <volume>9</volume>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>62</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.Iu.</given-names>
            <surname>Zolotykh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.D.</given-names>
            <surname>Kustikova</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.B.</given-names>
            <surname>Meerov</surname>
          </string-name>
          , “
          <article-title>An overview of the methods for searching and tracking vehicles on the video stream,” Vestnik of the Nizhny Novgorod University</article-title>
          .
          <source>N.I. Lobachevsky</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>348</fpage>
          -
          <lpage>358</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nedzvedz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Nedzvedz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lv</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ablameyko</surname>
          </string-name>
          , “
          <article-title>Traffic extreme situations detection in video sequences based on integral optical flow,” Computer Optics</article-title>
          , vol.
          <volume>43</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>647</fpage>
          -
          <lpage>652</lpage>
          ,
          <year>2019</year>
          . DOI: 10.18287/2412-6179-2019-43-4-647-652.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>I.S.</given-names>
            <surname>Zaqout</surname>
          </string-name>
          , “
          <article-title>An efficient block-based algorithm for hair removal in dermoscopic images</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>41</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>521</fpage>
          -
          <lpage>527</lpage>
          ,
          <year>2017</year>
          . DOI: 10.18287/2412-6179-2017-41-4-521-527.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Pons</surname>
          </string-name>
          and Zh. Forsait, “
          <article-title>Computer Vision: A Modern Approach</article-title>
          ,” Moscow: Viliams,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.G.</given-names>
            <surname>Tashlinskii</surname>
          </string-name>
          , “
          <article-title>Estimation of spatial deformation parameters of image sequences,”</article-title>
          <source>Ulyanovsk: ULSTU</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.G.</given-names>
            <surname>Tashlinskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.V.</given-names>
            <surname>Smirnov</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.S.</given-names>
            <surname>Biktimirov</surname>
          </string-name>
          , “
          <article-title>Methods of finding gradient estimates of target function for measurement of images parameters,” Pattern Recognition and Image Analysis</article-title>
          , vol.
          <volume>21</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>339</fpage>
          -
          <lpage>342</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.G.</given-names>
            <surname>Tashlinskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.V.</given-names>
            <surname>Smirnov</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.S.</given-names>
            <surname>Zhukov</surname>
          </string-name>
          , “
          <article-title>Analysis of methods of estimating objective function gradient during recurrent measurements of image parameters,” Pattern Recognition and Image Analysis</article-title>
          , vol.
          <volume>22</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>399</fpage>
          -
          <lpage>405</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.G.</given-names>
            <surname>Tashlinskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.V.</given-names>
            <surname>Voronov</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.V.</given-names>
            <surname>Smirnov</surname>
          </string-name>
          , “
          <article-title>A way to predict parameters of image registration by estimating inter-frame deformation of local fragments,” Pattern Recognition and Image Analysis</article-title>
          , vol.
          <volume>24</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>179</fpage>
          -
          <lpage>184</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.I.</given-names>
            <surname>Hosur</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.K.</given-names>
            <surname>Ma</surname>
          </string-name>
          , “
          <article-title>Motion vector field adaptive fast motion estimation</article-title>
          ,”
          <source>Second International Conference on Information, Communications and Signal Processing</source>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>10</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>