<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.18287/1613-0073-2015-1490-268-276</article-id>
      <title-group>
        <article-title>3D scene stereo reconstruction with the use of epipolar restrictions</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fursov V.A.</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Goshin Y.V.</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara State Aerospace University, Image Processing Systems Institute, Russian Academy of Sciences</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <volume>3</volume>
      <fpage>268</fpage>
      <lpage>276</lpage>
      <abstract>
<p>In this paper, a new approach to the reconstruction of a digital scene model from a pair of stereo images is considered. We propose to perform image matching using a weight coefficient as a penalty function for the distance from a point to the corresponding epipolar line. An implementation of the technology for unknown camera parameters is also considered; in this case, the fundamental matrix is identified from corresponding points at the initial stage. The main advantage of this technology is the absence of an image rectification stage, which causes additional distortions due to image interpolation. Examples of scene reconstruction from a pair of test images and of digital model reconstruction from satellite images are given.</p>
      </abstract>
      <kwd-group>
        <kwd>stereo image processing</kwd>
        <kwd>image matching</kwd>
        <kwd>3D reconstruction</kwd>
        <kwd>projective geometry</kwd>
        <kwd>epipolar geometry</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>The main problem in 3D-scene reconstruction is image matching. Solving this
problem presents some difficulties because the projective distortions in the two images of a
stereo pair usually differ significantly. To overcome these difficulties, a preprocessing
technique called rectification is used. Rectification of stereo images is a transformation
after which corresponding points on the stereo images lie in the same rows.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the theoretical basis for the method of projective rectification is provided,
and the basic algorithm for rectification is introduced. The algorithm involves
calculation of the fundamental matrix and projective transformation. The method of
projective rectification is also covered in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The idea of this method is to decompose
the matrix of the projective transformation into several matrices. A common
disadvantage of projective rectification, however, is that it is inapplicable when the epipoles
are located within the images, since this results in images of infinite size. In addition,
the images can become very large, for example, when an epipole is close to the image.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], polar rectification is introduced as an alternative method of rectification.
The method consists in a polar scanning of the images around the epipoles. The operation of
the approach is illustrated on real pairs of images. This method has two
main advantages: the ability to handle epipoles located on the images, and a guaranteed
minimal image size without loss of pixels.
      </p>
      <p>
        Though it solves many problems of projective rectification mentioned above, polar
rectification has a number of disadvantages too. For instance, it does not operate
correctly when an epipole is located at infinity. In particular, there may be cases
where one epipole is located at infinity, and another one is located on the image. In
papers [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the idea of polar rectification is extended for these cases.
      </p>
      <p>
        The above methods of rectification do not take into account possible differences in
the internal parameters of the camera, such as focal lengths. This problem is described
in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], in which an algorithm for the rectification of heterogeneous and uncalibrated
image pairs is proposed, in particular for pairs of images obtained from static and
dynamic cameras with different focal lengths and/or different resolutions. Rectification
is performed in two steps. The first step is the correction of heterogeneity (different
scale) by expansion, compression or shift of the images. The second step is a standard
rectification of the images. This approach avoids the image distortions associated with
differences in scale (compression or stretching as a result of rectification).
      </p>
      <p>At the same time, the need to perform transformations directly on an image is a common
disadvantage of the rectification approach. The images, and the objects in them, are
distorted considerably by the polar transformation. Feature point detection is
performed on the new, interpolated image, which introduces additional errors. Although
polar rectification is more universal than the projective one, it still suffers from
problems connected with image distortion during the transformations.</p>
      <p>
        An alternative matching technique is the tracking of scene elements, in
particular, using an optical flow method [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The optical flow method
operates efficiently on a sequence of images, for example, on sequences of video
frames. However, when the images are obtained from cameras which are located
relatively far apart, objects may be lost during tracking.
      </p>
      <p>
        Earlier, the authors proposed a technology [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] in which a projective transform is
constructed from reliable corresponding points in informative areas of the images, and
this transform is then used to determine and adjust the corresponding points in
uninformative areas. The number of correspondence points given by this
technology is low, and the resulting 3D model is too rough.
      </p>
      <p>The technology of 3D scene reconstruction proposed in this paper largely
avoids the above-mentioned disadvantages. The basic
idea is to take epipolar restrictions into account in the course of point matching.</p>
      <p>We consider an implementation of the epipolar restrictions in which the
most similar image fragments are selected in the neighborhood of the epipolar line.
Then, according to a given criterion, the best points with real-valued coordinates belonging
to the epipolar line are selected.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem definition</title>
      <p>
        In order to solve the problem of image matching, the pinhole (camera obscura) model is used
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Let us consider the case when the parameters of the cameras, as well as their
position and orientation, are known. To characterize them, the camera parameter matrices
are defined as
K = [ f 0 u0 ; 0 f v0 ; 0 0 1 ],   K' = [ f' 0 u0' ; 0 f' v0' ; 0 0 1 ],   (1)
where f and f' are the focal lengths of the cameras, and (u0, v0), (u0', v0') are the
locations of the principal points in the coordinate systems associated with these cameras [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
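As a minimal illustration of equation (1), the intrinsic matrix can be assembled as follows. This is our own Python/NumPy sketch, not code from the paper; the function name and numeric values are purely illustrative.

```python
import numpy as np

def intrinsic_matrix(f, u0, v0):
    """Pinhole-camera intrinsic matrix K of Eq. (1):
    focal length f and principal point (u0, v0)."""
    return np.array([[f,   0.0, u0],
                     [0.0, f,   v0],
                     [0.0, 0.0, 1.0]])

K = intrinsic_matrix(1000.0, 320.0, 240.0)   # illustrative values
```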
      <p>Let us introduce a global coordinate system and the coordinate systems of the first and
second cameras with their centers at points c, c' in the global coordinate system.
Neither of these two points, in general, coincides with the origin of the global
coordinate system.</p>
      <p>Suppose M is the coordinate vector of some point in the global coordinate system.
The coordinate vectors m and m' of this point in the coordinate systems of the first and second
cameras are defined as
m = K [R | t] M,   m' = K' [R' | t'] M.   (2)</p>
      <p>R , R are the matrices of 3 3 -dimension, describing the rotation of the
coordinate systems of the first and second cameras on the global coordinate system,
and t  tx , ty ,tz T , t  tx ,ty ,tz T are coordinates of the origin of the global
coordinate system in the coordinate systems of the first and second cameras
respectively, defined as
t  Rc ,
t  Rc ,</p>
      <p>If the projections m and m' (2) on the first and second camera images are
known, the coordinates of point M in three-dimensional space can be calculated as the
intersection of the rays (c, m) and (c', m'). Owing to errors made in determining
the coordinates of the corresponding points, these rays will most likely not intersect. Errors
also arise in the internal and external calibration matrices and from image distortions due to
rectification. Therefore, only approximate estimates of the point coordinates are usually
computed.</p>
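One common way to obtain such an approximate estimate is a linear (DLT) triangulation, which returns the least-squares "intersection" of the two rays. This is a standard technique, not the paper's own code; the synthetic camera setup below is an assumption for demonstration.

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear (DLT) estimate of the world point whose projections under the
    3x4 camera matrices P1, P2 are the pixels m1, m2.  Because of measurement
    errors the two rays rarely intersect exactly, so the least-squares
    solution of the homogeneous system is returned."""
    A = np.vstack([m1[0] * P1[2] - P1[0],
                   m1[1] * P1[2] - P1[1],
                   m2[0] * P2[2] - P2[0],
                   m2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)   # null vector = smallest right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Synthetic check: two cameras with a unit baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
M = np.array([0.2, 0.1, 5.0])
h1 = P1 @ np.append(M, 1.0); m1 = h1[:2] / h1[2]
h2 = P2 @ np.append(M, 1.0); m2 = h2[:2] / h2[2]
M_hat = triangulate(P1, P2, m1, m2)
```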
      <p>
        A detailed review of methods for image matching is given in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. It presents a
classification of stereo matching methods by the cost function, the aggregation area,
and the disparity map construction approach. The paper [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] describes stereo matching
with the use of so-called superpixels. A superpixel is a set of pixels that are homogeneous in
brightness and texture. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], a method of matching in a rectangular area,
based on cross-correlation and the maximum-surface technique, is proposed. In [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], an
intensity-based method that takes discontinuities and occlusions into account is
presented.
      </p>
      <p>In our technology, rectified images are not generated. Matching points are searched for
directly on the epipolar lines belonging to the same plane. Then, for each pair of
corresponding points, the spatial coordinates of the scene are computed.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Points matching using weighting coefficients</title>
      <p>
        We assume that the fundamental matrix is either calculated from the camera
parameters or estimated from a set of corresponding points (for example, with the
eight-point algorithm [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]). We denote the coordinates of a point on the first
image of the stereo pair as (u, v), and the coordinates of the corresponding point on the
second image as (u + Δu, v + Δv), where Δu and Δv are the relative shifts of
the coordinates. Let I(u, v) and I'(u + Δu, v + Δv) be the values of the
brightness distribution functions of these images. We use the Euclidean norm as a
measure of proximity between the brightness values at point (u, v) and the
corresponding point (u + Δu, v + Δv):
e(u, v, Δu, Δv) = ‖I(u, v) − I'(u + Δu, v + Δv)‖.
      </p>
      <p>The problem of determining the shift values is formulated as the minimization of the
following criterion:
E(u0, v0, Δu, Δv) = Σ_{(u,v)∈D(u0,v0)} a(u, v) e(u, v, Δu, Δv),
where D(u0, v0) is the area around point (u0, v0), and a(u, v) is the weight function,
defined as the product of three weighting coefficients:
a(u, v) = w_c · w_d · w_f,
where w_c, w_d are coefficients which reduce the influence of projective distortions, and w_f is the
coefficient enforcing the location of the point on the epipolar line. These coefficients
depend on the point coordinates (u, v) in the area D(u0, v0).</p>
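The criterion E can be sketched directly as a weighted sum over the window D(u0, v0). The code below is our own illustration, assuming grayscale images, a square window, and a precomputed weight array a(u, v); the demo with a horizontally shifted image is an assumption, not the paper's experiment.

```python
import numpy as np

def matching_cost(I1, I2, u0, v0, du, dv, a, radius=2):
    """E(u0, v0, du, dv) = sum over (u, v) in D(u0, v0) of
    a(u, v) * e(u, v, du, dv), where e is the absolute brightness
    difference and `a` holds the combined weights w_c * w_d * w_f."""
    E = 0.0
    for j, v in enumerate(range(v0 - radius, v0 + radius + 1)):
        for i, u in enumerate(range(u0 - radius, u0 + radius + 1)):
            e = abs(float(I1[v, u]) - float(I2[v + dv, u + du]))
            E += a[j, i] * e
    return E

rng = np.random.default_rng(0)
I1 = rng.random((20, 20))
I2 = np.roll(I1, shift=3, axis=1)   # second image shifted by du = 3
a = np.ones((5, 5))                 # uniform weights for this demo
best = min(range(0, 6), key=lambda du: matching_cost(I1, I2, 8, 10, du, 0, a))
```

Minimizing E over the candidate shifts recovers the true horizontal shift of the synthetic pair.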
      <p>
        The coefficients w_d, w_c are determined, similarly to the SimpleFlow algorithm [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], by the
following equations:
w_d = exp(−‖(u0, v0) − (u, v)‖²),
w_c = exp(−‖I(u0, v0) − I(u, v)‖²),   (u, v) ∈ D(u0, v0).
      </p>
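These two weights can be computed for a whole window at once. The sketch below follows the formulas above literally (no scale parameters, although SimpleFlow-style implementations typically add them); the function name and window size are our own illustrative choices.

```python
import numpy as np

def distance_color_weights(I, u0, v0, radius=2):
    """Weights w_d and w_c over the window D(u0, v0): a Gaussian-like
    fall-off in pixel distance and in brightness difference from the
    window centre, per the formulas above."""
    us = np.arange(u0 - radius, u0 + radius + 1)
    vs = np.arange(v0 - radius, v0 + radius + 1)
    uu, vv = np.meshgrid(us, vs)
    w_d = np.exp(-((uu - u0) ** 2 + (vv - v0) ** 2))
    patch = I[v0 - radius:v0 + radius + 1,
              u0 - radius:u0 + radius + 1].astype(float)
    w_c = np.exp(-(patch - float(I[v0, v0 * 0 + u0])) ** 2)
    return w_d, w_c

rng = np.random.default_rng(0)
I = rng.random((10, 10))
w_d, w_c = distance_color_weights(I, 5, 5)
```

Both weights equal 1 at the window centre and decay toward its edges, which matches the interpretation in the text.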
      <p>Coefficient w_d increases the weight of point values depending on their
proximity to the center of the area D(u0, v0). It reduces the influence of distortions,
which grows with the distance from the center to the edges of the area.</p>
      <p>
        Weighting coefficient w_c performs a similar function; however, in this case, the
values of the brightness distribution functions are used. This coefficient extends the
effective area of fragment comparison when the intensity values in the area
D(u0, v0) differ substantially.
        Along with the weight coefficients w_d, w_c, which were considered in paper [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ],
we introduce the weight coefficient w_f, which characterizes the distance from a point to
an epipolar line. For each point (u0, v0) on the first image there is a corresponding
epipolar line l' : a'u + b'v + c' = 0 on the second image:
u, v   FF1211 FF1222 FF1233   uv    ba 
      </p>
      <p> F31 F32 F33   1   c 
a  u0 F11  u0 F12  F13 ,
b  u F 23</p>
      <p>0 21  u0 F22  F ,
c  u F 33</p>
      <p>0 31  u0 F32  F .</p>
      <p>Coefficient w_f is calculated as
w_f = exp(−d(u', v', l')),
where
d(u', v', l') = |a'u' + b'v' + c'| / √(a'² + b'²)
is the distance from the point (u', v') = (u + Δu, v + Δv) on the second image to the epipolar line l'.
This coefficient is used as a penalty function for the distance from the point to the
corresponding epipolar line.</p>
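Putting the epipolar line and the penalty together, w_f can be computed in a few lines. This is our own sketch; the skew-symmetric F used in the demo corresponds to a pure horizontal translation (an assumption chosen because its epipolar lines are simply the image rows).

```python
import numpy as np

def epipolar_weight(F, p1, p2):
    """w_f = exp(-d), where d is the distance from the candidate point p2
    on the second image to the epipolar line (a', b', c')^T = F (u0, v0, 1)^T
    generated by the point p1 of the first image."""
    a, b, c = F @ np.array([p1[0], p1[1], 1.0])
    d = abs(a * p2[0] + b * p2[1] + c) / np.hypot(a, b)
    return np.exp(-d)

# Pure horizontal translation: epipolar lines are the image rows.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
w_on  = epipolar_weight(F, (10.0, 20.0), (50.0, 20.0))   # point on the line
w_off = epipolar_weight(F, (10.0, 20.0), (50.0, 25.0))   # point 5 px off
```

A candidate exactly on the epipolar line keeps full weight 1, while off-line candidates are penalized exponentially in their distance.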
    </sec>
    <sec id="sec-4">
      <title>4. Examples</title>
    </sec>
    <sec id="sec-5">
      <title>4.1. Test scene reconstruction</title>
      <p>We compared the proposed algorithm for determining corresponding points using the
epipolar restrictions with the SimpleFlow algorithm. For this experiment we used the
"Tsukuba" set of test stereo images (Figure 1) in daylight and in flashlight.
Figure 2 shows the results of disparity map reconstruction using the proposed method
and SimpleFlow. For a quantitative assessment of quality we use the number of
disparity map pixels which are not occlusions and differ by more than 10% from
their exact values. The number of such pixels is given in Table 1. The table contains
the results of processing by the SimpleFlow algorithm as well. The first number is the
absolute number of pixels; the relative number (with respect to the total number of pixels) is
given in brackets.</p>
    </sec>
    <sec id="sec-6">
      <title>4.2. Digital terrain model reconstruction</title>
      <p>Stereo images obtained from a spacecraft and shown in Figures 2a and 2b were
used in the test. As the parameters of the spacecraft at the moment of recording by the
optical device are unknown, the problem of determining the fundamental matrix
from the corresponding points of the stereo images shown in Figure 3 has to be
solved. Let us refer to this stage as the preliminary matching of the stereo images.</p>
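A minimal linear eight-point estimate of the fundamental matrix from corresponding points can be sketched as below. This is the textbook algorithm, not the conforming estimation method the paper actually uses for this stage; practical versions add coordinate normalization and rank-2 enforcement. The synthetic pure-translation correspondences are an assumption for testing.

```python
import numpy as np

def eight_point(pts1, pts2):
    """Linear eight-point estimate of the fundamental matrix from N >= 8
    correspondences (Nx2 arrays), solving x2^T F x1 = 0 in the
    least-squares sense via SVD."""
    u, v = pts1[:, 0], pts1[:, 1]
    up, vp = pts2[:, 0], pts2[:, 1]
    A = np.column_stack([up * u, up * v, up,
                         vp * u, vp * v, vp,
                         u, v, np.ones_like(u)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    return F / np.linalg.norm(F)   # fix the overall scale

# Synthetic correspondences: pure horizontal motion with varying disparity.
rng = np.random.default_rng(1)
pts1 = rng.random((10, 2))
pts2 = pts1 + np.column_stack([rng.random(10), np.zeros(10)])
F = eight_point(pts1, pts2)
```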
      <p>
        Figure 4 (a) shows a fragment of one (left) image of the stereo pair with white
lines illustrating the sizes and directions of the shifts between the images (optical flow).
Note that the relative number of chaotically directed white lines is rather large.
When a small number of corresponding points is used, these errors can lead to gross
errors in calculating the fundamental matrix. Therefore we form a system of linear
equations and use the conforming estimation method described in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], [19]. Thus,
the following fundamental matrix is obtained:
F = [ 6.73·10⁻⁵  6.56·10⁻⁵  5.94 ;  1.22·10⁻⁴  6.25·10⁻⁶  0.65 ;  5.98  0.62  1 ].
      </p>
      <p>After the fundamental matrix is determined, it is possible to find the corresponding
points using the epipolar restrictions. Figure 4 (b) gives a fragment of the same
image with black lines showing the sizes and directions of the shift errors under the
epipolar restrictions. It is easy to notice that the number of chaotic black lines
characterizing errors is significantly lower in this case.</p>
      <p>Figure 5 shows the disparity map (a) and the reconstructed digital terrain model (b) of the
image segment, constructed with the use of the corresponding points.</p>
    </sec>
    <sec id="sec-7">
      <title>5. Conclusion</title>
      <p>When the fundamental matrix is not known precisely, high reconstruction quality
can be achieved by using the epipolar restrictions in the form of a weight
coefficient serving as a penalty function. This coefficient corresponds to the distance from
the point to the epipolar line. The results of the comparison with the popular
SimpleFlow algorithm demonstrate the promise of the proposed approach.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgements</title>
      <p>This study was funded by RFBR under research projects #13-07-12030 ofi_m
and #13-07-97000-r_povolzhye_a.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Hartley</surname>
            <given-names>RI</given-names>
          </string-name>
          .
          <article-title>Theory and Practice of Projective Rectification</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <year>1999</year>
          ;
          <volume>35</volume>
          :
          <fpage>115</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Monasse</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morel</surname>
            <given-names>JM.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            <given-names>Z</given-names>
          </string-name>
          .
          <article-title>Three-step image rectification</article-title>
          .
          <source>BMVC 2010-British Machine Vision Conference</source>
          ,
          <year>2010</year>
          ;
          <fpage>89.1</fpage>
          -
          <lpage>89.10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Pollefeys</surname>
            <given-names>M.</given-names>
          </string-name>
          <article-title>A simple and efficient rectification method for general motion</article-title>
          .
          <source>The Proceedings of the 7th IEEE International Conference on Computer Vision</source>
          ,
          <year>1999</year>
          ;
          <volume>1</volume>
          :
          <fpage>496</fpage>
          -
          <lpage>501</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Häming</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peters</surname>
            <given-names>G</given-names>
          </string-name>
          .
          <article-title>Extension of the generalized image rectification. Catching the infinity cases</article-title>
          .
          <source>Proceedings 4th International Conference on Informatics in Control, Automation, and Robotics (ICINCO</source>
          <year>2007</year>
          ),
          <year>2007</year>
          ;
          <volume>2</volume>
          :
          <fpage>275</fpage>
          -
          <lpage>279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Oram</surname>
            <given-names>D.</given-names>
          </string-name>
          <article-title>Rectification for any epipolar geometry</article-title>
          .
          <source>British Machine Vision Conference</source>
          ,
          <year>2001</year>
          ;
          <fpage>653</fpage>
          -
          <lpage>662</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kumar</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Micheloni</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Piciarelli</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foresti</surname>
            <given-names>GL</given-names>
          </string-name>
          .
          <article-title>Stereo rectification of uncalibrated and heterogeneous images</article-title>
          .
          <source>Pattern Recognition Letters</source>
          ,
          <year>2010</year>
          ;
          <volume>31</volume>
          (
          <issue>11</issue>
          ):
          <fpage>1445</fpage>
          -
          <lpage>1452</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Fleet</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weiss</surname>
            <given-names>Y.</given-names>
          </string-name>
          <article-title>Optical flow estimation</article-title>
          .
          <source>Handbook of Mathematical Models in Computer Vision</source>
          ,
          <year>2006</year>
          ;
          <fpage>237</fpage>
          -
          <lpage>257</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Barron</surname>
            <given-names>JL</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fleet</surname>
            <given-names>DJ</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beauchemin</surname>
            <given-names>SS</given-names>
          </string-name>
          .
          <article-title>Performance of optical flow techniques</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <year>1994</year>
          ;
          <volume>12</volume>
          (
          <issue>1</issue>
          ):
          <fpage>43</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Goshin</surname>
            <given-names>YeV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kotov</surname>
            <given-names>AP</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fursov</surname>
            <given-names>VA</given-names>
          </string-name>
          .
          <article-title>Two-stage formation of a spatial transformation for image matching</article-title>
          .
          <source>Computer Optics</source>
          ,
          <year>2014</year>
          ;
          <volume>38</volume>
          (
          <issue>4</issue>
          ):
          <fpage>886</fpage>
          -
          <lpage>891</lpage>
          . [in Russian]
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Forsyth</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ponce</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Computer Vision: A Modern Approach</article-title>
          . Moscow: “Williams” Publisher,
          <year>2004</year>
          . 928 p. [in Russian]
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Hartley</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisserman</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Multiple view geometry in computer vision</article-title>
          . Cambridge university press,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Scharstein</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szeliski</surname>
            <given-names>R.</given-names>
          </string-name>
          <article-title>A taxonomy and evaluation of dense two-frame stereo correspondence algorithms</article-title>
          .
          <source>International journal of computer vision</source>
          ,
          <year>2002</year>
          ;
          <volume>47</volume>
          (
          <issue>1-3</issue>
          ):
          <fpage>7</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Mičušík</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Košecká</surname>
            <given-names>J.</given-names>
          </string-name>
          <article-title>Multi-view superpixel stereo in urban environments</article-title>
          .
          <source>International journal of computer vision</source>
          ,
          <year>2010</year>
          ;
          <volume>89</volume>
          (
          <issue>1</issue>
          ):
          <fpage>106</fpage>
          -
          <lpage>119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Sun</surname>
            <given-names>C</given-names>
          </string-name>
          .
          <article-title>Fast stereo matching using rectangular subregioning and 3D maximum-surface techniques</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <year>2002</year>
          ;
          <volume>47</volume>
          (
          <issue>1-3</issue>
          ):
          <fpage>99</fpage>
          -
          <lpage>117</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Luo</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burkhardt</surname>
            <given-names>H.</given-names>
          </string-name>
          <article-title>An intensity-based cooperative bidirectional stereo matching with simultaneous detection of discontinuities and occlusions</article-title>
          .
          <source>International Journal of Computer Vision</source>
          ,
          <year>1995</year>
          ;
          <volume>15</volume>
          (
          <issue>3</issue>
          ):
          <fpage>171</fpage>
          -
          <lpage>188</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Hartley</surname>
            <given-names>R.</given-names>
          </string-name>
          <article-title>In defense of the eight-point algorithm</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <year>1997</year>
          ;
          <volume>19</volume>
          (
          <issue>6</issue>
          ):
          <fpage>580</fpage>
          -
          <lpage>593</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Tao</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bai</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kohli</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paris</surname>
            <given-names>S</given-names>
          </string-name>
          .
          <article-title>SimpleFlow: A Non-iterative, Sublinear Optical Flow Algorithm</article-title>
          . Computer Graphics Forum,
          <year>2012</year>
          ;
          <volume>31</volume>
          (
          <issue>2</issue>
          ):
          <fpage>345</fpage>
          -
          <lpage>353</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Fursov</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goshin</surname>
            <given-names>Ye</given-names>
          </string-name>
          .
          <article-title>Conformed Identification of the Fundamental Matrix in the Problem of a Scene Reconstruction, using Stereo Images</article-title>
          .
          <source>Image Mining. Theory and Applications. Proceedings of IMTA-4</source>
          ,
          <year>2013</year>
          ;
          <fpage>29</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>