<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Accuracy analysis of 3D object reconstruction using point cloud filtering algorithms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A N Ruchay</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>K A Dorofeev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>V V Kalschikov</string-name>
          <email>vkalschikov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Chelyabinsk State University</institution>
          ,
          <addr-line>Bratiev Kashirinykh street 129, Chelyabinsk, Russia, 454001</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Federal Research Centre of Biological Systems and Agro-technologies of the Russian Academy of Sciences</institution>
          ,
          <addr-line>9 Yanvarya street, 29, Orenburg, Russia, 460000</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>169</fpage>
      <lpage>174</lpage>
      <abstract>
        <p>In this paper, we first analyze the accuracy of 3D object reconstruction using point cloud filtering applied to data from an RGB-D sensor. Point cloud filtering algorithms carry out upsampling of a defective point cloud. Various point cloud filtering methods are tested and compared with respect to reconstruction accuracy on real data. In order to improve the accuracy of 3D object reconstruction, an efficient point cloud filtering method is designed. The presented results show an improvement in the accuracy of 3D object reconstruction using the proposed point cloud filtering algorithm.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        3D object reconstruction is a popular task in medicine, agriculture, architecture,
games, and the film industry [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. Accurate 3D object reconstruction is important for
object recognition, object retrieval, scene understanding, object tracking, virtual maintenance,
and visualization [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ].
      </p>
      <p>
        Low-cost RGB-D sensors such as the Kinect provide a high-resolution RGB color image
together with a depth map of the environment [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. Depth map discontinuities and small errors
around object boundaries may lead to significant ringing artifacts in rendered views. Moreover, the
depth map provided by an RGB-D camera is often noisy due to imperfections associated with
infrared light reflections, and pixels without any depth value appear as black holes
in the map. Therefore, the point cloud obtained from the depth map inevitably suffers from
noise contamination and contains outliers.
      </p>
      <p>
        Noise and holes can greatly affect the accuracy of 3D reconstruction [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], therefore,
noise-reduction and hole-filling enhancement algorithms are intended to serve as a pre-processing
step for 3D reconstruction systems using Kinect cameras [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16">13, 14, 15, 16</xref>
        ]. To reduce impulsive
noise and fill small holes, the filters described in [
        <xref ref-type="bibr" rid="ref17 ref18 ref19 ref20 ref21">17, 18, 19, 20, 21</xref>
        ] are used.
      </p>
      <p>
        In this paper, we are interested in the design of a point cloud filtering algorithm that improves
the quality of the 3D reconstruction. In recent years, a large number of methods contributing to
3D point cloud filtering have been proposed: Normal-based Bilateral Filter [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], Moving Least
Square [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], Iterative guidance normal filter [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], Bilateral Filter [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], Density-based Denoising
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], Rolling normal filter [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], Statistical Outlier Removal filter [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Radius Outlier Removal
filter [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Voxel Grid filter [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], 3D Bilateral filter [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <p>
        In a common approach to noise reduction, it is assumed that the raw point cloud contains
the ground truth distorted by artificial noise, such as additive noise [
        <xref ref-type="bibr" rid="ref29 ref30">29, 30</xref>
        ]. Although this common
approach can be used for quantitative comparison (e.g., PSNR, MSE), such methods
reduce only the artificial noise, not the original noise contained in the raw point cloud. In this
paper, we consider point cloud denoising algorithms for 3D object reconstruction. We propose
a denoising method that takes a point cloud as input. We also evaluate the performance of
denoising methods on the basis of the accuracy of 3D object reconstruction. Specifically, the RMSE
error of the ICP algorithm and the Hausdorff distance [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ] between the input cloud and the filtered cloud are
calculated.
      </p>
      <p>General denoising methods are not designed to clean the coarse noise contained in the input point
cloud. Therefore, our main goal is to evaluate denoising methods in terms of reconstruction
accuracy, which depends on the quality of the input point cloud.</p>
      <p>The paper is organized as follows. Section 2 discusses related denoising point cloud methods.
Computer simulation results are provided in Section 3. Finally, Section 4 summarizes our
conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Point cloud denoising filters</title>
      <p>This section describes the point cloud denoising filters compared in this work.</p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] presents a thorough survey of filtering approaches for 3D point clouds. We compare
the following point cloud denoising algorithms in terms of the accuracy of 3D object reconstruction:
Statistical Outlier Removal filter (SOR) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Radius Outlier Removal filter (ROR) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Voxel
Grid filter (VG) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], 3D Bilateral filter (3DBF) [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>2.1. Statistical Outlier Removal filter (SOR)</title>
        <p>
          SOR uses point neighborhood statistics to filter outlier data [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. Sensor scans typically generate
point cloud datasets of varying point density. In addition, measurement errors produce sparse
outliers that corrupt the results even further. This complicates the estimation of local point
cloud characteristics such as surface normals or curvature changes, leading to erroneous values,
which in turn may cause point cloud registration to fail. Some of these irregularities can be
resolved by performing a statistical analysis of each point's neighborhood and trimming those
points that do not meet a certain criterion. This sparse outlier removal is based on the
distribution of point-to-neighbor distances in the input dataset. For each point, the mean
distance to all of its neighbors is computed. Assuming that the resulting distribution is
Gaussian with a given mean and standard deviation, all points whose mean distances fall outside
an interval defined by the global mean and standard deviation of these distances are considered
outliers and trimmed from the dataset.
        </p>
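        <p>As an illustration, the SOR criterion above can be sketched as follows. This is a minimal brute-force version with an illustrative point type and parameters; PCL's pcl::StatisticalOutlierRemoval implements the same idea with a k-d tree for efficient neighbor search.</p>

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

static double dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Keep a point only if its mean distance to its k nearest neighbors lies
// within mu + alpha * sigma, where mu and sigma are the mean and standard
// deviation of these mean distances over the whole cloud.
std::vector<Pt> sorFilter(const std::vector<Pt>& cloud, int k, double alpha) {
    const size_t n = cloud.size();
    std::vector<double> meanDist(n, 0.0);
    for (size_t i = 0; i < n; ++i) {
        std::vector<double> d;
        for (size_t j = 0; j < n; ++j)
            if (j != i) d.push_back(dist(cloud[i], cloud[j]));
        const size_t m = std::min<size_t>(k, d.size());
        std::partial_sort(d.begin(), d.begin() + m, d.end());
        double s = 0.0;
        for (size_t t = 0; t < m; ++t) s += d[t];
        meanDist[i] = s / m;
    }
    double mu = 0.0;
    for (double v : meanDist) mu += v;
    mu /= n;
    double var = 0.0;
    for (double v : meanDist) var += (v - mu) * (v - mu);
    const double sigma = std::sqrt(var / n);
    std::vector<Pt> out;
    for (size_t i = 0; i < n; ++i)
        if (meanDist[i] <= mu + alpha * sigma) out.push_back(cloud[i]);
    return out;
}
```

        <p>For example, a distant point added to a tight cluster is trimmed, while the cluster itself is kept.</p>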
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Radius Outlier Removal filter (ROR)</title>
        <p>
          ROR removes outliers if the number of neighbors in a certain search radius is smaller than a
given K [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. The user specifies the number of neighbors that every point must have within a
given radius in order to remain in the point cloud.
        </p>
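        <p>The ROR rule admits a very short sketch. The brute-force neighbor counting and the point type here are illustrative; PCL's pcl::RadiusOutlierRemoval performs the same test with a k-d tree.</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

static double dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Keep a point only if at least minNeighbors other points lie within
// searchRadius of it.
std::vector<Pt> rorFilter(const std::vector<Pt>& cloud,
                          double searchRadius, int minNeighbors) {
    std::vector<Pt> out;
    for (size_t i = 0; i < cloud.size(); ++i) {
        int count = 0;
        for (size_t j = 0; j < cloud.size(); ++j)
            if (j != i && dist(cloud[i], cloud[j]) <= searchRadius) ++count;
        if (count >= minNeighbors) out.push_back(cloud[i]);
    }
    return out;
}
```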
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Voxel Grid filter (VG)</title>
        <p>The VG filtering method first defines a 3D voxel grid (a set of 3D boxes in 3D space) over the point cloud. Then,
in each voxel, a single point is chosen to approximate all the points that lie in that voxel. Normally,
either the centroid of these points or the center of the voxel is used as the approximation. The former
is slower than the latter but represents the underlying surface more accurately. The VG
method usually leads to some loss of geometric information.</p>
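        <p>The centroid variant can be sketched as follows (a minimal version with an illustrative point type; PCL's pcl::VoxelGrid provides the same downsampling):</p>

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Pt { double x, y, z; };

// Replace all points falling into the same cubic voxel of side leafSize
// by their centroid.
std::vector<Pt> voxelGridFilter(const std::vector<Pt>& cloud, double leafSize) {
    std::map<std::tuple<long, long, long>, std::pair<Pt, int>> cells;
    for (const Pt& p : cloud) {
        auto key = std::make_tuple(
            static_cast<long>(std::floor(p.x / leafSize)),
            static_cast<long>(std::floor(p.y / leafSize)),
            static_cast<long>(std::floor(p.z / leafSize)));
        auto& cell = cells[key];  // value-initialized to {{0,0,0}, 0} on first use
        cell.first.x += p.x;
        cell.first.y += p.y;
        cell.first.z += p.z;
        cell.second += 1;
    }
    std::vector<Pt> out;
    for (const auto& kv : cells) {
        const int n = kv.second.second;
        out.push_back({kv.second.first.x / n,
                       kv.second.first.y / n,
                       kv.second.first.z / n});
    }
    return out;
}
```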
      </sec>
      <sec id="sec-2-4">
        <title>2.4. 3D Bilateral filter (3DBF)</title>
        <p>
          The 3DBF filter denoises a point with respect to its neighbors by considering not only the distances
from the neighbors to the point but also the distances along the normal direction [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ].
        </p>
        <p>Let us first consider a point cloud M with a known normal n<sub>v</sub> at each vertex position v. Let
N(v) be the 1-ring neighborhood of vertex v (i.e. the set of vertices sharing an edge with v).
Then, the filtered position of v is</p>
        <p>
          <disp-formula>
            <tex-math><![CDATA[v' = v + \delta_v\, n_v, \qquad \delta_v = \frac{\sum_{p \in N(v)} w_d(\|p - v\|)\, w_n(|\langle n_v,\, p - v\rangle|)\, \langle n_v,\, p - v\rangle}{\sum_{p \in N(v)} w_d(\|p - v\|)\, w_n(|\langle n_v,\, p - v\rangle|)},]]></tex-math>
          </disp-formula>
          where w<sub>d</sub> and w<sub>n</sub> are two decreasing weight functions. Here vertex v is shifted along its normal toward a
weighted average of the points that are both close to v in the ambient space and close to the plane
passing through v with normal n<sub>v</sub>.</p>
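        <p>One filtering step of this formula can be sketched as follows, choosing Gaussian weights for w<sub>d</sub> and w<sub>n</sub> (the formula only requires them to be decreasing; the point type and the sigma parameters are illustrative):</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

static double dot(const Pt& a, const Pt& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One bilateral step for vertex v with unit normal nv and neighborhood nbrs:
// v' = v + delta * nv, where delta is the weighted average of <nv, p - v>.
Pt bilateralStep(const Pt& v, const Pt& nv, const std::vector<Pt>& nbrs,
                 double sigmaD, double sigmaN) {
    double num = 0.0, den = 0.0;
    for (const Pt& p : nbrs) {
        const Pt d{p.x - v.x, p.y - v.y, p.z - v.z};
        const double dd = std::sqrt(dot(d, d));  // ||p - v||
        const double dn = dot(nv, d);            // <n_v, p - v>
        const double wd = std::exp(-dd * dd / (2.0 * sigmaD * sigmaD));
        const double wn = std::exp(-dn * dn / (2.0 * sigmaN * sigmaN));
        num += wd * wn * dn;
        den += wd * wn;
    }
    const double delta = (den > 0.0) ? num / den : 0.0;
    return {v.x + delta * nv.x, v.y + delta * nv.y, v.z + delta * nv.z};
}
```

        <p>For a vertex lifted slightly off a locally planar neighborhood, this step moves it back onto the plane along its normal.</p>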
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental results</title>
      <p>In this section, we evaluate the performance of the tested denoising methods in terms of
reconstruction accuracy, which depends on the quality of the input point cloud.</p>
      <p>
        We compare the following point cloud denoising algorithms in terms of the accuracy of 3D
object reconstruction and speed: Statistical Outlier Removal filter (SOR) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Radius Outlier
Removal filter (ROR) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], Voxel Grid filter (VG) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], 3D Bilateral filter (3DBF) [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]. SOR,
ROR, and VG are available in the Point Cloud Library (PCL). 3DBF was implemented in C++.
The experiments are carried out on a PC with Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz
and 16 GB memory.
      </p>
      <p>
        In our experiments we use the point clouds of a lion and an Apollo model from the dataset [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] and a
chair from the database [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. Fig. 1 shows the RGB images and depth maps of the lion, the Apollo, and the
chair.
      </p>
      </p>
      <p>We construct pairs of point clouds for each model using the following steps:
(i) Registration of RGB and depth data (Fig. 1).
(ii) Construction of point clouds (Fig. 2).
(iii) Computation of point cloud statistics, such as the number of points and the minimum, maximum,
and median distances between points in the point cloud (Table 1). These statistics are required for
the subsequent selection of optimal parameters of the 3D filters.
(iv) Computation of metrics between pairs of point cloud frames. We calculate the
transformation matrix with the standard ICP algorithm and the Euclidean fitness score (ICP error).
Since a filtered point cloud can contain a different number of points than the initial cloud, we also
calculate the Hausdorff distance between the initial and filtered clouds to estimate the
quality of the 3D filtering.</p>
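      <p>For small clouds, the Hausdorff distance used in step (iv) can be computed with a brute-force sketch (illustrative point type; real pipelines accelerate the nearest-neighbor search with spatial indexing):</p>

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

struct Pt { double x, y, z; };

static double dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Symmetric Hausdorff distance between two point clouds: the larger of the
// two directed distances, each being the greatest nearest-neighbor distance
// from one cloud to the other.
double hausdorff(const std::vector<Pt>& A, const std::vector<Pt>& B) {
    auto oneSided = [](const std::vector<Pt>& X, const std::vector<Pt>& Y) {
        double h = 0.0;
        for (const Pt& x : X) {
            double best = std::numeric_limits<double>::max();
            for (const Pt& y : Y) best = std::min(best, dist(x, y));
            h = std::max(h, best);
        }
        return h;
    };
    return std::max(oneSided(A, B), oneSided(B, A));
}
```

      <p>Unlike a point-wise error, this metric is well defined for clouds of different sizes, which is why it complements the ICP error when the filter changes the number of points.</p>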
      <p>The result of calculating and visualizing the Hausdorff metric between two frames of the chair
model is shown in Fig. 3.</p>
      <p>The metrics between two frames of each model without filtering are presented in Table 2.</p>
      <p>The corresponding ICP errors and Hausdorff distances calculated for the chair model with the SOR,
ROR, VG, and 3DBF point cloud denoising algorithms are shown in Table 3. The ROR filter yields
the best result in terms of both the ICP error and the Hausdorff distance among all point cloud
denoising algorithms. Fig. 4 shows the point clouds of the chair after denoising with the ROR and SOR
filters.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this paper, we compared various point cloud filtering algorithms in terms of the accuracy of 3D object
reconstruction using real data from an RGB-D sensor. The experiments have shown that the ROR
filter yields the best result in terms of both the ICP error and the Hausdorff distance among all point
cloud denoising algorithms.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Echeagaray-Patron</surname>
            <given-names>B A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kober</surname>
            <given-names>V I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karnaukhov</surname>
            <given-names>V N</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kuznetsov</surname>
            <given-names>V V</given-names>
          </string-name>
          <year>2017</year>
          <source>Journal of Communications Technology and Electronics</source>
          <volume>62</volume>
          <fpage>648</fpage>
          -
          <lpage>652</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><surname>Ruchay</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Dorofeev</surname> <given-names>K</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>A</given-names></string-name>
          <year>2018</year>
          <source>Proc. SPIE</source>
          <volume>10752</volume>
          <fpage>1075222</fpage>
          -
          <lpage>8</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><surname>Ruchay</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Dorofeev</surname> <given-names>K</given-names></string-name>
          and
          <string-name><surname>Kolpakov</surname> <given-names>V</given-names></string-name>
          <year>2018</year>
          <article-title>Fusion of information from multiple Kinect sensors for 3D object reconstruction</article-title>
          <source>Computer Optics</source>
          <volume>42</volume>
          (<issue>5</issue>)
          <fpage>898</fpage>
          -
          <lpage>903</lpage>
          DOI: 10.18287/2412-6179-2018-42-5-898-903
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Echeagaray-Patron</surname>
            <given-names>B A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kober</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2015 Proc. SPIE 9598</source>
          95980V-
          <fpage>8</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Echeagaray-Patron</surname>
            <given-names>B A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kober</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2016 Proc. SPIE</source>
          9971
          <fpage>9971</fpage>
          -
          <lpage>6</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><surname>Ruchay</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Dorofeev</surname> <given-names>K</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>A</given-names></string-name>
          <year>2018</year>
          <source>CEUR Workshop Proceedings</source>
          <volume>2210</volume>
          <fpage>82</fpage>
          -
          <lpage>88</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><surname>Ruchay</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Dorofeev</surname> <given-names>K</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>A</given-names></string-name>
          <year>2018</year>
          <source>CEUR Workshop Proceedings</source>
          <volume>2210</volume>
          <fpage>300</fpage>
          -
          <lpage>308</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Tihonkih</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makovetskii</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kuznetsov</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2016 Proc. SPIE 9971</source>
          99712D-
          <fpage>8</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Nikolaev</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tihonkih</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makovetskii</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Voronin</surname>
            <given-names>S</given-names>
          </string-name>
          <source>2017 Proc. SPIE</source>
          10396
          <fpage>10396</fpage>
          -
          <lpage>8</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Gonzalez-Fraga</surname>
            <given-names>J A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kober</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Diaz-Ramirez</surname>
            <given-names>V H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gutierrez</surname>
            <given-names>E</given-names>
          </string-name>
          and
          <string-name>
            <surname>Alvarez-Xochihua</surname>
            <given-names>O</given-names>
          </string-name>
          <source>2017 Proc. SPIE</source>
          10396
          <fpage>10396</fpage>
          -
          <lpage>7</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name><surname>Makovetskii</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Voronin</surname> <given-names>S</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>V</given-names></string-name>
          <year>2018</year>
          <source>Proceedings of SPIE - The International Society for Optical Engineering</source>
          <volume>10752</volume>
          107522V
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Voronin</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makovetskii</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Voronin</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Diaz-Escobar</surname>
            <given-names>J</given-names>
          </string-name>
          2018
          <source>Proceedings of SPIE - The International Society for Optical Engineering</source>
          <volume>10752</volume>
          107522S
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name><surname>Makovetskii</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Voronin</surname> <given-names>S</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>V</given-names></string-name>
          <year>2017</year>
          <source>Analysis of Images, Social Networks and Texts</source>
          (Cham: Springer International Publishing)
          <fpage>326</fpage>
          -
          <lpage>337</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Tihonkih</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makovetskii</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Voronin</surname>
            <given-names>A</given-names>
          </string-name>
          <source>2017 Proc. SPIE</source>
          10396
          <fpage>10396</fpage>
          -
          <lpage>7</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Ruchay</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dorofeev</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kober</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolpakov</surname>
            <given-names>V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kalschikov</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2018 Proc. SPIE</source>
          10752
          <fpage>1075221</fpage>
          -
          <lpage>10</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name><surname>Ruchay</surname> <given-names>A</given-names></string-name>
          ,
          <string-name><surname>Dorofeev</surname> <given-names>K</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>A</given-names></string-name>
          <year>2018</year>
          <source>Proc. SPIE</source>
          <volume>10752</volume>
          <fpage>1075223</fpage>
          -
          <lpage>8</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Ruchay</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kober</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2016 Proc. SPIE 9971</source>
          99712Y-
          <fpage>10</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Ruchay</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kober</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2017 Proc. SPIE</source>
          10396
          <fpage>1039626</fpage>
          -
          <lpage>10</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Ruchay</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kober</surname>
            <given-names>V</given-names>
          </string-name>
          <source>2017 Proc. SPIE</source>
          10396
          <fpage>1039627</fpage>
          -
          <lpage>9</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name><surname>Ruchay</surname> <given-names>A</given-names></string-name>
          and
          <string-name><surname>Kober</surname> <given-names>V</given-names></string-name>
          <year>2018</year>
          <source>Analysis of Images, Social Networks and Texts</source>
          (Cham: Springer International Publishing)
          <fpage>280</fpage>
          -
          <lpage>291</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Ruchay</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kober</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolpakov</surname>
            <given-names>V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Makovetskaya</surname>
            <given-names>T</given-names>
          </string-name>
          <source>2018 Proc. SPIE</source>
          10752
          <fpage>1075224</fpage>
          -
          <lpage>12</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name><surname>Han</surname> <given-names>X F</given-names></string-name>
          ,
          <string-name><surname>Jin</surname> <given-names>J S</given-names></string-name>
          ,
          <string-name><surname>Wang</surname> <given-names>M J</given-names></string-name>
          ,
          <string-name><surname>Jiang</surname> <given-names>W</given-names></string-name>
          ,
          <string-name><surname>Gao</surname> <given-names>L</given-names></string-name>
          and
          <string-name><surname>Xiao</surname> <given-names>L</given-names></string-name>
          <year>2017</year>
          <source>Signal Processing: Image Communication</source>
          <volume>57</volume>
          <fpage>103</fpage>
          -
          <lpage>112</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name><surname>Han</surname> <given-names>X F</given-names></string-name>
          ,
          <string-name><surname>Jin</surname> <given-names>J S</given-names></string-name>
          ,
          <string-name><surname>Wang</surname> <given-names>M J</given-names></string-name>
          and
          <string-name><surname>Jiang</surname> <given-names>W</given-names></string-name>
          <year>2018</year>
          <source>Multimedia Tools and Applications</source>
          <volume>77</volume>
          <fpage>16887</fpage>
          -
          <lpage>16902</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name><surname>Paris</surname> <given-names>S</given-names></string-name>
          ,
          <string-name><surname>Kornprobst</surname> <given-names>P</given-names></string-name>
          and
          <string-name><surname>Tumblin</surname> <given-names>J</given-names></string-name>
          <year>2009</year>
          <source>Bilateral Filtering</source>
          (Hanover, MA, USA: Now Publishers Inc.)
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name><surname>Zaman</surname> <given-names>F</given-names></string-name>
          ,
          <string-name><surname>Wong</surname> <given-names>Y P</given-names></string-name>
          and
          <string-name><surname>Ng</surname> <given-names>B Y</given-names></string-name>
          <year>2017</year>
          <source>9th International Conference on Robotic, Vision, Signal Processing and Power Applications</source>
          (Singapore: Springer Singapore)
          <fpage>287</fpage>
          -
          <lpage>295</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name><surname>Zheng</surname> <given-names>Y</given-names></string-name>
          ,
          <string-name><surname>Li</surname> <given-names>G</given-names></string-name>
          ,
          <string-name><surname>Xu</surname> <given-names>X</given-names></string-name>
          ,
          <string-name><surname>Wu</surname> <given-names>S</given-names></string-name>
          and
          <string-name><surname>Nie</surname> <given-names>Y</given-names></string-name>
          <year>2018</year>
          <source>Computer Aided Geometric Design</source>
          <volume>62</volume>
          <fpage>16</fpage>
          -
          <lpage>28</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Rusu</surname>
            <given-names>R B</given-names>
          </string-name>
          and
          <string-name>
            <surname>Cousins</surname>
            <given-names>S</given-names>
          </string-name>
          <year>2011</year>
          <article-title>IEEE International Conference on Robotics and Automation</article-title>
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Digne</surname>
            <given-names>J</given-names>
          </string-name>
          and
          <string-name>
            <surname>de Franchis</surname>
            <given-names>C</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Image Processing On Line</article-title>
          <volume>7</volume>
          <fpage>278</fpage>
          -
          <lpage>287</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Boubou</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narikiyo</surname>
            <given-names>T</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kawanishi</surname>
            <given-names>M</given-names>
          </string-name>
          <year>2017</year>
          <article-title>3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON)</article-title>
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Chen</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhai</surname>
            <given-names>D</given-names>
          </string-name>
          and
          <string-name>
            <surname>Zhao</surname>
            <given-names>D</given-names>
          </string-name>
          <year>2018</year>
          <article-title>Digital TV and Wireless Multimedia Communication</article-title>
          (Springer Singapore)
          <fpage>128</fpage>
          -
          <lpage>137</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Alexiou</surname>
            <given-names>E</given-names>
          </string-name>
          and
          <string-name>
            <surname>Ebrahimi</surname>
            <given-names>T</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Ninth International Conference on Quality of Multimedia Experience (QoMEX)</article-title>
          <fpage>1</fpage>
          -
          <lpage>3</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <surname>Lee</surname>
            <given-names>K</given-names>
          </string-name>
          and
          <string-name>
            <surname>Nguyen</surname>
            <given-names>T Q</given-names>
          </string-name>
          <year>2016</year>
          <article-title>Mach. Vis. Appl.</article-title>
          <volume>27</volume>
          <fpage>377</fpage>
          -
          <lpage>385</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <surname>Choi</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            <given-names>Q</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            <given-names>S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Koltun</surname>
            <given-names>V</given-names>
          </string-name>
          <year>2016</year>
          <article-title>CoRR</article-title>
          abs/1602.02481
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>