<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Workshop on Knowledge Discovery and User Modelling for Smart Cities, August</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>A Fast Planar Detection Method in LiDAR Point Clouds Using GPU-based RANSAC</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jun Lan</string-name>
          <email>junlan1234@163.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yifei Tian</string-name>
          <email>tianyifei0000@sina.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wei Song</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simon Fong</string-name>
          <email>ccfong@umac.mo</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhitong Su</string-name>
          <email>suzhitong@ncut.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
<aff id="aff0">
          <label>0</label>
          <institution>North China University of Technology</institution>
          ,
          <addr-line>P.O. Box 100144, Beijing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
<aff id="aff1">
          <label>1</label>
          <institution>North China University of Technology</institution>
          ,
          <addr-line>P.O. Box 100144, Beijing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
<aff id="aff2">
          <label>2</label>
          <institution>University of Macau</institution>
          ,
          <addr-line>P.O. Box 999078, Macau</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>20</volume>
      <issue>2018</issue>
      <abstract>
<p>Unmanned ground vehicle (UGV) technology benefits intelligent transportation and automatic digital map reconstruction in the smart-city domain. In most urban areas, planes are common in buildings and infrastructure, and they consist of massive numbers of 3D points detected by the sensors on a UGV. This paper studies planar detection in 3D point clouds and represents each plane as a mesh instead of a large set of points during 3D reconstruction. A fast planar detection method using GPU-based Random Sample Consensus (RANSAC) is developed to estimate the parameters of the planes in the surrounding environment. The sensed 3D point clouds are segmented into several planes, and each planar surface is reconstructed and rendered using only four vertices, which reduces memory usage in 3D reconstruction. To improve the time efficiency of the plane recognition process, the GPU accelerates the RANSAC algorithm through CUDA parallel programming.</p>
      </abstract>
      <kwd-group>
        <kwd>Unmanned ground vehicle</kwd>
        <kwd>RANSAC</kwd>
        <kwd>Terrain modeling</kwd>
        <kwd>GPU</kwd>
        <kwd>LiDAR</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>Traditional RANSAC requires massive iterations in plane detection. To improve the accuracy and speed of environment perception for UGVs, this paper proposes a fast planar detection method for LiDAR point clouds using a GPU-based RANSAC algorithm. In our method, the raw 3D point cloud is segmented into several sub-spaces in which planes are detected. To recognize the plane in an individual sub-space, a candidate plane model is initialized and the number of points fitting that model is recorded. After several different plane models are tested, the model with the most registered points is taken as the plane of the sub-space. The optimum plane model in each sub-space is thus computed in parallel with the CUDA development kit. The remainder of this paper is organized as follows: Section 2 overviews related work, Section 3 presents the plane detection method, Section 4 describes experiments with the proposed method, and Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Works</title>
<p>In the mobile robot and intelligent vehicle domain, plane recognition from 3D point clouds is widely researched for 3D environment sensing. Ishida et al. [5] detected planes from depth pixels using a 3D Hough transform in unknown environments; the Hough space was divided into several parts to extract plane parameters through voting and iteration. Hulik et al. [4] used a 3D Hough transform to extract large planes from point clouds collected by a LiDAR sensor. To preserve the accuracy of plane detection, a Gaussian smoothing function eliminated noise in the Hough space after the transform was applied to the LiDAR points, and a caching technique sped up point registration during Hough-space updates. Compared with the 2D Hough transform, the 3D Hough transform consumes more time in plane detection, and the precision of the detected plane parameters is not sufficient to support UGV decision making, so the 3D Hough transform is mostly applied indoors. For 3D scenes, the RANSAC algorithm is widely used to extract parameters from LiDAR point clouds. Schnabel et al. [12] used the traditional RANSAC algorithm to compute plane parameters from three random, dispersed points; over several iterations, a goal function searched for the optimum plane model fitting the most of the remaining points. To improve the efficiency of RANSAC, Choi et al. [1] reduced the number of unknowns in the plane equation model, and Wang et al. [15] proposed a preprocessing model based on bucketing. Nevertheless, RANSAC is hard to accelerate because of its iterative computation. The CPU is well suited to complex, intensive computing tasks, but it has proven difficult to run RANSAC efficiently on large-scale datasets. In the past few years, GPUs have evolved from graphics-rendering devices into high-performance computing devices for parallel problems [8,2], and data-intensive tasks on large-scale datasets run more efficiently on the GPU. Exploiting the inherent parallelism of RANSAC and the computational efficiency of the GPU, this paper proposes a fast planar detection method for LiDAR point clouds.</p>
    </sec>
    <sec id="sec-3">
      <title>PLANAR DETECTION METHOD</title>
<p>To accurately estimate plane parameters from the raw 3D point cloud, a plane detection framework is proposed as shown in Figure 1. The framework consists of three procedures: segmentation of the 3D point cloud, mapping of the point cloud to GPU memory, and plane detection with plane fitting.</p>
      <sec id="sec-3-1">
        <title>Point Cloud Segmentation</title>
<p>Figure 1. Framework of the proposed planar detection method in LiDAR point clouds.</p>
        <p>It is difficult to extract planes from a LiDAR point cloud because of its uneven density and spatial discontinuity. First, the entire space of the raw 3D point cloud is divided into several subspaces Hi between the maximum and minimum of the z-axis. The 3D point count in each subspace is denoted Dnumi, and Dnummax is the maximum count over all subspaces. The changing rate of point-cloud density between adjacent subspaces is formulated using equation (1):</p>
        <p>Dden_changei = |Ddeni+1 − Ddeni| / Dnummax (1)</p>
        <p>where Ddeni denotes the point density of subspace Hi. After establishing the statistical histogram of the density change rate, adjacent subspaces are merged (Hi ∪ Hi+1) whenever Dden_changei is lower than a threshold ε (2).</p>
        <p>If the angle between the normal vectors of two planes is less than a threshold θ, expressed as equation (4), we assume the planes are coplanar: arccos(ni, nj) &lt; θ (4).</p>
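<p>To make the segmentation and merging procedure concrete, the following Python sketch slices a cloud along the z-axis and merges adjacent slices with the histogram-based test, under our reconstructed reading of equations (1) and (2). The function name, the slice count, and the threshold default are illustrative assumptions, not from the paper.</p>
<p>
```python
import numpy as np

def segment_and_merge(points, n_slices=20, eps=0.1):
    """Slice a point cloud along the z-axis, then merge adjacent slices
    whose density change rate falls below the threshold eps (a sketch of
    the segmentation step; equations (1)-(2) as reconstructed)."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    counts = np.histogram(z, bins=edges)[0].astype(float)  # Dnum_i per slice
    dens = counts / counts.max()                           # density, normalized by Dnum_max
    groups, current = [], [0]
    for i in range(n_slices - 1):
        change = abs(dens[i + 1] - dens[i])                # (1) density change rate
        if eps > change:                                   # (2) merge H_i with H_i+1
            current.append(i + 1)
        else:
            groups.append(current)
            current = [i + 1]
    groups.append(current)
    return edges, groups
```
</p>
<p>Each returned group is a run of adjacent slice indices that the threshold test merged into one subspace.</p>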
      </sec>
      <sec id="sec-3-2">
        <title>Plane Detection</title>
<p>After the 3D point cloud is segmented into several subspaces and registered into the terrain model on the CPU, each subspace can be processed independently, because the GPU has many blocks and each block contains many threads. To improve the computing efficiency of the algorithm, the registered 3D point cloud datasets are copied to GPU memory, where they are processed in parallel. The 3D points in each subspace Pi undergo four procedures:</p>
        <p>1. A plane L′ is constructed from three points selected from Pi; the normal vector of plane L′ is n′.</p>
        <p>2. The angle between n′ and each of the other points in Pi is calculated. The score SL′ of plane L′ records the number of angles below a threshold.</p>
        <p>3. Steps 1 and 2 are repeated T times, and the plane L′ with the highest score is chosen. T is formulated using equation (3):</p>
        <p>T = log(1 − p) / log(1 − (1 − e)^3) (3)</p>
        <p>The variable p is the probability that the best plane is selected, and e is the proportion of points outside plane L′.</p>
        <p>4. Plane L′ is marked and its points are removed.</p>
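<p>The four steps above can be sketched as follows in Python. This is a minimal illustration, not the paper's CUDA implementation: the angle test interprets step 2 as measuring how far each point's direction from the first sampled point deviates from 90 degrees relative to n′, and all parameter names and defaults are assumptions.</p>
<p>
```python
import numpy as np

def detect_plane(P, p_success=0.99, out_ratio=0.5, ang_thresh=0.1, seed=0):
    """Sketch of the per-subspace plane search: sample three points,
    score the candidate plane by an angle criterion, repeat T times
    (equation (3)), and keep the best-scoring plane."""
    rng = np.random.default_rng(seed)
    # (3) T = log(1 - p) / log(1 - (1 - e)^3); three points define a plane
    T = int(np.ceil(np.log(1 - p_success) / np.log(1 - (1 - out_ratio) ** 3)))
    best_score, best_plane = -1, None
    for _ in range(T):
        a, b, c = P[rng.choice(len(P), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if 1e-12 > np.linalg.norm(n):
            continue  # degenerate sample: the three points are collinear
        n = n / np.linalg.norm(n)
        d = P - a
        norms = np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-12)
        cosang = np.clip(np.abs((d / norms) @ n), 0.0, 1.0)
        # a point lying on the candidate plane makes a 90-degree angle with n'
        deviation = np.abs(np.pi / 2 - np.arccos(cosang))
        score = int((ang_thresh > deviation).sum())
        if score > best_score:
            best_score, best_plane = score, (n, a)
    return best_plane, best_score
```
</p>
<p>With p_success = 0.99 and an assumed outlier proportion of 0.5, equation (3) gives T = 35 iterations.</p>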
      </sec>
      <sec id="sec-3-3">
        <title>Plane Fitting</title>
<p>In the other plane fitting constraint, the centroid-centroid vector Mij is the vector from the centroid of plane Li to the centroid of plane Lj. When the angles between the centroid-centroid vector and the planes' normal vectors ni and nj are both approximately 90 degrees, the two planes Li and Lj are considered to belong to the same plane. Thus, the plane fitting constraint is satisfied if both angles, after subtracting 90 degrees, are less than a threshold φ, as shown in equation (5):</p>
        <p>|arccos(Mij, ni) − 90°| &lt; φ and |arccos(Mij, nj) − 90°| &lt; φ (5)</p>
<p>The threshold φ is close to zero.</p>
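<p>A minimal Python sketch of the two plane-fitting constraints, assuming the reconstructed forms of equations (4) and (5); the function name and the threshold defaults are illustrative, not from the paper.</p>
<p>
```python
import numpy as np

def same_plane(c_i, n_i, c_j, n_j, theta=0.05, phi=0.05):
    """Return True when planes L_i and L_j pass both fitting constraints:
    (4) nearly parallel normals, and (5) a centroid-centroid vector M_ij
    nearly perpendicular to both normals."""
    n_i = n_i / np.linalg.norm(n_i)
    n_j = n_j / np.linalg.norm(n_j)
    # (4): the angle between the two normals must stay below theta
    if np.arccos(np.clip(abs(n_i @ n_j), 0.0, 1.0)) > theta:
        return False
    M = c_j - c_i
    M = M / np.linalg.norm(M)
    # (5): angle between M_ij and each normal, after subtracting 90 degrees
    dev_i = abs(np.arccos(np.clip(M @ n_i, -1.0, 1.0)) - np.pi / 2)
    dev_j = abs(np.arccos(np.clip(M @ n_j, -1.0, 1.0)) - np.pi / 2)
    return bool(phi > dev_i and phi > dev_j)
```
</p>
<p>Two patches of the same wall pass both tests, while parallel but offset planes (for example, opposite walls) fail constraint (5).</p>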
      </sec>
    </sec>
    <sec id="sec-4">
      <title>EXPERIMENTS</title>
<p>In this section, we analyze the performance of the proposed plane detection method on LiDAR points. The experiments were implemented on a computer with a 3.20 GHz Intel® Core™ Quad CPU, a GeForce GT 770 graphics card, and 4 GB RAM. The Velodyne HDL-32E LiDAR senses 32 × 12 3D points per packet, one packet every 552.96 μs. In our experiment, the 3D point clouds were rendered using the DirectX software development kit. Figure 2 presents a raw dataset of 180 × 32 × 12 points captured by a stationary LiDAR. All sensed 3D points were divided into 20 groups according to their y coordinates. The histogram of the point-cloud density change rate is shown in Figure 3. Then, using a threshold to merge point clouds based on the histogram, redundant points were removed, as shown in Figure 4. Finally, a series of initial plane models was calculated after the initial detection; in Figure 5, the optimum plane was detected after plane fitting. The CPU-based method could not perform in real time. The final detection results of the CPU-optimized and GPU plane detection methods are shown in Figures 6 and 7; the proposed method was both fast and accurate.</p>
    </sec>
    <sec id="sec-5">
      <title>CONCLUSIONS</title>
<p>To provide an efficient planar detection method for urban terrain modeling, this paper demonstrated a GPU-based RANSAC method for LiDAR point clouds. The point segmentation and plane fitting processes reduce the number of iterations. Using GPU programming technology, the improved RANSAC method performs planar detection much faster than the CPU-based method: the CPU executes the complex computation processes, while the GPU executes the repeated processes of low computational complexity. In this way, the proposed method combines the advantages of the CPU and GPU, and it reduces the memory consumption of urban terrain reconstruction for smart city modeling and intelligent transportation. In the future, we will apply this plane detection method to 3D terrain modeling on a UGV.</p>
<p>ACKNOWLEDGMENTS This research was supported by the National Natural Science Foundation of China (61503005), by the NCUT “The Belt and Road” Talent Training Base Project, by the NCUT “Yuxiu” Project, by the Beijing Natural Science Foundation (4162022), and by the High Innovation Program of Beijing (2015000026833ZK04).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>S.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Byun</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Yu</surname>
          </string-name>
          .
          <article-title>Robust ground plane detection from 3d point clouds</article-title>
          .
<source>In 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014)</source>
          , pages
          <fpage>1076</fpage>
          -
          <lpage>1081</lpage>
          ,
          <year>Oct 2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>N.</given-names>
            <surname>Faujdar</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Saraswat</surname>
          </string-name>
          .
          <article-title>A roadmap of parallel sorting algorithms using gpu computing</article-title>
          .
          <source>In 2017 International Conference on Computing, Communication and Automation (ICCCA)</source>
          , pages
          <fpage>736</fpage>
          -
          <lpage>741</lpage>
          , May
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>X.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Dawam</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Amin</surname>
          </string-name>
          .
          <article-title>A new digital forensics model of smart city automated vehicles</article-title>
          .
<source>In 2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData)</source>
          , pages
          <fpage>274</fpage>
          -
          <lpage>279</lpage>
          ,
          <year>June 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Rostislav</given-names>
            <surname>Hulik</surname>
          </string-name>
          , Michal Spanel, Pavel Smrz, and
          <string-name>
            <given-names>Zdenek</given-names>
            <surname>Materna</surname>
          </string-name>
          .
          <article-title>Continuous plane detection in point-cloud data based on 3d hough transform</article-title>
          .
          <source>J. Vis. Comun. Image Represent</source>
          .,
          <volume>25</volume>
          (
          <issue>1</issue>
          ):
          <fpage>86</fpage>
          -
          <lpage>97</lpage>
          ,
          <year>January 2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ishida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Izuoka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chinthaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Premachandra</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Kato</surname>
          </string-name>
          .
          <article-title>A study on plane extraction from distance images using 3d hough transform</article-title>
          .
          <source>In The 6th International Conference on Soft Computing and Intelligent Systems, and The 13th International Symposium on Advanced Intelligence Systems</source>
          , pages
          <fpage>812</fpage>
          -
          <lpage>816</lpage>
          ,
          <year>Nov 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>K.</given-names>
            <surname>Jo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Jang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sunwoo</surname>
          </string-name>
          .
          <article-title>Development of autonomous car-part i: Distributed system architecture and development process</article-title>
          .
          <source>IEEE Transactions on Industrial Electronics</source>
          ,
          <volume>61</volume>
          (
          <issue>12</issue>
          ):
          <fpage>7131</fpage>
          -
          <lpage>7140</lpage>
          ,
          <year>Dec 2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Z.</given-names>
            <surname>Khalid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Abdenbi</surname>
          </string-name>
          .
          <article-title>Stereo vision-based road obstacles detection</article-title>
          .
          <source>In 2013 8th International Conference on Intelligent Systems: Theories and Applications (SITA)</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , May
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Alexander</given-names>
            <surname>Kubias</surname>
          </string-name>
          , Frank Deinzer, Matthias Kreiser, and
          <string-name>
            <given-names>Dietrich</given-names>
            <surname>Paulus</surname>
          </string-name>
          .
          <article-title>Efficient computation of histograms on the gpu</article-title>
          .
          <source>In Proceedings of the 23rd Spring Conference on Computer Graphics, SCCG '07</source>
          , pages
          <fpage>207</fpage>
          -
          <lpage>212</lpage>
          , New York, NY, USA,
          <year>2007</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
9.
          <string-name>
            <given-names>Ki Yong</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Joon Woong</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Myeong Rai</given-names>
            <surname>Cho</surname>
          </string-name>
          .
          <article-title>Detection of road obstacles using dynamic programming for remapped stereo images to a top-view</article-title>
          .
          <source>In IEEE Proceedings. Intelligent Vehicles Symposium, 2005</source>
          , pages
          <fpage>765</fpage>
          -
          <lpage>770</lpage>
          ,
          <year>June 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
10.
          <string-name>
            <given-names>Caio César Teodoro</given-names>
            <surname>Mendes</surname>
          </string-name>
          and
          <string-name>
            <given-names>Denis Fernando</given-names>
            <surname>Wolf</surname>
          </string-name>
          .
          <article-title>Real time autonomous navigation and obstacle avoidance using a semi-global stereo method</article-title>
          .
          <source>In Proceedings of the 28th Annual ACM Symposium on Applied Computing, SAC '13</source>
          , pages
          <fpage>235</fpage>
          -
          <lpage>236</lpage>
          , New York, NY, USA,
          <year>2013</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>P.</given-names>
            <surname>Rizwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Suresh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Babu</surname>
          </string-name>
          .
          <article-title>Real-time smart traffic management system for smart cities by using internet of things and big data</article-title>
          .
          <source>In 2016 International Conference on Emerging Technological Trends (ICETT)</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          ,
          <year>Oct 2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>R.</given-names>
            <surname>Schnabel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wahl</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Klein</surname>
          </string-name>
          .
          <article-title>Efficient ransac for point-cloud shape detection</article-title>
          .
          <source>Computer Graphics Forum</source>
          ,
          <volume>26</volume>
          (
          <issue>2</issue>
          ):
          <fpage>214</fpage>
          -
<lpage>226</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. Erfan Babaee Tirkolaee, Ali Asghar Rahmani Hosseinabadi, Mehdi Soltani, Arun Kumar Sangaiah, and
          <string-name>
            <given-names>Jin</given-names>
            <surname>Wang</surname>
          </string-name>
          .
          <article-title>A hybrid genetic algorithm for multi-trip green capacitated arc routing problem in the scope of urban services</article-title>
          .
          <source>Sustainability</source>
          ,
          <volume>10</volume>
          (
          <issue>5</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>A.</given-names>
            <surname>Torii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Havlena</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Pajdla</surname>
          </string-name>
          .
          <article-title>From google street view to 3d city models</article-title>
          .
          <source>In 2009 IEEE 12th International Conference on Computer Vision Workshops</source>
          , ICCV Workshops, pages
          <fpage>2188</fpage>
          -
          <lpage>2195</lpage>
          ,
          <year>Sept 2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
15.
          <string-name>
            <given-names>Xiaoyan</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Hui</given-names>
            <surname>Zhang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Sheng</given-names>
            <surname>Liu</surname>
          </string-name>
          .
          <article-title>Reliable ransac using a novel preprocessing model</article-title>
          .
          <source>Computational and Mathematical Methods in Medicine</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fan</surname>
          </string-name>
          .
          <article-title>3d building scene reconstruction based on 3d lidar point cloud</article-title>
          .
          <source>In 2017 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW)</source>
          , pages
          <fpage>127</fpage>
          -
          <lpage>128</lpage>
          ,
          <year>June 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17. Hyun Woo Yoo, Woo Hyun Kim, Jeong Woo Park, Won Hyong Lee, and Myung Jin Chung.
          <article-title>Real-time plane detection based on depth map from kinect</article-title>
          .
          <source>In IEEE ISR</source>
          <year>2013</year>
          , pages
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          ,
          <year>Oct 2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>