<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Novel Algorithm for Road Extraction from Airborne Lidar Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Li Liu</string-name>
          <email>l.liu@unsw.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Samsung Lim</string-name>
          <email>s.lim@unsw.edu.au</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Civil &amp; Environmental Engineering, The University of New South Wales</institution>
          ,
          <addr-line>Sydney, NSW 2032</addr-line>
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <fpage>154</fpage>
      <lpage>163</lpage>
      <abstract>
<p>Road data in 3-dimensional form are required for a variety of geospatial applications, e.g. road maintenance, transport planning and location-based services. Although airborne lidar can produce dense point clouds from which 3-dimensional road information can be retrieved in detail, lidar data is often incomplete due to the line-of-sight requirement, and therefore better information extraction from lidar data can be achieved if supplementary data is used. This paper presents a novel algorithm for road extraction from lidar point clouds and associated vector data. A moving window-based classification technique using various cues determined from the lidar data is applied in a hierarchical way to separate roads from other objects. In order to fill the gaps caused by the line-of-sight problem, e.g. shadows of trees and buildings, the vector data is used in the refinement process to improve the results by creating a buffer, deleting false positives, interpolating and fitting the road-representing points into polylines. To validate the proposed algorithm, four lidar data samples are tested. The test results show that the proposed algorithm is a practical method for road extraction from airborne lidar data for both structured and unstructured lanes.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Feature extraction of urban environments from geospatial data acquired by airborne mapping has been a
significant research topic in photogrammetry and remote sensing for many decades
        <xref ref-type="bibr" rid="ref10">(Mayer, 2008)</xref>
        . For
example, many research efforts have attempted to extract roads from aerial images. However, roads have
different appearances and characteristics in different scenes, and therefore road extraction from aerial images
has to be adapted to particular datasets or scenes (
        <xref ref-type="bibr" rid="ref5">Grote et al., 2012</xref>
        ).
      </p>
      <p>
        Urban object extraction is still a sought-after topic, with a focus shifting to the detailed representation of
objects, to using data from new sensors, and to advanced data processing techniques
        <xref ref-type="bibr" rid="ref12">(Rottensteiner et al.,
2013)</xref>
        . Compared with aerial photogrammetry, airborne lidar is a relatively new technique for geospatial data
capture, providing much denser point clouds. Triggered by the advent of lidar, experiments have been conducted to
extract geo-features (e.g. trees, buildings and roads) from lidar points. While significant progress has been
made in the extraction of buildings and trees from lidar data, little research has been performed on the
extraction of roads
        <xref ref-type="bibr" rid="ref13">(Vosselman and Zhou, 2009)</xref>
        . Driven by the growing demand for 3-dimensional (3D) mapping and
geo-database updating, road extraction from lidar data is now feasible but still in its infancy.
      </p>
      <p>
        Most algorithms have exploited height differences, normal vectors, and contextual information to extract
roads from ‘raw’ lidar data. In general, common strategies consist of segmentation and clustering
        <xref ref-type="bibr" rid="ref16 ref17 ref19">(Choi et al.,
2007; Zhu and Mordohai, 2009; Zhao and You, 2012)</xref>
        , region growing
        <xref ref-type="bibr" rid="ref1 ref2">(Akel et al., 2005; Clode et al. 2004a,
2004b, 2007)</xref>
        and curb detection
        <xref ref-type="bibr" rid="ref13">(Vosselman and Zhou, 2009)</xref>
        . Choi et al. (2007) extracted roads by utilizing a
series of circular buffers; a maximum possible road slope is also applied to eliminate erroneous object
clusters.
        <xref ref-type="bibr" rid="ref19">Zhu and Mordohai (2009)</xref>
        generated a road likelihood map from lidar points and extracted dominant
road regions with a minimum cover-set algorithm.
        <xref ref-type="bibr" rid="ref16">Zhao and You (2012)</xref>
        developed elongated structure
templates to detect candidate road regions, with a voting scheme introduced to refine the road parameters;
however, this approach cannot detect highly occluded roads. Akel et al. (2005) applied a region-growing
segmentation method based on elevation and normal vectors to detect road areas.
        <xref ref-type="bibr" rid="ref2">Clode et al. (2007)</xref>
        introduced a hierarchical system to extract road points from airborne lidar data, and the results are convolved
with a Phase Coded Disk (PCD) to generate the vectorised results.
        <xref ref-type="bibr" rid="ref13">Vosselman and Zhou (2009)</xref>
        detected small
height jumps between curbstones and road surfaces according to height differences and an elevation threshold.
        <xref ref-type="bibr" rid="ref17">Zhou and Vosselman (2012)</xref>
        applied the method to mobile laser points and refined the detection process with a
sigmoidal function. However, the results show that it is not well suited to mobile laser points because of
occlusion by large vehicles and trees.
      </p>
      <p>
        Because of occlusions in densely vegetated areas, the extraction results in these areas are not promising.
To achieve better accuracy and completeness, some researchers have utilized aerial images as well as lidar
data.
        <xref ref-type="bibr" rid="ref9">Hu et al. (2004)</xref>
        combined imagery and lidar data to extract road points: ground
points are generated from the lidar data, and the imagery is used to distinguish roads from open areas. The results show
that accuracy and completeness were improved.
        <xref ref-type="bibr" rid="ref18">Zhu et al. (2004)</xref>
        introduced the associated road image
(ARI) extracted from lidar points to enhance the results from real road images (RRIs) from aerial imagery.
However, road extraction from images and lidar points is still difficult since the registration of images and
lidar points remains a problem.
      </p>
      <p>
        Existing geospatial data is a complementary source that can serve as a priori geometric knowledge about roads
        <xref ref-type="bibr" rid="ref14 ref4 ref6">(Vosselman, 2003; Hatger and Brenner, 2003; Oude Elberink and Vosselman, 2006)</xref>
        .
        <xref ref-type="bibr" rid="ref14">Vosselman (2003)</xref>
        used
lidar points and cadastral maps to reconstruct roads.
        <xref ref-type="bibr" rid="ref6">Hatger and Brenner (2003)</xref>
        applied a fast region-growing
        algorithm to extract road geometry parameters from lidar data and existing road databases. Similarly,
        <xref ref-type="bibr" rid="ref4">Oude Elberink and Vosselman (2006)</xref>
        fused lidar data and 2-dimensional topographic maps to extract road points.
However, these methods suffer from the map scale and the generalization process.
      </p>
      <p>In this paper, we propose a novel algorithm for road extraction from airborne lidar point clouds and vector
data. The proposed method uses a hierarchical moving window that detects road points based on slope,
elevation, and intensity values. In densely vegetated areas, completeness of the results is challenging to
achieve; to refine the initial extraction, vector data is utilised to improve the accuracy and completeness of
the results by creating a buffer, deleting false positives, filling gaps and fitting the points into lines. The
main difference between existing algorithms and ours is that the vector data is used only in the refinement
process, to minimise the effect of the scale and generalization problems in the vector data. If the vector data
were instead used to partition the neighborhood, i.e. a buffer were created first to retain only the lidar
points within it, road points could be left out because of errors in the vector data. The goal of this work is
to develop an algorithm suitable for both structured and unstructured roads.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Road Extraction</title>
      <p>The proposed method has the following core steps, as shown in Fig. 1. Firstly, the road points are partitioned
according to x/y coordinates and reordered according to y/x coordinates. Secondly, the minimum and maximum
sizes of the moving windows are set and road points are detected within the given window. Thirdly, the window
size is enlarged and the detection process is repeated until the window size reaches the maximum threshold.
Lastly, the results are refined by vector data, the gaps are filled and the centerlines are extracted.</p>
      <sec id="sec-2-1">
        <title>2.1 Overview</title>
        <p>Figure 3 The initial road extraction process: (a) part of the profile of lidar points from the transect line in
Fig. 2, (b) initialise the window size with L0, (c) increase the window size to L1 and repeat the process, (d)
increase the window size to L2 and repeat the process.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2 Partitioning lidar points</title>
        <p>
          Airborne lidar data typically contains a huge number of points, so lidar data processing is overly
time-consuming and complicated. To reduce the computation time,
          <xref ref-type="bibr" rid="ref17">Zhou and Vosselman (2012)</xref>
          and
          <xref ref-type="bibr" rid="ref15">Yang et
al. (2013)</xref>
          divided lidar points according to scanlines and then extracted road points scanline by scanline;
Boyko and Funkhouser (2011) used road maps to define the road surroundings and extracted features within the
surroundings.
          <xref ref-type="bibr" rid="ref11">Pu et al. (2011)</xref>
          partitioned lidar data using buffers along the vehicle trajectories. Since road
maps and trajectory data may suffer from low accuracy, we partition lidar points according to their
coordinates: all points are first partitioned by x/y coordinates and then reordered by y/x coordinates. To
support road extraction in different directions, the dataset is partitioned by x and y coordinates
respectively.
        </p>
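        <p>As a rough illustration, the partitioning and reordering step can be sketched with NumPy; the strip width and the column layout here are assumptions of this sketch, not values from the paper:</p>

```python
import numpy as np

def partition_and_order(points, strip_width=1.0):
    """Partition lidar points into strips along x, then order each strip by y.

    points: (N, 3) array of x, y, z coordinates.
    strip_width: strip width in metres (assumed value, not from the paper).
    Returns a dict mapping strip index to that strip's points sorted by y.
    """
    strip_ids = np.floor(points[:, 0] / strip_width).astype(int)
    strips = {}
    for sid in np.unique(strip_ids):
        strip = points[strip_ids == sid]
        strips[sid] = strip[np.argsort(strip[:, 1])]  # reorder by y within strip
    return strips

# For extraction in the other direction, the same points are partitioned
# by y and reordered by x (swap the column indices above).
```

        <p>The same routine with the roles of x and y swapped yields the second partition used for roads running in the other direction.</p>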
      </sec>
      <sec id="sec-2-3">
        <title>2.3 Road Extraction</title>
        <p>After partitioning the lidar data with respect to x and y axes, a moving window is initialized. This process
requires the following steps:</p>
        <p>Step 1. Set the minimum and maximum window sizes.</p>
        <p>Step 2. Set the current window size as the minimum.</p>
        <p>Step 3. Detect road points along the x and y axes. If a point meets the criteria, increase its classification
index by one.</p>
        <p>Step 4. Increase the window size by the pre-defined increasing step and repeat Step 3. If the window size is
larger than the maximum threshold, do Step 5.</p>
        <p>Step 5. Repeat Steps 2-4 until all x and y axes are processed.</p>
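        <p>The five steps above can be sketched as a voting loop over one ordered strip of points; here <code>meets_criteria</code> is a placeholder for the slope, elevation and intensity rules, and the concrete sizes are assumptions of this sketch:</p>

```python
def classify_strip(points, meets_criteria, min_size=15, max_size=35, step=5):
    """Steps 1-5: grow the moving window and vote for road candidates.

    points: one strip of point records, ordered along the strip.
    meets_criteria: callable(point, window) -> bool implementing the
        slope / elevation / intensity rules (placeholder in this sketch).
    Returns one classification index per point; a point becomes a road
    candidate when its index exceeds a preset threshold.
    """
    votes = [0] * len(points)
    size = min_size                                  # Step 2
    while size <= max_size:
        for start in range(len(points) - size + 1):
            window = points[start:start + size]
            for offset, p in enumerate(window):      # Step 3
                if meets_criteria(p, window):
                    votes[start + offset] += 1
        size += step                                 # Step 4
    return votes                                     # repeated per strip (Step 5)
```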
        <p>Any point whose classification index is above the threshold is marked as a candidate road point. To
distinguish road points from other points, we defined three rules based on slope, elevation and height
differences, and intensity, as follows.</p>
      </sec>
      <sec id="sec-2-3-1">
        <title>2.3.1 Rule 1: Slope</title>
        <p>Since a road is a smooth and continuous surface, the slope between two adjacent road points should be
low. Therefore the slope of two adjacent road points should meet the following condition: Si ≤ SMax (1),
where Si is the slope of two adjacent points, and SMax is the maximum slope of the surrounding points.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.3.2 Rule 2: Elevation and Height Differences</title>
        <p>
          To some extent, a road is the lowest part of the surrounding points
          <xref ref-type="bibr" rid="ref7">(Haugerud and Harding, 2001)</xref>
          . The elevation of a road is usually lower than that of its surroundings, and the height variation of the
road is small. Therefore the road points should meet the following conditions: Hi ≤ HT and HMax − HMin ≤ ΔH (2),
where Hi is the height of the lidar point of interest, HT is the height threshold for the surrounding lidar
points calculated by the accumulative histogram algorithm, HMax is the maximum height of the selected road
points within the moving window, HMin is the minimum height of the selected road points within the moving
window, and ΔH is the self-defined threshold of the height difference.
        </p>
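        <p>The paper does not spell out the accumulative histogram algorithm; a common reading is a cumulative-frequency cutoff over the surrounding values, sketched here with an assumed 30% fraction and 64 bins:</p>

```python
import numpy as np

def accumulative_histogram_threshold(values, fraction=0.30, bins=64):
    """Estimate a threshold (e.g. the height threshold) from the accumulated histogram.

    Builds a histogram of the surrounding values and returns the upper edge
    of the first bin at which the cumulative frequency reaches `fraction`.
    The 30% fraction and 64 bins are illustrative assumptions only.
    """
    counts, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(counts) / counts.sum()
    idx = int(np.searchsorted(cdf, fraction))
    return float(edges[idx + 1])
```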
      </sec>
      <sec id="sec-2-5">
        <title>2.3.3 Rule 3: Intensity</title>
        <p>
          The distribution of intensity values of ground points should follow a Gaussian normal distribution
          <xref ref-type="bibr" rid="ref3">(Crosilla
et al., 2013)</xref>
          . In addition, the intensity values of a road are distinctly different from those of the surroundings,
either higher or lower. Therefore the intensity Ni of a road point should be distinctly higher or lower than
the intensity threshold NT for the surrounding lidar points, calculated by the accumulative histogram
algorithm (3).
        </p>
        <p>As seen in Fig. 3(b), if the window size is not properly chosen, some building rooftops can be misclassified
as road points. To avoid this misclassification, we apply a statistical constraint: each time a point meets all
the criteria for one window size, its classification index is increased by one. A point is accepted as a road
point only if its classification index is larger than the preset threshold; otherwise the point is marked as an
object point.</p>
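        <p>The three rules of Sections 2.3.1-2.3.3 can be sketched as a single predicate; the symbol names, the intensity margin and the use of all window points for the height spread are assumptions of this sketch:</p>

```python
import math

def slope(p, q):
    """Slope between two adjacent points: height change over planar distance."""
    planar = math.hypot(p['x'] - q['x'], p['y'] - q['y'])
    return abs(p['z'] - q['z']) / planar if planar > 0 else float('inf')

def is_road_candidate(i, window, s_max, h_t, delta_h, n_t, n_margin):
    """Check Rules 1-3 for the i-th point of a moving window.

    s_max: maximum slope of the surrounding points (Rule 1).
    h_t: height threshold from the accumulative histogram (Rule 2).
    delta_h: self-defined height-difference threshold (Rule 2).
    n_t: intensity threshold from the accumulative histogram (Rule 3).
    n_margin: assumed minimum intensity deviation (not from the paper).
    """
    p = window[i]
    # Rule 1: slope to each adjacent point stays within s_max.
    neighbours = [window[j] for j in (i - 1, i + 1) if 0 <= j < len(window)]
    if any(slope(p, q) > s_max for q in neighbours):
        return False
    # Rule 2: the point is low, and the height spread in the window is small.
    zs = [q['z'] for q in window]
    if p['z'] > h_t or max(zs) - min(zs) > delta_h:
        return False
    # Rule 3: intensity is distinctly higher or lower than the threshold.
    return abs(p['intensity'] - n_t) >= n_margin
```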
      </sec>
      <sec id="sec-2-6">
        <title>2.4 Refinement of Candidate Road Points</title>
        <p>After the initial road extraction, some open areas may be misclassified as roads since they present
characteristics similar to roads, so further refinement is needed to obtain more accurate results. In this
paper, we utilise vector data to refine the initial results. The refinement process includes creating a buffer,
deleting false positives, interpolating and fitting to a line, as shown in Fig. 4. First, a buffer is created
along each vector segment to constrain the surrounding area and filter out false positives. Points that fall
into the buffer of a road segment are labelled as part of that segment and are not considered afterwards.
Points outside the buffers of all road segments are labelled as noise and removed. A linear interpolation is
then applied along the main direction of each road segment to fill the gaps. To obtain a smooth surface,
fitting is applied to refine the results, and the centerline is calculated from the refined results.</p>
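        <p>The buffer test at the heart of the refinement can be sketched without a GIS library as a point-to-segment distance check; the 2 m half-width mirrors the vector data resolution but is an assumption of this sketch:</p>

```python
import math

def dist_to_segment(px, py, ax, ay, bx, by):
    """Shortest planar distance from point (px, py) to segment (a, b)."""
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def keep_in_buffers(points, segments, half_width=2.0):
    """Keep candidates inside the buffer of at least one vector road segment.

    points: (x, y) road candidates from the initial extraction.
    segments: ((ax, ay), (bx, by)) start/end nodes of the vector segments
        (only these nodes are available in the test data).
    Candidates outside every buffer are labelled noise and dropped.
    """
    return [
        (x, y) for x, y in points
        if any(dist_to_segment(x, y, ax, ay, bx, by) <= half_width
               for (ax, ay), (bx, by) in segments)
    ]
```

        <p>Gap filling then interpolates along each retained segment's main direction, and a fit over the kept points yields the centerline.</p>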
        <p>Figure 4 Refinement process: (a) create a buffer, (b) eliminate false positives, (c) interpolation,
(d) fitting the road segments.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 Results</title>
      <sec id="sec-3-1">
        <title>3.1 Test Data</title>
        <p>Four samples of different areas and scenes are chosen for our study, as shown in Fig. 5 and summarized in
Table 1. The first sample is a residential area near the University of New South Wales (UNSW), comprising
144,255 points. The maximum elevation difference within the dataset is 50.73 m. There are high slopes along
the horizontal streets, and in some areas the roads are heavily blocked by trees. The second sample is a part of
Anzac Parade (an arterial road) with its surrounding residential areas, with 128,847 points. It consists of
high-rise buildings, small residential houses, vegetation, structured roads and unstructured roads. Parts of
Anzac Parade are highly occluded by cars. The third sample is a part of Barker Street (a main road) with its
surrounding areas, with 124,690 points. Residential houses, trees and roads make up the entire dataset, and
there is a high slope along the street, so that in some areas the houses are even lower than the adjacent
roads. The last sample is a part of Botany Street (a main road) with its surrounding areas, with 69,064 points.
It mainly consists of houses, vegetation and roads. The vector data available is the road network, whose
resolution is 2 m. Because cities develop rapidly, road geometry can change quickly, which makes it difficult
to keep the vector data up to date; in this sense, the vector data should comply with the lidar dataset. In this
experiment, only the coordinates of the starting and ending points of the road segments are available, and the
assumption is that the road geometry in the vector data is accurate enough to be used for refinement of the
lidar data processing results.</p>
        <p>Figure 5 Four airborne lidar datasets: (a) a residential area, (b) Anzac Parade, (c) Botany Street, (d) Barker
Street.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2 Extraction Results</title>
        <p>Based on the proposed method, the minimum window size is set to 15; the maximum window size is set to 35;
and the increase step is defined as 5. Then the slope, the height difference, the classification index threshold
and the gap between two candidate road segments are also specified, as listed in Table 2. The window size
changes gradually from 15 points to 35 points. If a point meets the criteria in all neighborhoods from a
window size of 15 to 35, it is regarded as a road point.</p>
        <p>Fig. 6 shows the road extraction results and the respective centerlines of the four data samples, which
indicates the feasibility of the proposed method.</p>
        <p>Figure 6 Results of the extracted road network and respective centerlines: (a) extraction results of Anzac
Parade, (b) estimated centerlines of Anzac Parade, (c) extraction results of Botany Street, (d) estimated
centerlines of Botany Street, (e) extraction results of Barker Street, (f) estimated centerlines of Barker Street,
(g) extraction results of a residential area, (h) estimated centerlines of the residential area.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3 Quantitative evaluation of 3D road extraction</title>
        <p>
          <xref ref-type="bibr" rid="ref8">Heipke et al. (1997)</xref>
          stated that the quantitative evaluation of road extraction can be addressed in three ways:
completeness, correctness and quality, which are also used by Clode et al. (2007) and Yang et al. (2013).
Completeness is the ratio of the total length of the extracted road that matches the reference roads to the total
length of the reference data. Correctness is the ratio of the total length of the extracted road that matches the
reference roads to the total length of the extracted data. Quality is the ratio of the total length of the extracted
road that matches the reference roads to the total length of the sum of extracted data and undetected roads.
Firstly, reference or ground truth data is extracted from images by digitizing the road centerlines and the
length of reference roads is measured several times to get the average value. The length of extracted roads is
calculated by summing up the lengths of each segment which can be achieved by a coordinate-based
computation. Then results of the proposed method are compared with the reference data by calculating the
maximum bias between the extracted road segment and the reference segment. If the maximum bias is above
the threshold, the whole road segment is regarded as an unmatched road. The maximum tolerance of bias is
set to 1.5 m. The statistics for our quantitative analysis are given in Table 3. Accuracy metrics are calculated
by the following equations (4-6):
        </p>
        <p>Completeness = TP / Lr (4)</p>
        <p>Correctness = TP / Le (5)</p>
        <p>Quality = TP / (Le + FN) (6)</p>
        <p>where Lr is the total length of the reference roads; Le is the total length of the extracted roads; TP is
the true positives, namely the total length of the extracted roads that matches the reference roads; and FN is
the false negatives, namely the total length of the undetected roads.</p>
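        <p>Equations (4)-(6) translate directly into code; the lengths in the usage line are illustrative only, not results from the paper:</p>

```python
def road_metrics(tp, lr, le, fn):
    """Completeness, correctness and quality of road extraction.

    tp: total length of extracted roads matching the reference (true positives).
    lr: total length of the reference roads.
    le: total length of the extracted roads.
    fn: total length of the undetected roads (false negatives).
    """
    return {
        'completeness': tp / lr,    # Eq. (4)
        'correctness': tp / le,     # Eq. (5)
        'quality': tp / (le + fn),  # Eq. (6)
    }

# Illustrative example (made-up lengths in metres):
m = road_metrics(tp=900.0, lr=1000.0, le=950.0, fn=100.0)
```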
        <p>The statistics for each dataset are computed as follows:</p>
        <p>The quantitative analysis results are summarized in Table 4, which indicates that Sample 2 shows the
worst case. A further inspection indicates that the majority of unmatched roads occurred in two polylines.
The main reason is that only starting nodes and ending nodes are available in the vector data and the
way points are not present, which obviously contributes to the biases. The majority of undetected roads
are the traffic islands and the crossings of two roads. As for the traffic islands, although most of their edges
are extracted in the initial extraction, they are merged into other road segments when the centerlines are
extracted. As for the crossings, while they are listed as separate road segments in the reference data,
they are ignored and regarded as one segment in the initial extraction.</p>
        <p>Roads are important infrastructure, and therefore road extraction from lidar data is of great importance in
terms of civil engineering and urban planning. In this paper, we presented a novel method for road extraction
from airborne lidar data and vector data. The proposed method utilises the local geometry and the radiometric
features of roads to classify road points from the lidar point clouds. To process the data effectively, all
points are partitioned with respect to the x and y axes and reordered by coordinates. A moving window is
initialized to detect the road points. After the initial process, those points whose classification indices are
above the threshold are marked as road points. To refine the results, vector data is used to eliminate false
positives and, in the end, gaps between road segments are filled by a linear interpolation method. The
proposed method successfully extracts the roads from airborne lidar data. The quantitative results show that it
is a promising method for road extraction, suitable not only for structured roads but also for
unstructured lanes.</p>
        <p>Although the presented method is able to extract roads with reasonable completeness, correctness and
quality, it is vulnerable to the problem of extracting traffic islands. If not only the starting and ending
nodes but also the way points were available in the vector data, the a priori geometric knowledge of road
segments would help reduce biases in the refinement process. Also, the choice of window size is critical to
the extraction results. If the maximum window size is too small, some building rooftops may fill the whole
window and thus be misclassified as roads. On the other hand, if the minimum window size is too small, the
iteration would take too much time. The aforementioned vulnerabilities of the proposed algorithm will be the
focus of our further research.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>References:</title>
      <p>Akel, N.A., Kremeike, K., Filin, S., Sester, M., Doytsher, Y., (2005). Dense DTM generalization aided by
roads extracted from LIDAR data. In: IAPRS 36 (Part 3/W19), 54–59.</p>
      <p>Boyko, A., Funkhouser, T. (2011). Extracting roads from dense point clouds in large scale urban environment.
ISPRS J. Photogrammetry and Remote Sens. 66 (6), S2-S12.</p>
      <p>Cheng, L., Zhao, W., Han, P., Shan, J., Liu, Y.X., Li, M.C., (2013). Building region derivation from Lidar
data using a reversed iterative mathematic morphological algorithm. Optics Communications (286):
244-250.</p>
      <p>Choi, Y.W., Jang, Y.W., Lee, H.J., Cho, G.S., (2007). Heuristic road extraction. In: International Symposium
on Information Technology Convergence, IEEE Computer Society.</p>
      <p>Clode, S., Kootsookos, P., Rottensteiner, F., (2004a). The Automatic Extraction of Roads from Lidar Data.
In: IAPRSIS, Vol. XXXV-B3, pp. 231-236.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Clode</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zelniker</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kootsookos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clarkson</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <article-title>(2004(b)). A Phase Coded Disk Approach to Thick Curvilinear Line Detection</article-title>
          .
          <source>In: 17th European Signal Processing Conference</source>
          ,
          <volume>6</volume>
          -
          <issue>10</issue>
          <year>September</year>
          ,
          <year>2004</year>
          ,Vienna, Austria, pp.
          <fpage>1147</fpage>
          -
          <lpage>1150</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Clode</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rottensteiner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kootsookos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zelniker</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , (
          <year>2007</year>
          ).
          <article-title>Detection and Vectorisation of Roads from Lidar Data</article-title>
          .
          <source>PE&amp;RS</source>
          , Vol.
          <volume>73</volume>
          (
          <issue>5</issue>
          ),
          <fpage>517</fpage>
          -
          <lpage>536</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Crosilla</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Macorig</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scaioni</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sebastianutti</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Visintini</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , (
          <year>2013</year>
          ).
          <article-title>Lidar data filtering and classification by skewness and kurtosis iterative analysis of multiple point cloud data categories</article-title>
          .
          <source>Applied Geomatics</source>
          . Vol.
          <volume>5</volume>
          , Issue 3, pp
          <fpage>225</fpage>
          -
          <lpage>240</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Elberink</surname>
            ,
            <given-names>S.J.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vosselman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , (
          <year>2006</year>
          ).
          <article-title>3D modelling of topographic objects by fusing 2D maps and lidar data</article-title>
          .
          <source>In: IAPRS</source>
          , Vol.
          <volume>36</volume>
          ,
          part 4, Goa, India, September 27-30.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Grote</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heipke</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rottensteiner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , (
          <year>2012</year>
          ).
          <article-title>Road Network Extraction in Suburban Areas</article-title>
          .
          <source>The Photogrammetric Record</source>
          , Vol.
          <volume>27</volume>
          , No.
          <issue>137</issue>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Hatger</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brenner</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , (
          <year>2003</year>
          ).
          <article-title>Extraction of road geometry parameters from laser scanning and existing databases</article-title>
          .
          <source>IAPRS</source>
          , Vol.
          <volume>34</volume>
          ,
          <year>2003</year>
          ,
          <fpage>225</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Haugerud</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Harding</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          , (
          <year>2001</year>
          ).
          <article-title>Some algorithms for virtual deforestation (VDF) of lidar topographic survey data</article-title>
          .
          <source>IAPRS, XXXIV-3/W4</source>
          , pp.
          <fpage>211</fpage>
          -
          <lpage>217</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Heipke</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mayer</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiedemann</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jamet</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , (
          <year>1997</year>
          ).
          <article-title>Evaluation of automatic road extraction</article-title>
          .
          <source>IAPRS XXXII</source>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tao</surname>
            ,
            <given-names>C.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , (
          <year>2004</year>
          ).
          <article-title>Automatic road extraction from dense urban area by integrated processing of high-resolution imagery and LIDAR data</article-title>
          .
          <source>IAPRS 35 (Part B3)</source>
          ,
          <fpage>288</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Mayer</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , (
          <year>2008</year>
          ).
          <article-title>Object extraction in photogrammetric computer vision</article-title>
          .
          <source>ISPRS J. Photogrammetry &amp; Remote Sens</source>
          .
          <volume>63</volume>
          (
          <issue>2</issue>
          ):
          <fpage>213</fpage>
          -
          <lpage>222</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Pu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rutzinger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vosselman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oude Elberink</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , (
          <year>2011</year>
          ).
          <article-title>Recognizing basic structures from mobile laser scanning data for road inventory studies</article-title>
          .
          <source>ISPRS J. Photogrammetry and Remote Sens</source>
          .
          <volume>66</volume>
          :
          <fpage>S28</fpage>
          -
          <lpage>S39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Rottensteiner</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sohn</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wegner</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Breitkopf</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , (
          <year>2013</year>
          ),
          <article-title>Results of the ISPRS benchmark on urban object detection and 3D building reconstruction</article-title>
          .
          <source>ISPRS J. Photogrammetry &amp; Remote Sens</source>
          . In Press.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Vosselman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , (
          <year>2009</year>
          ).
          <article-title>Detection of curbstones in airborne laser scanning data</article-title>
          .
          <source>In: IAPRS XXXVIII - 3/W8</source>
          , pp.
          <fpage>111</fpage>
          -
          <lpage>117</lpage>
          , Paris, France.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Vosselman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , (
          <year>2003</year>
          ).
          <article-title>3-D Reconstruction of Roads and Trees for City Modelling</article-title>
          .
          <source>In: IAPRS</source>
          , Vol.
          <volume>34</volume>
          , part 3/W13, Dresden, Germany, pp.
          <fpage>231</fpage>
          -
          <lpage>236</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>B. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>L.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , (
          <year>2013</year>
          ).
          <article-title>Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds</article-title>
          .
          <source>ISPRS J. Photogrammetry &amp; Remote Sens.</source>
          , Vol.
          <volume>79</volume>
          :
          <fpage>80</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>You</surname>
            ,
            <given-names>S.Y.</given-names>
          </string-name>
          , (
          <year>2012</year>
          ).
          <article-title>Road Network Extraction from Airborne Lidar Data Using Scene Context</article-title>
          .
          <source>In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Vosselman</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , (
          <year>2012</year>
          ).
          <article-title>Mapping curbstones in airborne and mobile laser scanning data</article-title>
          .
          <source>International Journal of Applied Earth Observation and Geoinformation</source>
          ,
          <volume>18</volume>
          , pp.
          <fpage>293</fpage>
          -
          <lpage>304</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Honda</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eiumnoh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , (
          <year>2004</year>
          ).
          <article-title>Extraction of city roads through shadow path reconstruction using laser scanning</article-title>
          .
          <source>PE&amp;RS</source>
          <volume>70</volume>
          (
          <issue>12</issue>
          ),
          <fpage>1433</fpage>
          -
          <lpage>1440</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>Q.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mordohai</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , (
          <year>2009</year>
          ).
          <article-title>A Minimum Cover Approach for Extracting the Road Network from Airborne Lidar Data</article-title>
          .
          <source>In: IEEE 12th International Conference on Computer Vision Workshops</source>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>