<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.48550/arXiv.2012.10902</article-id>
      <title-group>
        <article-title>Improving the Streaming Image Quality with LiDAR</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ostap Kuzyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Prydatko</string-name>
          <email>o_prydatko@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nazarii Burak</string-name>
          <email>nazar.burak@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andriy Kuzyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv State University of Life Safety</institution>
          ,
          <addr-line>Kleparivska 35, 79007 Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>LiDARs, which scan the space and measure the distance to each point, do not always produce images of the required quality; therefore, improving the image quality is a relevant task. The aim of this work is to develop a method for improving the quality of LiDAR-generated images. The paper describes a study using an Intel RealSense L515 LiDAR and the Intel RealSense Viewer v.2.53.1 software. When a surface is scanned by a LiDAR, the part of it that lies between two consecutive pixels is not displayed, which can lead to a deterioration in image quality and a loss of detail. A method of improving the image quality is proposed, which consists in moving (horizontally rotating) the LiDAR camera around the axis or optical center, making it possible to scan the areas between two consecutive pixels. An algorithm for the practical implementation of the method has been developed, consisting of two interrelated parts: the first controls the LiDAR operation, and the second is the software that generates the surface image. The suggested method makes it possible to obtain better quality images. In the case of a video stream, this increase in resolution will lead to a decrease in frame rate. The possibility of implementing this method has been confirmed by the experimental study results.</p>
      </abstract>
      <kwd-group>
        <kwd>LiDAR</kwd>
        <kwd>image quality</kwd>
        <kwd>streaming image</kwd>
        <kwd>scanning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        LiDARs are widely used for scanning space and forming its three-dimensional image in various
fields of human activity. LiDARs are also used to create digital terrain models. Their use is especially
relevant for mapping areas and environments that change over time, including beaches and dunes
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], river and sea coastlines [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the forest structure and spatial transformation, agricultural and
urban ecosystems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], changes in road conditions for self-driving vehicles [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], archaeological
research [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], territory mapping [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], snow depth measurements [10], etc. In most previously
mentioned works and other sources, considerable attention is paid to the accuracy of the obtained
image, in particular, to the distances to the points of the scanned surface [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Modern LiDARs have
undergone a number of improvements as a result of scientific and technological progress and are now
widely used for precision measurements in various systems: mapping, autonomous navigation,
vegetation analysis, emergency management, and military support [22].
      </p>
      <p>
        Many tasks solved with the help of LiDARs require not only accurate distance
measurement but also high-quality images (to the nearest centimeter), in particular in real-time
systems that work with surfaces of unequal reflectivity and in the presence of obstacles [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In general, in
the odometry process, the speed of image formation and analysis leaves no opportunity to
increase the image quality. In such cases, two algorithms are used simultaneously: a low-accuracy,
high-frequency one for motion estimation, and a high-accuracy one with an order of magnitude lower
frequency for generating a high-quality image [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. To improve the quality in the concurrent
localization and mapping process, a method based on effective local mapping and hierarchical
optimization is proposed in [11]. 3D laser scanner measurements are aggregated into local maps at
different resolutions by surface-based registration applying graph theory. Practical LiDAR
implementation requires not only image improvement but also evaluation of the latter. A number of
works are devoted to this problem. In particular, [18] reviewed the relevant technologies, provided
a formula for image quality assessment, and proposed methods for its improvement. In [19], a
strategy is presented that aims to optimize such images using iterations and statistical methods. In
[20], the image intensity correction was performed by adjusting the geometric parameters of the
scan with a single-beam or multi-beam LiDAR. Evaluation was conducted via
geometric/morphological and learning-based methods with intensity
correction/normalization. These and similar image evaluation and enhancement methods are
complex, not always effective in implementation, and require powerful computing resources.
      </p>
      <p>In some cases, LiDAR images are combined with other spatial data, such as traffic density maps [9],
which help to perform a deeper analysis of the image and extend the functionality of the laser
scanning method. Concurrent localization and mapping is also a complex task in the process of
autonomous navigation and positioning of unmanned systems that use multisensory fusion [12,
23]. The effectiveness of localization and mapping systems of this type depends on the algorithms
used for navigation and fusion.</p>
      <p>The problem of processing streaming video images is relevant not only for LiDARs, but also for
optical devices operating in conditions of low visibility. In [25] a neural network was used to
analyze moving objects in a video stream.</p>
      <p>The principles and technologies of practical LiDAR application are described in detail in [13].
LiDARs for automatic vehicle control systems, their structure and functioning, are described in [14].
To carry out topographic work, LiDARs are placed on board aircraft, including unmanned ones;
appropriate technologies for their application are given in [15]. In addition to traditional LiDARs,
single-photon LiDARs, which send only one pulse to the object and measure the flight time of
individual photons, have become widespread [16].</p>
      <p>The structure of modern LiDARs is based on a variety of principles. There are usually four
image scanning mechanisms [21]: optical-mechanical, electromechanical, micro-electromechanical
systems (MEMS), and solid-state scanning systems. Electromechanical scanning is the most
common, but MEMS is a more advanced technology than the other existing ones. Solid-state scanning
has prospects for development, since it offers high reliability, a large field of view, and a high scanning
speed, but today it is technologically difficult to manufacture.</p>
      <p>Improving the LiDAR image quality can be achieved not only by mathematical methods and
algorithms. An example of a successful solution to this problem based on physical principles is given
in [24]: a LiDAR based on the Scheimpflug principle scans at an angle with correction, increasing the
clarity of the entire image. In general, the quality of LiDAR images in practice should be at a high level,
and its improvement is an urgent scientific and technical task.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Proposed methodology</title>
      <p>The aim of the work is to develop a method for improving the image quality obtained by the
LiDAR.</p>
      <p>The method is based on mechanical movement of the camera at a small angle, which allows
scanning the space between neighboring pixels and detecting image details.</p>
      <p>Experimental studies using Intel RealSense L515 LiDAR [17], and Intel RealSense Viewer
v.2.53.1 software have confirmed the possibility of its implementation.</p>
      <p>A step-by-step algorithm for implementing the proposed method has been developed; the
probable issues of its implementation and ways to solve them have been analyzed.</p>
    </sec>
    <sec id="sec-3">
      <title>3. LiDAR formed image quality</title>
      <p>Digital images formed by various devices, including LiDARs, must be of appropriate quality for
practical usage, dictated by relevant requirements. Image quality indicators include the resolution –
the number of pixels horizontally and vertically – as well as the color sampling range of each pixel
in bits. However, to achieve high-quality images, it is necessary to ensure the correct reproduction
of the properties of the real object being depicted. The quality of images generated by optical devices
can be influenced by the optical properties of the lens. In the case of a LiDAR, the quality of the raw
image is determined by the precision of the scanning mechanisms and of the distance measurements.</p>
      <sec id="sec-3-1">
        <title>3.1. LiDAR image formation and its improvement</title>
        <p>This section examines how the Intel RealSense L515 LiDAR, designed for indoor environments
at short ranges, creates images. The LiDAR scans a section of space enclosed by the sides of a pyramid,
presenting the distance field as a pixel frame. Micro-electromechanical systems (MEMS) technology
controls the laser beam, which operates in pulsed mode. Scanning begins along the top horizontal
line, then the beam descends to the next line, continuing the process down to the lowest line. The
laser beam is focused sufficiently and forms a narrow solid angle (Figure 1). Reflected light from the
laser-illuminated surface reaches the photodiode, enabling measurement of the distance to the area,
and that distance is shown in the pixel, possibly through a certain colour. However, between
neighboring laser-illuminated sections on the surface within the field, some sections remain unlit
by the laser. As distance increases, the size of these unlit sections grows, leading to a loss in
detecting small objects or parts, especially at longer distances, which reduces clarity and image
quality.</p>
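        <p>To make the effect of distance on the unlit sections concrete, the following rough sketch estimates the spacing between adjacent illuminated spots and the unlit gap between them; the field of view, pixel count, and beam divergence used here are illustrative assumptions, not L515 specifications.</p>
        <preformat>
import math

FOV_H_DEG = 70.0          # assumed horizontal field of view, degrees
PIXELS_H = 1024           # assumed horizontal resolution of the depth frame
DIVERGENCE_DEG = 0.02     # assumed laser beam divergence, degrees

pitch_rad = math.radians(FOV_H_DEG) / PIXELS_H   # angle between adjacent beams
spot_rad = math.radians(DIVERGENCE_DEG)

for d_m in (1.0, 3.0, 6.0):
    spacing_mm = d_m * 1000 * pitch_rad          # distance between spot centres
    spot_mm = d_m * 1000 * spot_rad              # approximate illuminated spot size
    gap_mm = max(0.0, spacing_mm - spot_mm)      # unlit section between two spots
    print(f"{d_m} m: spacing ~{spacing_mm:.1f} mm, unlit gap ~{gap_mm:.1f} mm")
        </preformat>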
        <p>It should be noted separately that improving video image quality is more complex, since it involves
processing not only a single frame but also a series of consecutive frames.</p>
        <p>In a LiDAR, reduced clarity and image quality may result from low reflectivity of the
illuminated surface and from the presence of details between neighboring pixels. The first issue is tied to the
physical properties of the surface and is therefore almost impossible to resolve. Only in specific
cases, when the low reflectivity arises not from material properties but from a large angle between
the laser beam and the surface normal, can it be increased by changing the beam incidence angle.
The second issue of reduced clarity, caused by insufficient resolution or distance measurement
error, can be addressed using methods similar to image enhancement techniques, such as
software-based resolution enhancement through interpolation. It is important to remember that
each pixel corresponds not to a colour but to a distance. However, as with conventional images, this
approach does not always accurately render intermediate details.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Software image quality improvement</title>
        <p>Existing images can be enhanced in software. The corresponding methods of digital image quality
improvement are known and widely used in various fields related to image processing. Such
methods are implemented by various algorithms that analyze the image and edit it. However,
errors may occur during algorithm execution, which leads to incorrect detailing of certain parts of the
image. Note that in the case of LiDAR image enhancement, it is actually the distances that are refined.
In this regard, the principles and approaches to improving images created by a LiDAR differ
significantly from those used to work with images captured by optical cameras. Cameras of this kind
can lose clarity in certain image fragments when they fall outside the depth of field, or due to
insufficient resolution or insufficient sensitivity of the photosensitive matrix to the available light.</p>
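        <p>As an illustration of the purely software route discussed above, the sketch below upscales a distance (depth) frame by interpolation; the frame is synthetic random data standing in for LiDAR output, and, as noted, interpolation cannot restore details that were never scanned.</p>
        <preformat>
import numpy as np
from scipy.ndimage import zoom

# Synthetic 768x1024 depth frame in millimetres (placeholder for real LiDAR data).
depth_mm = np.random.randint(500, 3000, size=(768, 1024)).astype(np.float32)

# Double the horizontal resolution by linear interpolation of the distances.
upscaled = zoom(depth_mm, zoom=(1, 2), order=1)
print(depth_mm.shape, "->", upscaled.shape)  # (768, 1024) -> (768, 2048)
        </preformat>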
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Software and hardware image quality enhancement</title>
        <p>Enhanced quality can be achieved by compacting the beams during the scanning process. However,
this method requires appropriate structural changes in the LiDAR. It is a challenging approach when
a ready-made device is used, given that the maximum scanning density has probably already been
achieved by the developers. Therefore, if a certain LiDAR type is available and it is impossible to
make changes to its structure, the image quality can be improved, leaving aside traditional
software methods, by additionally scanning the space located between consecutive points that are
irradiated by the laser beams and for which distances are measured. This scanning should be done
without interfering with the internal LiDAR structure.</p>
        <p>Based on these considerations, we propose a method that, unlike the existing ones, can solve the
second problem and partially the first one described above. The main point of this method is to
mechanically move (rotate around the axis or optical centre) the LiDAR to the appropriate angle in
order to scan the surface unlit parts, and then to process (compensate) the corresponding
movement programmatically.</p>
        <p>Experimental studies have been conducted in order to verify the method's feasibility in practice.
In the absence of a LiDAR rotation mechanism, the LiDAR was placed on a horizontal surface in a
motionless state, and the objects to be observed were moved.</p>
        <p>The objects displayed by the LiDAR were a cylinder and a rectangular parallelepiped (Table 1).</p>
        <sec id="sec-3-3-1">
          <title>Rectangular parallelepiped</title>
        </sec>
        <sec id="sec-3-3-2">
          <title>Rectangular parallelepiped</title>
        </sec>
        <sec id="sec-3-3-3">
          <title>Height, mm Diameter (width), mm 170 51</title>
          <p>51
7
12
35</p>
        </sec>
        <sec id="sec-3-3-4">
          <title>The objects were placed vertically (Figure 2).</title>
          <p>The images were analysed by moving the objects horizontally along the surface (Figure 3).</p>
          <p>(Figure 3 schematically shows the LiDAR, the surface at distance l, the object, and the direction of the object movement.)</p>
          <p>The maximum distances l<sub>i</sub>, i = 1, 2, 3, at which the objects are still detected by the LiDAR
were determined experimentally (Table 2).</p>
          <p>When the objects were moved in a direction perpendicular to the LiDAR viewing direction, they
appeared and disappeared in the image sequentially. To determine more accurately the
displacement at which an object disappeared and reappeared, we moved each object over a distance
at which its image disappeared and reappeared 10 times, and divided the resulting distance
by 10. The values of the shifts Δ<sub>i</sub> were obtained (Table 2). Using the trigonometric formula</p>
          <disp-formula id="eq1">
            <label>(1)</label>
            <tex-math><![CDATA[ \sin\frac{\alpha_i}{2} = \frac{\Delta_i}{2 l_i}, ]]></tex-math>
          </disp-formula>
          <p>where l<sub>i</sub> is the distance to the surface along which object i moved, α<sub>i</sub> is the smallest angle of
movement during which the object disappeared and reappeared, and Δ<sub>i</sub> is the length of the
movement, we obtained the angles for the objects. The results are shown in Table 2.</p>
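          <p>As a minimal numerical illustration of formula (1), the sketch below computes the angle α from a measured shift Δ and distance l; the input values are placeholders, not the measurements from Table 2.</p>
          <preformat>
import math

def min_detection_angle(delta_mm, distance_mm):
    """Formula (1): sin(alpha/2) = delta / (2*l), hence alpha = 2*asin(delta/(2*l))."""
    return math.degrees(2 * math.asin(delta_mm / (2 * distance_mm)))

# Illustrative numbers only: a 5 mm shift observed at a 2 m scanning distance.
print(round(min_detection_angle(5.0, 2000.0), 3))  # ~0.143 degrees
          </preformat>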
      </sec>
      <sec id="sec-3-4">
        <title>3.4. The method and the algorithm of LiDAR image enhancement</title>
        <p>We propose a method for improving image quality and an algorithm for its implementation.
This algorithm describes one of the possible variants of implementing the method in practice. To
preserve the generality of the method, it is assumed that the distance to each point on the surface
scanned by the LiDAR deviates only slightly from the average value.</p>
        <p>The distance matrix for frame p is</p>
        <disp-formula id="eq2">
          <label>(2)</label>
          <tex-math><![CDATA[ L_p = \begin{pmatrix} l^{p}_{11} & \cdots & l^{p}_{1n} \\ \vdots & \ddots & \vdots \\ l^{p}_{m1} & \cdots & l^{p}_{mn} \end{pmatrix}, ]]></tex-math>
        </disp-formula>
        <p>where l<sup>p</sup><sub>ij</sub> is the distance from the LiDAR to the illuminated area on the surface corresponding
to the pixel in column i and row j. Depending on the distance to the surface, there is an interval between two
adjacent illuminated areas that can contain one or more areas whose dimensions are equal to the area
irradiated by one pixel. The obtained capacity r, r ∈ ℕ, of the unirradiated interval determines the
multiplicity k = r + 1, which will be the angular division factor. By turning the camera left (right) k
times by an angle φ/k, where φ is the angle between two adjacent beams, we measure the distances
and record them into a matrix D = (d<sub>qj</sub>) of size (kn + 1)×m, which we obtain by the formulas:</p>
        <disp-formula id="eq3">
          <label>(3)</label>
          <tex-math><![CDATA[ d_{1,j} = l^{p}_{1,j}, ]]></tex-math>
        </disp-formula>
        <disp-formula id="eq4">
          <label>(4)</label>
          <tex-math><![CDATA[ d_{qj} = l^{\tilde{p}}_{ij}, ]]></tex-math>
        </disp-formula>
        <p>where q = k(i − 1) + s + 1, p̃ = p + s, 1 ⩽ i ⩽ n, 1 ⩽ s ⩽ k, 1 ⩽ j ⩽ m, s ∈ ℕ. The obtained matrix D
describes the high-quality image frame. Next, we renumber the elements of the matrix D in reverse
order by q, take p + k instead of p, and using formulas (3), (4) we construct the next frame of a
high-quality image.</p>
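        <p>A minimal sketch of how formulas (3) and (4) can be applied is given below, assuming the k + 1 consecutive frames are already available as NumPy arrays of shape (m, n); the function name and data layout are illustrative, not part of any LiDAR SDK.</p>
        <preformat>
import numpy as np

def build_high_res_frame(frames, k):
    """Assemble one enhanced frame of m rows and k*n + 1 columns from k + 1
    consecutive LiDAR frames, following d_{1,j} = l^p_{1,j} (formula (3)) and
    d_{q,j} = l^{p+s}_{i,j} with q = k*(i-1) + s + 1 (formula (4))."""
    m, n = frames[0].shape
    D = np.zeros((m, k * n + 1), dtype=frames[0].dtype)
    D[:, 0] = frames[0][:, 0]              # formula (3): first column taken from frame p
    for s in range(1, k + 1):              # frame p + s, rotated by s*phi/k
        for i in range(1, n + 1):          # source column i of frame p + s
            q = k * (i - 1) + s + 1        # destination column q in D
            D[:, q - 1] = frames[s][:, i - 1]
    return D
        </preformat>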
        <p>The algorithm for implementing the proposed method consists of two interrelated parts that
operate in parallel. The first part concerns the LiDAR's function and its movement (rotation), while
the second describes the high-quality image frame and its display on the screen:
I. Image formation by the LiDAR
1. As the beam moves within the pyramid field of view, the first frame of the image with a
resolution of n ×m pixels is obtained.
2. The average distance to the surface, the diameter of the illuminated areas at this distance, and the
distance between adjacent illuminated areas are determined; the angular division factor k is then
calculated.
3. The next rotation direction (left or right) is selected.
4. The camera is rotated in the selected direction by an angle equal to 1/k of the angle between
consecutive beams (i.e. φ/k), enabling it to illuminate the nearest part of the surface
that was not illuminated in the previous frame.
5. After the beam scans the LiDAR’s field of view, the next frame is obtained.
6. We continue performing steps 2 and 3 until the newly obtained frame shifts by 1 pixel in
the selected direction compared to the first frame.
7. The camera’s movement direction is changed, the last frame is designated as the first, and
we then return to step 2.
II. Image construction and display
1. In the image processing software, a matrix of (kn + 1)×m size is formed.
2. The shift’s direction (right or left, opposite to the initial rotation direction) is chosen and a
base column (first on the left or right, depending on the shift direction) is set.
3. Using the LiDAR's first frame data, the values of the matrix columns are set starting with the base
column, and each subsequent column of frame data is placed into the matrix with an
offset of k – 1 columns.
4. In the matrix, the base is moved by one column in the shift direction, and that column is taken as the new base column.
5. The next frame is taken as the current one.
6. Using data from the current LiDAR frame, the values of the columns in the matrix are set,
starting from the current base, skipping k – 1 columns for each successive column.
7. Steps 4, 5, and 6 are continued until the final column in the matrix is filled.
8. The image generated from the matrix data is displayed on the screen.
9. The direction is changed to the opposite, the last frame is considered to be the current
frame, the last column is considered to be the base. After this one may go to step 4.
Thus, continuous LiDAR surface scanning will enhance the resulting image quality.</p>
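        <p>One possible realisation of part I of the algorithm is sketched below using the pyrealsense2 SDK for frame capture; the rotate_camera stub and the stream parameters are assumptions (the L515 has no built-in rotation stage, so an external pan mechanism would be required), and build_high_res_frame is the sketch from the previous subsection.</p>
        <preformat>
import numpy as np
import pyrealsense2 as rs

def rotate_camera(angle_deg):
    # Hypothetical stub: drive an external pan mechanism (e.g. a stepper motor)
    # that turns the LiDAR around the vertical axis through its optical centre.
    pass

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)  # assumed L515 depth mode
pipeline.start(config)

k = 3                        # angular division factor (example value)
phi_deg = 70.0 / 1024        # rough angle between adjacent beams, degrees (assumed FOV)
frames = []
try:
    for s in range(k + 1):                       # steps I.1 and I.5: capture k + 1 frames
        depth = pipeline.wait_for_frames().get_depth_frame()
        frames.append(np.asanyarray(depth.get_data()).copy())
        rotate_camera(phi_deg / k)               # step I.4: rotate by phi/k before the next frame
    D = build_high_res_frame(frames, k)          # part II: assemble the (kn + 1)-column frame
finally:
    pipeline.stop()
        </preformat>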
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Discussion</title>
        <p>The result of this method will be an image with a horizontal size of kn + 1 pixels. This will
make it possible to achieve greater clarity. However, since k + 1 frames are used to build one
frame of the enhanced image, this will lead to a lower frame rate and slower video. With a small
multiplicity (k = 2, 3), there will be no particular problems. With greater k values, this disadvantage
can be compensated by the fact that in parallel with the formation of high-quality images, raw
frames directly formed by the LiDAR can be additionally viewed. When viewing these raw frames, another
problem arises – their constant movement from left to right. This can be solved by software
image stabilization, i.e. by making a synchronous shift in the opposite direction and limiting the frame
size on the left and right.</p>
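        <p>The frame-rate trade-off mentioned above can be estimated directly (the numbers here are assumed illustrative values):</p>
        <preformat>
native_fps, n, k = 30, 1024, 3            # assumed native frame rate, width and multiplicity
effective_fps = native_fps / (k + 1)      # k + 1 raw frames are needed per enhanced frame
enhanced_width = k * n + 1                # horizontal size of the enhanced frame, pixels
print(effective_fps, enhanced_width)      # 7.5 fps, 3073 px
        </preformat>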
        <p>The proposed method can also be used to improve the quality and increase the vertical size of
the image. In this case, the camera should be rotated around the horizontal axis that passes through
the optical center. If there is a need to increase the image quality both horizontally and vertically,
then the proposed method can be modified to obtain images of fragments with higher resolution
both horizontally and vertically. In this case, the camera should be rotated in the horizontal and
vertical directions so that the LiDAR beam would scan the corresponding unlit part of the surface
sequentially. The number of consecutive frames required to create a high quality image will be
significant, and therefore the real-time viewing will be difficult. However, for image analysis
(object recognition), this quality improvement will be appropriate.</p>
        <p>If the surface has significantly uneven heights, or if the scanned part of the space contains
objects located at different distances from the LiDAR, a problem may arise because a lower
multiplicity should be selected for the surfaces of objects located at close distances than for those
located far away. The solution to this problem can be to select a multiplicity that is optimal for both
close and distant objects, or to select a fragment of an object at a certain distance and improve its
quality by choosing the multiplicity k based on the average distance to it.</p>
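        <p>A rough way to pick the multiplicity from the average distance to a selected fragment, following k = r + 1 from Section 3.4, could look like the sketch below; the pitch and divergence inputs are assumptions of this illustration.</p>
        <preformat>
import math

def division_factor(distance_m, pitch_rad, divergence_rad):
    """Estimate k = r + 1, where r is the number of unlit spot-sized areas
    fitting between two adjacent illuminated areas at the given distance."""
    spot = distance_m * divergence_rad              # size of one illuminated area
    pitch = distance_m * pitch_rad                  # spacing between adjacent areas
    r = max(0, math.floor((pitch - spot) / spot))   # capacity of the unirradiated interval
    return r + 1

# Example: choose k for an object fragment at an average distance of 3 m.
print(division_factor(3.0, math.radians(70.0) / 1024, math.radians(0.02)))  # 3
        </preformat>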
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Conclusions</title>
        <p>In order to improve the LiDAR image quality and display objects or fragments whose size is
smaller than the distance between two consecutive pixels at the corresponding distance from the
LiDAR, a method is proposed that consists in turning the device from right to left by the angle
between two consecutive rays in the rotation direction, scanning the space in the intermediate
areas, and generating, in software, images with a higher resolution in the direction
of LiDAR rotation. The possibility of implementing such a method and displaying small objects
that may not be included in the image formed by a fixed LiDAR has been confirmed by the
experimental research. Possible problems that may arise when implementing the proposed method,
and ways to solve them, are analyzed.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>The authors express their gratitude to the Department of Artificial Intelligence Systems at Lviv
Polytechnic National University for the provided opportunity to use the LiDAR in their research.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E.</given-names>
            <surname>Guisado-Pintado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. W.</given-names>
            <surname>Jackson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <article-title>3D mapping efficacy of a drone and terrestrial laser scanner over a temperate beach-dune zone</article-title>
          ,
          <source>Geomorphology</source>
          <volume>328</volume>
          (
          <year>2019</year>
          )
          <fpage>157</fpage>
          -
          <lpage>172</lpage>
          . doi:10.1016/j.geomorph.2018.12.013.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y. C.</given-names>
            <surname>Lin</surname>
          </string-name>
          , Y. T. Cheng, T. Zhou,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Hasheminasab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Flatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Troy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Habib</surname>
          </string-name>
          ,
          <article-title>Evaluation of UAV LiDAR for Mapping Coastal Environments</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>11</volume>
          (
          <year>2019</year>
          )
          2893. doi:10.3390/rs11242893.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. C.</given-names>
            <surname>Coops</surname>
          </string-name>
          ,
          <article-title>Lidar boosts 3D ecological observations and modelings: A review and perspective</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          <volume>9</volume>
          (
          <issue>1</issue>
          ) (
          <year>2020</year>
          )
          <fpage>232</fpage>
          -
          <lpage>257</lpage>
          . doi:10.1109/MGRS.2020.3032713.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>P.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Quan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          , J. Ma,
          <article-title>Survey of Extrinsic Calibration on LiDAR-Camera System for Intelligent Vehicle: Challenges, Approaches, and Trends</article-title>
          ,
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          . doi:10.1109/TITS.2024.3419758.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>O.</given-names>
            <surname>Risbøl</surname>
          </string-name>
          , L. Gustavsen,
          <article-title>LIDAR from drones employed for mapping archaeology - Potential, benefits and challenges</article-title>
          .
          <source>Archaeological Prospection</source>
          <volume>25</volume>
          (
          <year>2018</year>
          )
          <fpage>329</fpage>
          -
          <lpage>338</lpage>
          . doi:10.1002/arp.1712.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Polidori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>El Hage</surname>
          </string-name>
          ,
          <article-title>Digital elevation model quality assessment methods: A critical review</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>12</volume>
          (
          <issue>21</issue>
          ) (
          <year>2020</year>
          )
          3522. doi:10.3390/rs12213522.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Wolcott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Eustice</surname>
          </string-name>
          ,
          <article-title>Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving</article-title>
          ,
          <source>The International Journal of Robotics Research</source>
          <volume>36</volume>
          (
          <issue>3</issue>
          ) (
          <year>2017</year>
          )
          <fpage>292</fpage>
          -
          <lpage>319</lpage>
          . doi:10.1177/0278364917696568.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Singh,
          <article-title>Low-drift and real-time lidar odometry and mapping</article-title>
          ,
          <source>Auton. Robot</source>
          <volume>41</volume>
          (
          <year>2017</year>
          )
          <fpage>401</fpage>
          -
          <lpage>416</lpage>
          . doi:10.1007/s10514-016-9548-2.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>