<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Comparison of 3D data acquisition methods for wound size assessment in crisis situations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jakub Osuchowski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafał Gasz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Barbara Jantos</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michał Wierzbicki</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Opole University of Technology</institution>
          ,
          <addr-line>Prószkowska 76 St., Opole, 45-758</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>563</volume>
      <fpage>22</fpage>
      <lpage>24</lpage>
      <abstract>
        <p>This study evaluates the feasibility of using the LiDAR sensor of the iPhone 16 Pro Max with the Zappcha application for contactless wound size assessment. A synthetic wound was scanned using three acquisition methods at three distances (25 cm, 50 cm, and 100 cm), and the resulting 3D reconstructions were compared with manual measurements. The findings show that scanning distance and device orientation strongly affect accuracy, with the best results obtained at 25 cm using stable acquisition paths. These results highlight the potential of smartphone-based LiDAR for rapid and reliable wound measurement in medical training, telemedicine, and emergency scenarios, while underlining the need for further validation in clinical practice.</p>
      </abstract>
      <kwd-group>
        <kwd>LiDAR</kwd>
        <kwd>Crisis management</kwd>
        <kwd>3D modeling</kwd>
        <kwd>3D scanning</kwd>
        <kwd>Computer Vision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Humans perceive the world in three dimensions, an ability so natural that we rarely give it a second thought.
Replicating this ability in the digital realm has revolutionised cultural preservation [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
enhanced urban planning [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and allowed the creation of autonomous vehicles [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Light Detection
and Ranging (LiDAR) has emerged as one of the frontrunners in the field. First introduced in the
1960s, LiDAR works by directing a laser pulse at a target and measuring the reflected light, using
differences in wavelength and the round-trip travel time to determine distance [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Since then, the
technology has scaled down from military-grade equipment to a component of consumer mobile
devices, and its applications have spread from science and robotics into everyday life. Moreover,
recent experiments have found it to be an effective and
lightweight alternative to robust systems in the field of geomatics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Recent achievements and the availability of LiDAR technology have made it possible to
utilise it beyond industrial applications. Medicine is always looking for new ways to improve the care
of patients, employing technological developments on a regular basis [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. At the intersection of
robotics and medicine, there is a thriving field of medical robots, easing the work of healthcare
personnel. Recently, scientists experimented with a service robot with a 3D LiDAR system capable of
autonomous navigation and operating within a medical environment [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Nonetheless, applications
of LiDAR stretch even further into the medical domain. Thanks to mobile devices, such powerful
technology can now be incorporated into emergency scenarios, where robust equipment is
unavailable.
      </p>
      <p>
        Crisis Informatics and Emergency Medical Technologies are branches of science focused on
actions, solutions, and equipment in extreme situations. In scenarios where assessment of the victims
has to be made on the spot, such developments can be life-saving. Recently, a team of researchers
from the Republic of Korea made progress in measuring the size of the wounds using LiDAR [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Although the work was done in a controlled hospital environment, the team managed to
showcase a solution that is simple yet effective, especially for wounds with irregular shapes.
      </p>
      <p>In this article, the authors focus on evaluating methods and parameters of data acquisition
using LiDAR technology in smartphones in scenarios outside the hospital. The goal is to examine
whether or not this is a feasible way of estimating wound sizes outside hospital halls and under
uncertain, undetermined conditions. To achieve this goal, the authors have conducted experiments
testing three acquisition methods, each at three scanning distances. The obtained results were
compared and evaluated to identify the best-performing configuration in terms of error across the
three dimensions of the artificial wound: length, width, and depth. Experiments utilise a commonly
available smartphone with LiDAR capabilities, scanning an artificial wound in a non-sterile
environment, simulating crisis scenarios.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Non-invasive measurement technologies</title>
      <p>The development of measurement technologies in recent times has significantly increased the
accuracy, speed, and accessibility of systems for acquiring spatial data.</p>
      <p>In particular, there has been a dynamic increase in the use of non-contact methods that
enable rapid mapping of object geometry in the form of three-dimensional models. These
technologies are used in many fields, ranging from reverse engineering and robotics to
geoinformation and augmented reality. Systems enabling real-time 3D data acquisition with high accuracy and the ability
to operate in various environmental conditions are of particular importance.</p>
      <p>
        The most important modern methods of spatial data acquisition include digital
photogrammetry, structured light, Time-of-Flight (ToF) cameras, and LiDAR [
        <xref ref-type="bibr" rid="ref5 ref9 ref10">5, 9, 10</xref>
        ]. Each of these
technologies has its own specificity, advantages, and limitations.
      </p>
      <p>
        Digital photogrammetry using the Structure from Motion (SfM) method allows 3D space
reconstruction based on a set of photographs taken from different viewpoints. This process involves
identifying characteristic points, matching them between images, and then estimating the positions
and orientations of the cameras, which leads to the generation of first a sparse and later a dense 3D
point cloud. Although this method is relatively inexpensive and can work using smartphones, its
accuracy depends heavily on lighting conditions and object texture, and it also requires significant
computational resources [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12, 13, 14</xref>
        ].
      </p>
      <p>
        In controlled conditions, SfM can achieve centimetre-level accuracy or better using a large
number of high-resolution images, although its reliability decreases when surfaces are poorly
textured or dimly lit, affecting point cloud density and reconstruction quality [
        <xref ref-type="bibr" rid="ref12">12, 15</xref>
        ]. Experiments
with smartphones have shown that practical mobile SfM methods can generate models with accuracy
comparable to professional scanners in applications such as terrain erosion monitoring or
architectural measurements [16, 14]. An advantage of this method is the ability to use standard digital
cameras and achieve high mapping accuracy, provided that proper lighting and well-varied surface
textures are ensured. The drawbacks remain the time-consuming process and high computational
demand, as well as susceptibility to errors in the case of homogeneous, shiny, or transparent surfaces.
      </p>
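      <p>To make the pipeline concrete, the following Python sketch reconstructs a sparse point cloud
from two views with OpenCV. It is a minimal illustration of the steps described above; the intrinsic
matrix K and the image file names are placeholder assumptions, not values from this study or any work
cited here. A dense reconstruction and bundle adjustment would follow in a full pipeline.</p>
      <preformat>
# Minimal two-view Structure-from-Motion sketch (OpenCV).
import cv2
import numpy as np

# Assumed camera intrinsics (focal length, principal point) -- placeholders.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Identify characteristic points and their descriptors.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match points between the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the relative camera pose from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate a sparse 3D point cloud (up to an unknown scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T   # homogeneous to Euclidean
      </preformat>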
      <p>Structured light technology is based on projecting a light pattern onto the surface of an object
and analysing the deformation of that pattern using a camera that records the scene. Distortions in
the pattern structure, resulting from interaction with the 3D geometry of the object’s surface, enable
precise determination of the surface shape. Structured light systems are based on the principle of
geometric triangulation, where the positions of the projector and camera are known, and point depth
is determined based on the angular difference between emitted and received rays. An advantage of
this technology is the ability to achieve very high measurement precision, often in the micrometre
range, and to generate dense point clouds with high spatial resolution. Thanks to this, structured
light techniques are widely used in applications requiring high precision, such as manufacturing,
industrial metrology, quality control, cultural heritage reconstruction, or biomedicine [17, 18, 19].
Despite many advantages, structured light systems also have limitations. The most important is
sensitivity to lighting conditions – bright ambient light can disturb the visibility of the pattern,
negatively impacting reconstruction quality. Additionally, scanning dark or shiny surfaces can lead
to data loss or artefacts in the 3D model. This technology also typically has a limited operating range,
making it best suited for short-distance measurements in controlled environments [20, 21, 22].</p>
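      <p>The triangulation principle behind structured light can be sketched in a few lines of Python.
This is a schematic example under assumed geometry (a hypothetical 20 cm projector-camera baseline
and example ray angles), not a model of any specific scanner.</p>
      <preformat>
# Structured-light depth by geometric triangulation (schematic).
import numpy as np

def triangulate_depth(baseline_m, proj_angle_rad, cam_angle_rad):
    """Depth of a surface point from the angular difference between
    the ray emitted by the projector and the ray observed by the
    camera; both angles are measured from the baseline."""
    alpha, beta = proj_angle_rad, cam_angle_rad
    gamma = np.pi - alpha - beta           # angle at the surface point
    # Law of sines gives the camera-to-point range; its component
    # perpendicular to the baseline is the depth.
    cam_range = baseline_m * np.sin(alpha) / np.sin(gamma)
    return cam_range * np.sin(beta)

# Example: 20 cm baseline, rays at 75 and 80 degrees from the baseline.
z = triangulate_depth(0.20, np.radians(75.0), np.radians(80.0))
print(f"depth ~ {z:.3f} m")                # ~0.450 m
      </preformat>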
      <p>Time-of-Flight (ToF) camera technology is a dynamically developing method of depth data
acquisition, based on measuring the travel time of light between the sensor and the object’s surface.
In classical ToF systems, the detector measures the time it takes for an emitted light pulse (usually in
the near-infrared range) to travel to the object and back. In practical versions, sinusoidally modulated
continuous light is most commonly used, and the distance is determined based on the phase shift
between the sent and received signal [23]. One of the key advantages of ToF cameras is the ability to
generate depth data in real-time, making them an attractive solution in mobile, robotic, and
interactive applications. Thanks to their compact design, they can be easily integrated with portable
devices such as smartphones or augmented reality systems [24]. ToF cameras show low dependency
on object texture and colour, allowing their use in visually diverse environments, including
uniformly coloured or poorly lit surfaces. However, this technology is not without limitations. High
sensitivity to photon noise and light interference can affect data quality, especially under intense
lighting or at long distances [23, 25]. An additional problem is the low spatial resolution compared to
other 3D acquisition methods, such as photogrammetry or structured light, which limits the ability to
reconstruct fine details [25]. Moreover, in the case of highly reflective or transparent objects, ToF
systems may produce distorted depth data, requiring additional correction algorithms [23].</p>
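      <p>The continuous-wave principle translates into a one-line range equation: distance is
proportional to the measured phase shift divided by the modulation frequency. The sketch below,
using an assumed 20 MHz modulation frequency, also shows the resulting ambiguity range at which
the phase wraps.</p>
      <preformat>
# Continuous-wave ToF ranging from phase shift (schematic).
import numpy as np

C = 299_792_458.0                      # speed of light [m/s]

def cw_tof_distance(phase_rad, f_mod_hz):
    """d = c * phi / (4 * pi * f_mod): the modulated light travels to
    the object and back, so the phase accrues over twice the distance."""
    return C * phase_rad / (4.0 * np.pi * f_mod_hz)

def ambiguity_range(f_mod_hz):
    """Maximum unambiguous distance: the phase wraps every 2*pi."""
    return C / (2.0 * f_mod_hz)

f_mod = 20e6                           # assumed 20 MHz modulation
print(cw_tof_distance(np.pi / 2.0, f_mod))   # ~1.87 m
print(ambiguity_range(f_mod))                # ~7.5 m
      </preformat>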
      <p>
        LiDAR technology is currently one of the most advanced solutions in the field of spatial
measurements. Its operating principle is based on the emission of light pulses, most often in the
near-infrared range, which, after reflecting from objects, return to the detector. The time of flight of the
pulse, measured with high precision, allows the distance to the object to be determined [26]. Thanks
to the very high frequency of pulse emission, LiDAR systems can generate millions of points per
second, creating extremely dense and accurate point clouds. An advantage of this technology is its
independence from lighting conditions and the ability to operate both day and night [23].
Additionally, LiDARs handle mapping complex, irregular shapes and structures well, including in
natural and urban environments. In recent years, there has been a miniaturisation of these systems
and their integration with mobile devices, significantly increasing the accessibility of this technology
for field applications [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The miniaturisation of LiDAR systems has enabled their integration with
portable devices (e.g., iPhone Pro, iPad Pro, some Android models), as well as with drones and
inspection robots [
        <xref ref-type="bibr" rid="ref5">5, 23</xref>
        ]. As a result, measurements can be taken in real time without the need for
heavy equipment. Despite many advantages, LiDAR technology is associated with higher equipment
costs, the need to process large amounts of data, and the requirement for precise calibration in more
complex measurement configurations.
      </p>
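      <p>For pulsed LiDAR, the range equation is simply half the round-trip time multiplied by the
speed of light. The timestamp in the sketch below is an illustrative value chosen to match the
sub-metre distances used later in this study, not a measurement.</p>
      <preformat>
# Pulsed LiDAR ranging (schematic).
C = 299_792_458.0                      # speed of light [m/s]

def pulse_distance(round_trip_s):
    """d = c * t / 2: the pulse travels to the target and back."""
    return C * round_trip_s / 2.0

# A ~3.3 ns round trip corresponds to roughly 0.5 m -- the timing
# precision scale needed at the 25-100 cm distances tested below.
print(pulse_distance(3.3e-9))          # ~0.495 m
      </preformat>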
    </sec>
    <sec id="sec-3">
      <title>3. LiDAR-Based Wound Assessment</title>
      <p>During crisis situations such as rescues after natural disasters, terrorist attacks, or on the battlefield,
it is crucial to assess wounds quickly, objectively, and contactlessly. Traditional methods of wound
assessment based on rulers and subjective staff judgment are time-consuming, prone to errors, and
not always feasible, especially in field conditions. In response to these needs, there is active
development of systems that use 3D scanning with LiDAR technology, often supported by artificial
intelligence and computer vision methods, which allow quick and precise reproduction of wound
structure without the need for physical contact.</p>
      <p>Before the technology can be deployed in the field, it has to be tested in labs and hospitals. Such
clinical trials were conducted by Liu et al. [27]. Using LiDAR technology implemented in devices such
as the iPhone 12 Pro or iPad Pro to measure the area of wounds, they compared its accuracy to
measurement results obtained using the ruler method and digital analyses done with ImageJ. The
results achieved with the app using LiDAR surpassed the traditional method for pressure injury
assessment. However, the authors underline that deep wounds could be mismeasured.</p>
      <p>
        Similarly, another limitation was pointed out in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], where surgical wounds smaller than 2.0
× 2.0 cm were excluded from the study due to the technical limitations of LiDAR in smartphone
devices. However, it was still possible to develop a ready-to-use app that measures the wound area.
The correlation between the results achieved by the app and ImageJ analysis was 0.995, which is a
promising result for practitioners, enabling them to measure wounds contactlessly and thus lowering
the risk of wound infection.
      </p>
      <p>Another example of LiDAR technology usage is the WounDAR platform (Wound Detection and
Assessment using Range and Vision) developed by Akeboshi et al. [28]. It is also available on
consumer-level mobile devices, which presents great potential for field operations. The method uses
a LiDAR sensor, an RGB camera, and deep neural networks for wound analysis – it not only focuses
on size but also considers wound localisation, segmentation, and tissue classification. The system was
developed and tested on photos of injuries at different stages and of various origins, including ulcers,
surgical wounds, and pressure ulcers. However, the results showed that, for the proposed platform,
the iPhone 12 LiDAR camera does not provide the sensor accuracy required to reach an acceptable
error level.</p>
      <p>The significance of depth measurement is highlighted in [29], where the authors presented a
system combining a deep learning model with LiDAR technology for assessing the area and depth of
burns—Burn Evaluation Network. Thanks to LiDAR sensors, a 3D map of the wound can be
reconstructed while preserving wound geometry. The data, including the depth of injuries, was
manually annotated by clinicians for further model training. The authors point out that 2D
measurements tend to underestimate wound area; therefore, it is crucial to include 3D measurements
in contactless wound sizing.</p>
      <p>In [30], a mobile LiDAR system (Time-of-Flight sensor) combined with an RGB camera was
proposed for wound assessment. The study utilised both synthetically generated images and images
captured from wound phantoms designed to replicate surgical wounds and ulcers. The Time-of-Flight
sensor was used to measure the distance to the skin surface, while the RGB camera provided
information about the shape and colour of the wounds. Although the authors developed a regression
model that achieved an error of less than 5% relative to the actual wound size, the method was not
evaluated in real-world clinical scenarios.</p>
      <p>LiDAR-based technologies, especially when combined with AI models, hold great promise
for innovation in contactless wound size assessment, particularly under challenging conditions.
However, issues such as the accurate measurement of deep wounds and poor lighting conditions still
need to be addressed. Moreover, these solutions must be validated in clinical settings using large,
real-world datasets.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experiment Description</title>
      <p>The aim of this experiment is to evaluate the accuracy and consistency of different 3D acquisition
strategies for capturing the geometry of a synthetic wound using a mobile LiDAR sensor. The study
was conducted with the iPhone 16 Pro Max equipped with a LiDAR scanner, in combination with the
Zappcha mobile application for real-time point cloud generation.</p>
      <p>To assess the influence of acquisition technique on the quality of wound reconstruction, a
total of nine point clouds were collected. The scanning process was performed in continuous mode by
rotating the device around the wound to capture its shape from all angles. Three distinct acquisition
methods were applied, each differing in the orientation and rotation path of the device relative to the
wound surface. Additionally, each method was executed at three different distances: 25 cm, 50 cm,
and 100 cm.</p>
      <p>The collected data was processed in CloudCompare, where key wound dimensions—length
(x), width (y), and depth (z)—were extracted and compared. The goal of the experiment is to
determine which combination of orientation and distance yields the most precise and reliable 3D
wound representation.</p>
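      <p>As a sketch of this extraction step: given a cropped wound point cloud exported from the
scanner, the three dimensions can be estimated as axis-aligned extents. The file name and the
assumption that the cloud is already aligned with the wound axes are hypothetical; in this study
the measurements were taken interactively in CloudCompare.</p>
      <preformat>
# Estimating wound dimensions from a point cloud (illustrative).
import numpy as np

# Hypothetical ASCII export: N x 3 array of x, y, z coordinates [m].
points = np.loadtxt("wound_cloud.xyz")

mins = points.min(axis=0)
maxs = points.max(axis=0)
length_x, width_y, depth_z = maxs - mins

print(f"length (x): {length_x * 100:.2f} cm")
print(f"width  (y): {width_y * 100:.2f} cm")
print(f"depth  (z): {depth_z * 100:.2f} cm")
      </preformat>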
      <p>Figure 1 presents the synthetic wound used throughout the study, with the three key spatial
dimensions annotated: length (x), width (y), and depth (z). These measurements serve as the
reference values for evaluating the accuracy of the 3D reconstructions obtained through various
acquisition methods.</p>
      <p>Table 1 provides a visual summary of the three device orientation strategies applied during
the scanning process. The diagrams illustrate how the phone was positioned and rotated around the
wound in each method.</p>
      <p>In all methods, the scanning was performed in continuous mode by rotating the device around the
wound while keeping the phone orientation fixed. The methods differ in the angle and path of device
rotation relative to the wound:
• Method 1 holds the phone at a 45-degree angle to the wound.
• Method 2 keeps the phone perpendicular to the wound, without rotation around the target.
• Method 3 keeps the phone perpendicular to the wound, with rotation performed horizontally
around the target.</p>
      <p>These approaches were designed to evaluate how different spatial orientations during acquisition
affect the accuracy and completeness of the resulting 3D reconstructions.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>The main objective of this study was to assess the influence of different acquisition methods and
distances on the accuracy of 3D wound reconstruction. After collecting a total of nine point clouds
using the three predefined acquisition strategies at varying distances (25 cm, 50 cm, and 100 cm), the
resulting data were analysed using CloudCompare software.</p>
      <p>The point clouds obtained during the study are presented in Table 2.</p>
      <p>The key spatial dimensions of the wound—length (x), width (y), and depth (z)—were
extracted for each configuration. The results were then compared against the reference
measurements obtained directly from the physical wound model. This comparison allows for an
evaluation of how closely each 3D reconstruction matches the true dimensions and which
configuration yields the most reliable results.</p>
      <p>Table 3 presents the dimensional values (in centimeters) extracted from each of the nine
acquired point clouds. The results are organised by acquisition method and scanning distance. The
first row includes the manually obtained reference values, serving as the ground truth for all
subsequent comparisons.</p>
      <sec id="sec-5-1">
        <title>Method 1</title>
      </sec>
      <sec id="sec-5-2">
        <title>Method 2</title>
      </sec>
      <sec id="sec-5-3">
        <title>Method 3</title>
      </sec>
      <sec id="sec-5-4">
        <title>Method 1</title>
      </sec>
      <sec id="sec-5-5">
        <title>Method 2</title>
      </sec>
      <sec id="sec-5-6">
        <title>Method 3</title>
      </sec>
      <sec id="sec-5-7">
        <title>Method 1</title>
      </sec>
      <sec id="sec-5-8">
        <title>Method 2</title>
      </sec>
      <sec id="sec-5-9">
        <title>Method 3</title>
        <p>The results in Table 3 indicate that none of the tested acquisition methods produced a perfect
match with the reference measurements across all three dimensions. However, certain trends can be
observed:
• Method 1 generally produced stable length (x) measurements, especially at 25 cm and 50 cm,
though some overestimation occurred at 100 cm.
• Method 2 yielded relatively consistent results across distances, with moderate accuracy in all
three dimensions. Notably, depth (z) was underestimated in all cases.
• Method 3 showed the greatest variability, especially at 100 cm, where all dimensions deviated
significantly from the reference, particularly width (y).</p>
        <p>Overall, the 25 cm distance yielded the most accurate and visually interpretable results across
methods, likely due to higher point density and better surface detail capture.
Among all scans, the wound was most clearly visible and easiest to interpret in the point clouds
obtained using Method 1 and Method 3 at 25 cm.</p>
        <p>Figure 2 presents the measured values of the wound's length (x), width (y), and depth (z) obtained
from 3D point clouds acquired using different methods and distances. Each bar represents a specific
acquisition configuration, and the red dashed line indicates the manually measured reference value
for that dimension.</p>
        <p>The results presented in Figure 2 show that most acquisition methods led to minor underestimation
or overestimation of the wound's actual dimensions. Notably, Method 3 at a distance of 100 cm
resulted in the largest deviations in both length and width. In contrast, measurements taken at a
distance of 25 cm were consistently closer to the reference values. Among these, Method 1 and
Method 3 yielded the most accurate results, suggesting that closer proximity to the wound, combined
with specific device orientations, positively influences dimensional accuracy. Method 2 exhibited
moderate stability across all distances but did not outperform the others at any point.</p>
        <p>To quantify these deviations, the relative error was computed for each dimension:</p>
        <p>Relative Error = (Measured Value − Reference Value) / Reference Value × 100 %   (1)</p>
        <p>As illustrated in Figure 3, the depth dimension (z) was generally the worst reconstructed
across all methods and distances. The width (y) showed considerable variability, particularly in the
case of Method 1 at 100 cm, where the error reached its peak. The lowest relative errors were
observed for Method 1 and Method 3 at a scanning distance of 25 cm, indicating that close-range
acquisition not only improves measurement accuracy but also minimizes variability. In contrast,
Method 3 at 100 cm consistently produced high relative errors across all dimensions, demonstrating
the limitations of distant scanning with that orientation.</p>
        <p>Total Relative Error = (|x<sub>m</sub> − x<sub>r</sub>| / x<sub>r</sub> + |y<sub>m</sub> − y<sub>r</sub>| / y<sub>r</sub> + |z<sub>m</sub> − z<sub>r</sub>| / z<sub>r</sub>) × 100 %   (2)</p>
        <p>where x<sub>m</sub>, y<sub>m</sub>, z<sub>m</sub> are the values of the wound's length, width, and depth obtained from the
3D point cloud (measured values), and x<sub>r</sub>, y<sub>r</sub>, z<sub>r</sub> are the manually measured reference values.</p>
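        <p>A worked example of equations (1) and (2), using assumed numbers rather than the study's
data: a reference wound of 5.0 × 3.0 × 1.0 cm against one hypothetical reconstruction.</p>
        <preformat>
# Equations (1) and (2) on assumed example values.
import numpy as np

reference = np.array([5.0, 3.0, 1.0])   # x_r, y_r, z_r [cm]
measured  = np.array([5.2, 2.7, 0.8])   # x_m, y_m, z_m [cm]

# Equation (1): signed relative error per dimension.
rel_err = (measured - reference) / reference * 100.0
print(rel_err)                          # [  4. -10. -20.] %

# Equation (2): total relative error sums the absolute per-axis errors.
total_err = np.sum(np.abs(measured - reference) / reference) * 100.0
print(total_err)                        # 34.0 %
        </preformat>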
        <p>The heatmap in Figure 4 provides a comprehensive overview of total reconstruction error
across all acquisition strategies. The best-performing configuration was Method 3 at 25 cm, closely
followed by Method 1 at the same distance. These configurations yielded the lowest cumulative
errors in all three dimensions. Conversely, Method 1 at 100 cm resulted in the highest total error,
primarily due to substantial overestimation of wound width. Overall, the data strongly indicate that
shorter acquisition distances lead to more precise and consistent 3D reconstructions, regardless of
the specific method employed.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>This study evaluated the accuracy of 3D wound reconstruction using the LiDAR sensor of the iPhone
16 Pro Max in combination with the Zappcha application. Three acquisition methods, each applied at
three distances, were tested by capturing point clouds of a synthetic wound. The dimensional
measurements extracted from these reconstructions were compared against reference values
obtained through manual measurement.</p>
      <p>The results clearly indicate that both the acquisition distance and the device orientation play
a significant role in the quality of 3D data. The most accurate and visually interpretable
reconstructions were obtained at a distance of 25 cm, particularly with Method 1 and Method 3. In
contrast, increased distance — especially 100 cm — significantly reduced reconstruction fidelity, often
leading to high relative errors, especially in wound width.</p>
      <p>Overall, close-range scanning combined with stable device orientation proved to be the most
effective strategy. These findings can guide future applications of smartphone-based 3D scanning in
medical simulation, training, or telemedicine. Further work will focus on optimizing the scanning
path and evaluating reconstruction quality in real-world clinical scenarios.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Future Works</title>
      <p>Future work should systematically test a more comprehensive array of methods and corresponding
parameters. Current findings should be expanded by conducting tests on a broader range of wound
types, anatomical locations, and patient populations, which would ensure the robustness of the data
collected. Furthermore, a comparison between different devices is also advised to assess the
universality of the results. Finally, developing a standardised procedure that would ensure the best
results with ease of performance should become a long-term goal.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Limitations</title>
      <p>This study has several important limitations that should be addressed. First, the tests were
conducted on a synthetic wound model. While using a prop is understandable, it does not reflect the
variety of real wound types or the characteristics of actual injuries. The experimental setup also
covered only a few of the possible configurations, which may not correspond to real-world
conditions: subtle changes in lighting, angle, movement speed, or distance can greatly affect the
outcome. Finally, all data come from a single device, which may limit the generalisability of the
results.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Crisan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pepe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Costantino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Herban</surname>
          </string-name>
          ,
          <article-title>From 3d point cloud to an intelligent model set for cultural heritage conservation</article-title>
          ,
          <source>Heritage</source>
          <volume>7</volume>
          (
          <year>2024</year>
          )
          <fpage>1419</fpage>
          -
          <lpage>1437</lpage>
          . URL: https://www.mdpi.com/2571-9408/7/3/68. doi:10.3390/heritage7030068.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Andersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Poulsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Pagola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Medhus</surname>
          </string-name>
          ,
          <article-title>Geophysical mapping and 3d geological modelling to support urban planning: A case study from vejle, denmark</article-title>
          ,
          <source>Journal of Applied Geophysics</source>
          <volume>180</volume>
          (
          <year>2020</year>
          )
          104130. URL: https://www.sciencedirect.com/science/article/pii/S0926985120300781. doi:10.1016/j.jappgeo.2020.104130.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ghasemieh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kashef</surname>
          </string-name>
          ,
          <article-title>3d object detection for autonomous driving: Methods, models, sensors, data, and challenges</article-title>
          ,
          <source>Transportation Engineering</source>
          <volume>8</volume>
          (
          <year>2022</year>
          )
          100115. URL: https://www.sciencedirect.com/science/article/pii/S2666691X22000136. doi:10.1016/j.treng.2022.100115.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehendale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Neoge</surname>
          </string-name>
          ,
          <article-title>Review on lidar technology</article-title>
          ,
          <source>Preprint, SSRN Electronic Journal</source>
          (
          <year>2020</year>
          ). URL: https://ssrn.com/abstract=3604309. doi:10.2139/ssrn.3604309. Preprint, not peer-reviewed.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Zollini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Marconi</surname>
          </string-name>
          ,
          <article-title>Evaluation of positioning accuracy using smartphone rgb and lidar sensors with the vidoc rtk rover</article-title>
          ,
          <source>Sensors</source>
          <volume>25</volume>
          (
          <year>2025</year>
          ). URL: https://www.mdpi.com/1424-8220/25/13/3867. doi:10.3390/s25133867.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Haleem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Javaid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. Pratap</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Suman</surname>
          </string-name>
          ,
          <article-title>Medical 4.0 technologies for healthcare: Features, capabilities, and applications</article-title>
          ,
          <source>Internet of Things and Cyber-Physical Systems</source>
          <volume>2</volume>
          (
          <year>2022</year>
          )
          <fpage>12</fpage>
          -
          <lpage>30</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2667345222000104. doi:10.1016/j.iotcps.2022.04.001.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ibrayev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ibrayeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Amanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tolenov</surname>
          </string-name>
          ,
          <article-title>Development of a service robot for hospital environments in rehabilitation medicine with lidar-based simultaneous localization and mapping</article-title>
          ,
          <source>International Journal of Advanced Computer Science and Applications</source>
          <volume>15</volume>
          (
          <year>2024</year>
          ). URL: http://dx.doi.org/10.14569/IJACSA.2024.01511102. doi:10.14569/IJACSA.2024.01511102.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kwon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-H.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <article-title>Smartphone-based lidar application for easy and accurate wound size measurement</article-title>
          ,
          <source>Journal of Clinical Medicine</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          6042. URL: https://doi.org/10.3390/jcm12186042. doi:10.3390/jcm12186042.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Sheshtar</surname>
          </string-name>
          , et al.,
          <article-title>Comparative analysis of lidar and photogrammetry for mobile-based mapping technologies</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>15</volume>
          (
          <year>2025</year>
          ). doi:10.3390/app15031085.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Maskeliūnas</surname>
          </string-name>
          , et al.,
          <article-title>Fusing lidar and photogrammetry for accurate 3d data</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>17</volume>
          (
          <year>2025</year>
          ). doi:10.3390/rs17030443.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Shan</surname>
          </string-name>
          , et al.,
          <article-title>Democratizing photogrammetry: an accuracy perspective</article-title>
          ,
          <source>Geo-spatial Information Science</source>
          (
          <year>2023</year>
          ). doi:10.1080/10095020.2023.2178336.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nielsen</surname>
          </string-name>
          , et al.,
          <article-title>Quantifying the influence of surface texture and shape on reconstruction accuracy in structure from motion</article-title>
          ,
          <source>Sensors</source>
          <volume>23</volume>
          (
          <year>2022</year>
          )
          178. doi:10.3390/s23010178.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>