<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Indoor Navigation: A Comparative Study of Traditional and Machine Learning Algorithms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Erisa Bekteshi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio Pascarelli</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Salento, Department of Engineering for Innovation</institution>
          ,
          <addr-line>Piazza Tancredi, 7, 73100 Lecce</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Indoor navigation has become a critical research area owing to its applications across multiple fields, including robotics, healthcare, and smart buildings. Compared with outdoor navigation, indoor settings pose special challenges such as the absence of GPS signals, complex layouts, and moving obstacles. This paper provides a detailed review of the main methods used for route finding in indoor spaces, including graph-based, probabilistic, and machine learning methods. We evaluate these algorithms in terms of accuracy, computational efficiency, scalability, and robustness across multiple indoor scenarios. The paper discusses the strengths and limitations of each approach and offers insights into future research directions in the field.</p>
      </abstract>
      <kwd-group>
        <kwd>Indoor navigation</kwd>
        <kwd>graph-based methods</kwd>
        <kwd>probabilistic methods</kwd>
        <kwd>machine learning methods</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Navigation is now an important task embedded in every aspect of our daily routines. It is
generally defined as the process of directing the movement of a vehicle, ship, aircraft, or person
from one place to another [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Among the different types of navigation, indoor navigation faces the greatest challenges,
owing to the absence of GPS signals, the complexity of indoor layouts, moving obstacles such as
people and furniture, and the consequent need for alternative technologies such as Wi-Fi, Bluetooth,
and sensors.
      </p>
      <p>
        Indoor navigation, i.e., localization, is the process of determining the location and orientation of a
person or object inside a building and guiding them to a place of interest within it. Applications
for this technology range from autonomous robots to assistive technologies for visually impaired
people to navigation in smart buildings [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        With satellite-based devices readily available for precise position data, outdoor navigation is
far easier. However, indoor navigation presents a unique set of challenges that call for a different
approach. For instance, while GPS performs exceptionally well outdoors, it may fail indoors
owing to factors such as signal attenuation, multipath effects, and dynamically changing layouts [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Thus, indoor navigation has received much interest, especially in smart
buildings, healthcare facilities, and commercial spaces. Because GPS-based localization fails
indoors, alternative solutions have been developed that use Wi-Fi fingerprinting, Bluetooth, and
sensors [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
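      <p>To make the Wi-Fi fingerprinting idea concrete, the sketch below implements k-nearest-neighbour matching over a toy RSSI database. The fingerprint values, access-point count, and coordinates are invented for the example, not taken from any system discussed here.</p>

```python
import math

# Hypothetical RSSI fingerprint database: location -> signal strengths (dBm)
# observed from three access points. Values are illustrative, not measured.
FINGERPRINTS = {
    (0, 0): [-40, -70, -80],
    (0, 5): [-55, -50, -75],
    (5, 0): [-70, -65, -45],
    (5, 5): [-80, -55, -50],
}

def locate(rssi, k=2):
    """Estimate position as the average of the k nearest fingerprints
    in RSSI space (Euclidean distance between signal-strength vectors)."""
    ranked = sorted(
        FINGERPRINTS.items(),
        key=lambda item: math.dist(item[1], rssi),
    )[:k]
    xs = [pos[0] for pos, _ in ranked]
    ys = [pos[1] for pos, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)

print(locate([-42, -68, -79], k=1))  # nearest fingerprint is at (0, 0)
```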
      <p>The aim of this study was to compare different algorithm types, summarize their pros and
cons, and identify the environments to which each solution is best suited. To address these
problems, indoor navigation systems were evaluated using a systematic approach that incorporates
advanced localization technologies and algorithmic frameworks, as outlined in the subsequent
section.</p>
      <p>6th International Conference Recent Trends and Applications in Computer Science and Information Technology</p>
      <p>∗ Corresponding author. † These authors contributed equally.</p>
      <p>erisa.bekteshi@unisalento.it (E. Bekteshi); claudio.pascarelli@unisalento.it (C. Pascarelli);
0000-0002-0678-979X (E. Bekteshi); 0000-0002-9854-7703 (C. Pascarelli)</p>
      <p>© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>This study adopts a narrative review approach to explore and synthesize the current state of
research concerning traditional and machine learning algorithms for indoor navigation. A narrative
review is a recognized methodology for providing a comprehensive and critical overview of a research
topic without the rigid procedural constraints typical of systematic reviews or bibliometric analyses.
It is particularly suited to fields where the literature is heterogeneous in terms of methodologies,
outcomes, and technological focus, as is the case for indoor navigation.</p>
      <p>
        The literature considered in this review was identified through searches conducted in major
academic databases, including Google Scholar, ScienceDirect, and Web of Science [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Keywords
such as “indoor navigation,” “traditional algorithms,” “SLAM,” “deep reinforcement learning,” and
“machine learning for localization” guided the selection process. No formal systematic protocol—
such as PRISMA guidelines or a predefined inclusion/exclusion matrix—was applied, given the
exploratory nature of the investigation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The objective was not to exhaustively catalog all
available studies but rather to capture the main trends, strengths, and limitations emerging from
significant and representative contributions to the field.
      </p>
      <p>
        While care was taken to prioritize peer-reviewed journal articles and recent conference
proceedings [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], this narrative review does not claim to be exhaustive or to eliminate all potential
selection biases. Instead, it aims to offer an informed, critical, and structured discussion of the topic
based on a purposive selection of relevant literature, in line with the objectives and constraints of
narrative reviews.
      </p>
      <p>Through this methodological lens, the study seeks to compare the traditional indoor navigation
methods—such as SLAM-based approaches and classical path-planning algorithms—with more recent
machine learning-based techniques, particularly deep reinforcement learning (DRL), highlighting
the relative advantages, limitations, and future research directions.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <p>
        To achieve indoor navigation that is as seamless as outdoor navigation, the integration of advanced
technologies is essential. The primary methods adopted are Wi-Fi-based positioning systems, LiDAR,
and visual and deep-learning-enhanced SLAM (Simultaneous Localization and Mapping). Overall, these
technologies increase the accuracy, robustness, and adaptability of systems in indoor environments,
improving the overall user experience [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        To enhance the user experience, these improvements focus on accuracy, robustness, and
adaptability in complex indoor environments. For instance, many 3D deep learning methods today
aim to leverage technical progress from robotics and autonomous driving to consume less energy
and perform in real-time by processing raw point cloud data [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
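      <p>A common first step in such real-time point-cloud pipelines is spatial downsampling before any network consumes the data. The pure-Python voxel-grid sketch below, using invented points and no external library, keeps one representative point per voxel to bound the input size.</p>

```python
def voxel_downsample(points, voxel=0.5):
    """Keep one representative (x, y, z) point per voxel cell so downstream
    processing sees a bounded number of points. Pure-Python sketch; real
    pipelines use optimized libraries for this step."""
    buckets = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        buckets.setdefault(key, (x, y, z))  # first point in each voxel wins
    return list(buckets.values())

cloud = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (1.0, 1.0, 0.0)]
print(len(voxel_downsample(cloud)))  # 2: the first two points share a voxel
```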
      <p>
        Deep neural networks (DNNs) have excelled at understanding and extracting high-level
information from unconventional datasets such as point clouds, enabling tasks
such as object detection, semantic segmentation, and scene reconstruction [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Multimodal fusion
techniques (e.g., BubblEX) integrate information across multiple modalities in point cloud data to
enhance feature learning and to clarify how neighboring points contribute to the feature
extraction process.
      </p>
      <p>These technologies serve as the foundation for the algorithmic approaches compared in the next
section, ranging from traditional SLAM to data-driven DRL methods.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Algorithms</title>
      <p>Indoor navigation presents several challenges; nonetheless, certain algorithms can assist in
overcoming these obstacles. These algorithms are categorized into traditional and deep reinforcement
learning (DRL)-based algorithms.</p>
      <sec id="sec-4-1">
        <title>4.1. Traditional Algorithms</title>
        <p>
          Traditional methods include SLAM, global planning, and local planning. SLAM (Simultaneous
Localization and Mapping) is one of the most common approaches employed in robotic indoor
applications, where algorithms create maps of the environment and localize a robot at the same
time, using the position information gathered by LiDAR, cameras, Wi-Fi, or Ultra-Wideband (UWB)
[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. For instance, LiDAR SLAM is recognized for its accuracy and real-time data processing,
but it can struggle in enclosed environments because of its cost and its susceptibility to
reflective surfaces [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
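          <p>The map-building half of this loop can be sketched in a few lines. The hypothetical helper below converts one 2D range scan into occupied grid cells, the raw material from which SLAM systems assemble occupancy maps; pose estimation and loop closure, the harder parts of SLAM, are deliberately omitted.</p>

```python
import math

def scan_to_cells(pose, ranges, angle_step, resolution=1.0):
    """Convert a 2D range scan into occupied grid cells.

    pose: (x, y, heading) of the robot; ranges: one distance per beam, with
    beams spaced angle_step radians apart. Returns the set of integer grid
    cells hit by the beams. A full SLAM system would also mark the free
    space along each beam and refine the pose estimate; this sketch shows
    only the mapping step.
    """
    x, y, heading = pose
    cells = set()
    for i, r in enumerate(ranges):
        angle = heading + i * angle_step
        hx = x + r * math.cos(angle)
        hy = y + r * math.sin(angle)
        cells.add((int(hx // resolution), int(hy // resolution)))
    return cells

# Robot at the origin facing +x, two beams 90 degrees apart, both hitting
# obstacles 3 m away: cells (3, 0) and (0, 3) are marked occupied.
print(scan_to_cells((0.0, 0.0, 0.0), [3.0, 3.0], math.pi / 2))
```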
        <p>
          Visual SLAM, using camera systems, is more affordable and widely used in aerial vehicles and
mobile robots, but can suffer from reduced performance in low light or with reflective surfaces
[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Global planning involves creating a complete map of the environment and then calculating
the optimal path from start to finish, while local planning focuses on making decisions based on the
immediate surroundings of the robot [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. These methods are often computationally intensive and
may not perform well in dynamic environments. In this manner, SLAM algorithms require significant
processing power to handle sensor data and update the map in real-time, and global planning needs a
complete and accurate map, which is difficult to maintain in changing environments [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>
          Traditional autonomous navigation often employs Simultaneous Localization and Mapping
(SLAM) to build a map of the environment while simultaneously estimating the robot’s pose within
that map. Algorithms like Karto-SLAM, which is based on graph optimization, are used in this process.
Global planning then uses this map to find an optimal route from a starting point to a goal, often
using algorithms like Dijkstra’s algorithm as implemented in the Navfn planner. Local planning,
such as with the Dynamic Window Approach (DWA) or Timed-Elastic-Band (TEB), then adjusts this
global plan in real-time to avoid obstacles and account for dynamic changes in the environment [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
In contrast, DRL-based methods replace these individual components with a single agent that learns
to navigate directly from sensor inputs to motor outputs, effectively learning a navigation policy.
        </p>
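          <p>As a minimal sketch of the global-planning step, the following implements Dijkstra's algorithm on a small occupancy grid, in the spirit of the Navfn planner mentioned above. The grid, unit step cost, and 4-connectivity are illustrative assumptions, not the planner's actual implementation.</p>

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = blocked).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]           # min-heap of (cost so far, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:              # reconstruct path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1            # uniform cost per move
                if nd < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = nd
                    came_from[(nr, nc)] = cell
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))  # routes around the wall in row 1
```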
      </sec>
      <sec id="sec-4-2">
        <title>4.1.1. Strengths and limitations of traditional algorithms</title>
        <p>
          Traditional path-planning algorithms, such as the Dynamic Window Approach (DWA) and the Timed
Elastic Band (TEB), offer several strengths in indoor navigation. They excel in path planning
and efficiency: DWA provides high temporal efficiency and shorter routes, particularly in
environments where line-of-sight (LOS) conditions are sufficient, and it can compute paths
quickly in simpler environments [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
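        <p>The DWA sampling idea can be illustrated with a toy one-step planner: sample velocity and turn-rate pairs, simulate each forward, reject samples that violate a clearance threshold, and score the rest by progress toward a goal. The sample set, clearance radius, and goal position below are invented for the example; real DWA also restricts samples to the robot's acceleration window and scores heading, clearance, and speed jointly.</p>

```python
import math

def dwa_step(pose, obstacles, dt=1.0):
    """One heavily simplified DWA iteration: sample (velocity, turn-rate)
    pairs, roll each forward for dt seconds, discard samples ending within
    0.5 m of an obstacle, and keep the sample closest to the goal at (10, 0).
    """
    x, y, heading = pose
    goal = (10.0, 0.0)
    best, best_score = None, -math.inf
    for v in (0.5, 1.0, 1.5):                   # candidate speeds (m/s)
        for w in (-0.5, 0.0, 0.5):              # candidate turn rates (rad/s)
            nh = heading + w * dt
            nx = x + v * math.cos(nh) * dt
            ny = y + v * math.sin(nh) * dt
            if any(math.dist((nx, ny), ob) < 0.5 for ob in obstacles):
                continue                        # sample would get too close
            score = -math.dist((nx, ny), goal)  # progress toward the goal
            if score > best_score:
                best, best_score = (v, w), score
    return best

# Driving straight at full speed would hit the obstacle, so the winning
# sample keeps full speed but turns away from it.
print(dwa_step((0.0, 0.0, 0.0), obstacles=[(1.5, 0.0)]))  # (1.5, -0.5)
```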
        <p>
          Another strength of traditional algorithms lies in their safety features: TEB can generate a
route with the fewest collisions while maintaining safe distances from obstacles, making
it highly effective in static or well-mapped environments [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          From an implementation point of view, these algorithms are easier to implement than AI-based
algorithms, as they do not require extensive training data or high computational resources, ensuring
predictable behavior in structured scenarios [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>
          They have also been used effectively with pre-existing maps, resulting in reliable performance
when precise environmental data are available [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>These strengths make traditional methods ideal for controlled environments, but their strict
dependence on static maps and limited adaptability in dynamic environments highlight the need
for complementary approaches, such as machine learning, in more complex scenarios.</p>
        <p>
          Despite these strengths, traditional algorithms like DWA and TEB have limitations in real-world
environments. One of the most important is limited environmental adaptability: these
algorithms perform poorly in dynamic environments, struggle with unpredictable obstacles (e.g.,
sudden pedestrian movement), and do not generalize well to new situations [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>
          Sensor dependencies further undermine reliability: these algorithms depend
heavily on sensor precision and accuracy, wheel odometry accumulates errors due to slipping,
and LiDAR suffers in featureless corridors (the “corridor effect”), necessitating redundant sensor arrays to
maintain line-of-sight (LOS) conditions [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>Notably, performance degrades in sufficiently complex scenarios: for example, DWA’s
collision rates increase by roughly 40% in cluttered environments, while TEB’s
computational latency grows exponentially with the number of obstacles.</p>
        <p>
          These algorithms do not learn; they remain inflexible to changes in their environment unless
manually recalibrated. Compounding these problems is an infrastructure burden: site-specific
pre-mapping and regular upkeep increase deployment costs by a factor of 2–3 compared with
data-driven alternatives. These constraints explain why modern systems
are moving toward hybrid architectures that merge the interpretability of traditional methods with the
flexibility of AI to balance stability and adaptability [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ][
          <xref ref-type="bibr" rid="ref17">17</xref>
          ][
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Similarly,
[
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] observed TEB’s struggles with actuator constraints in maritime HIL testing, further motivating
the exploration of adaptive learning methods like DRL.
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2. Deep Reinforcement Learning Algorithms</title>
        <p>
          DRL-based approaches employ agents that acquire optimal navigation policies through interaction
with the environment, providing adaptability to dynamic changes and intricate circumstances. In
mobile robotics, a Deep Reinforcement Learning (DRL) agent can acquire navigation skills within a
warehouse setting by receiving feedback, either rewards or penalties, contingent upon its actions, such
as advancing, turning, or halting [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. This enables the robot to adjust to environmental alterations,
including new impediments or layout modifications, without requiring explicit reprogramming [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
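        <p>The reward-driven loop just described can be made concrete with tabular Q-learning on a toy corridor. The environment, rewards, and hyperparameters below are illustrative assumptions; a DRL system would replace the table with a neural network mapping raw sensor inputs to action values.</p>

```python
import random

# Toy 1-D corridor: cells 0..4, reward +10 for reaching cell 4, -1 per step.
random.seed(0)
N, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}  # action: step left/right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            if random.random() < eps:                     # explore
                a = random.choice((-1, 1))
            else:                                         # exploit estimates
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)                # walls clamp moves
            r = 10.0 if s2 == GOAL else -1.0
            nxt = 0.0 if s2 == GOAL else max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * nxt - q[(s, a)])
            s = s2

train()
policy = [max((-1, 1), key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)  # greedy policy: move right (+1) from every cell
```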
        <p>
          This adaptability is especially beneficial in intricate situations where conventional rule-based
navigation systems may struggle, such as congested surroundings or regions with erratic human
behavior [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ][
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Imagine a robot tasked with delivering packages across a
crowded office building. A robot trained with deep reinforcement learning
could autonomously avoid collisions with other robots it crosses paths with, navigate
through narrow hallways, or even prioritize delivery requests by urgency, without
requiring explicit programming for each new case it encounters.
        </p>
        <p>This is distinct from classic approaches that require extensive manual tuning and reprogramming
whenever the environment changes. Furthermore, DRL algorithms can utilize transfer learning
approaches to expedite the learning process in novel situations. A robot taught to navigate one
warehouse can swiftly adjust to a different layout by refining its existing policies instead of starting
from the beginning.</p>
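        <p>A rough sketch of this warm-start idea, using tabular Q-learning as a hypothetical stand-in for DRL (the corridor "layouts", rewards, and names are all invented): the value table learned in one warehouse seeds training in a second layout with a different goal, instead of starting from a blank table. Real systems transfer neural network weights in the same spirit.</p>

```python
import random

N = 6  # corridor length shared by both hypothetical layouts

def train(q, goal, episodes, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D corridor with the goal at `goal`."""
    for _ in range(episodes):
        s = 0
        while s != goal:
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 10.0 if s2 == goal else -1.0
            nxt = 0.0 if s2 == goal else max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * nxt - q[(s, a)])
            s = s2
    return q

random.seed(1)
blank = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
q_a = train(dict(blank), goal=5, episodes=400)  # learn layout A from scratch
q_b = train(dict(q_a), goal=4, episodes=50)     # fine-tune for layout B
policy_b = [max((-1, 1), key=lambda act: q_b[(s, act)]) for s in range(4)]
print(policy_b)  # still drives right toward the relocated goal
```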
      </sec>
      <sec id="sec-4-4">
        <title>4.2.1. Strengths and limitations of machine learning algorithms</title>
        <p>
          Deep reinforcement learning algorithms offer several strengths in indoor navigation systems,
such as environmental adaptability: they learn optimal navigation policies through continuous
interaction with both static and dynamic environments without relying on pre-existing maps or
precision sensors [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          A distinguishing feature of these algorithms is their ability to integrate and execute complex
decision-making tasks through trial and error, using policies, rewards, and
value functions to maximize performance, with approaches like Soft Actor-Critic (SAC) showing
particularly efficient sample usage and lower collision rates compared to traditional methods [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          A further advantage over traditional algorithms lies in performance: SAC, for example,
demonstrates efficient sample usage and lower collision rates, offers better computational
efficiency, achieves higher rewards in testing scenarios, and can function effectively in mapless
environments [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          The configurability of DRL architectures allows for both model-free and model-based
configurations, along with the capability to handle continuous or discrete action spaces, giving
developers a great deal of flexibility when devising their architectures to meet
specific application requirements. These qualities allow DRL to perform well in complex, real-world
navigation tasks, especially when environmental unpredictability is a major factor [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
        <p>
          However, despite their power and adaptability, DRL algorithms face major
obstacles, including extensive simulated trial-and-error training, unavoidable collisions during the
training process, and high computational resource requirements, all of which make them
difficult to deploy in real-world settings [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ][
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          Moreover, their heavy reliance on large, heterogeneous datasets and the need to collect
sensitive location-related information raise privacy and security issues that have drawn the attention
of scholars and practitioners [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. To alleviate these limitations, researchers recommend
hybrid systems that integrate DRL with traditional methods, federated learning
for privacy preservation, and continual system updates to ensure accuracy [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ][
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. DRL is optimal for
systems operating in dynamic environments with movable obstacles or where map information
is unreliable [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Continued research into privacy-preserving technologies and
hybrid methods will likely keep improving the applicability and robustness of
DRL-based navigation systems as the field matures.
        </p>
        <p>A summarized comparison of the indoor navigation algorithms discussed is presented in
Table 1.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>Indoor navigation remains a complex and evolving field, demanding solutions that can navigate
challenges such as dynamic environments, occluded signals, and diverse spatial layouts. This study
has compared traditional algorithms—such as SLAM, DWA, and TEB—with machine
learning-based approaches, particularly Deep Reinforcement Learning (DRL), to highlight their respective
strengths and limitations. Traditional methods excel in controlled, static environments with high
predictability, offering low-cost, easy-to-implement solutions with high interpretability. However,
their dependency on precise mapping and limited adaptability restricts their effectiveness in
real-world, dynamic scenarios.</p>
      <p>In contrast, DRL-based algorithms demonstrate significant advantages in adaptability and
autonomous decision-making by learning from real-time interactions with the environment. Their
ability to operate without pre-mapped data and adapt policies through trial-and-error makes them
highly suitable for complex and unpredictable settings. Nevertheless, DRL faces considerable
implementation challenges, including high training costs, computational demands, and privacy
concerns related to data acquisition.</p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption>
          <p>Comparison of indoor navigation algorithms.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Approach</th>
              <th>Best-suited setting</th>
              <th>Main limitations</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>LiDAR SLAM</td>
              <td>Static environments</td>
              <td>Costly; struggles with reflections</td>
            </tr>
            <tr>
              <td>Visual SLAM</td>
              <td>Affordable robotics</td>
              <td>Fails in low light</td>
            </tr>
            <tr>
              <td>Global planning</td>
              <td>Large buildings</td>
              <td>Requires premapping</td>
            </tr>
            <tr>
              <td>DRL</td>
              <td>Dynamic environments</td>
              <td>Needs large training dataset</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
      <p>The findings emphasize the growing need for hybrid navigation architectures that blend the
robustness and interpretability of traditional algorithms with the flexibility and learning capabilities
of AI-based methods. As indoor navigation continues to expand across domains such as smart
infrastructure, robotics, and assistive technologies, future research should focus on optimizing hybrid
solutions, improving training efficiency, and addressing data privacy through federated learning and
secure data management practices. Such advancements will be critical for enabling reliable, scalable,
and context-aware indoor navigation systems in the years to come.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used X-GPT-4 and Gramby for grammar
and spelling checking. After using these tools/services, the author(s) reviewed and edited the content
as needed and take(s) full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Darken</surname>
            ,
            <given-names>Rudolph</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Peterson</surname>
            ,
            <given-names>Barry</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Spatial Orientation, Wayfinding, and Representation</article-title>
          .
          <source>Handbook of Virtual Environments</source>
          . doi: 10.1201/b17360-24
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Klein</surname>
            <given-names>LC</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braun</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendes</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pinto</surname>
            <given-names>VH</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martins</surname>
            <given-names>FN</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Oliveira</surname>
            <given-names>AS</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wörtche</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Costa</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lima</surname>
            <given-names>J.</given-names>
          </string-name>
          <article-title>A Machine Learning Approach to Robot Localization Using Fiducial Markers in RobotAtFactory 4.0 Competition</article-title>
          . Sensors.
          <year>2023</year>
          ;
          <volume>23</volume>
          (
          <issue>6</issue>
          ):
          <fpage>3128</fpage>
          . https://doi.org/10.3390/s23063128
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Abacı</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Seçkin</surname>
            ,
            <given-names>A. Ç.</given-names>
          </string-name>
          .
          <article-title>Mobile Robot Positioning with Wireless Fidelity Fingerprinting and Explainable Artificial Intelligence</article-title>
          .
          <source>Sensors</source>
          <year>2024</year>
          ,
          <volume>24</volume>
          , 7943. https://doi.org/10.3390/s24247943
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Cha</surname>
            <given-names>K-J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            <given-names>J-B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ozger</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            <given-names>W-H</given-names>
          </string-name>
          .
          <article-title>When Wireless Localization Meets Artificial Intelligence: Basics, Challenges, Synergies, and Prospects</article-title>
          .
          <source>Applied Sciences</source>
          .
          <year>2023</year>
          ;
          <volume>13</volume>
          (
          <issue>23</issue>
          ):
          <fpage>12734</fpage>
          . https://doi.org/10.3390/app132312734
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Ellegaard</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wallin</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>The bibliometric analysis of scholarly production: How great is the impact?</article-title>
          <source>Scientometrics</source>
          ,
          <volume>105</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1809</fpage>
          -
          <lpage>1831</lpage>
          . https://doi.org/10.1007/s11192-015-1645-z
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Öztürk</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kocaman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Kanbach</surname>
            ,
            <given-names>D.K.</given-names>
          </string-name>
          <year>2024</year>
          :
          <article-title>How to design bibliometric research: an overview and a framework proposal</article-title>
          .
          <source>Review of Managerial Science</source>
          <volume>18</volume>
          ,
          <fpage>3333</fpage>
          -
          <lpage>3361</lpage>
          . https://doi.org/10.1007/s11846-024-00738-0
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Millard</surname>
            ,
            <given-names>Tanya</given-names>
          </string-name>
          , Anneliese Synnot, Julian Elliott, Sally Green,
          <string-name>
            <surname>McDonald</surname>
            ,
            <given-names>Steve</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Turner</surname>
            ,
            <given-names>Tari</given-names>
          </string-name>
          .
          <article-title>Feasibility and acceptability of living systematic reviews: results from a mixed-methods evaluation</article-title>
          .
          <source>Syst Rev</source>
          .
          <year>2019</year>
          ;
          <volume>8</volume>
          (
          <issue>1</issue>
          ):
          <fpage>325</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Kitchenham</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Charters</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Guidelines for performing systematic literature reviews in software engineering</article-title>
          .
          <source>EBSE Technical Report EBSE-2007-001</source>
          . Keele University and Durham University joint report,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Khan</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plopski</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fujimoto</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kanbara</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jabeen</surname>
            <given-names>G</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kato</surname>
            <given-names>H</given-names>
          </string-name>
          .
          <article-title>Surface remeshing: A systematic literature review of methods and research directions</article-title>
          .
          <source>IEEE Trans Vis Comput Graphics</source>
          <year>2020</year>
          ;
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Khan</surname>
          </string-name>
          , Z. Cheng, H. Uchiyama et al.,
          <article-title>Recent advances in vision-based indoor navigation: A systematic literature review</article-title>
          .
          <source>Computers &amp; Graphics</source>
          (
          <year>2022</year>
          ). https://doi.org/10.1016/j.cag.2022.03.005
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Arce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Solano</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Beltrán</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          .
          <article-title>A Comparison Study between Traditional and Deep-Reinforcement-Learning-Based Algorithms for Indoor Autonomous Navigation in Dynamic Scenarios</article-title>
          .
          <source>Sensors</source>
          <year>2023</year>
          ,
          <volume>23</volume>
          , 9672. https://doi.org/10.3390/s23249672
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Afif</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ayachi</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Said</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          et al.
          <article-title>An indoor scene recognition system based on deep learning evolutionary algorithms</article-title>
          .
          <source>Soft Comput</source>
          <volume>27</volume>
          ,
          <fpage>15581</fpage>
          -
          <lpage>15594</lpage>
          (
          <year>2023</year>
          ). https://doi.org/10.1007/s00500-023-09177-7
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Matrone</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paolanti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frontoni</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pierdicca</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Enhancing explainability of deep learning models for point cloud analysis: a focus on semantic segmentation</article-title>
          .
          <source>International Journal of Digital Earth</source>
          ,
          <volume>17</volume>
          (
          <issue>1</issue>
          ). https://doi.org/10.1080/17538947.2024.2390457
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          et al.
          <article-title>An improved SLAM algorithm for substation inspection robot based on the fusion of IMU and visual information</article-title>
          .
          <source>Energy Inform</source>
          <volume>7</volume>
          ,
          <issue>86</issue>
          (
          <year>2024</year>
          ). https://doi.org/10.1186/s42162-024-00390-8
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>SY.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>CM.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liong</surname>
            ,
            <given-names>ST.</given-names>
          </string-name>
          et al.
          <article-title>AGV indoor localization: a high fidelity positioning and map building solution based on drawstring displacement sensors</article-title>
          .
          <source>J Ambient Intell Human Comput</source>
          <volume>15</volume>
          ,
          <fpage>2277</fpage>
          -
          <lpage>2293</lpage>
          (
          <year>2024</year>
          ). https://doi.org/10.1007/s12652-024-04755-5
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Talaat</surname>
            ,
            <given-names>F.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>El-Shafai</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soliman</surname>
            ,
            <given-names>N.F.</given-names>
          </string-name>
          et al.
          <article-title>Intelligent wearable vision systems for the visually impaired in Saudi Arabia</article-title>
          .
          <source>Neural Comput &amp; Applic</source>
          (
          <year>2025</year>
          ). https://doi.org/10.1007/s00521-025-10987-z
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Stahlke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feigl</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kram</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ott</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seitz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mutschler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Data-driven Wireless Positioning</article-title>
          . In:
          <string-name>
            <surname>Mutschler</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Münzenmayer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uhlmann</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (eds)
          <source>Unlocking Artificial Intelligence</source>
          . Springer, Cham. https://doi.org/10.1007/978-3-031-64832-8_10
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>Mingyao.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Robot navigation based on multi-sensor fusion</article-title>
          .
          <source>Journal of Physics: Conference Series</source>
          ,
          <volume>2580</volume>
          , 012020. https://doi.org/10.1088/1742-6596/2580/1/012020
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Tornese</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polimeno</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pascarelli</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buccoliero</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carlino</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sansebastiano</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sebastiani</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Hardware-in-the-loop testing of a maritime autonomous collision avoidance system</article-title>
          .
          <source>Proceedings of MED22</source>
          . Fincantieri NexTech S.p.A. &amp; University of Salento
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Vision SLAM algorithm for wheeled robots integrating multiple sensors</article-title>
          .
          <source>PloS one</source>
          ,
          <volume>19</volume>
          (
          <issue>3</issue>
          ),
          <elocation-id>e0301189</elocation-id>
          . https://doi.org/10.1371/journal.pone.0301189
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Cha</surname>
            <given-names>K-J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            <given-names>J-B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ozger</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            <given-names>W-H</given-names>
          </string-name>
          .
          <article-title>When Wireless Localization Meets Artificial Intelligence: Basics, Challenges, Synergies, and Prospects</article-title>
          .
          <source>Applied Sciences</source>
          .
          <year>2023</year>
          ;
          <volume>13</volume>
          (
          <issue>23</issue>
          ):
          <fpage>12734</fpage>
          . https://doi.org/10.3390/app132312734
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>