<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Academic Journal of Science and Technology 14 (2025) 328-334. doi:10.54097/dnbnkp47.</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1109/ICRA57147.2024.10611530</article-id>
      <title-group>
        <article-title>Adaptive algorithm for visual positioning of UAVs in the local environment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kostiantyn Dergachov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Hurtovyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elshan Hashimov</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Azerbaijan Technical University</institution>
          ,
          <addr-line>H. Javid Ave., 25, Baku, AZ1073</addr-line>
          ,
          <country country="AZ">Azerbaijan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>12950</volume>
      <fpage>363</fpage>
      <lpage>377</lpage>
      <abstract>
        <p>This paper proposes an adaptive localization algorithm for unmanned aerial vehicles (UAVs) operating in indoor, GPS-denied environments. The approach is based on the analysis of visual features extracted from the surrounding environment and employs a cascade structure that integrates four classical local positioning techniques (Proximity, Centroid, Weighted Centroid, and Lateration) into a unified decision-making module. The system adaptively switches between these methods based on real-time signal characteristics and estimated positioning error. An embedded accuracy control mechanism monitors localization quality and triggers a transition to a map-based visual positioning mode when necessary, ensuring high robustness in dynamic or cluttered spaces. The algorithm is lightweight and does not require extensive pre-mapping or environmental calibration, making it well-suited for rapid deployment in unknown indoor areas. The proposed system was experimentally validated using the DJI Tello platform in real-world indoor settings. Results demonstrate that the method achieves reliable and accurate localization with minimal computational requirements, offering a practical solution for indoor UAV navigation, inspection, and monitoring tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>visual localization</kwd>
        <kwd>adaptive algorithm</kwd>
        <kwd>indoor positioning</kwd>
        <kwd>UAV</kwd>
        <kwd>Tello</kwd>
        <kwd>Proximity</kwd>
        <kwd>Lateration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Unmanned aerial vehicles (UAVs) have become increasingly important in a wide range of civilian and
industrial applications, including indoor inspection, surveillance, inventory tracking, search-and-rescue
operations, and infrastructure monitoring. A critical enabler for the autonomous operation of UAVs in
such tasks is the ability to navigate reliably without relying on external satellite-based systems like
GPS, which are often unavailable or unreliable in enclosed, obstructed, or densely built environments.</p>
      <p>Traditional navigation systems designed for outdoor use lose their effectiveness indoors due to signal
attenuation, multipath effects, and interference from walls and structures. To compensate for the
absence of GNSS, researchers have explored a variety of local positioning solutions such as
LiDAR-based mapping, visual SLAM, inertial odometry, and hybrid sensor fusion methods. Although these
approaches provide high precision, they come with significant limitations: LiDAR and depth sensors
are expensive and bulky; visual SLAM demands high computational power and detailed scene mapping;
deep learning–based methods require large annotated datasets and often lack real-time responsiveness
on low-power platforms.</p>
      <p>This study is motivated by the need to design a localization system that offers a practical trade-off
between accuracy, efficiency, and affordability. Our proposed solution is an adaptive visual positioning
algorithm that leverages classical signal-based localization techniques in a cascading fashion.
Specifically, the system integrates Proximity, Centroid, Weighted Centroid, and Lateration methods into
a single adaptive decision module capable of switching strategies dynamically based on real-time
signal characteristics and error thresholds. The lightweight nature of the proposed methods allows
for fast deployment and real-time operation on embedded platforms without the need for extensive
environmental calibration or pre-mapped datasets.</p>
      <p>To further enhance reliability, an error monitoring mechanism is embedded into the algorithm,
enabling the system to switch to a more advanced visual map-based localization mode if the accumulated
error exceeds an acceptable threshold. This hybrid approach ensures both adaptability in unknown
environments and sufficient precision in cluttered or dynamic indoor spaces.</p>
      <p>The algorithm has been implemented and tested on a low-cost UAV platform — DJI Tello — to
evaluate its performance in real-world indoor scenarios. Experimental results confirm that the system
maintains a high level of localization accuracy while keeping computational demands minimal, making
it well-suited for small-scale autonomous aerial robots.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>Current indoor positioning systems often use Wi-Fi fingerprinting, Visual SLAM, or sensor integration.
However, they require pre-training, complex calibration, or high computing power. The proposed
approach aims to provide acceptable positioning accuracy with minimal overhead by using classical
visual landmark-based methods with adaptive logic.</p>
      <p>Achieving reliable and accurate localization of unmanned aerial vehicles (UAVs) in indoor
environments, where satellite-based navigation systems are inaccessible, has been a long-standing research
challenge. Over the years, a variety of strategies have emerged, leveraging wireless infrastructure,
vision-based tracking, and sensor fusion to approximate or estimate UAV position in real time.</p>
      <p>
        One of the foundational techniques is Wi-Fi fingerprinting, introduced by Bahl and Padmanabhan
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which uses signal strength maps created from received signal strength indicators (RSSI) at various
known locations. This method enables low-cost localization using existing Wi-Fi networks and is still
widely used in commercial indoor navigation applications. However, it suffers from inherent limitations:
the requirement for extensive site surveys, sensitivity to environmental dynamics (furniture
rearrangement, human movement, and so on), and the need for frequent recalibration reduce its scalability and
adaptability to dynamic real-world conditions.
      </p>
      <p>
        In contrast, visual odometry and visual SLAM (Simultaneous Localization and Mapping) methods
have become a major focus in robotic navigation. MonoSLAM [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] was among the first systems to
perform real-time 3D tracking using a single camera. Subsequent advancements, such as Visual–LiDAR
Odometry and Mapping (V-LOAM) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], offered higher robustness and lower drift by combining camera
and LiDAR data. These methods have proven their value in structured indoor environments and
autonomous driving; however, they require high processing power, finely tuned sensor fusion, and
often do not scale well to low-cost UAVs with limited onboard computational capacity [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Several studies propose hybrid systems where navigation is achieved by combining multiple types of
sensory input. Kulik and Dergachev [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and later Dergachov et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], presented computer vision–based
navigation and autonomous landing algorithms that utilized environmental patterns. These techniques
are particularly effective when landmarks or floor patterns are clearly visible. Nonetheless, they often
rely on stable lighting and high camera resolution, both of which may vary during indoor flights.
Additional works explored object identification using polarization signals [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and real-time adaptive
control systems driven by swarm-based metaheuristics [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ], underlining the multidisciplinary nature
of modern UAV navigation.
      </p>
      <p>
        More recently, the integration of UAVs into networked systems has led to the development of
cooperative localization and communication strategies. Examples include LiFi-enabled swarm deployment [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
and onboard adaptive management of UAV clusters for autonomous task distribution in constrained
environments [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. These architectures are complex but open the door to large-scale aerial coordination
in GPS-denied contexts.
      </p>
      <p>
        As the demand for scalable and computationally light systems increases, attention has turned toward
methods that minimize environmental dependence and pre-configuration. Ivanenko and Petrov [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
Sydorenko and Hrytsenk [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], and Melnyk and Romanenko [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] proposed lightweight visual
landmark-based localization algorithms suitable for rapidly changing layouts and unknown environments. These
models do not require extensive offline training, which makes them attractive for time-critical and
resource-constrained missions.
      </p>
      <p>
        Machine learning–based approaches also continue to expand, with Bondarenko and Chernenko [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
demonstrating how visual cues can be interpreted by adaptive models to improve flight trajectory
corrections in response to environmental obstacles. Such systems balance adaptability with limited
computational overhead, providing a middle ground between deep SLAM and classical geometry-based
methods.
      </p>
      <p>
        Parallel to this, high-complexity AI models such as AGCosPlace [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ] use transformer-based
architectures to analyze spatial visual scenes. These methods can generalize across environments
with minimal domain-specific training and are promising in terms of scalability. Likewise, control
architectures based on control barrier functions (CBFs) have been used to manage UAV path constraints
under uncertainty [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]. Yet their computational intensity and hardware demands make them more
suited for research-grade UAV platforms than for embedded commercial drones [20, 21].
      </p>
      <p>Event-based odometry [22], visual-inertial navigation systems (VINS) [23], and adaptive SLAM
pipelines [24] represent a continued effort to optimize navigation accuracy in degraded visual conditions.
Each of these systems introduces improved robustness to motion blur, lighting changes, or textureless
areas but at the cost of increased complexity in parameter tuning and system calibration.</p>
      <p>Other researchers explored global-to-local hybrid strategies. For example, Wei et al. [25] combined
satellite imagery with local UAV sensor data to generate adaptive maps. Although useful for large-scale
or transition spaces (e.g., semi-outdoor environments), these methods lack the responsiveness needed
for fully enclosed dynamic spaces like warehouses or industrial interiors.</p>
      <p>Vasarhelyi et al. [26] provide a comprehensive survey of visual odometry techniques in GPS-denied
scenarios, emphasizing the need for scalable, hardware-agnostic approaches. However, the field
continues to lack simple, lightweight systems that can adaptively trade off accuracy and complexity in real
time without preloaded maps or neural models.</p>
      <p>In this context, the proposed solution — an adaptive cascade of classical localization algorithms
(Proximity, Centroid, Weighted Centroid, and Lateration) — seeks to bridge the gap between accuracy
and efficiency. Unlike heavyweight methods requiring map generation or neural inference, our approach
dynamically switches between fast geometric estimators depending on signal quality and accumulated
error. If the system detects significant drift or inconsistency, it triggers fallback to more robust visual
reference–based modules. This logic enables practical, scalable deployment of indoor navigation on
lightweight UAVs without complex setup or calibration procedures.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Purpose and objectives of the research</title>
      <p>The main goal of this research is to develop and experimentally verify an adaptive algorithm for visual
positioning of UAVs based on the cascade use of inexpensive methods.</p>
      <p>The main objectives of the research are:
1. Analyze classic methods of visual positioning.
2. Design a cascade algorithm with dynamic error estimation.
3. Implement a system based on the DJI Tello drone.
4. Evaluate positioning accuracy and processing efficiency.
5. Compare the results with other approaches (Fingerprinting, Visual SLAM).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>The proposed visual localization algorithm for UAVs is designed to operate in GPS-denied indoor
environments by sequentially applying a cascade of classical geometric methods based on
camera-captured landmarks. The algorithm leverages lightweight computation and adaptivity to dynamically
select the most appropriate localization technique depending on signal quality and environmental
complexity.</p>
      <sec id="sec-4-1">
        <title>4.1. Overview of the cascade structure</title>
        <p>This architecture offers a balanced trade-off between computational efficiency and localization accuracy.
By attempting the simplest methods first and escalating only when necessary, the system minimizes
latency and energy consumption, making it well-suited for real-time use on lightweight UAV platforms
with limited processing capacity.</p>
        <p>The principle of operation of the cascade mechanism for the drone’s perception of visual information
by the onboard camera is shown in Figure 1.</p>
        <p>In real time, image processing and recognition of objects around the UAV based on their texture, size,
and shape are performed. Preliminary localization is based on the use of classical methods, when the
system estimates the optimal location of the UAV among four methods used sequentially:
1. Proximity selects the closest object in distance from the surrounding objects, then calculates its
coordinates and assigns them as the current location of the UAV. The advantages of the method are
simplicity in software implementation, as well as speed of data processing, along with the ability to
work with a limited set of landmarks.</p>
        <p>2. Centroid is the process of finding the coordinates of the center point of a shape formed based on a
fixed number of landmarks. It is used when there are two or more objects around.</p>
        <p>3. Weighted Centroid is an improved version of the previous method, using weighting coefficients
that significantly affect the computed position in space by taking distance values into account.</p>
        <p>4. Lateration calculates the coordinates of the UAV position based on distances and sets of coordinates,
and the method works only when data is available for three or more landmarks. Higher positioning
accuracy is achieved by solving a system of nonlinear equations using the least squares method.</p>
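        <p>To make the cascade concrete, the four estimators above can be sketched in Python. The landmark representation as (x, y, distance) tuples and all function names are our illustrative assumptions, not the paper's implementation:</p>
        <preformat>
```python
import numpy as np

def proximity(landmarks):
    # Take the coordinates of the nearest landmark as the UAV position.
    nearest = min(landmarks, key=lambda m: m[2])
    return np.array(nearest[:2], dtype=float)

def centroid(landmarks):
    # Plain average of the landmark coordinates (needs two or more objects).
    pts = np.array([m[:2] for m in landmarks], dtype=float)
    return pts.mean(axis=0)

def weighted_centroid(landmarks):
    # Weights inversely proportional to distance, as described in the text.
    pts = np.array([m[:2] for m in landmarks], dtype=float)
    w = np.array([1.0 / max(m[2], 1e-6) for m in landmarks])
    return (pts * w[:, None]).sum(axis=0) / w.sum()

def lateration(landmarks):
    # Least-squares solution of the linearized range equations;
    # requires at least three landmarks.
    (x1, y1, d1) = landmarks[0]
    A, b = [], []
    for (xi, yi, di) in landmarks[1:]:
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```
        </preformat>
        <p>The linearization subtracts the range equation of the first landmark from each of the others, which removes the quadratic terms in the unknown position and leaves an overdetermined linear system.</p>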
        <p>During the flight mission, each of the methods is calculated several times every second without
delay. This allows the acquired values to be averaged and normalized when artifacts appear, caused by
a sharp change in lighting or an error in the display of pixel values. The system is quite flexible when
interacting with different sequences of captured objects, their number or sizes.</p>
        <p>Adaptive accuracy assessment is a cycle during which each of the presented methods provides its
calculation result, after which the error is estimated and compared. The algorithm stops working and
provides a result if the determined position of the UAV passes the acceptable verification threshold, or
is redirected to the next method in the cascade.</p>
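        <p>The adaptive accuracy-assessment cycle described above can be sketched as follows; the estimator ordering, the error model, and the 0.25 m default threshold are assumptions for illustration only, not values from the paper's code:</p>
        <preformat>
```python
def cascade_localize(landmarks, estimators, estimate_error, threshold=0.25):
    """estimators: callables ordered from cheapest (Proximity) to most
    accurate (Lateration); estimate_error: callable scoring a position."""
    for estimate in estimators:
        try:
            position = estimate(landmarks)
        except (ValueError, IndexError):
            continue  # not enough landmarks for this method
        err = estimate_error(position, landmarks)
        if err > threshold:
            continue  # accuracy check failed: escalate to the next method
        return position, err
    # No method passed the verification threshold: the caller falls back to
    # the map-based visual positioning module.
    return None
```
        </preformat>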
        <p>In the event that none of the methods offered an accurate result, or if there are not enough landmarks
or they are unreliable, the system is redirected to a backup current position localization module that
works on the basis of a pre-created map or landmark position records and uses pattern matching and
visual recognition methods.</p>
        <p>The advantages of the algorithm are adaptability to environmental conditions and dynamic selection
of the required method, which shows higher accuracy and is easier to implement, as it does not require
deep learning models, powerful computing resources or additional sensors, or extra calibration.</p>
        <p>The application on a small-sized quadcopter model such as the DJI Tello EDU shows high efficiency
and performance when used on platforms with limited hardware capabilities. Moreover, all this occurs
in conjunction with pre-processing of video frames, a process that also requires resources for calculation
and analysis.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Experimental setup and evaluation procedure</title>
        <p>To evaluate the performance of the baseline indoor positioning algorithms under controlled conditions,
a test rig was implemented using a DJI Tello drone equipped with a 720p camera. The experiment was
conducted in a closed 5x5 meter room with four visual markers mounted on the walls and stands, and a
1x1 meter reference grid painted on the floor. The entire setup was designed to reproduce a simplified
but realistic indoor positioning scenario, allowing for consistent evaluation of positioning accuracy
using different methods. Data on the UAV position, rotation, and object parameters served to calculate
the actual test results and to further calibrate the methods during development.</p>
        <p>The software stack included Python, the TelloPy API for communicating with the drone, and OpenCV
for image processing and marker detection. The drone was sequentially moved to predefined static
positions in the room. At each position, a set of positioning algorithms were independently applied to
estimate the drone’s location based on visual landmarks and environmental constraints. The estimation
procedure involved comparing the algorithmically estimated position with the known true location
determined using the reference floor grid.</p>
        <p>Modeling GPS-denied conditions and measuring the drone's operating space made it possible not only
to accurately compare actual measurements with the original calculation data, but also to compute the
localization error, defined as the Euclidean distance between the estimated and actual coordinates.</p>
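        <p>The error metric is straightforward to express in code; the aggregation helper below is an assumed convenience for summarizing runs over the test positions, not part of the paper's software:</p>
        <preformat>
```python
import math

def localization_error(estimated, actual):
    # Euclidean distance between the estimated position and the ground
    # truth read off the reference floor grid.
    return math.dist(estimated, actual)

def mean_error(estimates, truths):
    # Average localization error over a set of test positions.
    errors = [math.dist(e, t) for e, t in zip(estimates, truths)]
    return sum(errors) / len(errors)
```
        </preformat>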
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Research results</title>
      <p>To properly evaluate the proposed approach, a set of experiments was conducted in different situations
and with varying numbers of objects around the UAV. All values acquired during real-time video stream processing
were recorded in data arrays, which were collected and exported to .xlsx and .txt files for parsing and
further analysis at each stage of development and application.</p>
      <p>The Radar Chart in Figure 2 shows the localization error of the quadcopter in an unknown space
based on four positioning algorithms: Proximity, Centroid, Weighted Centroid and Lateration. The
evaluation was carried out on the basis of eleven static control points in the room.</p>
      <p>As a result, the Proximity method has quite large impulse deviations, reaching 1.6 m. Despite its
high calculation speed and ease of integration, this indicates limited accuracy, though it is advisable to
use it as a preliminary approximation or in situations with a limited number of landmarks. In contrast,
Centroid demonstrates more stable error values; however, due to the simplicity of the
relations used, in some cases the error reaches 0.44 m.</p>
      <p>The presence of more information describing the recorded objects increases the probability of accurate
identification of the UAV position. Thus, the use of weighting factors in Weighted Centroid provides an
advantage in calculation accuracy, because the coefficient values are
inversely proportional to the distance to the objects. The largest error of this method reaches
0.25 m.</p>
      <p>At the same time, Lateration has a low level of error (0.24 m) due to its least-squares
calculation and a sufficient set of data (position coordinates, distance, dimensions) for
each landmark, and the accuracy of the method can only increase with an increase in the number of
objects around.</p>
      <p>In general, the choice of adaptive algorithm methods is based on a balance between the desired
accuracy, the number of available landmarks, and computational resources. A comparative analysis of
the error of the methods confirms that Centroid and Weighted Centroid are suitable for scenarios with
limited information, while Real-time Lateration provides the highest accuracy of drone positioning in a
complex local environment.</p>
      <p>The line chart on Figure 3 shows the standard deviation versus height (in meters) for four standard
positioning algorithms and the proposed adaptive method. The evaluation was based on finding fifteen
test positions.</p>
      <p>The results of Proximity have sharp impulses, reaching up to 0.1 m. The centroid showed a fairly
stable, but at the same time high, level of error, about 0.08 m. Due to the weighting factors, the error rate
of Weighted Centroid is lower than that of the previous methods, at 0.06 m. Lateration demonstrates a fairly static
trend of 0.04 meters. The adaptive algorithm not only adopts the values obtained through calculations
by one or another method, which provides impulse loads at the beginning, but also provides data
interpolation, due to which its error decreases over time and in some places reaches less than 0.04 m.</p>
      <p>The results of comparing the accuracy of the Proximity and Centroid methods in the horizontal
OXY plane are shown in the scatter plot (Figure 4), where each point represents the determined position
of the UAV after a second-by-second calculation cycle. The results show that Proximity has a larger area
of distribution of values than Centroid, which still stabilizes the distribution of values over time and
creates a local cluster between the extreme values.</p>
      <p>Since the Lateration method is applicable only when at least three objects are present, the scatter
diagram (Figure 5) shows a comparison of the estimates of the quadcopter's current position in a room,
where each of the ten steps represents both the recognition of surrounding landmarks and the
instantaneous determination of the position in real time.</p>
      <p>Weighted Centroid generalizes the possible position area, creating a cluster of values that almost do
not change over time. Moreover, the larger the number of clusters, the less likely they are to change
further in the process of acquiring new values. Lateration, on the contrary, takes into account data
about each object and shows the exact position of the UAV at each moment of time, taking into account
possible deviations in the acquisition or calculation of values.</p>
      <p>The result of the adaptive algorithm is shown in Figure 6, where the quadcopter position points are
placed on the horizontal OXY plane during the determination of optimal coordinates by comparing
the results of the main visual navigation methods. The spatial relationship scale is indicated in meters.</p>
      <p>The use of an adaptive method based on interpolation reduces the spread of values, since significant
noise is eliminated both at the beginning of processing and during the calculations, when empirically
selected kernel parameters are applied to the matrices, which reduce the influence of random variables
on the final result. Thus, the adaptive algorithm has undergone a shift from the initial position mainly
within the range of ±0.003 meters, which indicates its stability and accuracy in the complete absence of
external signals.</p>
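        <p>A minimal sketch of such kernel-based smoothing of the position track is shown below; the kernel size and weights are assumptions for illustration, since the text only states that the parameters were selected empirically:</p>
        <preformat>
```python
import numpy as np

def smooth_track(positions, kernel=(0.25, 0.5, 0.25)):
    """positions: (N, 2) array of raw position estimates.
    Applies a small normalized convolution kernel to each coordinate
    to suppress impulse noise in the estimated track."""
    positions = np.asarray(positions, dtype=float)
    k = np.asarray(kernel, dtype=float)
    k = k / k.sum()
    # Convolve each coordinate independently; 'same' keeps the track length.
    return np.column_stack([
        np.convolve(positions[:, i], k, mode="same") for i in range(2)
    ])
```
        </preformat>
        <p>With mode="same" the first and last samples are attenuated by the missing neighbors, so in practice the edges of the track would be handled separately or discarded.</p>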
      <p>Each subsequent method is more complex than the previous one, as it contains additional calculation
and verification steps and is autonomous, and the system itself operates in switch mode, which allows
for modular implementation of calculations, multiple parallel threads, and flexibility in settings.</p>
      <p>To conduct a content analysis of the efficiency of each stage and determine the speed, an
additional calculation speed determination module was integrated into each of the methods. Based on
the exported results, a line chart was constructed, as presented in Figure 7, showing the change in the
speed of the three visual navigation methods during the full cycle of the program.</p>
      <p>Since the Lateration calculation requires at least three landmarks and is an iterative calculation of
a nonlinear system of equations for each frame, it is worth noting that as the number of landmarks
increases, the processing time also increases. Thus, the diagram in Figure 8 shows the speed of the
method when recognizing landmarks every second and determining the current location of the UAV.</p>
      <p>As a result, calculation speed is the advantage of Proximity, Centroid, and Weighted Centroid, while
Lateration brings greater accuracy due to complex computational processes, so the calculation time is
significantly longer. In both cases, impulse fluctuations are usually caused by changing one object in
the frame to another or by sensitivity to lighting, but they normalize over time.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Directions for future research</title>
      <p>So far, our work has focused on the architecture of a cascade mechanism that combines methods of
varying complexity, from those faster in calculation to more complex ones requiring additional
checks, to ensure a reliable basis for detecting the current position of the quadcopter in
an unknown space. Each of the methods is focused on high accuracy and optimal loading, taking into
account the state of the platform on which the testing is performed. However, several open questions
remain on our path that require additional attention.</p>
      <p>The first area for improvement is working with dynamic environments. Most of the algorithms
tested are focused mainly on interaction with static objects, while methods for dynamic environments
use comparisons with databases, machine learning, etc. Future work will be aimed at finding optimal
real-time data fusion algorithms with low computational requirements and performance characteristics
in dynamic conditions (moving, layering, and rotating landmarks).</p>
      <p>The next steps will be integration with visual odometry and SLAM methods, which can significantly
increase positioning accuracy, especially in the absence of navigation signals or in situations with a
limited number of landmarks.</p>
      <p>In addition, we plan to expand the available APIs for the DJI Tello EDU, which will allow for more
flexible settings based on user-defined conditions. We also plan to consider combining the capabilities
of small quadcopters with the involvement of machine learning in positioning tasks and scenario
determination, which takes into account modeling of uncertain situations in which the system can
operate autonomously, minimizing manual intervention and independently predicting the effectiveness
of the system based on error calculations.</p>
      <p>The general advantages of the improvement are increased accuracy in localizing the current location
of small drones in an unknown space, as well as flexibility in system configuration and availability.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusions</title>
      <p>The proposed cascade approach significantly reduces computational costs and energy
consumption when localizing a drone without losing accuracy. Unlike approaches based on Fingerprinting,
the implementation does not require prior training or the creation of a complex database, which greatly
simplifies the implementation of the system in a new environment.</p>
      <p>The algorithm demonstrates the ability to adapt to different levels of complexity of conditions: more
complex positioning methods are activated only when simple ones are not accurate enough. The
transition to cartographic positioning is initiated only when necessary, which optimizes the use
of resources.</p>
      <p>The obtained experimental results prove the effectiveness of the approach for tasks of autonomous
UAV navigation in closed rooms and open up prospects for further integration with other sensor
channels and flight stabilization algorithms.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Padmanabhan</surname>
          </string-name>
          ,
          <article-title>RADAR: an in-building RF-based user location and tracking system</article-title>
          ,
          <source>in: Proceedings IEEE INFOCOM 2000. Conference on Computer Communications. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies (Cat. No.00CH37064)</source>
          , volume
          <volume>2</volume>
          ,
          <year>2000</year>
          , pp.
          <fpage>775</fpage>
          -
          <lpage>784</lpage>
          . doi:10.1109/INFCOM.2000.832252.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Davison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. D.</given-names>
            <surname>Reid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. D.</given-names>
            <surname>Molton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Stasse</surname>
          </string-name>
          ,
          <article-title>MonoSLAM: Real-time single camera SLAM</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>29</volume>
          (
          <year>2007</year>
          )
          <fpage>1052</fpage>
          -
          <lpage>1067</lpage>
          . doi:10.1109/TPAMI.2007.1049.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Visual-lidar odometry and mapping: low-drift, robust, and fast</article-title>
          ,
          <source>in: 2015 IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>2174</fpage>
          -
          <lpage>2181</lpage>
          . doi:10.1109/ICRA.2015.7139486.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>DJI</surname>
          </string-name>
          ,
          <article-title>Official DJI Tello SDK documentation</article-title>
          ,
          <year>2025</year>
          . URL: https://tellopilots.com.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dergachev</surname>
          </string-name>
          ,
          <source>Intelligent Transport Systems in Aerospace Engineering</source>
          , Springer International Publishing, Cham,
          <year>2016</year>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>303</lpage>
          . URL: https://doi.org/10.1007/978-3-319-19150-8_8. doi:10.1007/978-3-319-19150-8_8.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Dergachov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bahinskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Piavka</surname>
          </string-name>
          ,
          <article-title>The algorithm of UAV automatic landing system using computer vision</article-title>
          , in:
          <source>2020 IEEE 11th International Conference on Dependable Systems, Services and Technologies (DESSERT)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>252</lpage>
          . doi:10.1109/DESSERT50317.2020.9124998.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Popov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tserne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Volosyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhyla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pavlikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ruzhentsev</surname>
          </string-name>
          , et al.,
          <article-title>Invariant polarization signatures for recognition of hydrometeors by airborne weather radars</article-title>
          , in:
          <string-name><given-names>O.</given-names> <surname>Gervasi</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Murgante</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Taniar</surname></string-name>,
          <string-name><given-names>B. O.</given-names> <surname>Apduhan</surname></string-name>,
          <string-name><given-names>A. C.</given-names> <surname>Braga</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Garau</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Stratigea</surname></string-name>
          (Eds.),
          <source>Computational Science and Its Applications - ICCSA 2023. Lecture Notes in Computer Science</source>
          , vol.
          <volume>13956</volume>
          , Springer Nature Switzerland, Cham,
          <year>2023</year>
          , pp.
          <fpage>201</fpage>
          -
          <lpage>217</lpage>
          . doi:10.1007/978-3-031-36805-9_14.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Nikitina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Averyanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ostroumov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kuzmenko</surname>
          </string-name>
          , et al.,
          <article-title>Method for design of magnetic field active silencing system based on robust meta model</article-title>
          , in:
          <string-name><given-names>S.</given-names> <surname>Shukla</surname></string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sayama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. V.</given-names>
            <surname>Kureethara</surname>
          </string-name>
          ,
          <string-name><given-names>D. K.</given-names> <surname>Mishra</surname></string-name>
          (Eds.),
          <source>Data Science and Security. IDSCS 2023. Lecture Notes in Networks and Systems</source>
          , vol.
          <volume>922</volume>
          , Springer Nature Singapore, Singapore,
          <year>2024</year>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>111</lpage>
          . doi:10.1007/978-981-97-0975-5_9.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Nikitina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ruzhentsev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Havrylenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dergachov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Volosyuk</surname>
          </string-name>
          , et al.,
          <article-title>Algorithm of robust control for multi-stand rolling mill strip based on stochastic multi-swarm multi-agent optimization</article-title>
          , in:
          <string-name><given-names>S.</given-names> <surname>Shukla</surname></string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sayama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. V.</given-names>
            <surname>Kureethara</surname>
          </string-name>
          ,
          <string-name><given-names>D. K.</given-names> <surname>Mishra</surname></string-name>
          (Eds.),
          <source>Data Science and Security. IDSCS 2023. Lecture Notes in Networks and Systems</source>
          , vol.
          <volume>922</volume>
          , Springer Nature Singapore, Singapore,
          <year>2024</year>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>255</lpage>
          . doi:10.1007/978-981-97-0975-5_22.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Leichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Fesenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kharchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Illiashenko</surname>
          </string-name>
          ,
          <article-title>Deployment of a UAV swarm-based LiFi network in the obstacle-ridden environment: Algorithms of finding the path for UAV placement</article-title>
          ,
          <source>Radioelectron. Comput. Syst</source>
          .
          <volume>1</volume>
          (
          <issue>109</issue>
          ) (
          <year>2024</year>
          )
          <fpage>176</fpage>
          -
          <lpage>193</lpage>
          . doi:10.32620/reks.2024.1.14.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>K.</given-names>
            <surname>Dergachov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Havrylenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pavlikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhyla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tserne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Volosyuk</surname>
          </string-name>
          , et al.,
          <article-title>GPS usage analysis for angular orientation practical tasks solving</article-title>
          ,
          <source>in: Proceedings of 2022 IEEE 9th International Conference on Problems of Infocommunications, Science and Technology, Kharkiv, Ukraine</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>187</fpage>
          -
          <lpage>192</lpage>
          . doi:10.1109/PICST57299.2022.10238629.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Ivanenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Petrov</surname>
          </string-name>
          ,
          <article-title>Adaptive methods of visual localization for mobile robots in indoor environments</article-title>
          ,
          <source>Artificial Intelligence Systems</source>
          <volume>1</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L. V.</given-names>
            <surname>Sydorenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Hrytsenko</surname>
          </string-name>
          ,
          <article-title>Adaptive drone motion control based on visual landmark analysis</article-title>
          ,
          <source>Artificial Intelligence Systems</source>
          <volume>3</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Melnyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. P.</given-names>
            <surname>Romanenko</surname>
          </string-name>
          ,
          <article-title>Application of hybrid algorithms for drone localization in confined spaces</article-title>
          ,
          <source>Artificial Intelligence Systems</source>
          <volume>4</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>N. O.</given-names>
            <surname>Bondarenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. I.</given-names>
            <surname>Chernenko</surname>
          </string-name>
          ,
          <article-title>Machine learning for adaptive UAV navigation in dynamic environments</article-title>
          ,
          <source>Artificial Intelligence Systems</source>
          <volume>1</volume>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>AGCosPlace: A UAV visual positioning algorithm based on the Transformer</article-title>
          ,
          <source>Drones</source>
          <volume>7</volume>
          (
          <year>2023</year>
          ). doi:10.3390/drones7080498.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gonzalez-Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Miranda-Moya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Castañeda</surname>
          </string-name>
          ,
          <article-title>Robust visual tracking control based on adaptive sliding mode strategy: Quadrotor UAV - catamaran USV heterogeneous system</article-title>
          ,
          <source>in: 2021 International Conference on Unmanned Aircraft Systems (ICUAS)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>666</fpage>
          -
          <lpage>672</lpage>
          . doi:10.1109/ICUAS51884.2021.9476707.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V. N.</given-names>
            <surname>Sankaranarayanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saradagi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satpute</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Nikolakopoulos</surname>
          </string-name>
          ,
          <article-title>A CBF-adaptive control architecture for visual navigation for UAV in the presence of uncertainties</article-title>
          ,
          <source>in: 2024 IEEE International Conference on Robotics and Automation (ICRA)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>13659</fpage>
          -
          <lpage>13665</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>