<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Developments (APUAVD). Vol.</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.22059/JITM.2021.80738</article-id>
      <title-group>
        <article-title>Automation of UAV Navigation Support Based on SIFT-like Methods</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Pylyp Prystavka</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>60 Volodymyrska Street, 01033 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>The State University “Kyiv Aviation Institute”</institution>
          ,
          <addr-line>1 Lubomyr Huzar Avenue, 03058 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <volume>53</volume>
      <issue>2017</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>This paper presents an approach to supporting the navigation of an unmanned aerial vehicle (UAV) via an optical channel in cases where satellite navigation signals are unavailable. The proposed algorithmic technology is based on pre-planned flight routes and a reference dataset of landmark images. Landmarks are selected according to their visual characteristics, ensuring high recognition reliability. Image processing employs algorithms for detecting and describing local features, including SIFT, ORB, and neural network architectures such as SuperPoint and LightGlue. The proposed approach was tested under simulated flight conditions using hardware platforms Raspberry Pi 4 Model B and Orange Pi 5 Pro. The results confirm the effectiveness of the proposed method for automating UAV navigation support through optical channels, enabling autonomous UAV localization in GPS-denied environments.</p>
      </abstract>
      <kwd-group>
        <kwd>UAV</kwd>
        <kwd>optical navigation</kwd>
        <kwd>SIFT</kwd>
        <kwd>ORB</kwd>
        <kwd>image descriptors</kwd>
        <kwd>landmarks</kwd>
        <kwd>density distribution</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The increasing role of unmanned aerial vehicles (UAVs) in both civil and military applications—such
as surveillance, reconnaissance, monitoring, and data collection—requires continuous improvement
in navigation and positioning technologies. With the advancement of onboard hardware and
software, determining the UAV’s spatial position becomes critical for effective flight management
and mission success.</p>
      <p>Accurate localization is essential for tactical operations, autonomous navigation, collision
avoidance, and real-time mission execution, particularly in complex terrains or combat scenarios.</p>
      <p>
        Optical channels—cameras [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], video sensors, and other visual systems—are of particular
importance as the primary source of information for determining UAV coordinates in environments
with limited or unstable GPS access, including urban areas and zones of electronic interference.
      </p>
      <p>
        Localization methods based on image analysis—particularly those using algorithms for detecting
and matching key features, such as SIFT [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]—can achieve high positioning accuracy even under
varying scale, illumination, and perspective. These approaches are indispensable for aerial
photography [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], terrain monitoring, search and rescue operations, environmental assessment, and
precision agriculture.
      </p>
      <p>In military contexts, SIFT-based techniques facilitate reliable object recognition, terrain
orientation, and navigation automation during reconnaissance and strike missions, especially under
adversarial countermeasures.</p>
      <p>Research in this domain enables the development and enhancement of computer vision
algorithms, spatial analysis models, and software tools that improve UAV autonomy and operational
capabilities.</p>
      <p>Therefore, determining UAV position using optical channels and keypoint detection methods—
especially SIFT—holds substantial scientific and practical value across a broad range of applications,
from civilian to military domains, and remains a priority area in modern autonomous systems
research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Review of existing solutions and literature sources</title>
      <p>
        Modern UAV navigation systems are the subject of active research in both academia and industry.
Efforts are underway worldwide to enhance the reliability and accuracy of UAV positioning,
especially in environments where GPS usage is limited or impossible [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Particular attention is paid
to developing alternative navigation solutions for scenarios with obstacles or complete satellite
signal loss.
      </p>
      <p>Study [5] presents a multi-level localization module combining GNSS, inertial navigation, and
visual depth data. This approach improves the robustness and accuracy of navigation in complex
conditions. Similarly, [6] proposes a new GNSS/INS/LiDAR integration scheme that ensures
continuous and precise navigation where satellite access is restricted.</p>
      <p>An alternative strategy involves the development of low-cost navigation solutions for GPS-denied
environments [7], based on new-generation sensors and visual scene analysis methods. Such systems
increasingly employ computer vision techniques, including keypoint detection and description
algorithms, with SIFT, SURF, and ORB being among the most prominent.</p>
      <p>Before the rise of deep learning classifiers, object detection and matching on images relied on
rotation- and scale-invariant methods. One of the most recognized was the patented SIFT
(Scale-Invariant Feature Transform) algorithm, introduced by David G. Lowe in 1999. SURF (Speeded-Up
Robust Features), developed by Herbert Bay in 2006, built upon SIFT's principles. Both algorithms
became foundational for various navigation and recognition applications.</p>
      <p>An in-depth analysis of local feature-based methods is provided in the survey by B.H.
Kukharenko, "Image Analysis Algorithms for Detecting Local Features and Recognizing Objects and
Panoramas", which explores both the advantages and limitations of these approaches. Further
advancements were made by researchers like T. Lindeberg, who developed the concept of
scale-space, and Brown, Hua, and Winder, who investigated discriminative descriptor training and
real-time object tracking using contour levels.</p>
      <p>Work [8] explores a multi-layer architecture for autonomous UAVs addressing diverse challenges
such as adaptive environmental interaction and decision-making. Optical navigation using SIFT
facilitates landmark recognition and enhances positioning accuracy in visual odometry tasks.</p>
      <p>To mitigate GPS loss, researchers recommend employing additional optical sensors integrated via
Kalman filters and their modifications [9–10]. SIFT-based methods in such systems enhance image
alignment and reduce localization errors during motion.</p>
      <p>Several reviews emphasize the role of artificial intelligence. In [11], an LSTM-based model is
proposed to improve visual odometry, using SIFT-derived features as input. Work [12] discusses
adaptive learning with step-size regulation for enhanced inertial navigation, while [13] introduces an
ensemble deep learning method for GPS spoofing detection as part of a broader navigation system.</p>
      <p>SIFT-based approaches remain integral to visual localization algorithm development due to their
robustness against geometric and lighting variations. These qualities are vital for real-time UAV
operation in complex or dynamic environments.</p>
      <p>
        Notable contributions to the field have been made by Ukrainian researchers [19–21] and by the
authors of this study [
        <xref ref-type="bibr" rid="ref2">2, 16–18</xref>
        ], who explore visual feature analysis and SIFT-based localization
algorithms.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Objectives of the study and research questions</title>
      <p>This study aims to develop and test an information technology system for automated UAV
navigation support based on SIFT-like methods. In scenarios where GPS signals are unavailable, the
proposed system enables UAV localization through the identification of pre-defined visual landmarks
using onboard surveillance cameras.</p>
      <p>To address this challenge, we examine several SIFT-like methods along with neural network
architectures such as LightGlue and SuperPoint. A key feature of these algorithms is their ability to
return only those descriptor sets and coordinates of keypoints that show mutual similarity. Typically,
they automatically match descriptors between two images. However, this can result in false positives
when descriptors from reference and test images coincidentally align. To mitigate this effect and
properly interpret the matching outcomes, we propose a custom method based on evaluating the
spatial density distribution of keypoint coordinates [22].</p>
      <p>For experimental validation, a custom software application was developed to operate in two
modes: object detection and coordinate estimation. The solution was tested on computational
platforms that emulate UAV onboard systems using aerial imagery data.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Research materials and methods</title>
      <p>Feature detection and description algorithms are fundamental for building visual navigation systems
that function without GPS. The typical operation of such algorithms involves several key stages.
First, local features are detected—these are distinctive elements in the image that remain invariant
under changes in lighting, scale, orientation, and geometric transformation. Next, descriptors are
generated—numeric representations of image fragments surrounding each keypoint. These
descriptors are then matched between image frames or samples to establish spatial correspondences.
Finally, erroneous matches are filtered using criteria such as vector distance thresholds or a
minimum required number of matches.</p>
      <p>In this study, we selected three approaches for performance comparison in GPS-denied UAV
navigation scenarios: SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated
BRIEF), and neural architectures LightGlue and SuperPoint. The selection was based on prior
benchmarking results [9], showing that ORB offers superior speed, while SIFT delivers high-quality
matches in most conditions. In experiments, these algorithms were tested on images subjected to
various transformations: brightness changes, scaling, rotation, geometric distortions (e.g., fisheye
effect), and noise addition.</p>
      <p>LightGlue stands out among neural solutions for its compatibility with different types of
descriptors, including SuperPoint, DISK, ALIKED, and even classical SIFT. According to recent
studies [10], SuperPoint performs well in real-time tasks but is less robust to scale changes compared
to ALIKED, particularly the ALIKED (16, MS) variant designed for scale-sensitive applications.</p>
      <p>Despite this, SuperPoint was chosen as one of the main algorithms in this work due to its ability to
simultaneously detect keypoints and compute descriptors over full-resolution images in real time
[12]. Notably, it is self-supervised and does not require large labeled datasets for training.</p>
      <sec id="sec-4-1">
        <title>4.1. Pre-flight preparation</title>
        <p>The proposed system includes a pre-flight preparation stage involving route planning and the
formation of a reference set of landmark images. Landmarks are considered to be objects with clearly
defined visual characteristics—such as shape, contour, or texture—that can be reliably identified in
images. These may include buildings with distinctive geometry, open industrial structures, power
infrastructure (e.g., substations), hydraulic facilities (e.g., dams), and others.</p>
        <p>Flight routes are planned to pass through areas where such landmarks are located. This enhances
the accuracy of visual UAV localization, even if the route is longer than a direct alternative. For
example, as shown in Fig. 1(a), the pink route “A–B” is longer than the red one but traverses areas
with higher visual information density.</p>
        <p>Each image in the reference dataset contains a specific landmark captured during the pre-flight
phase (Fig. 1(b)). For every image, keypoints are detected and georeferenced to real-world
coordinates, which are later used during visual matching and localization in flight.</p>
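        <p>A minimal sketch of a reference-dataset record as described above: each landmark image contributes keypoint descriptors plus the real-world coordinates assigned during pre-flight georeferencing. All names and the north-aligned ortho-image assumption are illustrative, not taken from the authors' software:</p>

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ReferenceEntry:
    landmark_id: int
    descriptor: List[float]          # fixed-length local-feature vector
    pixel_xy: Tuple[int, int]        # keypoint location in the reference image
    geo_xy: Tuple[float, float]      # georeferenced real-world coordinates

def georeference(pixel_xy, origin_geo, metres_per_pixel):
    """Map a pixel position to map coordinates for a north-aligned ortho image."""
    px, py = pixel_xy
    ox, oy = origin_geo
    # x grows eastward with pixel columns; y shrinks southward with pixel rows.
    return (ox + px * metres_per_pixel, oy - py * metres_per_pixel)

entry = ReferenceEntry(
    landmark_id=1,
    descriptor=[0.1] * 128,
    pixel_xy=(640, 480),
    geo_xy=georeference((640, 480), origin_geo=(0.0, 0.0), metres_per_pixel=0.5),
)
```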
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Formal formulation of the problem</title>
        <p>Let us consider a reference dataset of images</p>
        <p>I = {i_1, i_2, i_3, …, i_k}, k = 1, …, N, (1)
containing various characteristic landmark objects. For this set we define local features, each
described by its location (keypoint pixel coordinates) and a descriptor, a unique vector of fixed
length. Each image i_k contains its own number of keypoints and, accordingly, descriptors:</p>
        <p>d_{k,p_k}, p_k = 1, …, N_k,
d_{k,p_k} = ((d_{k,p_k})_1, (d_{k,p_k})_2, …, (d_{k,p_k})_n),
where N_k is the number of keypoints for image i_k, and n is the descriptor vector dimension.</p>
        <p>The images were aligned to a digital map, meaning that each pixel was assigned real-world
geographic coordinates.</p>
        <p>Therefore, for each descriptor d_{k,p_k} we obtain coordinates</p>
        <p>C_{k,p_k} = {x_{k,p_k}, y_{k,p_k}}, k = 1, …, N, p_k = 1, …, N_k,
belonging to the set of map coordinates: C_{k,p_k} ∈ Coord. The following dataset (ND) is
generated:</p>
        <p>S = {d_{k,p_k}, C_{k,p_k}, k = 1, …, N, p_k = 1, …, N_k}.</p>
        <p>A tabular representation of this dataset is shown below (Table 1).</p>
        <p>For each input image, the task is to determine whether it contains an object from the reference set
I. If so, the system must estimate its coordinates.</p>
        <p>A key feature of algorithms for detecting, describing, and matching local features is that they
return only those descriptors and keypoints that are mutually similar. These algorithms typically
establish correspondences between two sets of descriptors. However, random matches between
descriptors in reference and test images can lead to false positives.</p>
        <p>To address this issue and improve interpretation accuracy, we propose a method based on
evaluating the spatial density function of keypoint coordinates [22]</p>
        <p>Let the matched keypoint coordinates in the frame be</p>
        <p>(x_{k,l}, y_{k,l}), l = 1, …, M,
where M is the number of identified keypoint matches in the frame. The coordinate space is divided
into rectangular regions Δ_{s_x, s_y} with nodes</p>
        <p>(x_min + ii · s_x, y_min + jj · s_y), ii = 0, …, M_x − 1, jj = 0, …, M_y − 1,
where M_x, M_y are the numbers of rectangular partitions along each axis, and
x_min = min_{l=1,…,M} {x_{k,l}}, y_min = min_{l=1,…,M} {y_{k,l}}.</p>
        <p>The empirical probability that a particular point from the array falls into a local partition region
Δ_{s_x, s_y} is then</p>
        <p>f_{ii,jj} = (1/M) Σ_{l=1,…,M} I_l,
where I_l = 1 if ii = [(x_l − x_min)/s_x] and jj = [(y_l − y_min)/s_y], and I_l = 0 otherwise;
[·] denotes the integer-part function.</p>
        <p>Let Matches be the set containing only those descriptors that match the descriptors of the input
image I_flight. It is assumed that, if the target object is indeed present in the input image, the
keypoints corresponding to the descriptors from the set Matches will lie compactly, that is, within a
single histogram partition cell with the maximum frequency value at partition class (i*, j*):</p>
        <p>i*, j* = arg max_{ii,jj} {f_{ii,jj}}.</p>
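        <p>A sketch of this density-histogram step: matched keypoint coordinates are binned into s_x-by-s_y cells and the cell (i*, j*) with the highest frequency is selected. The partition sizes and sample points below are illustrative only:</p>

```python
import numpy as np

def densest_cell(points, sx, sy):
    """Return the partition cell (i*, j*) with the most points, and its count."""
    pts = np.asarray(points, dtype=float)
    xmin, ymin = pts[:, 0].min(), pts[:, 1].min()
    ii = ((pts[:, 0] - xmin) // sx).astype(int)   # integer part of (x - xmin)/sx
    jj = ((pts[:, 1] - ymin) // sy).astype(int)   # integer part of (y - ymin)/sy
    cells, counts = np.unique(np.stack([ii, jj], axis=1), axis=0, return_counts=True)
    best = cells[counts.argmax()]
    return (int(best[0]), int(best[1])), int(counts.max())

# A compact cluster (the landmark) plus two scattered false matches:
pts = [(10, 10), (11, 12), (12, 11), (13, 13), (90, 40), (5, 80)]
(istar, jstar), freq = densest_cell(pts, sx=8, sy=8)
```

        <p>Comparing `freq` against a minimum-match threshold then decides whether the landmark is considered present in the frame.</p>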
        <p>After identifying i*, j*, the coordinates of the target landmark can be chosen either as the center
of this region or as the arithmetic mean of the coordinates of the descriptors located within it.</p>
      </sec>
      <sec id="sec-4-3">
        <title>5. Testing of the developed technology</title>
        <p>To test the developed technology, a custom software application was created. It supports two
operational modes: object detection and coordinate estimation.</p>
        <p>During flight, if a GPS signal is available, the software enables the extraction of descriptor sets for
the target image from the onboard camera, allowing the inclusion of newly detected landmarks into
the reference dataset. Landmark identification can be performed automatically or with the assistance
of an operator. These landmarks are subsequently used to determine the UAV’s position.</p>
        <p>If necessary—such as in the event of GPS signal loss—the system initiates the algorithm for UAV
geolocation estimation. For each image captured by the onboard camera, preprocessing operations
can be applied to increase image processing speed for feature-based methods. These transformations
may include resizing, image smoothing with various algorithms (using different hyperparameters),
intensity adjustment, and reduction in the number of color channels. Such preprocessing reduces the
number of keypoints, thereby decreasing the execution time of feature matching algorithms.</p>
        <p>Following preprocessing, keypoints and their descriptors are extracted from the image acquired
by the UAV’s target payload camera. These descriptors are then matched with those in the reference
dataset. A set of matched descriptors is obtained, corresponding to specific keypoints.</p>
        <p>Next, the spatial density function of the matched keypoints is evaluated using a frequency
histogram. A uniform partitioning Δ_{s_x, s_y} is introduced, and the class (cell) with the highest
frequency is selected. If this frequency does not exceed a defined threshold, the system proceeds to
process the next frame.</p>
        <p>Otherwise, for each matched point within the selected class, geographic coordinates are retrieved
from the reference dataset. The most probable position of the target landmark is then estimated by
analyzing the distribution of these coordinates—specifically, by identifying the center of the
histogram bin with the maximum frequency.</p>
        <p>The estimated coordinates of the landmark within the camera’s field of view can subsequently be
used to determine the UAV’s position.</p>
        <p>Figure 2 illustrates the algorithm implemented in the software application.</p>
        <p>The block diagram shown illustrates the following steps:
1. Start – acquiring a frame of the terrain from the UAV camera. It is assumed that a reference
dataset is available on the onboard computer.
2. Keypoint detection in the image/video frame – detecting keypoints in the image. The
application can be tested on a target image simulating a map (with a pre-formed reference
dataset) or on a video frame (captured by the UAV camera or extracted from a video file).
3. Descriptor computation – generating descriptors for the detected keypoints.
4. Matching descriptors with the reference dataset – searching for matches between descriptors
from the input image and those in the reference dataset, and applying filtering rules if
necessary.
5. Building a frequency histogram of matched keypoints – constructing a frequency histogram
based on the number of matched keypoints falling into coordinate partition cells.
6. Is the maximum frequency ≥ threshold? – checking whether the number of matched
keypoints in the most populated partition cell exceeds a predefined threshold. If not, the
system proceeds to process the next input image.
7. Coordinate search? – verifying user-defined parameters to determine the next step. If the
application is being tested only for object detection, proceed to step 8. Otherwise, continue to
step 9.
8. Draw a bounding box around the detected object and save the frame – drawing a frame
around the detected object and saving the image with the bounding box to the user's file
system.
9. Estimate the average coordinate value – calculating the central coordinate of the histogram
cell containing the target object.</p>
        <p>10. Display detected coordinates on the map image – marking the estimated UAV position on the
map-simulated target image.
11. End – terminating the application when the video stream ends (either from a file or a live
feed) or when all images in the simulation have been processed.</p>
        <p>Per-video detection counts (VideoFile 1–5): 24, 25, 35, 9, 24 (total 117) in one test series, and
25, 31, 33, 5, 27 (total 121) in the other.</p>
        <p>Each algorithm used in the software includes several hyperparameters. These were selected based
on publicly available technical documentation (SIFT, ORB) or through experimental tuning
(SuperPoint). For the SuperPoint method, the recommended maximum number of detected keypoints
is 2048, while 1024 is advised for improved performance. A comparison of performance between 1024
and 380 keypoints is shown in Table 2.</p>
        <p>The comparative analysis of this network showed that the difference in object recognition
accuracy between using 1024 and 380 keypoints is not significant. Moreover, using a smaller number
of keypoints provides a slight advantage in recognition accuracy. Therefore, the number of keypoints
was set to 380.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>6. Comparison of object search methods performance</title>
      <p>We compare the performance of object detection methods depending on the specific hardware used.
The following tables present averaged testing results of the information technology (IT) system on
video footage recorded in various terrain conditions. Table 4 shows the performance of the IT system
in object recognition on the microcomputer “Raspberry Pi 4 Model B” using different methods.
Methods based on neural networks require a significant amount of processing time (more than 2
minutes) to handle a 1920×1080 video frame on this microcomputer and were therefore excluded
from the comparison in Table 4.</p>
      <p>The results of testing on the “Orange Pi 5 Pro” are contained in Table 5.</p>
    </sec>
    <sec id="sec-6">
      <title>7. Accuracy of finding object coordinates</title>
      <p>The accuracy of coordinate estimation was evaluated by simulating flight over large images with a
resolution of 8256 × 5504 pixels, representing a conceptual map (Fig. 3).</p>
      <p>After estimating the coordinates of the landmarks, the relative errors were calculated (Table 6) for
seven landmarks in each type of terrain near Horishni Plavni, Ukraine (“Terrain 1” – Fig. 3a, “Terrain
2” – Fig. 3b).</p>
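      <p>One plausible way to compute such relative errors (the text does not give the exact formula): the deviation of the estimated landmark position from its true map position, relative to the image span. The sample coordinates below are made up; only the 8256 × 5504 resolution comes from the text:</p>

```python
def relative_error(estimated, true, span):
    """Relative deviation of an estimated coordinate, normalized by the image span."""
    return abs(estimated - true) / span

err_x = relative_error(estimated=4210.0, true=4128.0, span=8256.0)  # ~1%
err_y = relative_error(estimated=2700.0, true=2752.0, span=5504.0)  # ~0.9%
```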
    </sec>
    <sec id="sec-7">
      <title>8. Conclusions</title>
      <p>1. A technology for automating UAV position support has been proposed, based on optical
channel processing and the identification of landmark objects. The approach includes three
methods for comparative evaluation under GPS-denied conditions: SIFT (Scale-Invariant
Feature Transform), ORB (Oriented FAST and Rotated BRIEF), and the neural network
architectures LightGlue and SuperPoint.
2. The proposed implementation combines these methods with a custom approach based on
evaluating the spatial density function of the coordinates of detected landmark keypoints.
3. A comparative analysis of the accuracy and performance of the selected methods was
conducted across different types of hardware by simulating UAV flights over large images
with a resolution of 8256 × 5504 pixels, representing various terrain conditions. Seven
landmarks were evaluated per terrain. The maximum coordinate estimation error reached
approximately 10% for two landmarks in one of the terrains. For all other landmarks, the error
was approximately 1%.
4. The feasibility of applying landmark detection methods on “Orange Pi 5 Pro” microcomputers
was assessed and compared with the “Raspberry Pi 4 Model B.” The latter is not recommended
due to its low computational speed.
5. Test results showed that the processing speed of the proposed methods depends significantly
on prior image smoothing. After smoothing, the number of detected keypoints is reduced,
which accelerates feature matching. Furthermore, smoothing improves keypoint stability,
retaining only the most robust features for processing.
6. It is recommended to use the SIFT method on the “Orange Pi 5 Pro” when recognition
accuracy is more important than speed in a particular flight task. SIFT has higher robustness
to image scaling and demonstrates good recognition accuracy even after image downscaling.
According to test results, SIFT achieved 84% recognition accuracy with image smoothing (66%
without), compared to approximately 58% for ORB.
7. It is recommended to use the ORB method on both the “Raspberry Pi 4 Model B” and the
“Orange Pi 5 Pro,” as it demonstrated near real-time processing speeds during testing.
8. Future research will focus on testing the proposed technology on the NVIDIA Jetson Nano
microcomputer, which is expected to provide near real-time performance. Additional
development will aim to enhance UAV navigation support by determining the UAV’s
position based on the coordinates of identified landmark objects.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-4o in order to translate research notes and
results from Ukrainian to English. After using this tool, the authors reviewed and edited the content
as needed and take full responsibility for the content of the publication.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] <string-name><surname>Buryi</surname> <given-names>P.</given-names></string-name>, <string-name><surname>Pristavka</surname> <given-names>P.</given-names></string-name>, <string-name><surname>Sushko</surname> <given-names>V.</given-names></string-name> <article-title>Automatic definition the field of view of camera of unmanned aerial vehicle</article-title>. <source>Science-intensive technologies</source>, <volume>2</volume>(<issue>30</issue>) (<year>2016</year>), pp. <fpage>151</fpage>-<lpage>155</lpage>. URL: https://ouci.dntb.gov.ua/en/works/4Lq0DAVl/</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] <string-name><surname>Nixon</surname> <given-names>M. S.</given-names></string-name>, <string-name><surname>Aguado</surname> <given-names>A. S.</given-names></string-name> <source>Feature Extraction and Image Processing</source>. Oxford, Auckland, Boston, Johannesburg, Melbourne, New Delhi: Newnes, 2nd edn. (<year>2008</year>). URL: https://www.cl72.org/090imagePLib/books/book2-Nixon,Aguado-feachureExtractionImageProcessing-.pdf</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] <string-name><surname>Prystavka</surname> <given-names>P.</given-names></string-name>, <string-name><surname>Dukhnovska</surname> <given-names>K.</given-names></string-name>, <string-name><surname>Kovtun</surname> <given-names>O.</given-names></string-name>, <string-name><surname>Cholyshkina</surname> <given-names>O.</given-names></string-name>, <string-name><surname>Semenov</surname> <given-names>V.</given-names></string-name> <article-title>Recognition of aerial photography objects based on data sets with different aggregation of classes</article-title>. <source>Eastern-European Journal of Enterprise Technologies</source>, <volume>2</volume>(<issue>121</issue>) (<year>2023</year>), pp. <fpage>6</fpage>-<lpage>13</lpage>. doi: 10.15587/1729-4061.2023.272951.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] <string-name><surname>Gallo</surname> <given-names>E.</given-names></string-name>, <string-name><surname>Barrientos</surname> <given-names>A.</given-names></string-name> <article-title>Long-Distance GNSS-Denied Visual Inertial Navigation for Autonomous Fixed-Wing Unmanned Air Vehicles: SO(3) Manifold Filter Based on Virtual Vision Sensor</article-title>. <source>Aerospace</source>, <volume>10</volume>(<issue>8</issue>): 708 (<year>2023</year>). URL: https://www.mdpi.com/2226-4310/10/8/708</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>