<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.3390/drones7020089</article-id>
      <title-group>
        <article-title>A research platform for vision-based UAV autonomy: Architecture and implementation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yurii Lukash</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pylyp Prystavka</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>State University "Kyiv Aviation Institute"</institution>
          ,
          <addr-line>Liubomyra Huzara Ave., 1, Kyiv, 03058</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <fpage>235</fpage>
      <lpage>244</lpage>
      <abstract>
        <p>This paper presents a practical research platform that combines UAV hardware with flexible client-server software, designed for testing and development of visual autonomy algorithms. The system allows researchers to quickly integrate their own video analysis procedures with minimal effort, using a simple interface. The architecture supports multiple types of onboard computers (such as Raspberry Pi 4 and Jetson Nano), and uses MAVLink via MAVSDK to communicate with flight controllers. A web interface provides real-time video streaming, manual control, and dynamic configuration of processing parameters. The platform includes telemetry logging synchronized with video frames, and supports both manual and automatic control based on video analysis results. Initial test runs with several different video processing methods, including object tracking, YOLO-based detection, and SIFT-based position holding, have confirmed the usability and flexibility of the proposed system for real UAV experiments.</p>
      </abstract>
      <kwd-group>
        <kwd>UAV</kwd>
        <kwd>visual autonomy</kwd>
        <kwd>computer vision</kwd>
        <kwd>MAVSDK</kwd>
        <kwd>onboard processing</kwd>
        <kwd>Raspberry Pi</kwd>
        <kwd>object tracking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Modern unmanned aerial vehicles (UAVs) are increasingly used in a wide variety of areas, from
agriculture and environmental monitoring to security, reconnaissance and rescue operations, delivery
and logistics. In some applications, such as optical navigation and vision-based positioning, the use of
onboard visual sensors is critical [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Military applications have also driven rapid development in recent years; beyond the variety of
possible uses, they introduce many different limitations and requirements, including bandwidth
constraints, sensor availability, and computing tradeoffs in UAV deployments [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. These include the unavailability of communication frequencies, unusually long operating distances,
and problems with, or the complete absence of, standard sensors. One of the key areas of UAV
development is increasing the level of autonomy, in particular through the integration of computer
vision and intelligent real-time video processing algorithms.
      </p>
      <p>However, the implementation and testing of such algorithms is associated with a number of challenges.
First, the algorithms must be adapted to the limited computing resources typical of compact onboard
computers. Second, a convenient platform is needed that allows researchers to quickly change or
compare different approaches to image processing and decision-making without disrupting the overall
stability of the system. Third, it is critically important to ensure that commands can be transmitted
from the algorithms to the drone's flight controller with minimal delay.</p>
      <p>This work presents a comprehensive solution that combines the UAV hardware platform with
client-server software. The system allows researchers to explore, compare and validate new algorithms
and video processing procedures, and perform automatic drone control based on image analysis and
telemetry data from the device. Thanks to integration with MAVSDK and the separation of logic
into client and server parts, a stable infrastructure for experiments with autonomous control in real
conditions is implemented. Additionally, the user is provided with a UI with real-time video streaming
and a corresponding set of control elements for sending commands directly to the flight controller, as well
as tools for flexibly adjusting processing parameters or triggering specific processing stages as needed.</p>
      <p>The purpose of this article is to describe the architecture of the developed system, demonstrate
its capabilities in scenarios of autonomous control and object tracking, as well as justify the selected
technical solutions. Special attention is paid to the practical aspects of implementing the interaction
between the video processing modules and the UAV controller, which enables the use of this platform
as a universal tool for researching intelligent control of aircraft.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Definition of visual autonomy in UAV systems</title>
      <p>Unmanned Aerial Vehicle (UAV) autonomy typically refers to the capability of performing flight-related
tasks, ranging from navigation to interaction with the environment, without real-time operator control.
In the context of visual autonomy, the drone relies primarily on onboard video data as the sensory input.
That is, the UAV analyzes image sequences in real time to make decisions, opening up a wide range of
research directions such as visual navigation, object tracking, detection, motion planning, and more.</p>
      <p>To enable such autonomy, the system architecture generally includes:
• Sensing subsystem: camera, GPS, compass, IMU;
• Onboard computing unit: a single-board computer capable of processing video streams (e.g., Raspberry Pi or Jetson);
• Image analysis and decision-making module: software that implements detection, tracking, mapping, or motion vector generation;
• Flight control system: an autopilot that receives control commands (e.g., via MAVLink);
• Feedback and/or logging subsystem: for evaluation, synchronization, and debugging.</p>
      <p>
        This structure provides a modular basis for experimental platforms where each component—from
frame acquisition to command generation — can be studied and substituted independently, which aligns
with contemporary modular approaches for implementing vision-based autonomy in UAV systems
[
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>In recent literature, such architectures are seen as promising foundations for UAV autonomy in
environments with limited or unavailable GPS access [7, 8]. The significance of vision-based methods
for tasks like landing [9], object interaction, and visual localization is also frequently highlighted [10].
Additionally, recent work has demonstrated the feasibility of integrating object detection, tracking, and
obstacle avoidance in cost-constrained UAV platforms using AI-based visual pipelines [11].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Hardware design and description</title>
      <p>To build a research platform that would allow for flexible configuration modification and various
experiments, a number of basic requirements for the hardware layout were formulated.</p>
      <p>The basis of the design was a quadcopter frame, which provides sufficient flight stability and ease of
maintenance. An important design principle was the presence of backup communication channels:
both the main and alternative connections to the drone were implemented, so that flight control is
maintained if the main channel fails during experiments.</p>
      <p>In order to be able to carry different types of single-board computers on board, such as the Orange Pi,
Raspberry Pi, or NVIDIA Jetson Nano, and compare their computing capabilities, more powerful motors
and an appropriate power system were used. The design provides for the possibility of connecting
cameras of different types, which allows testing algorithms with different video stream quality and
format. So far, launches have been carried out with two different CSI cameras, including cameras with
infrared illumination, and three different USB cameras.</p>
      <p>The platform also includes a set of mandatory sensors for orientation in space and collection of
telemetric data: GPS module, magnetometer (compass), IMU. This provides not only autonomous
navigation, but also allows you to integrate spatial information into the video analysis process or use it
for data synchronization in further research.</p>
      <p>[A comparison table of candidate single-board computers (GPU generation: VideoCore IV/VI/VII, accelerator, RAM) appeared here; only fragments were recoverable.]</p>
      <p>The platform body is designed with sufficient space for placing power supplies, cooling systems
and additional modules. This solution ensures long-term operation and flexibility in configuration at
different stages of development.</p>
      <p>One of the critical decisions in the design of the system was the selection of a single-board computer
that would meet the requirements for performance, power consumption, and compatibility with other
components. A number of alternatives available today were considered, and the Raspberry Pi 4B was
selected for the current stage of research, as it provides a balance between energy efficiency and
computing capabilities and has CSI interfaces for the camera and GPIO for a UART connection with the
controller. This model also has a stable ecosystem and good availability, and a sufficient number of modules
of this series are available at the department. This choice aligns with recent developments in
low-cost embedded vision systems for UAVs, which have demonstrated effective navigation and obstacle
avoidance capabilities using platforms like Raspberry Pi and Pixhawk flight controllers [12].</p>
      <p>Although experiments were conducted with the Raspberry Pi 5, which provides higher performance,
it has significantly higher power consumption, which is critical for the duration of autonomous flight.
In addition, newer models have new interfaces (PCIe, active cooling), which require more complex
integration into the on-board system.</p>
      <p>Benchmark comparisons indicate that while the Raspberry Pi 4’s ARM Cortex-A72 CPU delivers
strong single-thread performance, the Jetson Nano’s 128-core Maxwell GPU excels in parallel processing
tasks, making it more suitable for deep learning and computer vision applications [13]. However, GPU
acceleration is not currently used and will be considered as a future enhancement. In terms of power
consumption, the Raspberry Pi 4 operates between 3W and 7W, making it ideal for battery-powered
applications, whereas the Jetson Nano consumes between 5W and 10W [14].</p>
      <p>The platform architecture is designed in such a way that the SBC is easy to replace. This allows the
system to be adapted to different research scenarios, as well as testing compatibility with more powerful
modules, such as the Jetson Nano or Orange Pi 5. It is also possible to add hardware computing accelerators,
such as the Google Coral Edge TPU or Intel Movidius Neural Compute Stick, although they have not yet
been used in this project; they will be tested in further research.</p>
      <p>Another important decision was the choice of a flight controller. To build a research platform, a
flight controller must meet several key criteria. In particular, it must support the MAVLink protocol
for interaction with client software via MAVSDK, have an accessible UART for connecting the SBC, and
provide a flexible configuration environment using open firmware (ArduPilot or PX4, preferably both).
Support for micro-USB or USB-C is also desirable, which allows varying the connection options
for the SBC, ground station, and power during testing. In addition, to simplify component replacement,
sensors and communication modules are attached via connectors and do not require additional
soldering or desoldering. In our work, two controllers were used: the Pixhawk 2.4.5 (selected due to its
availability in the department) and later the Pixhawk 6C (selected for the final implementation due to its
modern characteristics). The selection of these controllers is also consistent with recent comparative
studies on Pixhawk-based architectures and open-source flight control platforms commonly adopted
in research UAV systems [15, 16]. Both support PX4 as well as ArduPilot, which allowed us to test the
MAVSDK client-server system in different modes. The final integration was done with PX4.</p>
      <p>Schematically, the main elements of the platform are shown in Figure 1. The UAV carries
a companion computer running a program that provides video processing, telemetry reading, and the
formation and transmission of commands to the controller. The connection to the controller is via UART.
Cameras can be CSI or USB (not simultaneously), and the program provides several video-capturing
classes for this, so changing the camera can be done quickly. The connection to the operator is
implemented via Wi-Fi for video transmission and control elements; connection and control from a laptop,
smartphone, or tablet are possible. At the same time, backup channels remain: control from the remote
controller, as well as an additional communication channel from the laptop directly to QGroundControl.
The software implementation is discussed in more detail in the next section.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Software architecture and implementation</title>
      <sec id="sec-4-1">
        <title>4.1. Overview</title>
        <p>The software architecture of the platform is designed as a modular and extensible system that provides
interaction between the video stream, frame analysis algorithms and telemetry, the flight controller
and the client interface. The main attention is paid to the possibility of quick integration of new video
analysis methods, client independence from settings, as well as automatic collection and synchronization
of telemetry data and their recording for post-analysis. The system consists of several independent
components that interact through clearly defined interfaces.</p>
        <p>For flexibility and ease of extending and replacing processing algorithms, Python was chosen
as the programming language. In addition to various standard modules, the numpy, opencv, imutils,
vidgear, picamera2, ultralytics, and mavsdk packages are used to support various video camera variants,
processing of the video stream and individual frames, and the use of CV and ML algorithms.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Video frame acquisition and preprocessing</title>
        <p>Video frame capture is implemented with support for both USB cameras and Raspberry Pi cameras
(picamera2). The capture component is encapsulated in a frame queue module, which allows buffering
the input stream for further processing or skipping frames if processing takes longer. A separate class is
responsible for pre-processing (resizing, normalization, format conversion), which allows standardizing
the input for different algorithms.</p>
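        <p>As an illustration only, a minimal sketch of such a frame-queue capture and preprocessing component could look as follows (class and parameter names are hypothetical, not the platform's actual module):
import queue
import threading
import cv2

class FrameSource:
    """Captures frames in a background thread and buffers them in a bounded queue."""

    def __init__(self, device=0, maxsize=4, size=(640, 480)):
        self.cap = cv2.VideoCapture(device)         # USB camera; a picamera2 backend could be substituted
        self.frames = queue.Queue(maxsize=maxsize)  # bounded queue: old frames are dropped if analysis lags
        self.size = size
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while True:
            ok, frame = self.cap.read()
            if not ok:
                continue
            if self.frames.full():
                self.frames.get_nowait()            # skip the oldest frame
            self.frames.put(self._preprocess(frame))

    def _preprocess(self, frame):
        # standardize input for different analyzers: resize, keep BGR format
        return cv2.resize(frame, self.size)

    def read(self, timeout=1.0):
        return self.frames.get(timeout=timeout)</p>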
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Video analyzer class</title>
        <p>The analysis module is implemented as a separate class, where the researcher is required to implement
at least one method:
def process_frame(self, frame):
    # Analyze the frame and apply any logic the researcher wants
    result = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return result</p>
        <p>This approach allows you to easily connect new computer vision algorithms - for example, object
detection, segmentation, tracking, etc. The main program automatically connects the analysis class,
transmits frames in real time, and if necessary - the result is used to form commands.</p>
        <p>Additionally, optional methods are defined for this class; by implementing them, the researcher can
receive an ROI from the client, change the parameters of their processing, and implement phasing by
reacting to pre-defined control elements.</p>
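        <p>For illustration, a minimal analyzer with one optional ROI hook might look as follows (the optional method name is an assumption for this sketch; the platform defines its own set of optional methods):
import cv2

class GrayscaleAnalyzer:
    """Example analyzer: converts frames to grayscale and highlights an optional ROI."""

    def __init__(self):
        self.roi = None  # (x, y, w, h) selected by the operator, if any

    def process_frame(self, frame):
        result = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if self.roi is not None:
            x, y, w, h = self.roi
            cv2.rectangle(result, (x, y), (x + w, y + h), 255, 2)
        return result

    # optional hook (hypothetical name): called when the operator selects a target in the web interface
    def set_roi(self, x, y, w, h):
        self.roi = (x, y, w, h)</p>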
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Flight controller class</title>
        <p>A separate class is responsible for working with the flight controller. It establishes the connection
to the controller and receives sensor and flight-mode data: coordinates, orientation, speed, and flight
status. The FlightController class implements basic commands (takeoff, landing, movement in a given
direction with a given speed and time, rotation), which are sent via the MAVLink protocol using MAVSDK.
This integration follows the widely adopted MAVLink protocol, which has become the de facto standard
in UAV communication, as highlighted in [17]. Both PX4 and ArduPilot are supported, which allows
testing in different conditions. The connection to the controller is also flexibly parameterized and can
use the GPIO UART, micro-USB, or a UDP address for debugging with the simulator.</p>
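        <p>A minimal sketch of such a class, assuming the MAVSDK-Python API and a hypothetical serial address (the platform's actual implementation may differ):
import asyncio
from mavsdk import System
from mavsdk.offboard import OffboardError, VelocityBodyYawspeed

class FlightController:
    """Thin wrapper around MAVSDK for connecting and sending basic commands."""

    def __init__(self, address="serial:///dev/ttyAMA0:921600"):  # UART on the SBC; a UDP address works for SITL
        self.address = address
        self.drone = System()

    async def connect(self):
        await self.drone.connect(system_address=self.address)
        async for state in self.drone.core.connection_state():
            if state.is_connected:
                break

    async def takeoff(self):
        await self.drone.action.arm()
        await self.drone.action.takeoff()

    async def move(self, vx, vy, t):
        # body-frame velocity command held for t seconds (offboard mode)
        await self.drone.offboard.set_velocity_body(VelocityBodyYawspeed(vx, vy, 0.0, 0.0))
        try:
            await self.drone.offboard.start()
        except OffboardError:
            return
        await asyncio.sleep(t)
        await self.drone.offboard.set_velocity_body(VelocityBodyYawspeed(0.0, 0.0, 0.0, 0.0))

    async def land(self):
        await self.drone.action.land()</p>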
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Integrated web interface for remote control</title>
        <p>The program starts a built-in web server that broadcasts the video stream and provides an HTML
interface with control buttons, shown in Figure 2. The user can connect to the drone via Wi-Fi from any
device (smartphone, tablet, laptop) without the need to install additional software. An interactive target
selection function is available: clicking on the video determines the coordinates and parameterized size
of the ROI, which can be used for tracking, landing, guidance, or other purposes, as the researcher
implements in his analyzer. There are direct control buttons that send commands such as Arm, Takeoff,
and Velocity directly to the drone. Buttons for parameterizing the analyzer algorithm are also
implemented; the researcher can attach additional logic to them in his processing class.</p>
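        <p>The paper does not tie the implementation to a particular web framework; purely as an illustration, an MJPEG streaming endpoint of this kind could be sketched with Flask (names and routes are assumptions):
import cv2
from flask import Flask, Response

app = Flask(__name__)

def mjpeg_stream(frame_source):
    # yields each processed frame as part of a multipart JPEG stream
    while True:
        frame = frame_source.read()
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/video")
def video():
    return Response(mjpeg_stream(app.config["frame_source"]),
                    mimetype="multipart/x-mixed-replace; boundary=frame")</p>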
        <sec id="sec-4-5-1">
          <title>4.5.1. Logging, synchronization, and experimentation tools</title>
          <p>During experiments, all telemetry data and processed frames are stored with timestamps, which allows
for offline analysis, graphing, and evaluating algorithm behavior. Synchronization is provided through
a single time base. The system stores both input frames and processed frames with annotations or text,
which allows for visual comparison of algorithm efficiency. This functionality is implemented separately,
allowing the researcher not to worry about its implementation and to focus only on his processing class.
An example of the telemetry saved after an experimental flight is shown below (section 4.5.3, Figure 3).</p>
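          <p>A minimal sketch of such timestamp-synchronized logging (file names and fields are assumptions for illustration, not the platform's actual schema):
import csv
import os
import time
import cv2

class ExperimentLogger:
    """Writes telemetry rows and video frames that share a single time base."""

    def __init__(self, csv_path="telemetry.csv", frame_dir="frames"):
        os.makedirs(frame_dir, exist_ok=True)
        self.frame_dir = frame_dir
        self.csv_file = open(csv_path, "w", newline="")
        self.writer = csv.writer(self.csv_file)
        self.writer.writerow(["t", "lat", "lon", "alt_m", "roll", "pitch", "yaw", "frame_file"])

    def log(self, telemetry, frame):
        t = time.time()  # single time base for telemetry and video
        frame_file = os.path.join(self.frame_dir, f"{t:.3f}.jpg")
        cv2.imwrite(frame_file, frame)
        self.writer.writerow([t, telemetry["lat"], telemetry["lon"], telemetry["alt_m"],
                              telemetry["roll"], telemetry["pitch"], telemetry["yaw"], frame_file])
        self.csv_file.flush()</p>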
        </sec>
        <sec id="sec-4-5-2">
          <title>4.5.2. Autonomous command generation (optional extension)</title>
          <p>To implement fully autonomous drone control, the researcher needs to implement another method in
his class:
def generate_vector_command(self):
    # any computation of the movement vector and the duration of the move
    return vx, vy, t</p>
          <p>This allows you to generate commands based on video analysis without the operator’s participation,
for example, to follow an object, fly to the center of the frame, etc. Thus, the system provides a full
cycle: from image to control.</p>
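          <p>Put together, a simplified main loop (a sketch under the assumptions above, not the platform's actual code) illustrates this cycle:
import asyncio

async def control_loop(frame_source, analyzer, fc):
    # full cycle: frame -> analysis -> velocity command -> flight controller
    await fc.connect()
    await fc.takeoff()
    while True:
        frame = frame_source.read()
        analyzer.process_frame(frame)                  # analysis / annotation
        command = analyzer.generate_vector_command()   # optional autonomous extension
        if command is not None:
            vx, vy, t = command
            await fc.move(vx, vy, t)
        await asyncio.sleep(0)                         # yield to other tasks (streaming, logging)</p>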
        </sec>
        <sec id="sec-4-5-3">
          <title>4.5.3. Example of data for post-analysis</title>
          <p>After the experiment, the researcher receives a telemetry database as shown in Figure 3. It contains the
telemetry available from the flight controller. All these records are synchronized with the video frames,
which are also stored and available for post-processing. Figures 4 and 5 show examples of visualizations
that can be easily generated, for example, with the matplotlib package, or used for any other analysis,
since the data is stored in the common .csv format.</p>
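          <p>For instance, altitude over time could be plotted from the saved CSV file with a few lines (column names follow the hypothetical logger sketched above):
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("telemetry.csv")
plt.plot(df["t"] - df["t"].iloc[0], df["alt_m"])
plt.xlabel("Time, s")
plt.ylabel("Relative altitude, m")
plt.title("Altitude during the experimental flight")
plt.show()</p>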
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Preliminary experiments and use cases</title>
      <p>For the initial assessment of the platform’s functionality, several trial runs were carried out with the
participation of several operator researchers, who implemented their algorithms exclusively in one
class with the required 2-3 functions, as indicated in section 4.</p>
      <p>From our side, we provided a unified class interface with clear documentation, specifying only the
minimal required methods. The internal structure of the class, any helper functions, or additional
private methods were left entirely at the discretion of each researcher.</p>
      <sec id="sec-5-1">
        <title>5.1. Object tracking with CSRT algorithm</title>
        <p>One of the initial experiments focused on the use of a classical object tracking algorithm — CSRT
[18]. The operator selected a region of interest (ROI) on the video stream via the web interface, and
the tracker maintained this object in the field of view. The platform automatically generated control
commands to make the UAV follow the object by adjusting its position relative to the moving target.</p>
        <p>This experiment demonstrated the capability of integrating standard OpenCV-based tracking
algorithms into the platform with minimal code: only two methods of the class had to be implemented,
one for frame processing and another for generating movement vectors. The real-time responsiveness
of the tracking loop and smooth command execution confirmed that the system could support simple
visual servoing scenarios.</p>
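        <p>A sketch of such an analyzer, assuming OpenCV's CSRT tracker (from opencv-contrib) and a simple proportional mapping from image offset to velocity (gains and axis conventions are illustrative):
import cv2

class CsrtFollower:
    """Tracks an operator-selected ROI and derives a velocity command from its offset."""

    def __init__(self, gain=0.002, step_time=0.5):
        self.tracker = None
        self.offset = None
        self.gain = gain            # pixels to m/s scaling, tuned experimentally
        self.step_time = step_time  # duration of each velocity command, s

    def set_roi(self, frame, roi):  # roi = (x, y, w, h) selected in the web interface
        self.tracker = cv2.TrackerCSRT_create()
        self.tracker.init(frame, roi)

    def process_frame(self, frame):
        if self.tracker is None:
            return frame
        ok, (x, y, w, h) = self.tracker.update(frame)
        if ok:
            cx, cy = x + w / 2, y + h / 2
            self.offset = (cx - frame.shape[1] / 2, cy - frame.shape[0] / 2)
            cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
        return frame

    def generate_vector_command(self):
        if self.offset is None:
            return None
        dx, dy = self.offset
        # proportional mapping of image offset to body-frame velocities; signs depend on camera mounting
        return self.gain * dy, self.gain * dx, self.step_time</p>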
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Object detection with YOLO</title>
        <p>Another set of experiments involved integrating the YOLO (You Only Look Once) family of neural
networks, particularly using the Ultralytics implementation [19, 20]. The video analyzer class was
extended to include inference using a pre-trained model, with detected objects and bounding boxes
drawn in real time and optionally logged.</p>
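        <p>A minimal analyzer of this kind, assuming the Ultralytics API and a stock pre-trained model (the exact model used in the experiments is not specified here):
from ultralytics import YOLO

class YoloDetector:
    """Runs a pre-trained YOLO model on each frame and returns the annotated image."""

    def __init__(self, weights="yolov8n.pt"):       # a small model suited to an SBC without GPU acceleration
        self.model = YOLO(weights)

    def process_frame(self, frame):
        results = self.model(frame, verbose=False)  # inference on a BGR numpy frame
        return results[0].plot()                    # frame with bounding boxes and labels drawn</p>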
        <p>Although no autonomous commands were issued in this case, the experiment validated that the
platform was capable of real-time deep learning inference on the onboard computer, and that visual
output could be transmitted to the operator. This opens up potential use cases for semantic detection,
object classification, and more complex logic based on scene understanding.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Position holding using visual features</title>
        <p>The third experiment involved implementing a position-holding function based purely on image features.
At the beginning of the flight, the drone captured a reference ROI and extracted SIFT (Scale-Invariant
Feature Transform) keypoints. During flight, each new frame was compared to this reference, and a
homography was computed to estimate the relative displacement of the UAV.</p>
        <p>Based on the displacement vector, a movement command was generated to compensate and return
the drone to its original visual location. This closed-loop control based on SIFT matches demonstrated
the platform’s ability to support advanced visual localization techniques and generate dynamic control
signals in real time.</p>
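        <p>A sketch of this position-holding logic, assuming OpenCV's SIFT implementation, a brute-force matcher with a ratio test, and a homography-based displacement estimate (gains and axis mapping are illustrative):
import cv2
import numpy as np

class SiftPositionHold:
    """Estimates image displacement against a reference frame and compensates for it."""

    def __init__(self, gain=0.002, step_time=0.5):
        self.sift = cv2.SIFT_create()
        self.matcher = cv2.BFMatcher()
        self.ref_kp, self.ref_des = None, None
        self.shift = None
        self.gain, self.step_time = gain, step_time

    def process_frame(self, frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = self.sift.detectAndCompute(gray, None)
        if self.ref_des is None:  # the first frame becomes the reference
            self.ref_kp, self.ref_des = kp, des
            return frame
        matches = self.matcher.knnMatch(self.ref_des, des, k=2)
        good = [m[0] for m in matches if len(m) == 2 and 0.7 * m[1].distance > m[0].distance]
        if len(good) >= 4:
            src = np.float32([self.ref_kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                self.shift = (H[0, 2], H[1, 2])  # translation component of the homography
        return frame

    def generate_vector_command(self):
        if self.shift is None:
            return None
        dx, dy = self.shift
        # move opposite to the estimated image drift; axis mapping depends on camera mounting
        return -self.gain * dy, -self.gain * dx, self.step_time</p>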
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Summary of experimental validation</title>
        <p>These initial experiments demonstrated the flexibility of the proposed architecture and confirmed that it
effectively supports various modes of video-based UAV control, from traditional object tracking to deep
learning and visual localization. Although a full quantitative evaluation is reserved for future studies,
the platform demonstrated practical feasibility, ease of algorithm integration, and stable operation in
real-world UAV flights.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>The paper presents the development of a hardware-software platform focused on research and
experiments on video stream analysis and the formation of movement commands for a drone based on such
analysis. The main hardware components are described, and their choice is justified. The constructed
system demonstrates flexibility at all levels, from the choice of single-board computers and flight
controllers to software components and video processing algorithms. A convenient, well-defined interface
for integrating one's own algorithms has been implemented, which allows researchers not to think about
all the program classes, but to focus on their single analyzer class. Mechanisms for logging telemetry
and video frames, synchronization, and autonomous execution of the movement commands formed by
the analyzer class are implemented separately. Preliminary tests have confirmed the functionality and
ease of use of the platform in research conditions. The paper presents what set of controls is available
to the user, what kind of data the researcher receives after the flight, and how this data can typically
be visualized for analysis.</p>
      <p>In the future, it is planned to expand the system by improving the class interfaces and testing it in more
complex scenarios; by preparing other single-board computers and graphics accelerators for flight, so
that their impact on the speed of the procedures being studied can also be assessed; and by conducting
more in-flight experiments with those procedures that have already been prepared and described in
non-flight simulations, in order to confirm or correct the results already obtained.</p>
      <p>It is also being considered to implement a similar platform, reusing the software part, on another
UAV. The presence and availability of such a functioning platform opens up new opportunities for
experiments.</p>
      <p>Beyond the specific hardware and implementation details, the broader contribution of this work lies
in offering a reproducible and modular baseline for vision-based UAV autonomy experiments. The
clear separation of roles between flight logic, video processing, and control interface ensures ease of
adaptation for various research directions — from academic investigations of perception pipelines to
applied prototyping of autonomous systems. By minimizing the barrier to entry, the platform encourages
more researchers to experiment, evaluate, and deploy intelligent UAV behaviors in real-world scenarios.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Prystavka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Cholyshkina</surname>
          </string-name>
          ,
          <article-title>Estimation of the aircraft's position based on optical channel data</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>3925</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ruzhentsev</surname>
          </string-name>
          , et al.,
          <article-title>Radio-heat contrasts of UAVs and their weather variability at 12 GHz, 20 GHz, 34 GHz, and 94 GHz frequencies</article-title>
          ,
          <source>ECTI Transactions on Electrical Engineering, Electronics, and Communications</source>
          <volume>20</volume>
          (
          <year>2022</year>
          )
          <fpage>163</fpage>
          -
          <lpage>173</lpage>
          . doi:10.37936/ecti-eec.2022202.246878.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>Multi-object tracking meets moving UAV</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>8866</fpage>
          -
          <lpage>8875</lpage>
          . doi:10.1109/CVPR52688.2022.00867.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. A. H.</given-names>
            <surname>Mohsan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. Q. H.</given-names>
            <surname>Othman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Alsharif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <article-title>Unmanned aerial vehicles (UAVs): practical aspects, applications, open challenges, security issues, and future trends</article-title>
          ,
          <source>Intelligent Service Robotics</source>
          <volume>16</volume>
          (
          <year>2023</year>
          )
          <fpage>109</fpage>
          -
          <lpage>137</lpage>
          . doi:10.1007/s11370-022-00452-4.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kadoch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cheriet</surname>
          </string-name>
          ,
          <article-title>A real-time tracking algorithm for multi-target UAV based on deep learning</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>2</fpage>
          . doi:10.3390/rs15010002.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O.</given-names>
            <surname>Sushchenko</surname>
          </string-name>
          , et al.,
          <article-title>Airborne sensor for measuring components of terrestrial magnetic field</article-title>
          ,
          <source>in: 2022 IEEE International Conference on Electronics and Nanotechnology (ELNANO)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>687</fpage>
          -
          <lpage>691</lpage>
          . doi:10.1109/ELNANO54667.2022.9926760.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>