<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Design Exploration of CNN Parameters for Multi-Altitude UAV Object Detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michalis Piponidis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Theocharis Theocharides</string-name>
          <email>ttheocharides@ucy.ac.cy</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>KIOS Research and Innovation Center of Excellence, University of Cyprus</institution>
          ,
          <addr-line>1 Panepistimiou Avenue, 1678 Nicosia</addr-line>
          ,
          <country country="CY">Cyprus</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Object detection using Unmanned Aerial Vehicles (UAVs) introduces unique challenges compared to traditional methods, primarily due to the varying angles and altitudes from which images are captured. Conventional Convolutional Neural Network (CNN) implementations, while being the state-of-the-art in object detection, demand substantial memory and computational resources, posing difficulties for deployment on UAVs with limited onboard resources. Furthermore, these models typically generalize well within a limited distance range by training on images captured from various distances. However, UAVs capture images at a wide range of distances, complicating the model's ability to generalize effectively. In this work, therefore, we present a design space exploration that aims to identify the effect of CNN parameters on UAV-based object detection at various altitudes by examining two critical parameters: input image resolution and network width (number of channels). We conduct extensive experiments to evaluate the effect of these parameters in terms of accuracy and computational efficiency across multiple altitudes. Lower resolutions reduce computational load but may compromise detection accuracy, while higher resolutions enhance accuracy at the expense of increased processing requirements. Similarly, varying the network width influences the balance between model complexity and detection performance. We show that the requirements vary significantly across different altitudes, demonstrating the potential of dynamic network structures that adjust parameters according to the altitude. Our findings provide insights into the optimal configuration of CNN parameters for UAV object detection across different altitudes, contributing to the development of more efficient and adaptable UAV vision systems. This research paves the way for more effective deployment of UAVs in various applications, from surveillance and search-and-rescue to environmental monitoring and beyond.</p>
      </abstract>
      <kwd-group>
        <kwd>Unmanned Aerial Vehicles</kwd>
        <kwd>Object Detection</kwd>
        <kwd>Convolutional Neural Networks</kwd>
        <kwd>Dynamic Neural Networks</kwd>
        <kwd>Design Space Exploration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Unmanned Aerial Vehicles (UAVs), commonly known as drones, are increasingly employed across a
broad spectrum of fields, including but not limited to search and rescue operations [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], emergency
management [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], infrastructure inspection [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], agricultural monitoring [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and environmental monitoring
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Their versatility, characterized by their unmanned nature, ease of deployment, cost-effectiveness,
and capability to capture aerial imagery from various perspectives and altitudes, makes them an
indispensable tool in modern technology.
      </p>
      <p>Object detection on UAVs is a critical task due to its role in enhancing situational awareness, which
is pivotal for safe navigation, obstacle avoidance, and the execution of missions that necessitate the
identification of subjects of interest. However, several challenges are inherent to object detection
using UAVs. Firstly, due to their nature, most UAV applications require the object detection task to be
performed in real-time. On-board computational and power resources are limited, thereby requiring
the deployment of models that are computationally efficient while maintaining the required accuracy
for the task at hand. Secondly, unlike in traditional object detection tasks, UAV-based tasks encounter
images from a wide variety of angles and altitudes, potentially impacting the generalization capabilities
of the model. Furthermore, diverse environmental conditions such as reflections, smoke, and adverse
weather can obscure objects and degrade image clarity, further complicating the detection task.</p>
      <p>
        Existing research in UAV-based object detection demonstrates significant advancements in handling
various challenges posed by UAV imagery. These studies collectively emphasize the importance of
efficient detection methods that satisfy the constraints of UAV platforms. Techniques such as adaptive
feature extraction [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], multi-scale detection [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and lightweight model designs have shown promise in
improving detection accuracy and efficiency. Convolutional Neural Networks (CNNs) represent the
state-of-the-art in object detection, which is why we have chosen to explore them. These networks
typically generalize well within a limited distance range by training on images taken from various
distances. However, UAVs capture images at a wide range of distances, complicating the model’s ability
to generalize effectively due to significant variations in optimal parameter values across large distance
ranges. This highlights a gap in research: the need for dynamic parameter adjustments tailored for UAV
object detection, which is crucial for enhancing real-time performance.
      </p>
      <p>In this study, we examine the effects of two critical parameters, input image resolution and
network width, on the performance and efficiency of CNN models for UAV-based object detection. We
hypothesize that higher input resolutions and wider networks can capture more details and features,
respectively; however, these advantages come with an increased computational load. Our objective
is to identify the optimal values for these parameters across different altitude ranges to optimize the
accuracy-to-performance trade-off. We aim to use these findings to develop a dynamic network
structure capable of adjusting these parameters based on a learnable dynamic decision point (rather than a
static threshold) tailored to each input image. This will be implemented using reconfigurable network
structures (either hardware or software), enabling us to enhance the accuracy-to-performance balance.</p>
      <p>Specifically, the contributions of this research are twofold:
• We perform an extensive evaluation of the effect of varying the two parameters, input resolution
and network width (number of channels), on the accuracy, performance, and memory requirements
of the object detection CNN (tiny YOLOv7) at various altitude ranges.
• We demonstrate that the optimal values of these parameters vary significantly with altitude,
highlighting the need for dynamic adjustments based on real-time altitude data.</p>
      <p>The remainder of this paper is structured as follows: Section 2 reviews the existing related work. In
Section 3, we detail our methodology. Section 4 presents the experimental results. Finally, Section 5
discusses some of the challenges and potential future work and Section 6 concludes our study.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Over the years, numerous advancements have been made in the field of UAV-based object detection,
particularly focusing on improving accuracy and efficiency. Several studies have aimed to address
the unique challenges posed by UAV imagery, such as small object size, high density, and varying
viewpoints.</p>
      <p>
        A significant portion of the literature has focused on improving detection algorithms and network
architectures. Tan et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] proposed a multi-scale UAV aerial image detection method using adaptive
feature fusion to better detect small target objects. They introduced an adaptive feature extraction
module to the backbone network, enabling more accurate small target feature information extraction.
Similarly, Xiaohu et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] presented a dedicated object detector based on the FPN architecture,
incorporating Deformable Convolution Lateral Connection Modules (DCLCMs) and Attention-based
Multi-Level Feature Fusion Modules (A-MLFFMs) to enhance multi-scale object detection. The work
by Liu et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] also targeted small object detection in UAV imagery by optimizing YOLOv3 and the
darknet structure to improve spatial information capture and receptive fields, leading to performance
improvements on small object detection.
      </p>
      <p>
        Comprehensive reviews of deep learning techniques applied to UAV-based object detection have
highlighted the importance of lightweight models for deployment on UAVs with limited computational
resources. Wu et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] explored various CNN architectures like YOLO and Faster R-CNN, demonstrating
the trade-offs between model complexity and computational efficiency. Furthermore, Mittal et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
provided an extensive review of state-of-the-art deep learning-based object detection algorithms,
particularly focusing on low-altitude UAV datasets, and discussed research gaps and challenges in the
field.
      </p>
      <p>
        Real-time object detection in UAV imagery has also garnered significant attention due to its importance
in scenarios like emergency rescue and precision agriculture. Cao et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] systematically reviewed
previous studies on real-time UAV object detection, covering aspects such as hardware selection,
real-time detection paradigms, and algorithm optimization technologies. They emphasized the importance
of lightweight convolutional layers and GPU-based edge computing platforms to meet the demands of
real-time detection.
      </p>
      <p>
        Several innovative methodologies have been introduced to enhance UAV object detection. Bazi et
al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] proposed a convolutional support vector machine (CSVM) network for UAV object detection,
leveraging SVMs as filter banks for feature map generation. This approach was particularly useful for
problems with limited training samples. Zhang et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] introduced a global density fused convolutional
network (GDF-Net) optimized for object detection in UAV images, using a Global Density Model to
refine density features and improve detection performance in congested scenes.
      </p>
      <p>Despite the extensive research in UAV object detection, our work is the first to examine dynamic
parameters for UAV object detection, exploring the need for adaptable and flexible detection methods.
By focusing on dynamic parameter adjustment, our approach aims to address the limitations of static
models in varying UAV operational conditions, enhancing detection accuracy and efficiency across
diverse environments and scenarios.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>The parameter values explored in this study are shown in Table 1. The input image resolutions were
varied from 1088x1088 pixels down to 256x256 pixels, allowing us to investigate the impact of different levels
of detail and computational demands on object detection performance. Additionally, the network
width was systematically adjusted using multipliers ranging from 1x, which represents the original
model structure, down to 0.01x. This adjustment helped us evaluate how the number of channels in
the convolutional layers influenced the model’s detection capabilities and efficiency.</p>
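      <p>The sweep described above amounts to enumerating a grid over the two parameters. A minimal sketch follows; the endpoint values (256 to 1088 pixels, 1x down to 0.01x width) come from the text, while the intermediate step values are illustrative assumptions, not the paper's exact grid from Table 1.</p>

```python
# Enumerate the (resolution, width multiplier) design space described above.
# Endpoints match the text; intermediate steps are illustrative assumptions.
resolutions = [256, 416, 640, 864, 1088]          # square input sizes in pixels
width_multipliers = [1.0, 0.5, 0.25, 0.1, 0.01]   # scale factors on channel counts

configs = [(r, w) for r in resolutions for w in width_multipliers]
print(len(configs))  # 25 configurations to train and evaluate
```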
      <p>For evaluation, we employed several metrics to provide a comprehensive assessment of the model’s
performance. Mean average precision (mAP) was utilized to quantify the accuracy of object detection
across the different parameter settings. To evaluate computational efficiency, we measured the
multiply-accumulate (MAC) operations, which reflect the computational load required for processing, and the
network size in megabytes (MB), which indicates the model’s memory requirements.</p>
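      <p>The MAC metric can be made concrete with the standard cost formula for a convolutional layer: every output activation accumulates k·k·C_in products. The layer dimensions below are illustrative placeholders, not taken from tiny YOLOv7.</p>

```python
# Standard MAC count for one conv layer: each of the H_out*W_out*C_out output
# activations accumulates k*k*C_in products. Applying a width multiplier w
# shrinks both C_in and C_out, so layer MACs scale roughly with w squared.
def conv_macs(h_out, w_out, c_in, c_out, k):
    return h_out * w_out * c_out * (k * k * c_in)

full = conv_macs(208, 208, 32, 64, 3)        # illustrative layer at full width
half_width = conv_macs(208, 208, 16, 32, 3)  # same layer at width multiplier 0.5
print(half_width / full)  # 0.25
```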
      <p>
        For the object detector, we selected the tiny YOLOv7 model [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which is known for its balance
between detection accuracy and computational efficiency. This model is part of the YOLO (You Only Look
Once) family, which is renowned for real-time object detection capabilities. The tiny YOLOv7 variant is
specifically designed to be lightweight, making it particularly suitable for on-board implementation on
UAVs where computational resources and power are limited.
      </p>
      <p>
        As the dataset for our study, we utilized the Multi-Altitude Aerial Vehicles Dataset [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], which
focuses on single-class object detection specifically targeting cars. This dataset was chosen due to its
unique composition of images captured at various altitudes, ranging from 50 meters to 500 meters,
with increments of 50 meters. This range provides a comprehensive platform for experimenting with
different parameter values across diverse altitude levels.
      </p>
      <p>To comprehensively analyze the impact of varying parameters at different altitudes, we segmented
the original dataset into five distinct subsets, each corresponding to specific altitude ranges:
• 50–100 meters
• 150–200 meters
• 250–300 meters
• 350–400 meters
• 450–500 meters</p>
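      <p>Since the dataset's altitudes are multiples of 50 m, each band above covers exactly two capture altitudes, and the split can be sketched as a simple lookup. The record layout and field names here are illustrative assumptions, not the dataset's actual format.</p>

```python
# Partition images into the five altitude bands listed above. Each band covers
# two capture altitudes (e.g. 150 m and 200 m), since the dataset's altitudes
# are multiples of 50 m. Field names ("path", "alt") are hypothetical.
BANDS = [(50, 100), (150, 200), (250, 300), (350, 400), (450, 500)]

def band_name(altitude_m):
    for lo, hi in BANDS:
        if lo == altitude_m or hi == altitude_m:
            return f"{lo}-{hi}m"
    raise ValueError(f"unexpected altitude: {altitude_m}")

subsets = {f"{lo}-{hi}m": [] for lo, hi in BANDS}
for img in [{"path": "a.jpg", "alt": 50}, {"path": "b.jpg", "alt": 300}]:
    subsets[band_name(img["alt"])].append(img["path"])
print(subsets["250-300m"])  # ['b.jpg']
```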
      <p>In addition to these altitude-specific subsets, we also utilized the entire dataset, which includes images
from all altitude ranges. This dataset, referred to as mix_alt, serves as a baseline for comparison against
the individual altitude-specific subsets.</p>
      <p>For each of these six datasets (the five altitude-specific subsets and the mix_alt dataset), we conducted
extensive training experiments. We systematically varied the input image resolution and network
width across all possible combinations while maintaining other parameters at their default values. This
approach allowed us to evaluate the influence of the two parameters on the performance metrics.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Results</title>
      <sec id="sec-4-1">
        <title>4.1. Performance</title>
        <p>The results in terms of mean Average Precision (mAP), with the Intersection over
Union (IoU) threshold set to 0.5, for each dataset and parameter combination are illustrated in Figure 1. As expected,
lower altitude datasets require less computationally intensive models, with fewer parameters, to achieve
high accuracy. This is due to the larger size, greater clarity, and detail available in lower altitude images,
which simplifies the task.</p>
        <p>Conversely, as the altitude increases, the detection accuracy tends to diminish unless higher parameter
values are utilized. This can be attributed to the increased complexity of identifying objects from higher
vantage points, where objects appear smaller and less distinct. Thus, higher altitudes necessitate models
with greater capacity to maintain comparable levels of accuracy, reflecting the need for more detailed
feature extraction and processing capabilities.</p>
        <p>Interestingly, mix_alt demonstrates an intermediate performance, performing better than the higher
altitude-specific models. This is because mix_alt is trained on the whole dataset, including the easier,
lower-altitude images, whereas the higher altitude-specific models are trained only on the hardest
images of the dataset.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Computational Efficiency</title>
        <p>MAC operations are a key indicator of computational efficiency in neural network models, representing
the total number of multiplications and additions needed to process an image. By reducing the number
of MAC operations, we can lower the computational load on the UAV, extending its operational time
and enabling real-time processing.</p>
        <p>In addition to MAC operations, the size of the model is another important factor affecting
computational efficiency. Model size refers to the amount of memory required to store the model parameters,
including weights, biases, and other configurations. This size directly influences the amount of onboard
storage needed for the UAV to operate.</p>
        <p>Figure 2 presents the computational efficiency results for each model configuration. As anticipated,
both MAC operations and model size increase with larger parameter values. Notably, MAC operations
scale uniformly with both parameters, whereas model size is affected far more by the width
multiplier than by the resolution. This suggests that when a smaller model size is the priority, a lower
width combined with a higher resolution is more effective than the reverse.</p>
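        <p>This asymmetry follows directly from the standard per-layer cost formulas: a conv layer's parameter count depends only on kernel size and channel counts, while its MACs additionally multiply in the spatial output size. The layer dimensions below are illustrative, not tiny YOLOv7's.</p>

```python
# Why model size tracks the width multiplier but not the resolution:
# conv parameters = k*k*C_in*C_out + C_out biases, independent of H and W,
# while MACs multiply the weight count by the spatial output size.
def conv_params(c_in, c_out, k):
    return k * k * c_in * c_out + c_out

def conv_macs(h_out, w_out, c_in, c_out, k):
    return h_out * w_out * (k * k * c_in * c_out)

# Doubling the resolution: 4x the MACs, identical parameter count.
print(conv_macs(128, 128, 32, 64, 3) // conv_macs(64, 64, 32, 64, 3))  # 4
# Halving the width: both channel counts shrink, so params drop to ~0.25x.
print(conv_params(16, 32, 3) / conv_params(32, 64, 3))  # roughly 0.25
```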
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Challenges and Future Work</title>
      <p>The most significant challenge is the seamless transition between models during inference. Currently,
switching between models that are optimized for different altitude ranges can be computationally
intensive and may introduce latency, which is detrimental to real-time processing requirements. To
address this, we will explore the potential of dynamic networks that can reconfigure themselves
on-the-fly to adapt to changing altitudes.</p>
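      <p>The intended mechanism can be sketched as an altitude-driven configuration lookup. Note that the paper envisions a learnable decision point rather than fixed thresholds; the static table below, and the specific values in it, are hypothetical placeholders used only to illustrate the reconfiguration idea.</p>

```python
# Minimal sketch of altitude-driven reconfiguration: map the UAV's current
# altitude reading to a (resolution, width) setting. Thresholds and values
# are hypothetical placeholders, not the paper's tuned optima.
CONFIGS = [
    (100, {"resolution": 256, "width": 0.25}),   # up to 100 m: a cheap model suffices
    (300, {"resolution": 640, "width": 0.5}),    # mid altitudes
    (500, {"resolution": 1088, "width": 1.0}),   # high altitudes need full capacity
]

def select_config(altitude_m):
    for ceiling, cfg in CONFIGS:
        if altitude_m - ceiling > 0:
            continue  # above this band's ceiling, try the next band
        return cfg
    return CONFIGS[-1][1]  # beyond 500 m: fall back to the largest model

print(select_config(120)["resolution"])  # 640
```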
      <p>
        Reconfigurable hardware presents a promising solution to this challenge. By utilizing hardware
that can adapt its configuration based on the UAV’s altitude, it is possible to optimize the processing
pipeline for speed and efficiency. Field Programmable Gate Arrays (FPGAs) that support dynamic
reconfiguration could be investigated for this purpose [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>Additionally, exploring algorithmic, non-manual methods for selecting the optimal parameters for
each altitude range will streamline the process and reduce the large amount of computational time and resources
typically required for manual exploration, which scales exponentially as the network size and number
of parameters increase. By employing automated approaches, the system could efficiently determine the
best parameters based on the specific UAV’s capabilities and the task’s required accuracy. These methods
would minimize human intervention, allowing UAVs to dynamically adjust to varying conditions and
mission requirements while achieving an optimal accuracy-performance trade-off. This would enable
the UAVs to adapt in real time without requiring labor-intensive manual tuning, enhancing operational
efficiency.</p>
      <p>In addition to technical advancements in model adaptability, there is a critical need to expand
the datasets used for training and evaluation. Our current dataset has fixed altitudes and limited
environmental diversity, which does not fully capture the complexities encountered in real-world
scenarios. Therefore, we aim to create a comprehensive dataset that includes images captured at varying
altitudes and under diverse environmental conditions such as different weather patterns, times of day,
and locations. This dataset would not only improve the robustness of the detection models but also
provide a more rigorous benchmark for future research in UAV-based object detection.</p>
      <p>Addressing the challenges of parameter optimization, seamless model transition, and dataset diversity
will be crucial for advancing the field of UAV-based object detection. Through a combination of
innovative algorithms, adaptable hardware solutions, and comprehensive data collection, we aim to
significantly enhance the performance and applicability of UAV object detection systems.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>Our study confirms that different altitude ranges necessitate distinct parameters to achieve optimal
accuracy levels in UAV-based object detection. We have demonstrated that input image resolution and
network width are critical factors that must be tuned according to the altitude from which images are
captured.</p>
      <p>The findings indicate that dynamic network structures, which adjust their parameters based on
real-time altitude data, can substantially enhance both the efficiency and performance of object detection
systems deployed on UAVs. Such an approach would not only optimize the accuracy-to-performance
trade-off but also ensure that the computational resources of UAVs are utilized more effectively. This is
particularly important given the limited processing power and battery life of most UAVs.</p>
      <p>In summary, our research highlights the importance of considering altitude-specific parameter
optimization in the design of UAV object detection systems. The use of dynamic network structures
that can adapt to altitude variations presents a promising avenue for developing more flexible, efficient,
and effective UAV vision systems.</p>
      <p>This approach paves the way for advancements in UAV technology, enabling more accurate and
efficient object detection, and ultimately enhancing the capabilities and applications of UAVs across
various fields.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work is supported by the European Union (Horizon 2020 Teaming, KIOS CoE, No. 739551) and
from the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation,
and Digital Policy.</p>
      <p>Views and opinions expressed are however those of the author(s) only and do not necessarily reflect
those of the European Union. Neither the European Union nor the granting authority can be held
responsible for them.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yeom</surname>
          </string-name>
          ,
          <article-title>Thermal image tracking for search and rescue missions with a drone</article-title>
          ,
          <source>Drones</source>
          <volume>8</volume>
          (
          <year>2024</year>
          ). URL: https://www.mdpi.com/2504-446X/8/2/53. doi:10.3390/drones8020053.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bushnaq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Natalizio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Akyildiz</surname>
          </string-name>
          ,
          <article-title>Unmanned aerial vehicles (UAVs) for disaster management</article-title>
          ,
          <year>2022</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>188</lpage>
          . doi:10.1016/B978-0-323-91166-5.00013-6.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Figuli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gattulli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hoterová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ottaviano</surname>
          </string-name>
          ,
          <source>Recent Developments on Inspection Procedures for Infrastructure Using UAVs</source>
          ,
          <year>2021</year>
          . doi:10.3233/NICSP210003.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rejeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdollahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rejeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Treiblmaier</surname>
          </string-name>
          ,
          <article-title>Drones in agriculture: A review and bibliometric analysis</article-title>
          ,
          <source>Computers and Electronics in Agriculture</source>
          <volume>198</volume>
          (
          <year>2022</year>
          )
          <fpage>107017</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0168169922003349. doi:10.1016/j.compag.2022.107017.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rani Hemamalini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vinodhini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Shanthini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Partheeban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Charumathy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cornelius</surname>
          </string-name>
          ,
          <article-title>Air quality monitoring and forecasting using smart drones and recurrent neural network for sustainable development in chennai city</article-title>
          ,
          <source>Sustainable Cities and Society</source>
          <volume>85</volume>
          (
          <year>2022</year>
          )
          <fpage>104077</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S221067072200395X. doi:10.1016/j.scs.2022.104077.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pu</surname>
          </string-name>
          ,
          <article-title>Multi-scale object detection in UAV images based on adaptive feature fusion</article-title>
          ,
          <source>PLOS ONE</source>
          <volume>19</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          . URL: https://doi.org/10.1371/journal.pone.0300120. doi:10.1371/journal.pone.0300120.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Xiaohu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <article-title>Attention-based multi-level feature fusion for object detection in remote sensing images</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <fpage>3735</fpage>
          . doi:10.3390/rs14153735.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Piao</surname>
          </string-name>
          ,
          <article-title>UAV-YOLO: Small object detection on unmanned aerial vehicle perspective</article-title>
          ,
          <source>Sensors</source>
          <volume>20</volume>
          (
          <year>2020</year>
          ). URL: https://www.mdpi.com/1424-8220/20/8/2238. doi:10.3390/s20082238.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <article-title>Deep learning for unmanned aerial vehicle-based object detection and tracking: A survey</article-title>
          ,
          <source>IEEE Geoscience and Remote Sensing Magazine</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>91</fpage>
          -
          <lpage>124</lpage>
          . doi:
          <pub-id pub-id-type="doi">10.1109/MGRS.2021.3115137</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <article-title>Deep learning-based object detection in low-altitude uav datasets: A survey</article-title>
          ,
          <source>Image and Vision Computing</source>
          <volume>104</volume>
          (
          <year>2020</year>
          )
          <elocation-id>104046</elocation-id>
          . URL: https://www.sciencedirect.com/science/article/pii/S0262885620301785. doi:
          <pub-id pub-id-type="doi">10.1016/j.imavis.2020.104046</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kooistra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Valente</surname>
          </string-name>
          ,
          <article-title>Real-time object detection based on uav remote sensing: A systematic literature review</article-title>
          ,
          <source>Drones</source>
          <volume>7</volume>
          (
          <year>2023</year>
          ). URL: https://www.mdpi.com/2504-446X/7/10/620. doi:
          <pub-id pub-id-type="doi">10.3390/drones7100620</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Melgani</surname>
          </string-name>
          ,
          <article-title>Convolutional SVM networks for object detection in UAV imagery</article-title>
          ,
          <source>IEEE Transactions on Geoscience and Remote Sensing</source>
          <volume>56</volume>
          (
          <year>2018</year>
          )
          <fpage>3107</fpage>
          -
          <lpage>3118</lpage>
          . doi:
          <pub-id pub-id-type="doi">10.1109/TGRS.2018.2790926</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Object detection in uav images via global density fused convolutional network</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>12</volume>
          (
          <year>2020</year>
          ). URL: https://www.mdpi.com/2072-4292/12/19/3140. doi:
          <pub-id pub-id-type="doi">10.3390/rs12193140</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bochkovskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-Y. M.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <article-title>YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Makrigiorgis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kyrkou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kolios</surname>
          </string-name>
          ,
          <article-title>How high can you detect? Improved accuracy and efficiency at varying altitudes for aerial vehicle detection</article-title>
          ,
          <source>in: 2023 International Conference on Unmanned Aircraft Systems (ICUAS)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>174</lpage>
          . doi:
          <pub-id pub-id-type="doi">10.1109/ICUAS57906.2023.10156376</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>F.</given-names>
            <surname>Manca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ratto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Palumbo</surname>
          </string-name>
          ,
          <article-title>ONNX-to-hardware design flow for adaptive neural-network inference on FPGAs</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2406.09078. arXiv:2406.09078.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>