<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>WDA'26: International Workshop on Data Analytics</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.15866/irecap.v11i1.19341</article-id>
      <title-group>
        <article-title>Analysis of YOLO neural network overfitting on precision of object detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Danyil Klokta</string-name>
          <email>klokta.danyil@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuliya Averyanova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yevheniia Znakovska</string-name>
          <email>evgeniya.znakovska@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Interregional Academy of Personnel Management</institution>
          ,
          <addr-line>Frometivska str. 2, 03039, Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>WDA'26: International Workshop on Data Analytics</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>3347</volume>
      <fpage>246</fpage>
      <lpage>255</lpage>
      <abstract>
        <p>Currently, detecting foreign object debris (FOD) on runways is a very important task for aviation, providing flight safety, regularity, and economy. This, in turn, requires effective systems that can recognize these objects in real time. A promising instrument for this is the family of single-stage YOLO detectors. In this paper, the single-stage YOLOv8s detector is studied. Although such detectors work quite effectively, they have several problems that need to be solved. One of them is the overfitting effect, whereby the model memorizes noise. This effect results in the deterioration of the ability to recognize and classify objects. In this research, the impact of the overfitting effect on the ability of the YOLO neural network to detect and classify objects was empirically analyzed. To induce the overfitting effect, the conditions were manually made worse than the usual operating conditions for a YOLOv8s model. The results showed that the overfitting effect was achieved. A large difference between the losses (box loss) on the training set and on the validation set was observed. The losses on the training set constantly decreased and were in the range of 0.2, while the losses on the validation set stopped in the range of 0.6 after 400 epochs and moved chaotically in this range. The model also showed problems in classifying complex classes, with lower accuracy than in basic classes. In the "BoltNutSettrack" class, the accuracy was 0.825. The results confirm that despite a large number of epochs and maximum accuracy on the training set, the effect of overfitting can lead to false results on the validation set. Safety requirements in the aviation industry demand that the effect of overfitting be reduced. Therefore, methods of regularization, augmentation, sufficient dataset size, and early stopping were considered.</p>
      </abstract>
      <kwd-group>
        <kwd>FOD detection</kwd>
        <kwd>innovative technologies</kwd>
        <kwd>industry innovation</kwd>
        <kwd>resilient infrastructure</kwd>
        <kwd>YOLOv8s</kwd>
        <kwd>overfitting</kwd>
        <kwd>neural networks</kwd>
        <kwd>sustainable transport</kwd>
        <kwd>aviation safety</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The problem of foreign object debris (FOD) on runways is still important for aviation safety [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. FOD
can cause damage to aircraft, lead to economic problems, and even accidents [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Traditional methods
of detecting FOD on the runway require a lot of human resources and time. Moreover, as it is indicated
in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the FOD detection systems that are most often used in airports, including those based on radar or visible light technology, have limitations on placement and can be susceptible to wind and wind-related phenomena. There are some solutions for small airports that consider the
placement of sensors onto the moving platform. In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] it is indicated that there should be established
technical standards concerning key performance indicators. In papers [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ], the design features of control systems for moving platforms with different types of equipment are considered. General recommendations on foreign object debris detection systems (FODDS) were also discussed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        The latest advances in technologies, including drones, computer vision, and artificial intelligence, as well as the fusion of data from different sensors, become promising for the task of FOD detection. In paper [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
the overview of the latest approaches to FOD detection is presented. It is indicated there that AI-based
methods can be successfully used to identify FOD and non-FOD [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. For example, paper [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposes
and considers the method of FOD detection on the base of YOLOX architecture. The use of AI-based
methods for object classification using aerial data also can be found in [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ]. Neural networks for object detection (such as YOLO) show themselves to be quite promising: they have high speed and quite good precision for performing their tasks. It is necessary to note that the implementation of these innovative technologies and systems in the runway environment is connected with a range of difficulties. The first is the size of the foreign objects: they are very small compared to the runway. The next problems that should be taken into account in FOD detection tasks are the appearance of the runways and the conditions of their inspection. Runways are surfaces of significantly mixed colour, consisting of different shades of the same gradient, and have different covering textures. They are mostly unprotected from varying lighting and weather conditions.
      </p>
      <p>All these conditions impose restrictions on neural networks for foreign object detection, requiring them to be extremely accurate.</p>
      <p>In aviation everything must be as accurate as possible. A neural network that makes such errors can cause flight delays, traffic management complications, and even accidents. Therefore, the reliability of a neural network is very important for the task of FOD detection. The model cannot merely memorize different examples; it must generalize in order to identify objects correctly despite the difficulties connected with the runway conditions.</p>
      <p>Therefore, it is important to consider and resolve the problems connected with utilizing novel and innovative technologies, including AI-based technology, before their implementation in the aviation industry. Neural networks, like many other novel technologies aimed at serving resilient and sustainable infrastructures, have several disadvantages and problems that have to be solved. One of these disadvantages is connected with the phenomenon of neural network overfitting. Overfitting usually occurs in rather complex models and is revealed in the model memorizing noise or some specific characteristics. The result is a poor ability of the model to generalize correctly [14, 15]. As a consequence, the model produces many more errors on the test set compared to the training set. This results in a higher level of false detections when using the FOD neural model.</p>
      <p>The purpose of this paper is the empirical analysis of the overfitting of the You Only Look Once version 8 small (YOLOv8s) neural model on a dataset with special foreign objects on the runway. The analysis of how the overfitting effect influences the prediction errors and how critical this effect can be for tasks of FOD detection on the runway is made by changing characteristics that are responsible for the overfitting effect, metrics, and the number of photos in the dataset.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Theoretical foundations and methodology</title>
      <p>In this work, the You Only Look Once, version 8, small (YOLOv8s) model is used for object detection. The model selection is based on considerations of speed and precision for the purpose of our research [16]. This is a very important indicator, because FOD detection on a runway should be done in a short time to satisfy safety requirements, taking into account air traffic growth and airport capacity.</p>
      <p>YOLOv8s is a single-stage detector. Compared to two-stage detectors, single-stage detectors perform the detection task as a regression task [16]. Two-stage detectors also use a regression task, but it is performed in two phases, which leads to greater precision at the cost of more time. In our case of a single-stage detector, the model simultaneously predicts classes and bounding box coordinates. Due to these peculiarities, single-stage detectors are significantly faster than two-stage detectors, but they have slightly lower precision. This can become a problem when detecting and classifying small objects that are located near each other.</p>
      <p>The RetinaNet model was also considered as an option for object detection. Object detection using the RetinaNet model and a method for distance calculation were proposed and analyzed in [17]. However, the RetinaNet model requires more time and computer resources. Our comparative analysis with RetinaNet [18] shows that YOLOv8s is the best choice for our object detection study.</p>
      <p>Overfitting appears to be a deep problem in machine learning. It occurs when a model adapts too well to the noise and very specific characteristics of the training set. As a result, the model's ability to generalize deteriorates [19].</p>
      <p>Overfitting is detected when the loss function still decreases on the training set but increases on the validation set. We can define this difference with formula (1). Let Θ(t) be the model parameters at epoch t. Overfitting is detected when [16]:
L_train(Θ(t)) &lt; L_train(Θ(t − 1)),
L_val(Θ(t)) &gt; L_val(Θ(t − 1)) + ε,
(1)
where L_train is the training dataset precision loss, L_val is the validation dataset precision loss, and ε is the allowed threshold.</p>
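      <p>As a minimal sketch of the rule in formula (1), assuming per-epoch loss histories are available as plain lists (the function name and the threshold value here are illustrative, not from the original), the criterion can be checked as follows:</p>
      <preformat>
```python
def overfitting_detected(train_loss, val_loss, epsilon=0.05):
    """Check formula (1): training loss still falls while validation
    loss rises by more than the allowed threshold epsilon."""
    flagged = []
    for t in range(1, len(train_loss)):
        # training loss at epoch t is below training loss at epoch t-1
        train_decreases = train_loss[t - 1] > train_loss[t]
        # validation loss at epoch t exceeds the previous value plus epsilon
        val_increases = val_loss[t] > val_loss[t - 1] + epsilon
        if train_decreases and val_increases:
            flagged.append(t)
    return flagged

# Synthetic curves: training loss keeps falling, validation loss turns up.
train = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3]
val = [1.0, 0.9, 0.8, 0.8, 0.9, 1.0]
print(overfitting_detected(train, val))  # -> [4, 5]
```
      </preformat>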
      <p>This can be explained in the following way. The model continues to learn and its precision improves on the training set, but the generalization error continues to increase, exceeding the allowed threshold.</p>
      <p>This indicates that the model is starting to adapt to the training data, while at the same time its performance and its ability to analyze new real data are getting worse. This can lead to object gap errors [19, 20].</p>
      <p>To prevent overfitting, an early-stopping capability was added starting with YOLOv4. Tests showed that the number of epochs decreased, while precision did not become worse [21].</p>
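      <p>Early stopping of this kind is usually implemented as a patience counter on the validation loss. The sketch below is a generic illustration of that mechanism; the class and parameter names are ours, not from YOLOv4 or Ultralytics:</p>
      <preformat>
```python
class EarlyStopper:
    """Stop training when validation loss has not improved
    for `patience` consecutive epochs."""
    def __init__(self, patience=100):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Return True when training should stop."""
        if self.best > val_loss:   # new best value: reset the counter
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(patience=3)
losses = [1.0, 0.8, 0.7, 0.7, 0.75, 0.72, 0.9]
stops = [stopper.step(l) for l in losses]
print(stops)  # -> [False, False, False, False, False, True, True]
```
      </preformat>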
      <p>Several studies in this field confirm the relevance of overfitting for YOLO. The ability of the model to remember background noise is connected with a complex background [21]. This is typical for common runways, especially when the task is to identify dark objects on a dark background. Data augmentation methods are used to avoid this. Data augmentation can be defined as a set of methods that change the characteristics of an object by transforming the photographs and the objects in them.</p>
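      <p>A minimal example of such a transformation is a horizontal flip of an image together with its YOLO-format normalized boxes. The function name below is illustrative; real pipelines would use a library such as Albumentations:</p>
      <preformat>
```python
def hflip_boxes(boxes):
    """Horizontally flip YOLO-format boxes (class, x_center, y_center, w, h),
    with all coordinates normalized to [0, 1]. Only x_center changes."""
    return [(cls, 1.0 - xc, yc, w, h) for cls, xc, yc, w, h in boxes]

# One box on the left side of the frame moves to the right side.
boxes = [(0, 0.25, 0.5, 0.1, 0.1)]
print(hflip_boxes(boxes))  # -> [(0, 0.75, 0.5, 0.1, 0.1)]
```
      </preformat>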
      <p>The problem of overfitting also arises when the FOD object is small. Detection of small objects is probably the weakest point of YOLO. It requires careful control of the training parameters and of the separation of the dataset [22]. This is necessary in order to preserve the ability to generalize and to avoid retaining noise from the training data. Therefore, overfitting analysis is an important part of using the YOLO model in FOD object detection, as well as for all neural networks in general.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Materials and methods</title>
      <p>We used a specialized dataset taken from the Roboflow platform to study and evaluate the modeling of foreign objects (FOD) [23]. The dataset comprises runway images and foreign objects on them. The total number of images in the dataset is 3560, divided into three subsets: training (3214 images), validation (172 images), and testing (approximately 174 images). Each image was labeled and annotated for 11 specific FOD classes. The images mostly have a resolution optimized for training the models, which allows for efficient detection of even small objects. The dataset was reduced to 300 images to achieve the overfitting effect.</p>
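      <p>The split described above can be reproduced with a simple deterministic partition. This is a generic sketch: the seed and helper name are ours, and a real project would rely on the split shipped with the Roboflow dataset:</p>
      <preformat>
```python
import random

def split_dataset(image_ids, n_train=3214, n_val=172, seed=42):
    """Shuffle deterministically and cut into train/val/test subsets."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(3560))
print(len(train), len(val), len(test))  # -> 3214 172 174
```
      </preformat>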
      <p>
        The images have the typical complexity of the runway background, including asphalt surfaces and markings. The dataset covers different training conditions, including daylight or low light, which affects the generalization ability of the model. There is also variation in the size of FOD objects. This is important for evaluating the ability of models to detect small objects or objects of different sizes. Annotations for FOD objects are provided in the corresponding folders. This dataset has realistic photos, which is important for FOD research. All images were scaled to a resolution of 416x416 pixels for training
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>The YOLOv8s model (YOLOv8 Small) was selected and adapted for training. The model was first trained on a large dataset called Common Objects in Context (COCO), which has 80 classes of objects [24]. This allowed us to use already trained features and parameters for our images, as well as to save time, and to continue with the training process.</p>
      <p>The implementation and training of the YOLOv8s model were performed using the PyTorch framework and the Ultralytics YOLOv8 library. The training process was performed in the Google Colaboratory cloud environment. For faster training we used a 16 GB NVIDIA Tesla T4 Graphics Processing Unit (GPU). The stochastic gradient descent (SGD) algorithm was used to optimize the network weights.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conditions for provoking overfitting</title>
      <p>In order to empirically demonstrate the effect of overfitting, the YOLOv8s model was purposely trained under conditions that should cause overfitting, such as:
1. Increased number of epochs: The number of epochs was increased to 1000, which usually exceeds the optimal number required for training. This allows us to capture the onset of model mistakes on the validation set. After this, it is possible to observe the learning process with a higher number of mistakes on the validation set than on the training set.
2. Disabled augmentation: All data augmentation methods were disabled. Augmentation usually allows the model to increase its generalization ability; disabling it leads to faster memorization of specific features of the training set.
3. Disabled built-in Ultralytics early stopping: The early-stopping function was disabled in our training process. By default, the YOLOv8s model can stop training if the model stops improving for a certain number of epochs (100 epochs).
4. Dataset reduction: To induce the overfitting effect, the dataset was reduced to 300 images and their labels were reduced accordingly. This reduction of the number of images leads to an almost exact one-hundred percent achievement of the overfitting effect, which allows us to conduct further analysis.
5. Reduction of the penalty for large weights: The penalty for large weights was reduced to 0, which also has an influence on the overfitting effect, since under normal conditions this parameter allows the model to optimize its weights a little better. In our case, we have no penalty for large weights, which allows the model to create a very complex structure. This, in turn, can lead to good weights for the 300 images from the dataset, but at the same time to quite a lot of errors on the validation set, because the model memorizes the noise and features of the training data.
The following metrics were used for learning process analysis and overfitting detection: bounding box loss, object classification loss, precision, average precision, mean average precision, the normalized error matrix, and the Precision-Recall curve.</p>
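      <p>Under the Ultralytics API, conditions 1-5 roughly correspond to a training configuration like the sketch below. This is an assumption-laden illustration: argument names such as epochs, patience, weight_decay, and the per-transform augmentation gains exist in the Ultralytics YOLOv8 train settings, but exact names and defaults may differ between library versions, and the dataset file name is a placeholder:</p>
      <preformat>
```python
# Hypothetical training configuration reproducing conditions 1-5;
# argument names follow the Ultralytics YOLOv8 train() settings.
overfit_config = dict(
    data="fod.yaml",        # assumed dataset description file
    epochs=1000,            # 1: far beyond the optimal number
    patience=0,             # 3: built-in early stopping disabled
    weight_decay=0.0,       # 5: no penalty for large weights
    imgsz=416,
    optimizer="SGD",
    # 2: per-transform augmentation strengths forced to zero
    hsv_h=0.0, hsv_s=0.0, hsv_v=0.0,
    fliplr=0.0, flipud=0.0, mosaic=0.0, translate=0.0, scale=0.0,
)

# With the ultralytics package installed, training would be launched as:
# from ultralytics import YOLO
# YOLO("yolov8s.pt").train(**overfit_config)
print(overfit_config["epochs"], overfit_config["weight_decay"])
```
      </preformat>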
    </sec>
    <sec id="sec-5">
      <title>5. Empirical demonstration and results</title>
      <p>For a correct estimation of the learning process of the YOLOv8s model and an empirical demonstration of overfitting detection, the main components were analyzed on the FOD training and validation datasets. These include the three main loss metrics, precision, average precision, mean average precision, the normalized error matrix, and the Precision-Recall curve.</p>
      <p>The YOLOv8 loss function commonly has three main loss components.
1. Box Loss evaluates the precision of detecting the spatial location and size of the objects.
2. Distribution Focal Loss is a special loss function used for accurate frame location. It is responsible for centering the frame in the middle of the object.
3. Class Loss is the metric responsible for classifying objects (e.g., a nut or a screw). It compares the predictions with the corresponding labels provided at input.</p>
      <p>To evaluate the learning process, we will use the loss function (2) used in YOLOv8, which is calculated as a weighted sum of these three components [16]:
L = λ_box L_CIoU + λ_cls L_cls + λ_dfl L_DFL,
(2)
where L_CIoU is the Complete Intersection over Union (CIoU) loss, which aims to reduce the error between the predicted frame and the real one; L_cls is the classification loss for correct object classification; L_DFL is the Distribution Focal Loss and defines the frame borders; λ_box, λ_cls, and λ_dfl are the corresponding weights.</p>
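      <p>Formula (2) is just a weighted sum, as the following sketch shows. The default weight values here are placeholders chosen for illustration and may not match the gains a given YOLOv8 version actually uses:</p>
      <preformat>
```python
def total_loss(l_ciou, l_cls, l_dfl, w_box=7.5, w_cls=0.5, w_dfl=1.5):
    """Weighted sum of the three YOLOv8 loss components, formula (2)."""
    return w_box * l_ciou + w_cls * l_cls + w_dfl * l_dfl

print(total_loss(0.2, 0.4, 0.6))  # 7.5*0.2 + 0.5*0.4 + 1.5*0.6 = 2.6
```
      </preformat>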
      <p>Theoretically, minimization of L leads to accurate detection. However, as we demonstrate below, minimizing L does not guarantee performance on new data.</p>
      <p>As the results of our experiments show the largest error in determining the object's bounding box, it is important to define the Box Loss function L_box [16]. YOLOv8 uses CIoU to reduce the geometric error (3):
L_box = 1 − IoU + ρ²(b, b_gt)/c² + αv,
(3)
where IoU is the Intersection over Union ratio; b and b_gt indicate the center points of the predicted box and the ground truth box correspondingly; ρ(·) is the Euclidean distance between these center points; c is the diagonal length of the smallest box that covers both boxes; and αv is a penalty for incorrect aspect ratio recognition.</p>
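      <p>The CIoU term of formula (3) can be computed directly for axis-aligned boxes. The sketch below is our own helper, with boxes given as (x1, y1, x2, y2) corners, following the standard CIoU definition rather than the exact Ultralytics implementation:</p>
      <preformat>
```python
import math

def ciou_loss(pred, gt):
    """CIoU loss, formula (3): 1 - IoU + center-distance term + aspect term.
    Boxes are (x1, y1, x2, y2) corners with x2 and y2 the larger coordinates."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Intersection over Union
    ix = max(0.0, min(px2, gx2) - max(px1, gx1))
    iy = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = ix * iy
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union

    # Squared center distance over squared enclosing-box diagonal
    rho2 = ((px1 + px2) - (gx1 + gx2)) ** 2 / 4 + ((py1 + py2) - (gy1 + gy2)) ** 2 / 4
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency penalty alpha * v
    v = (4 / math.pi ** 2) * (
        math.atan((gx2 - gx1) / (gy2 - gy1)) - math.atan((px2 - px1) / (py2 - py1))
    ) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0

    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```
      </preformat>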
      <p>The difference of this metric on the validation set tells us that the model cannot determine the precise location of our objects.</p>
      <p>In Figure 1 the overall estimation of our training is shown. The graphs show the relationship between the epochs and the losses. The first graph demonstrates the box loss on the training part of our dataset. It can be seen in Figure 1 that the Box Loss constantly decreases on the training data, but on the validation data this indicator moves to a value of 0.6, then stops and continues to move very chaotically, demonstrating jumpy movements during the whole training process. Moreover, in the end part of the validation box loss graph (first one in the second row) the losses begin to grow, while on the training set (first graph in the first row) they still continue to decrease. As a result, we have a large gap between 0.2 (training part) and 0.6 (validation part). The model predicts good results on familiar data, but on new data it places bounding boxes about three times worse.</p>
      <p>Let us consider the Distribution Focal Loss (third graph in the first and the second row). We can observe a similar result. On the training data, the Distribution Focal Loss constantly decreases, but the validation data after about 300 epochs demonstrate chaotic movement with significant vertical fluctuations (from 0.5 to 1.2) at the beginning. The difference between the validation and training data can be seen in the smoothed value of the losses.</p>
      <p>All other metrics, precision and mean Average Precision (mAP) 50-95, show similar chaotic movement, but the model still identifies the objects correctly in most cases. The results show that the YOLOv8s model is quite accurate in ideal situations, identifying objects even with the considered parameters.</p>
      <p>The training performance of the YOLO model is also evaluated using the Precision and Recall metrics. The formulas look like this [16]:
Precision = TP / (TP + FP),
Recall = TP / (TP + FN),
(4)
where TP means True Positives, FP means False Positives, and FN means False Negatives.</p>
      <p>The Precision-Recall curve is given in Figure 2. It shows the relationship between these indicators through the calculated values. The overall performance is 0.980. This value demonstrates a rather good result for our considered situation. But there is one class that shows weakness, namely BoltNutSettrack_idkeyframe. It is marked by the orange line and has a much lower Average Precision (AP) of 0.825 than the other classes (almost all 0.995).</p>
      <p>The curve for this class drops off sharply when Recall exceeds about 0.8. This means that the model starts making a lot of false positive predictions (FPs) while trying to find the last 20% of true objects in this class. This means that the model does not identify this class well enough.</p>
      <p>Let us analyze the normalized error matrix presented in Figure 3. It shows quite good precision considering our parameters. The majority of values equal 1, which indicates accurate classification of objects. Despite this, we have problems in some classes. These are clearly seen in the Wiretrack_idkeyframe and Hummertrack_idkeyframe classes, which give a precision error of 37%. We also have a precision error of 11-12% for BoltNutSettrack_idkeyframe, ClampParttrack_idkeyframe and Pentrack_idkeyframe.</p>
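      <p>A normalized error (confusion) matrix of this kind is obtained by dividing each row of raw counts by its row sum, so the diagonal shows per-class accuracy. The sketch below uses made-up counts for two illustrative classes:</p>
      <preformat>
```python
def normalize_rows(matrix):
    """Divide each row of a confusion matrix by its sum, so every row
    becomes a distribution over predicted classes."""
    result = []
    for row in matrix:
        total = sum(row)
        result.append([round(x / total, 2) for x in row])
    return result

# Rows: true class, columns: predicted class
counts = [[63, 37],   # a hard class, confused 37% of the time
          [0, 100]]   # an easy class, always correct
print(normalize_rows(counts))  # -> [[0.63, 0.37], [0.0, 1.0]]
```
      </preformat>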
      <p>We can conclude from the analysis of the normalized error matrix that in most cases the overall precision is high and equals 100%, but for some classes the performance is 70-80%. Commonly, this is not bad precision, but for the task of FOD detection on a runway it is not a sufficient indicator, as the correctness of FOD detection directly influences aviation safety, especially when better precision can be achieved by avoiding the settings that lead to overfitting.</p>
      <p>Figure 4 shows the results.csv file and demonstrates the proof of overfitting. In Figure 4 the Box Loss metric on the training data is constantly decreasing. The relatively low values in Figure 4, such as 0.255-0.265, mean rather good results. However, the Box Loss metric on the validation data after 400 epochs begins to grow and reaches values of 0.58-0.62. This is approximately 2.5 times higher than on the training set.</p>
      <p>The box loss result means that the drone can find some foreign object debris, but it can be wrong and inaccurate in determining their location. This can result in the necessity to repeat the runway check by the ground crews or to make an additional check with the drone in the same area. Obviously, this is a waste of resources and time.</p>
      <p>Figure 5 shows the cls loss. The results of Figure 5 again seem ineffective. The validation cls loss also decreases together with the training cls loss at first, then it stops around values of 0.35-0.39. While the training cls loss has a trend to decrease, the validation cls loss stops decreasing after 700 epochs. This can mean that some objects are not classified correctly. The same can be seen in Figure 2 and Figure 3. Therefore, we can conclude that some objects, especially small ones, cause difficulties for the neural model.</p>
      <p>The discussed indicator can be considered as direct empirical proof of overfitting. It shows the model's adaptation to noise and to the specific peculiarities of the training set, while generalization on the validation data continues to deteriorate. The research on the losses demonstrates that model overfitting means worse predictions of the actual position and classification of FOD objects. In case of use in the aviation industry, this is unacceptable, as it means a decrease of safety.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>In this research, we studied the effect of overfitting on the YOLOv8s neural model applied to foreign object detection on the runway. The overfitting was achieved empirically. The obtained results show the relationship between overfitting and classification quality. The analysis of real data was made. To study the effect of overfitting, the parameters of the YOLOv8s model were changed. These include a huge number of epochs (up to 1000), turning off augmentation and early stopping, and the absence of regularization (reducing the weight penalty to 0). The dataset was reduced to 300 images. The main results confirm the overfitting hypothesis. The main loss metrics showed a steady decrease of error on the training set, as well as chaotic movement and stagnation of the error at values of 0.58-0.62 on the validation part of the dataset. This can be considered as direct proof of the loss of model generalization skill and of the fact that the model remembers the noise and specific features of the training dataset.</p>
      <p>The other important metrics were also studied. They include the Precision-Recall curve, the normalized error matrix, precision, average precision, and mean average precision. Their study and analysis demonstrate that the overfitting effect was achieved. The obtained results indicate that it is important to control the parameters of the neural model and to use methods that help avoid overfitting (for example, early stopping and augmentation). This is critically important for aviation safety when performing FOD detection.</p>
      <p>We also emphasize that training beyond the optimal number of epochs (about 150-200) does not give any better results. At the same time, longer training may require more machine resources. This confirms the fact that determining the optimal number of epochs is also very important for optimizing the time and energy spent on training. Furthermore, overfitting influences the classification of complex classes, such as “BoltNutSettrack”, rather severely. As shown in the Precision-Recall curve analysis, we have an average precision of 0.825. This indicates that even the overtraining effect, such as memorizing noise and very fine details of objects, does not allow the model to correctly recognize a sufficient number of details of the complex class.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>ICAO</surname>
          </string-name>
          ,
          <source>Icao doc 10004: Global aviation safety plan 2023-2025</source>
          ,
          <year>2023</year>
          . URL: https://www2023.icao.int/safety/GASP/Documents/10004_en.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Shan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Miccinesi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Beni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pagnini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cioncolini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pieraccini</surname>
          </string-name>
          ,
          <article-title>A review of foreign object debris detection on airport runways: Sensors and algorithms</article-title>
          ,
          <source>Remote Sensing</source>
          <volume>17</volume>
          (
          <year>2025</year>
          ). URL: https://www.mdpi.com/2072-4292/17/2/225. doi:10.3390/rs17020225.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>ICAO,</surname>
          </string-name>
          <article-title>The application of fod detection equipment on airport pavement, 2025</article-title>
          . URL: https://www.icao.int/sites/default/files/Meetings/a42/Documents/WP/wp_598_en.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Sushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bezkorovainyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Solomentsev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaliskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Holubnychyi</surname>
          </string-name>
          , I. Ostroumov,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Averyanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ivannikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          , I. Bovdui,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nikitina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Voliansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cherednichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sokolova</surname>
          </string-name>
          ,
          <article-title>Algorithm of determining errors of gimballed inertial navigation system</article-title>
          , in:
          <string-name>
            <given-names>O.</given-names>
            <surname>Gervasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Murgante</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Garau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Taniar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M. A. C.</given-names>
            <surname>Rocha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Faginas Lago</surname>
          </string-name>
          (Eds.),
          <source>Computational Science and Its Applications - ICCSA 2024 Workshops</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>206</fpage>
          -
          <lpage>218</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaliskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Solomentsev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Holubnychyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ostroumov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Averyanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bezkorovainyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cherednichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sokolova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ivannikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Voliansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bovdui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nikitina</surname>
          </string-name>
          ,
          <article-title>Methodology for substantiating the infrastructure of aviation radio equipment repair centers</article-title>
          , in:
          <string-name>
            <given-names>I.</given-names>
            <surname>Ostroumov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Slimani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaliskyi</surname>
          </string-name>
          , T. L. (Eds.),
          <source>CEUR Workshop Proceedings</source>
          , volume
          <volume>3732</volume>
          ,
          CEUR-WS
          ,
          <year>2024</year>
          , pp.
          <fpage>136</fpage>
          -
          <lpage>148</lpage>
          . URL: https://ceur-ws.org/Vol-3732/paper11.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O.</given-names>
            <surname>Solomentsev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaliskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Holubnychyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ostroumov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bezkorovainyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Averyanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ivannikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bovdui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nikitina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Voliansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cherednichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sokolova</surname>
          </string-name>
          ,
          <article-title>Efficiency analysis of current repair procedures for aviation radio equipment</article-title>
          , in: I. Ostroumov, M. Zaliskyi (Eds.),
          <source>Proceedings of the 2nd International Workshop on Advances in Civil Aviation Systems Development</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>295</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>ICAO</surname>
          </string-name>
          ,
          <article-title>Enhancing aviation safety and efficiency: Recommendation for foreign object debris detection systems (FODDS)</article-title>
          ,
          <year>2024</year>
          . URL: https://www.icao.int/sites/default/files/APAC/Meetings/2024/2024%20AP-ADO-TF-5/3-Working%20Papers/WP14-AI-8-AAPA-FSF-FODDS-Paper-final.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>A. G V</surname>
          </string-name>
          , P. R,
          <article-title>The modern approaches for identifying foreign object debris (FOD) in aviation</article-title>
          , in:
          <source>2024 International Conference on Integrated Circuits and Communication Systems (ICICACS)</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . doi:10.1109/ICICACS60521.2024.10498909.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Klokta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Znakovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Averyanova</surname>
          </string-name>
          ,
          <article-title>Unmanned technologies and drones in runway service</article-title>
          , in:
          <string-name>
            <given-names>O.</given-names>
            <surname>Lytvynov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pavlikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Krytskyi</surname>
          </string-name>
          (Eds.),
          <source>Integrated Computer Technologies in Mechanical Engineering - 2024</source>
          , Springer Nature Switzerland, Cham,
          <year>2025</year>
          , pp.
          <fpage>230</fpage>
          -
          <lpage>241</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Taupik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Alamsyah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wulandari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. U.</given-names>
            <surname>Armin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hikmaturokhman</surname>
          </string-name>
          ,
          <article-title>Airport runway foreign object debris (FOD) detection based on YOLOX architecture</article-title>
          , in:
          <source>2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>40</fpage>
          -
          <lpage>43</lpage>
          . doi:10.1109/ICCoSITE57641.2023.10127676.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Prystavka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dukhnovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kovtun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Leshchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Cholyshkina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>Recognition of aerial photography objects based on data sets with different aggregation of classes</article-title>
          ,
          <source>Eastern-European Journal of Enterprise Technologies</source>
          <volume>1</volume>
          (
          <year>2023</year>
          )
          <fpage>6</fpage>
          -
          <lpage>13</lpage>
          . URL: https://journals.uran.ua/eejet/article/view/272951. doi:10.15587/1729-4061.2023.272951.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zivakin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kozachuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prystavka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Cholyshkina</surname>
          </string-name>
          ,
          <article-title>Training set AERIAL SURVEY for data recognition systems from aerial surveillance cameras</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Anisimov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Snytyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mallet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tanaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Henke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Chertov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Marchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bozóki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Tsyganok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vovk</surname>
          </string-name>
          (Eds.),
          <source>Selected Papers of the IX International Scientific Conference "Information Technology and Implementation" (IT&amp;I-2022). Conference Proceedings, Kyiv, Ukraine</source>
          , November
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>