<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Training set AERIAL SURVEY for Data Recognition Systems From Aerial Surveillance Cameras</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Valery Zivakin</string-name>
          <email>Zivakin1993@gmai.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Kozachuk</string-name>
          <email>oleksandrkozachukk@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pylyp Prystavka</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olha Cholyshkina</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Interregional Academy of Personnel Management</institution>
          ,
          <addr-line>Frometivska St. 2, Kyiv, 03039</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>Liubomyra Huzara ave. 1, Kyiv, 03058</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>246</fpage>
      <lpage>255</lpage>
      <abstract>
<p>Recognition of terrain elements and objects from aerial image data obtained from aircraft air surveillance cameras, UAVs and high-resolution satellite images is a complex task that requires numerous factors to be taken into account. The AERIAL SURVEY training set created for intelligent systems makes it possible to solve a wide range of scientific and practical problems in the field of machine learning and neural network recognition of terrain elements. The purpose of this paper is to show the dynamics of the research conducted in creating the AERIAL SURVEY training set, to identify problem areas, and to outline promising directions for the further development of this area. The training set already makes it possible to solve a number of video analytics tasks, including dual-use ones: aerial monitoring of roads, agricultural fields, forests and water bodies, aerial reconnaissance with target search in the combat zone, and use in tracking and target designation systems. In the course of further research, the aerial survey data set will allow solving such problems as filtering the excessive information content of terrain images, automated formalization and description of new unlabeled classes, the transition from recognition of individual terrain elements to a structural description of the surveyed area in order to automate its georeferencing, and many others.</p>
      </abstract>
      <kwd-group>
        <kwd>Neural network element recognition</kwd>
        <kwd>video analytics</kwd>
        <kwd>monitoring</kwd>
        <kwd>filtering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>The success of the widespread introduction of artificial intelligence technologies over the past decade is largely due to the development of machine learning methods, in particular deep learning. That is why the value of the data sets on which intelligent systems for various purposes are created and trained keeps increasing. In recent years, issues of data collection and preparation for learning have formed an integral part of a separate area, machine learning engineering. The result of deploying a machine learning system or model depends on how adequate the data are to the goals for which the system is created. With machine learning, the researcher is never limited to a linear data processing algorithm: the learning model can be changed and improved depending on the purpose and circumstances of the applied task, which may require introducing additional hyperparameters or involving additional data. Of course, we are talking about complex applied problems whose solution runs into scientific and technical challenges.</p>
      <p>One of these tasks is the recognition of terrain elements and objects from aerial photography data
obtained from aircraft surveillance cameras, UAVs and high-resolution satellite images as a special case
of Earth remote sensing data. The complexity of this task is due to a large number of factors, so here
are the most important ones (the order of mention is not critical):</p>
<p>2022 Copyright for this paper by its authors.</p>
      <p>• Various shooting heights lead to the necessity of recognizing objects at different scales and with different levels of detail;
• Objects must be recognized in different seasons, under various meteorological conditions and at different times of day;
• Different shooting angles create perspective distortion of the recognition targets;
• Interference arises during shooting: from aberration or micro-movement of the camera fixed on board the aircraft to distortion caused by ground-based means such as electronic warfare;
• It is often necessary not only to identify objects in the image but also to solve related tasks, such as determining the location of the shooting frame or processing a large amount of information per unit of time, preferably in near real time.</p>
      <p>In the complex solution of the problem of recognition of aerial data, special attention is required to
the formation of a high-quality training set, on the basis of which it is already possible to form a problem
statement for the creation of various machine learning systems, both surface and deep.</p>
<p>The relevance of work on creating a training set of aerial survey data is especially high for Ukraine as an aviation state that actively develops and uses various unmanned aerial vehicles. With the outbreak of the war with the Russian Federation, the use of recognition systems in processing aerial reconnaissance and air surveillance data in the combat zone acquired particular importance. According to order [1] of the Cabinet of Ministers of Ukraine dated 30.08.2017 No. 600-r “Some issues of the development of critical technologies in the field of production of weapons and military equipment” and its Addition 1 as amended on 23.02.2022, the list of critical technologies in the field of production of weapons and military equipment includes, but is not limited to:</p>
      <p>• Technologies for figurative interpretation, selection and classification of targets for homing
systems of high-precision weapons;
• Technologies for controlling robotic platforms, identifying, recognizing and tracking targets;
• Technologies of machine learning, artificial intelligence, neural networks for the design,
production and operation of military and special equipment;</p>
      <p>• Technologies for coding, transmission and receipt (automatic recognition, processing, analysis,
generation, visualization) of information.</p>
<p>As can be seen, it is difficult to overestimate the importance of creating, filling and using a training set for the recognition of data from aerial surveillance cameras (aerial photography data).</p>
      <p>
        That is why, starting from 2018, the Department of Applied Mathematics of the National Aviation
University [
        <xref ref-type="bibr" rid="ref1">2</xref>
        ] began work in this area, in particular, the creation and filling of the Aerial Survey training
set was started [
        <xref ref-type="bibr" rid="ref2 ref3">3, 4</xref>
        ]. First of all, the authors consider this data set a launching pad for the deployment, research and continuous improvement of machine learning systems. During the creation of the AERIAL SURVEY set, the department's scientists, students and graduate students solved various problems, improving both the training set and the systems and information technologies based on it. The purpose of this paper is to show the dynamics of the research conducted, to outline problem areas and to identify promising directions for the further development of this area.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Analysis of sources and studies</title>
      <p>Let us review, based on open sources, the existing developments in the field of training sets of aerial observation data (or data suitable for such processing), and consider the deep learning technologies that have the prospect of being used with such sets.</p>
      <p>Conventionally, the formation of sets can be divided into two options:
1. Formation of the most universal set for a wider coverage of possible tasks.
2. Formation of a dataset for a specific task.</p>
      <p>
        Of course, the first option is preferable when time is abundant, and the second when it is not. Thus, in [
        <xref ref-type="bibr" rid="ref4">5</xref>
        ], the Inria dataset is presented, which was used for automatic pixel-wise labeling of aerial images. The coverage of 810 km² is labeled into two classes: building and non-building. At the same time, the “captain Whu” team focused on presenting the most representative datasets for solving segmentation problems [
        <xref ref-type="bibr" rid="ref5 ref6">6, 7</xref>
        ], object detection [
        <xref ref-type="bibr" rid="ref7">8</xref>
        ], etc. It is logical to assume that over time any set needs to be supplemented and verified. Such sets are the most valuable.
      </p>
      <p>
In general, a large amount of data is currently freely available. For example, since 2006 the Environment Agency has been publishing sets collected from aircraft, covering from a few square kilometers to hundreds. Images are provided as a raster dataset in ECW (Enhanced Compression Wavelet) format, as a true color (RGB), near infrared (NIR), or 4-band (RGBN) dataset. If an image was taken under incident response conditions and/or when lighting conditions were not optimal, it is identified by the IR prefix. The set is constantly updated (last updated March 28, 2022) [
        <xref ref-type="bibr" rid="ref8">9</xref>
        ]. In addition, on the Kaggle resource one can find high-quality data sets for solving various problems, for example, a set for semantic segmentation [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ]
or Semantic Drone Dataset [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ].
      </p>
      <p>
For working with such a data set, the types of neural networks typical for image processing are suitable: convolutional networks (CNN), networks with an autoencoder, and generative adversarial networks (GAN). Thus, it is relevant to process the set with the VGG-16 network, a deeper successor of AlexNet [
        <xref ref-type="bibr" rid="ref11">12</xref>
        ], with ResNet50 as a powerful base model [
        <xref ref-type="bibr" rid="ref12">13</xref>
        ], to conduct research with a convolutional autoencoder [
        <xref ref-type="bibr" rid="ref13">14</xref>
        ], which yields various types of representations, and to process the data with complex models such as Mask R-CNN [
        <xref ref-type="bibr" rid="ref14">15</xref>
        ] and DCNN [
        <xref ref-type="bibr" rid="ref15">16</xref>
        ].
      </p>
      <p>
        The YOLO network [
        <xref ref-type="bibr" rid="ref16">17</xref>
        ] is also well suited for real-time object detection. Unlike previous object detection methods, which repurpose classifiers for detection, YOLO uses a single-pass approach: the neural network simultaneously predicts bounding boxes and class probabilities. YOLO delivers state-of-the-art results, surpassing previous real-time object detection methods. There are various versions of the network; some of them do not add a new model architecture to the YOLO family but instead provide a new PyTorch training and deployment system that improves on the state of the art for object detectors.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Presenting main material</title>
      <p>At the moment, the images in the “Aerial survey” dataset, which are the subject of this publication,
are divided into 16 classes and have the structure shown in Fig. 1.</p>
      <p>In total, the dataset consists of almost eighty thousand 64x64-pixel images. In 2018, the first iteration of forming the dataset and of image-processing work on it was carried out. At that time, the set consisted of about two thousand 64x64-pixel images divided into four classes: water, road, building, and a target class [18, CNN-technology for recognition of objects for aerial photography data]. The study used a convolutional neural network consisting of two convolutional blocks and a multilayer perceptron. Three experiments were carried out in which the dataset was divided into training and test parts of different sizes, and the neural network was trained for different numbers of epochs (50, 60 and 80). In each of them, the training and test data were shuffled randomly five times. The summarized results are presented below in Tab. 1.</p>
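      <p>The two-convolutional-block network described above can be sketched as a forward pass over one 64x64 tile. The layer widths, kernel sizes and weights below are illustrative assumptions, not the exact architecture from the study; the sketch only shows how the shapes flow from a 64x64 input down to four class probabilities.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid convolution of an (H, W, C_in) image with (k, k, C_in, C_out) kernels."""
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.empty((H - k + 1, W - k + 1, kernels.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

def maxpool2(x):
    """2x2 max pooling, dropping any odd remainder row/column."""
    H, W, C = x.shape
    H2, W2 = H // 2, W // 2
    return x[:H2 * 2, :W2 * 2, :].reshape(H2, 2, W2, 2, C).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative weights: two conv blocks, then a dense softmax head over 4 classes.
x = rng.random((64, 64, 1))                      # one 64x64 grayscale tile
w1 = rng.standard_normal((3, 3, 1, 8)) * 0.1
w2 = rng.standard_normal((3, 3, 8, 16)) * 0.1
h = maxpool2(np.maximum(conv2d(x, w1), 0))       # 64 -> 62 -> 31
h = maxpool2(np.maximum(conv2d(h, w2), 0))       # 31 -> 29 -> 14
w_out = rng.standard_normal((h.size, 4)) * 0.01  # 4 classes: water, road, building, target
probs = softmax(h.reshape(-1) @ w_out)
print(h.shape, probs.shape)                      # (14, 14, 16) (4,)
```

      <p>In the actual study, the multilayer perceptron head would of course be trained on the labeled tiles rather than use random weights as here.</p>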
      <p>As of 2019, the dataset had grown to 16,000 images divided into eleven classes: buildings, forests, fields, water bodies, roads, vehicle traces, civil engineering, poles, and three target classes. In addition, a study was carried out on a new type of network, a convolutional autoencoder with a classifier [18, Automated object recognition system based on aerial survey]. The results of the classifier, in the most general form, are presented in the diagram (Fig. 3); they are presented in more detail in the diagram (Fig. 2).</p>
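      <p>The autoencoder-with-classifier idea can be sketched in miniature: compress images to low-dimensional codes, verify the codes can reconstruct the input, then classify in code space. For brevity the sketch below uses a linear (PCA-like) encoder/decoder and a nearest-centroid classifier on synthetic data; the study itself used a convolutional architecture, so everything here is an illustrative stand-in.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for flattened image vectors from two classes, offset in a few dimensions.
n, d, code_dim = 200, 256, 8
base = rng.standard_normal((n, d))
labels = np.repeat([0, 1], n // 2)
base[labels == 1, :4] += 4.0                 # make class 1 separable in 4 dimensions

# Linear "encoder": top principal directions of the centered data (via SVD).
mean = base.mean(axis=0)
_, _, vt = np.linalg.svd(base - mean, full_matrices=False)
encode = lambda x: (x - mean) @ vt[:code_dim].T
decode = lambda z: z @ vt[:code_dim] + mean

codes = encode(base)
recon_err = np.mean((decode(codes) - base) ** 2)   # reconstruction quality of the codes

# Classifier head on the codes: nearest class centroid.
centroids = np.stack([codes[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((codes[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == labels).mean()
print(codes.shape, round(recon_err, 3), accuracy)
```

      <p>The point of the design is that the classifier operates on an 8-dimensional code rather than the 256-dimensional input, which is the same dimensionality-reduction role the convolutional encoder plays in the study.</p>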
      <p>Work also continued with the convolutional classifier network. Additional images were added and augmented (up to 16 thousand), and the classes were redistributed as follows: buildings, buildings between trees, forest, roads, roads between buildings, roads between trees, vegetation field, non-vegetation field, waters between trees, technics footprints, trenches, military and civil technics [18, Neural network recognition of image classes from aerial survey]. On the whole, the work done improved the quality of recognition (Fig. 3).</p>
      <p>In 2020, the number of images used in the studies was increased to 22,000. In addition, as the analysis of previous recognition results (Fig. 2, 3) shows, the “technics footprints” class, which is closely related to the recognition of roads, in particular dirt roads and various paths, required special attention. Therefore, in [18, Information technologies for recognizing road classes based on aerial survey data], new dataset elements with different types of roads were created, about 40 thousand of the total number of images. Subsequently, a more detailed analysis of representations was carried out for architectures based on the autoencoder model. In the same work, the principal component method was applied to the representations obtained from the encoder; it turned out that the first three components contain more than 50% of the variability, with the first accounting for 28.9%, the second 12.3%, and the third 8.9%.</p>
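      <p>The principal-component step can be reproduced on any matrix of encoder representations by taking the SVD of the centered matrix and normalizing the squared singular values. The data below are synthetic, so the resulting ratios are illustrative and do not reproduce the 28.9/12.3/8.9% figures from the study.</p>

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic encoder representations: a few dominant directions plus noise.
n, d = 500, 32
scales = np.linspace(3.0, 0.3, d)             # decaying variance per latent direction
reps = rng.standard_normal((n, d)) * scales

# PCA via SVD of the centered matrix; explained variance ratio per component.
centered = reps - reps.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
ratio = s ** 2 / np.sum(s ** 2)               # sorted in descending order by the SVD

top3 = ratio[:3].sum()                        # share of variability in the first 3 PCs
print([round(r, 3) for r in ratio[:3]], round(top3, 3))
```

      <p>The same `ratio` vector is what the study reads off to decide how many components (here three, later ten) capture enough of the variability for visualization and clustering.</p>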
      <p>Through the application of the principal components method, two-dimensional and three-dimensional models of data variability were built (Fig. 4). In the course of testing the classifier on different types of roads, a histogram of the top and bottom transitions was obtained (Fig. 5). In addition, the possibility of presenting road classes in the autoencoder representation space as practically linearly separable sets was demonstrated (Tab. 2). Color legend: “Other”, blue; “Roads between buildings”, pink/purple; “Roads between trees”, a shade of green; “Narrow dirt road”, green; “Asphalt roads”, red; “Trails”, yellow.</p>
      <p>An interesting experience was the study of a generative network model (GAN). About twenty thousand images were processed by the network, after which a new sample was generated [18, Technology of neural network generation of a training set of digital images]. During the research, both samples served in turn as training and test sets for the convolutional classifier network. As can be seen from Table 3, the classifier showed the best recognition accuracy when trained on data generated by the GAN.</p>
      <p>In addition, studies of individual classes were carried out, for example, an analysis of the recognition of specific forest plantations and individual trees [18, Automation of filling the training sample according to aerial observation data]. This work is interesting because high-quality recognition of precisely this class makes it possible to automate the counting of forest plantations, as well as more reliable recognition of "mixed" classes, for example buildings among trees or trees (forest plantations) along roads. In 2021, data from the set was used to train a modern architecture, Mask R-CNN. The network was used to detect buildings in live video. Positive results were obtained: even at a Jaccard index (IoU) of 0.8, recognition precision and recall reached approximately 0.6.</p>
      <p>Also at this stage, studies were carried out on identifying data not formalized in the training set. That is, images with objects that were not present in the training set were added to the test sets [18, Identification of aerial survey data classes not formalized in the training set]. In 36 experiments, a fourth ("unknown") class was added to a neural network classifier trained on three classes. On average, when one non-formalized class was added to three formalized ones, it was identified with an accuracy of about 68%; when one non-formalized class was added to four formalized classes, approximately 63%.</p>
      <p>Research continued on the representations obtained using the encoder part of the autoencoder. In particular, the representations were clustered by the k-means algorithm into different numbers (from 10 to 17) of newly formed classes, which were compared with the initial eleven [18, Information technology for modeling the initial datasets of aerospace surveys based on neural network architectures with an autoencoder]. As an encoder architecture solving the problem of data dimensionality reduction, the following network was built (Fig. 6).</p>
      <p>Therefore, using PCA, the following representations were obtained (see Fig. 7). After applying PCA, it was found that the first 10 principal components account for 74% of the data variability. The use of PCA made it possible to carry out cluster analysis in the feature space. The dependence of the choice of the optimal clustering parameter, and of repeated retraining of the network, on the resulting clusters was analyzed. It is shown that this approach yields a comparable level of recognition, up to 97% on the test sample, which opens up the possibility of researching approaches to building self-learning systems.</p>
      <p>In several works, attention was paid to automating the filling of the training set of images. In the master's thesis [18, Information technology for classification of terrain types and object search based on aerial survey data], the idea was to carry out an initial segmentation of images, after which the selected clusters were used to automatically generate new classes for the training set. The disadvantage of this approach is that the training data obtained in this way must be reviewed by a human specialist, who assigns a name to each class. However, as practice has shown, in many cases this approach significantly increased the number of images for image types such as forests, fields, buildings, and large water bodies. A more adequate automation of data set filling was demonstrated in a thesis [18, Information technology for automating the filling of the training set for neural network recognition]. In this case, an automated application was created that used the results of neural network training to process aerial data. Recognized image fragments were automatically marked up and could be added to the existing set of training images. Testing has shown that this approach reduces the cost of data labeling and improves the quality of recognition after training on the updated set.</p>
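      <p>The automated filling loop described above can be sketched as: run the trained model over new tiles, keep only predictions that clear a confidence threshold as auto-labels, and append those to the training set, leaving the rest for a human. The confidence rule, the 0.9 threshold and the `predict_proba` interface are illustrative assumptions, not the application's actual API.</p>

```python
import numpy as np

def auto_label(tiles, predict_proba, threshold=0.9):
    """Keep tiles whose top predicted class probability clears the threshold,
    returning (tile_index, class_index) pairs to append to the training set."""
    proba = predict_proba(tiles)
    keep = proba.max(axis=1) >= threshold
    return [(int(i), int(c))
            for i, c in zip(np.flatnonzero(keep), proba.argmax(axis=1)[keep])]

# Stand-in model: hand-written probabilities for 4 tiles over 3 classes.
fake_proba = np.array([
    [0.95, 0.03, 0.02],   # confident "class 0" -> auto-labeled
    [0.50, 0.30, 0.20],   # uncertain           -> left for a human
    [0.05, 0.92, 0.03],   # confident "class 1" -> auto-labeled
    [0.34, 0.33, 0.33],   # uncertain           -> left for a human
])
accepted = auto_label(np.zeros((4, 64, 64)), lambda t: fake_proba)
print(accepted)  # [(0, 0), (2, 1)]
```

      <p>This matches the trade-off noted in the text: high-confidence fragments are added cheaply, while the residual uncertain fragments still require a human specialist.</p>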
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and directions for further research</title>
      <p>As even a brief review of the work performed shows, the availability of the Aerial Survey training set enables a wide range of scientific and practical research in the field of machine learning and neural network recognition of terrain elements. On the applied side, the set already makes it possible to solve a number of video analytics tasks, including dual-use ones: aerial monitoring of roads, agricultural fields, forest plantations and water bodies, aerial reconnaissance with target search in the combat zone, and use in tracking and target designation systems.</p>
      <p>In addition to these practical applications, the authors also have a number of purely scientific interests and plans for working with the set. Let us outline some of them and a general perspective for further research.</p>
      <p>If we strive for the universality of this training set, then we should take into account the need to label
data for different seasons, which will lead to significant data heterogeneity within some classes, for
example, trees, roads, fields, etc. In addition, even within Ukraine, the structure and elements of the
landscape, vegetation, and even buildings are noticeably different. Increasing the class options
presented in the set requires research into the impact on the quality of training and recognition. Perhaps
a separate study will be the selection of some "core" of base classes within the overall architecture of
the training set, using for "additional learning" classes of images characteristic of a particular season or
type of terrain. In this context, the experience of automatic replenishment of the dataset, which can be
performed on the basis of a stable training “core”, with adjustment to a specific monitoring task, is
useful. Another area of research may be work on the generation of artificial data, on the principle of
generative networks, or on the basis of research and use of representations in the autoencoder space.
Working with low-dimensional representations also allows the study and primary analysis of the
distributions of data presented in individual classes. The influence of heterogeneity or the presence of
anomalies on the learning process is quite an interesting and promising area of research.</p>
      <p>Information filtering of the excessive information content of terrain images, automated formalization and description of new unlabeled classes, and the transition from recognition of individual terrain elements to a structural description of the surveyed area in order to automate its georeferencing: this is not a complete list of tasks that can be addressed with the “Aerial survey” dataset in further studies.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgements</title>
      <p>
        The authors express their gratitude to the Aerorozvidka public association [
        <xref ref-type="bibr" rid="ref3">4</xref>
        ] for supporting the
development of modern information technologies in Ukraine.
6. References
[1] Order of the Cabinet of Ministers of Ukraine, 2022. URL: https://zakon.rada.gov.ua/laws/
show/600-2017-%D1%80#Text.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [2] National Aviation University,
          <year>2022</year>
          . URL: http://applmaths.nau.edu.ua/index.php.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <article-title>[3] Training data set "Aerial Survey"</article-title>
          . URL: https://drive.google.com/file/d/1BAmSRbYUyCnrPYnjpHI7l_6qsNmc9o6/view.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <article-title>[4] Aerial reconnaissance. Unmanned aerial vehicles, situational awareness</article-title>
          , cybersecurity,
          <year>2022</year>
          . URL: https://aerorozvidka.xyz/uk/
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Е.</given-names>
            <surname>Maggiori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tarabalka</surname>
          </string-name>
          , G.Charpiat, and
          <string-name>
            <given-names>P.</given-names>
            <surname>Alliez</surname>
          </string-name>
          ,
          <article-title>Can Semantic Labeling Methods Generalize to Any City? The Inria Aerial Image Labeling Benchmark</article-title>
          ,
          <source>IEEE International Symposium on Geoscience and Remote Sensing (IGARSS)</source>
          ,
          <source>Fort Worth, USА</source>
          ,
          <year>2017</year>
          . URL: https://hal.inria.fr/hal01468452/document.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Xin-Yi</surname>
            <given-names>Tong</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gui-Song</surname>
            <given-names>Xia</given-names>
          </string-name>
          , Qikai Lu, Huangfeng Shen,
          <string-name>
            <given-names>Shengyang</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Shucheng</given-names>
            <surname>You</surname>
          </string-name>
          , and Liangpei Zhang,
          <article-title>Land-Cover Classification with High-Resolution Remote Sensing Images Using Transferable Deep Models</article-title>
          , Remote Sensing of Environment, China,
          <year>2020</year>
          . URL: https://arxiv.org/pdf/1807.05713.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Syed</given-names>
            <surname>Waqas</surname>
          </string-name>
          <string-name>
            <surname>Zamir</surname>
          </string-name>
          , Aditya Arora,
          <article-title>Akshita Gupta and others, iSAID: A Large-scale Dataset for Instance Segmentation in Aerial Images, Computer Vision and Pattern Recognition Conference (CVPR), Long Beach</article-title>
          , USA,
          <year>2019</year>
          . URL: https://openaccess.thecvf.com/content_CVPRW_2019/papers/DOAI/Zamir_iSAID_A_Largescale_Dataset_for_Instance_Segmentation_in_Aerial_Images_CVPRW_2019_paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Gui-Song</surname>
            <given-names>Xia</given-names>
          </string-name>
          , Xiang Bai, Jian Ding, Zhen Zhu,
          <article-title>Serge Belongie and others, DOTA: A Large-scale Dataset for Object Detection in Aerial Images</article-title>
          ,
          <year>2019</year>
          . URL: https://arxiv.org/pdf/1711.10398.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [9] Vertical Aerial Photography, Environment Agency,
          <year>2022</year>
          . URL: https://data.gov.uk/dataset/4921f8a1-d47e-458b-873b-2a489b1c8165/vertical-aerial-photography
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [10]
          <article-title>Semantic segmentation of aerial imagery</article-title>
          ,
          <year>2020</year>
          . URL: https://www.kaggle.com/datasets/humansintheloop/semantic-segmentation-of-aerial-imagery.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [11] The Semantic Drone Dataset,
          <source>Institute of Computer Graphics and Vision</source>
          ,
          <year>2022</year>
          . URL: https://www.tugraz.at/index.php?id=22387
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Simonyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zisserman</surname>
          </string-name>
          ,
          <article-title>Very deep convolutional networks for large-scale image recognition</article-title>
          ,
          <source>International Conference on Learning Representations (ICLR)</source>
          , San Diego, USA,
          <year>2015</year>
          . URL: https://arxiv.org/pdf/1409.1556.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Kaiming</given-names>
            <surname>He</surname>
          </string-name>
          , Xiangyu Zhang, Shaoqing Ren, and
          <string-name>
            <given-names>Jian</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep Residual Learning for Image Recognition</article-title>
          ,
          <source>Computer Vision and Pattern Recognition Conference (CVPR)</source>
          , Las Vegas, USA,
          <year>2016</year>
          ,
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          . URL: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Yifei</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>A Better Autoencoder for Image: Convolutional Autoencoder</article-title>
          ,
          <source>1st ANU Bio-inspired Computing conference</source>
          , Canberra, Australia,
          <year>2018</year>
          . URL: http://users.cecs.anu.edu.au/~Tom.Gedeon/conf/ABCs2018/paper/ABCs2018_paper_58.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          , G. Gkioxari,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          ,
          <article-title>Mask R-CNN</article-title>
          ,
          <source>International Conference on Computer Vision (ICCV)</source>
          , Venice, Italy,
          <year>2017</year>
          ,
          <fpage>2961</fpage>
          -
          <lpage>2969</lpage>
          . URL: https://openaccess.thecvf.com/content_ICCV_2017/papers/He_Mask_R-CNN_ICCV_2017_paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Lars</given-names>
            <surname>Lien Ankile</surname>
          </string-name>
          , Morgan Feet Heggland, and Kjartan Krange,
          <article-title>Deep Convolutional Neural Networks: A survey of the foundations, selected improvements, and some current applications</article-title>
          ,
          <year>2020</year>
          . URL: https://deepai.org/publication/deep-convolutional-neural-networks-a-survey-of-the-foundations-selected-improvements-and-some-current-applications.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Redmon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Divvala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          ,
          <article-title>You Only Look Once: Unified, Real-Time Object Detection</article-title>
          ,
          <year>2016</year>
          . URL: https://doi.org/10.48550/arXiv.1506.02640
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [18]
          <article-title>Research of the Department of Applied Mathematics for young scientists</article-title>
          . National Aviation University (NAU), Kyiv, Ukraine,
          <year>2018</year>
          -
          <year>2022</year>
          . URL: http://applmaths.nau.edu.ua/categoryview.php?cat=diplomas, http://applmaths.nau.edu.ua/show.php?id=281.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Gnatyuk</surname>
            ,
            <given-names>V. A.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Mechanism of laser damage of transparent semiconductors</article-title>
          .
          <source>Physica B: Condensed Matter</source>
          ,
          <volume>308</volume>
          -
          <volume>310</volume>
          ,
          <fpage>935</fpage>
          -
          <lpage>938</lpage>
          . doi: 10.1016/S0921-4526(01)00865-1.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Dudnik</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Presnall</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tyshchenko</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Trush</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Methods of determining the influence of physical obstructions on the parameters of the signal of wireless networks</article-title>
          .
          <source>Paper presented at the CEUR Workshop Proceedings</source>
          ,
          <volume>3179</volume>
          ,
          <fpage>227</fpage>
          -
          <lpage>240</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Iatsyshyn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iatsyshyn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovach</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zinovieva</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artemchuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Popov</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Turevych</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Application of open and specialized geoinformation systems for computer modelling studying by students and PhD students</article-title>
          .
          <source>Paper presented at the CEUR Workshop Proceedings</source>
          ,
          <volume>2732</volume>
          ,
          <fpage>893</fpage>
          -
          <lpage>908</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>