<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>MoMLeT-2024: 6th International Workshop on Modern Machine Learning Technologies</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Application of convolutional neural networks for detection of damaged buildings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksandr Gozhyj</string-name>
          <email>alex.gozhyj@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irina Kalinina</string-name>
          <email>irina.kalinina1612@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valerii Dymo</string-name>
          <email>dymovalery@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>MoMLeT-2024: 6th International Workshop on Modern Machine Learning Technologies</institution>
          ,
          <addr-line>May, 31 - June, 1, 2024, Lviv-Shatsk</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Petro Mohyla Black Sea National University</institution>
          ,
          <addr-line>St. 68 Desantnykiv 10 Mykolaiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>The paper describes an approach to the problem of detecting damaged buildings in satellite and other images using convolutional neural networks. The U-Net convolutional network architecture was chosen for the task. The study used a proprietary data set containing 50 images with dimensions of 512x512 pixels. Augmentations were applied to increase the variability of the data set, which made it possible to train the neural network on a small number of images and had a positive effect on the results. Five different models of the U-Net architecture were built, and the impact of various parameters on the effectiveness of the models was investigated. It is shown that increasing the initial number of filters has a positive effect on the accuracy of the model, and segmentation accuracy was improved for both damaged and undamaged buildings. The proposed approach makes it possible to carry out a preliminary assessment of the degree of damage to buildings and contributes to the implementation of recognition systems based on convolutional neural networks for practical tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>recognition of damaged buildings</kwd>
        <kwd>computer vision</kwd>
        <kwd>semantic segmentation</kwd>
        <kwd>CNN</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Each organization has its own methodology for assessing destruction and calculating the subsequent impact on the economy of the affected regions. For example, the DaLA (Damage and Loss Assessment) methodology [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is based on the document of the same name, developed by ECLAC (the Economic Commission for Latin America and the Caribbean) in the 1970s.
      </p>
      <p>In general, assessing and overcoming the consequences of emergency situations is a complex, multi-stage process that cannot completely exclude human work. In many cases, expert groups receive information from open sources, internal structures, local administrations and state administration bodies. For a full assessment of the situation, the members of the commissions need to study the data from the affected regions in person, which guarantees accuracy and impartiality of the results. In turn, automating the preliminary analysis of the affected regions can reduce the time required and speed up the examination process by providing information about the most affected areas.</p>
      <p>
        There are many ways to recognize objects, in this case buildings, in images. A number of studies [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2-6</xref>
        ] present various approaches, from classical algorithms to the combination of neural networks into complex systems. For example, the application of convolutional networks of the U-Net architecture [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is considered for the detection of destroyed buildings after natural disasters. The authors use high-resolution RGB satellite images from the xView dataset, working with only a part of the available set of images and with two classes: destroyed and undamaged buildings. The neural network architecture proposed by the authors, combined with the pre-processing and data augmentation methods used, recognizes undamaged buildings with an accuracy of about 95.9%, and destroyed ones with an accuracy of 76%.
      </p>
      <p>
        Another study [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] proposes a complex deep-learning-based system architecture for rapid post-earthquake detection of damaged buildings. The system consists of many components, such as feature processing and extraction, a convolutional autoencoder, a separate procedure for automatic selection of appropriate training samples, etc. The authors investigate the effect of the ratio between the training and testing sets, as well as the effect of different modeling parameters. The proposed system has an overall accuracy of 93%; taking into account the efficiency of the individual algorithms used, a kappa coefficient of 74% was determined.
      </p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] proposes the use of the Single-Shot Multibox Detector (SSD) algorithm; the authors note that the additional use of neural networks improves the overall accuracy by about 10% compared to the plain SSD algorithm. The VGG16 model is proposed as the neural network, trained on a dataset with a resolution of 1920x1080. The proposed architecture has a classification accuracy of 79.4% for damaged objects and about 70% for undamaged ones.
      </p>
      <p>
        It is worth noting that the papers above investigate the recognition of buildings damaged by natural disasters. In turn, the nature of damage during hostilities can differ significantly from damage caused by such disasters. Research [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] examines the post-war detection of damaged buildings on a dataset of Syrian cities after the civil war, containing images from the GeoEye-1 satellite of the Zabadani and Damascus regions. The authors propose a mathematical model that uses features such as shadow, dispersion and correlation: since the shadow reflects the geometric outline of a building, the shadow will be deformed after an explosion or other destruction. The model uses the Gray Level Co-occurrence Matrix, which represents second-order statistical texture characteristics of the image. The classification accuracy on some images was 95.65% for surviving buildings and 81.25% for damaged ones.
      </p>
      <p>In a study assessing damaged buildings in Kyiv [6] after the full-scale invasion by Russia, the authors present a completely different way of recognizing destroyed objects: the use of SAR images, with analysis of satellite image textures and calculation of a pixel intensity coefficient. The authors justify this choice of satellite technology by the fact that weather conditions can limit high-quality aerial photography. The study is also distinguished by showing the influence of the size of a building in the images on the quality and completeness of the classification: as the area of a building increases, the overall recognition accuracy rises from 64% to 76%.</p>
      <p>In recent years, the number of papers devoted to overcoming the consequences of disasters and of destruction during hostilities has grown. Their authors take different approaches to recognizing damaged buildings, using both high-quality satellite images and aerial photography from unmanned aerial vehicles. Individual methods have their own advantages: for example, convolutional neural networks are able to detect various patterns in complex images, and modern architectures are more accurate than classical algorithms.</p>
      <p>It should be noted that one of the problems is the need for a sufficient amount of data to train the model. High-quality photos from places of hostilities are difficult to obtain, so different approaches are needed to increase the variability of the existing data set, such as data augmentation methods or optimization of the neural network architecture for a small number of images. A combination of different approaches improves recognition performance.</p>
      <p>Problem statement. The purpose of this paper is to study convolutional neural networks for the problem of detecting damaged buildings, and to build and apply a convolutional network model for detecting damaged buildings in satellite images.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Convolutional networks for object recognition</title>
      <p>Today, there is a large number of different convolutional neural network models. Considering some of the most popular convolutional networks [7-10, 24-26], one can note their high recognition accuracy on various data sets.</p>
      <p>Each architecture has its advantages in certain tasks. For example, the VGG model [7] shows high accuracy in classification tasks but is also used in others, such as object detection or segmentation. The R-CNN network [9] has been applied to object detection on various data sets with reliable results, and the latest modifications, such as Faster R-CNN and Mask R-CNN [11-12], can solve real-time object search and image segmentation tasks with higher accuracy than their predecessors.</p>
      <p>Since this paper deals with the detection of damaged buildings in images, Figure 1 provides an example of the different computer vision techniques that can be used.</p>
      <p>Each technique entails the need to use certain approaches and appropriate models.
Since there will be a large number of buildings in the images, the usual classification
option, in which one image belongs to one or more classes, is not considered. In turn, the
use of object detection, object localization or image segmentation will have both practical
and visual significance. Therefore, the authors suggest using the semantic segmentation
technique, in which each pixel of the image is associated with one of the classes. This will
allow more accurate classification of damaged buildings, generating segmentation maps of
affected areas, the results of which will be easier to interpret.</p>
      <p>As the selected model, the U-Net convolutional network architecture [8] is proposed. It is based on the idea of a fully convolutional neural network [13] and is effective at capturing both context and spatial information. In principle, this architecture is similar to the encoder-decoder model: it has fewer parameters due to the lack of fully connected layers, but compensates for this with the greater complexity of the expanding path. Since this work considers the processing of "static" images, there is no need for complex models designed for real-time tasks, which makes the U-Net architecture optimal for the task of detecting damaged buildings.</p>
    </sec>
    <sec id="sec-3">
      <title>3. U-Net convolutional network architecture</title>
      <p>U-Net is built on the basis of a fully convolutional neural network. The authors of the architecture modified it to work with a small amount of training data [8] while still obtaining sufficiently accurate segmentation of images, which is critical in the field of biomedicine, for which the model was originally implemented. The basic idea was to supplement the usual contracting sub-network with additional sequential layers in which pooling operators are replaced by upsampling layers, increasing the output resolution. In turn, high-resolution features from the contracting path are combined with the result of the upsampling layers, which allows for more accurate segmentation based on this information.</p>
      <p>In this study, to solve the problem of recognition of damaged buildings, the classical
architecture of the U-Net network will be used, Figure 2 shows a visual diagram of the
network.</p>
      <p>Summarizing the work of the model, the structure of the network [8] can be divided into several logical blocks:</p>
      <list list-type="bullet">
        <list-item><p>Convolutional Block. It consists of a 3x3 convolutional layer and a ReLU activation function, repeated twice. This block is used in the Encoder Block as a subsidiary one.</p></list-item>
        <list-item><p>Encoder Block. A Contracting Path block consisting of one Convolutional Block and a 2x2 pooling layer with a stride of 2. A Dropout layer is also added to prevent rapid overfitting of the model.</p></list-item>
        <list-item><p>Bottleneck. The "narrowest" part of the model, consisting of a Convolutional Block with the maximum multiplier of filters (feature channels). At this point the model has the lowest image resolution but the highest number of corresponding channels.</p></list-item>
        <list-item><p>Decoder Block. The analogue of the Encoder Block for the Expanding Path, consisting of an upsampling layer, a concatenation with the corresponding layer from the Contracting Path, a Dropout layer and a corresponding Convolutional Block.</p></list-item>
      </list>
      <p>By connecting these blocks together, the U-Net network model is implemented. The total number of convolutional layers in the model is 23; taking the other layers into account, the model contains about 34.5 million trainable parameters with an initial value of 64 filters, and about 8 million when the base number is reduced to 32.</p>
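      <p>The quoted parameter counts can be roughly cross-checked by tallying the weights of the 23 convolutional layers of a classical U-Net. The sketch below follows the standard architecture [8] (depth 4, 3x3 convolutions, channel doubling per level); it is an illustration rather than the authors' exact model, so its totals land near, not exactly on, the figures in the text:</p>

```python
def conv_params(c_in, c_out, k=3):
    # weights (k*k*c_in per output channel) plus one bias per output channel
    return (k * k * c_in + 1) * c_out

def unet_conv_params(base=64, depth=4, n_classes=3, in_ch=3):
    """Tally the weights of the 23 convolutional layers of a classical U-Net."""
    total, ch, skips = 0, in_ch, []
    # contracting path + bottleneck: two 3x3 convs per level, channels double
    for d in range(depth + 1):
        out = base * 2 ** d
        total += conv_params(ch, out) + conv_params(out, out)
        skips.append(out)
        ch = out
    # expanding path: 2x2 up-conv, then two 3x3 convs on concatenated features
    for d in reversed(range(depth)):
        out = base * 2 ** d
        total += conv_params(ch, out, k=2)          # up-convolution
        total += conv_params(out + skips[d], out)   # conv after skip concatenation
        total += conv_params(out, out)
        ch = out
    return total + conv_params(ch, n_classes, k=1)  # final 1x1 conv
```

      <p>With 64 base filters this gives roughly 31 million convolutional parameters, close to the full-model figure quoted above, and roughly a quarter of that for 32 filters, consistent with the quoted 8 million.</p>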
    </sec>
    <sec id="sec-4">
      <title>4. Dataset building and pre-processing</title>
      <p>The data set has a direct impact on the operation and results of any neural network. In this
paper, the authors use their own data set, created using satellite images from the Google
Earth service.</p>
      <p>The set has a small amount of data: 50 satellite images with a size of 512x512, which
contain images of buildings from the private sector of the city of Mariupol. The pictures
were taken in May 2022. At the same time, the data set contains more than 1,500 instances
of different buildings, which belong to two classes: damaged ('damaged' label) and
undamaged ('normal' label). Figure 3 shows an example of images from the dataset.</p>
      <p>It is worth noting that the approach used in the study does not make it possible to understand the nature of damage inside a building; therefore, when annotating images, only visual damage is considered, for example, to the roof or facade of the building. Due to the complexity of annotating buildings of different shapes, and taking dense building development into account, simple shapes were used when creating the segmentation masks, and in some cases several parts of buildings were combined into one shape.</p>
      <p>For annotating images, the free Labelme tool, publicly available on GitHub [14], was used. Figure 4 shows a representation of the dataset, namely an original image enhanced with the OpenCV library and its segmentation map. In total, the data set contains three classes: background, responsible for the background of the image (black); normal, for annotating undamaged buildings (green); and damaged, for damaged ones (red).</p>
      <p>Thus, in total, the dataset contains 100 images: 50 source images to be used to train the
model and 50 generated segmentation maps for them, each pixel corresponding to one of
the three classes.</p>
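      <p>Before training, a color-coded segmentation map like the one described above must be converted to per-pixel class indices. A minimal sketch, assuming pure black/green/red RGB values for the background/normal/damaged classes (the exact palette used by the authors is not specified):</p>

```python
import numpy as np

# assumed palette: background = black, normal = green, damaged = red
PALETTE = {0: (0, 0, 0), 1: (0, 255, 0), 2: (255, 0, 0)}

def mask_to_classes(mask_rgb):
    """Convert an (H, W, 3) RGB segmentation map to an (H, W) class-index map."""
    classes = np.zeros(mask_rgb.shape[:2], dtype=np.uint8)
    for idx, color in PALETTE.items():
        classes[np.all(mask_rgb == color, axis=-1)] = idx
    return classes
```
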
      <p>Data preprocessing is the next important step for effective model training. Modern
studies [15-18] propose various image processing techniques, from classical rotation and
mirroring, to the creation of separate procedures for generating images by inserting
individual elements [17], or using networks such as GAN. Each approach can be useful in
certain situations.</p>
      <p>Since the initial data set has a small number of images, the authors decided to apply data augmentation, expanding the set by adding copies of existing images with certain changes:</p>
      <list list-type="bullet">
        <list-item><p>Image shift.</p></list-item>
        <list-item><p>Image scale.</p></list-item>
        <list-item><p>Image rotation.</p></list-item>
        <list-item><p>Image blur.</p></list-item>
      </list>
      <p>Before applying augmentations, each image was scaled to the size specified in the model and normalized: since pixel values range from 0 to 255, each pixel value was divided by 255. It is worth noting that heavy image manipulation can lead to loss of quality and clarity, which affects the learning of the neural network. That is why various combinations of the defined augmentations with limited ranges for the geometric transformations are used in this work, reducing the probability of corrupting images at the pre-processing stage.</p>
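      <p>The normalization step and two of the listed augmentations can be sketched as follows. This is an illustration rather than the authors' pipeline: the shift here wraps at the border and the blur is a simple box filter, whereas a real pipeline would typically use a library such as OpenCV or Albumentations and would apply the same geometric transform to an image and its mask:</p>

```python
import random
import numpy as np

def normalize(img):
    # pixel values 0..255 -> 0..1, as described in the text
    return img.astype(np.float32) / 255.0

def random_shift(img, max_frac=0.1):
    # bounded random translation; wraps at the border for simplicity
    h, w = img.shape[:2]
    dy = random.randint(-int(h * max_frac), int(h * max_frac))
    dx = random.randint(-int(w * max_frac), int(w * max_frac))
    return np.roll(img, (dy, dx), axis=(0, 1))

def box_blur(img, k=3):
    # simple k x k mean filter as a stand-in for blur augmentation
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```
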
    </sec>
    <sec id="sec-5">
      <title>5. Applied functions and metrics in model training</title>
      <p>Functions. The standard model uses a softmax function in the output layer. The softmax, or normalized exponential function, transforms a vector of real numbers into a probability distribution over possible outcomes; it is therefore suitable for classification tasks, including image segmentation, since it transforms the pixel values of the output array into an array of class probabilities that sum to 1. Formula 1 shows the softmax activation function [19,22-26]:</p>
      <p>softmax(output_c) = exp(output_c) / &#8721;_{j=1}^{k} exp(output_j), (1)</p>
      <p>where c is a defined class from 1 to k, j is an iteration over the classes from 1 to k, output is the output data of the network (image), and exp is the exponent of the corresponding value.</p>
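      <p>Formula 1 can be checked numerically. A small sketch (subtracting the maximum before exponentiating is a standard numerical-stability trick and does not change the result):</p>

```python
import numpy as np

def softmax(logits, axis=-1):
    # subtract the max for numerical stability; mathematically equivalent
    z = logits - np.max(logits, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)
```

      <p>Applied to the per-pixel class scores of the network output, the values along the class axis sum to 1.</p>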
      <p>In general, without activation functions the output values of a neural network are ordinary sets of values reflecting the result of operations in the layers of the network. In the case of the U-Net architecture, the softmax activation function described above transforms such a set of values into a set of probabilities whose sum is equal to 1. These values are then used to calculate the accuracy of the model, its loss and the metrics, since the latter operate on probability values.</p>
      <p>In this paper, categorical cross-entropy is used as the cost function, which is common in semantic segmentation problems (binary cross-entropy in the case of classifying a single class in the image); it measures the 'distance' between the predicted distribution and the actual distribution of classes. The lower the entropy value, the better the agreement between prediction and reality, while an increase means the model is worse at predicting the actual results.</p>
      <p>Formula 2 shows the application of the function in the case of multi-class classification [19,22,23]:</p>
      <p>loss = &#8722;&#8721;_{c=1}^{k} ground_truth_c &#215; log(softmax(output_c)), (2)</p>
      <p>where c is the defined class in the range from 1 to k, output is the output data of the network (image), and ground_truth is the segmentation map (mask) of the corresponding image.</p>
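      <p>Formula 2 can likewise be sketched per pixel with one-hot ground truth; this is an illustration, not the TensorFlow loss actually used in training:</p>

```python
import numpy as np

def categorical_cross_entropy(ground_truth, probs, eps=1e-12):
    """ground_truth: one-hot array (..., k); probs: softmax outputs (..., k).
    Returns the mean cross-entropy over all pixels."""
    probs = np.clip(probs, eps, 1.0)  # avoid log(0)
    return float(np.mean(-np.sum(ground_truth * np.log(probs), axis=-1)))
```
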
      <p>Metrics. To calculate the effectiveness of the model, several metrics are used in this study: the overall accuracy, which reflects the ratio of the number of pixels of the original image matching the previously created segmentation map; IoU (Intersection over Union); and its modification MeanIoU, which is the average value of IoU over several classes.</p>
      <p>IoU is a popular metric used in machine learning to measure localization accuracy and to calculate localization errors in different models. In general, IoU can be represented as the ratio of the intersection of two areas to their union. Formula 3 is used to calculate this metric [20,22,23]:</p>
      <p>IoU_c = TP / (TP + FP + FN), (3)</p>
      <p>where c is the class for which the metric is calculated, TP is the number of 'true positive' pixels, FP the number of 'false positive' pixels, and FN the number of 'false negative' pixels.</p>
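      <p>Formula 3 can be computed directly from integer label maps. A minimal sketch (returning 1.0 when a class is absent from both maps is an assumed convention):</p>

```python
import numpy as np

def iou(pred, truth, cls):
    """Per-class IoU from integer label maps: TP / (TP + FP + FN)."""
    p, t = (pred == cls), (truth == cls)
    tp = np.logical_and(p, t).sum()
    fp = np.logical_and(p, ~t).sum()
    fn = np.logical_and(~p, t).sum()
    denom = tp + fp + fn
    return float(tp) / denom if denom else 1.0  # class absent everywhere
```
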
      <p>Thus, the IoU metric reflects how accurately the model segmented the image according to the reference segmentation map. In turn, the classes of damaged and undamaged buildings can be dominated by the other classes, in the case of this study the background class, so the authors use MeanIoU, which reflects the average value of IoU over all three classes, while IoU is used to calculate the indicator separately for the building classes. Formula 4 reflects the calculation of the MeanIoU metric [20,22,23]:</p>
      <p>MeanIoU = (1/k) &#215; &#8721;_{c=1}^{k} IoU_c, (4)</p>
      <p>where c is the class for which the metric is calculated and IoU_c is the IoU value calculated for class c. Thus, using a combination of different metrics, a more accurate picture of the effectiveness of the model can be obtained, both for separate classes and in general.</p>
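      <p>Formula 4 averages the per-class scores. A self-contained sketch using the equivalent intersection-over-union form of Formula 3 (skipping classes absent from both maps is an assumed convention):</p>

```python
import numpy as np

def mean_iou(pred, truth, n_classes=3):
    """Average the per-class IoU over all classes present in either map."""
    scores = []
    for c in range(n_classes):
        p, t = (pred == c), (truth == c)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()  # union = TP + FP + FN
        if union:
            scores.append(inter / union)
    return float(np.mean(scores))
```
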
    </sec>
    <sec id="sec-6">
      <title>6. Comparative analysis of models with different parameters</title>
      <p>The models were built in Python using the TensorFlow library and Google's Colab machine learning service, which uses cloud computing to run code and train models [21]. Training was conducted on an NVIDIA T4 GPU.</p>
      <p>As part of the experiment, an initial model was built: the initial value of the filters was
32, the input image was 256x256 pixels, and the data set was increased by 10 times. The
data set was split into training and test sets in the ratio of 80% to 20%, so the model was
trained on 400 images for 25 epochs.</p>
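      <p>The 80%/20% split described above can be sketched with a small helper (the shuffle and its seed are assumptions; with the tenfold-augmented set of 500 images it yields the 400 training images mentioned in the text):</p>

```python
import random

def train_test_split(items, test_frac=0.2, seed=42):
    """Shuffle and split image/mask pairs into training and test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility
    n_test = int(len(items) * test_frac)
    return items[n_test:], items[:n_test]
```
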
      <p>Since the model was trained on a small data set and Dropout layers with a rate of 0.3 were applied, the training curves show typical 'teeth'. The model improved its results throughout all training epochs; overfitting was prevented by limiting the number of epochs. The resulting segmentation maps were evaluated on a test dataset of images not used in training.</p>
      <p>The overall accuracy of the initial model is 82.28%, but this does not reflect its accuracy on the main classes, the damaged buildings. Comparing the IoU scores for each class, the results are as follows: 42.67% accuracy for damaged buildings and 38.37% for undamaged buildings at the pixel level. To improve the results, the authors built models with different parameters (see Table 1).</p>
      <p>Based on the results of testing on a separate set, the basic U-Net model highlights the general features of buildings and generally segments the image correctly, but individual elements prevent the network from recognizing objects more accurately; for example, the network confuses courtyards with the roofs of buildings, and it does not recognize small objects well.</p>
      <p>Five models were built, including the base model, with different parameters: the initial number of filters, the number of training epochs, and the applied augmentations. A larger image size could have had a positive effect on the quality of the model, but increasing it to the original 512x512 was not possible for technical reasons. Based on the obtained results, it was possible to improve the overall accuracy of the model to 84.21%, with recognition of damaged buildings at 45.83% and of undamaged buildings at 49.14%.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusions</title>
      <p>This paper considers an approach to solving the problem of recognizing damaged
buildings on satellite or other images using convolutional neural networks. To solve the
problem, the U-Net convolutional network architecture was chosen.</p>
      <p>For the study, a proprietary data set containing 50 images with dimensions of 512x512
pixels was used. The application of augmentations was considered to increase the
variability of the data set, which made it possible to train the neural network on a small
number of images, which had a positive effect on further results.</p>
      <p>As part of the work, five different models of the U-Net architecture were built, and the influence of various parameters on the effectiveness of the models was investigated. According to the results, the initial number of filters has a positive effect on the accuracy of the model; in turn, there is no need to train the model for a large number of epochs, since in most cases this leads to overfitting, which can be avoided by increasing the dataset with original images. A segmentation accuracy of 45.83% was achieved for damaged buildings and 49.14% for buildings that were not damaged or could not be visually identified. Thus, the application of convolutional neural networks of the U-Net architecture allows recognition of the general features of damaged buildings in images. The proposed approach makes it possible to make a preliminary assessment of the degree of damage to buildings and contributes to the implementation of recognition systems based on convolutional neural networks for solving practical problems.</p>
    </sec>
    <sec id="sec-8">
      <title>References</title>
      <p>[6] Aimaiti, Yusupujiang, Christina Sanon, Magaly Koch, Laurie G. Baise, and Babak Moaveni. War Related Building Damage Assessment in Kyiv, Ukraine, Using Sentinel-1 Radar and Sentinel-2 Optical Images. Remote Sensing. 2022. Vol. 14, 24: 6239. DOI: 10.3390/rs14246239.</p>
      <p>[7] Karen Simonyan, Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2015. URL: https://arxiv.org/pdf/1409.1556.pdf.</p>
      <p>[8] Olaf Ronneberger, Philipp Fischer, Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation, 2015. URL: https://arxiv.org/pdf/1505.04597.pdf.</p>
      <p>[9] Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, 2014. URL: https://arxiv.org/pdf/1311.2524.pdf.</p>
      <p>[10] Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks, 2012. URL: https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.</p>
      <p>[11] Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick. Mask R-CNN, 2018. URL: https://arxiv.org/pdf/1703.06870.pdf.</p>
      <p>[12] Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2016. URL: https://arxiv.org/pdf/1506.01497.pdf.</p>
      <p>[13] Jonathan Long, Evan Shelhamer, Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation. CVPR 2015. DOI: 10.48550/arXiv.1411.4038.</p>
      <p>[14] Labelme. Image Polygonal Annotation with Python. URL: https://github.com/labelmeai/labelme.</p>
      <p>[15] Khaled Alomar, Halil Ibrahim Aysel, Xiaohao Cai. Data Augmentation in Classification and Segmentation: A Survey and New Strategies. Journal of Imaging. 2023. Vol. 9, 2: 46. DOI: 10.3390/jimaging9020046.</p>
      <p>[16] Sandhi Wangiyana, Piotr Samczynski, Artur Gromek. Data Augmentation for Building Footprint Segmentation in SAR Images: An Empirical Study. Remote Sensing. 2022. Vol. 14, 9. DOI: 10.3390/rs14092012.</p>
      <p>[17] Sengdeok Bang, Francis Baek, Somin Park, Wontae Kim, Hyoungkwan Kim. Image augmentation to improve construction resource detection using generative adversarial networks, cut-and-paste, and image transformation techniques. Automation in Construction. 2020. Vol. 115. DOI: 10.1016/j.autcon.2020.103198.</p>
      <p>[18] Golnaz Ghiasi, Yin Cui, Aravind Srinivas. Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation. CVPR 2021. DOI: 10.48550/arXiv.2012.07177.</p>
      <p>[19] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006. ISBN 0-387-31073-8.</p>
      <p>[20] Abdel Aziz Taha, Allan Hanbury. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Medical Imaging. 2015. Vol. 15, 29: 1-28. DOI: 10.1186/s12880-015-0068-x.</p>
      <p>[21] Colaboratory. Google, 2024. URL: https://research.google.com/colaboratory/faq.html.</p>
      <p>[22] Peter Bidyuk, Irina Kalinina, Oleksandr Zhebko, Aleksandr Gozhyj and Tetyana Hannichenko. Classification System Based on Ensemble Methods for Solving Machine Learning Tasks. CEUR-WS. 2023. Vol. 3426, pp. 1-11. URL: CEUR-WS.org/Vol-3426/paper5.pdf.</p>
      <p>[23] Irina Kalinina, Peter Bidyuk, Aleksandr Gozhyj and Pavel Malchenko. Combining Forecasts Based on Time Series Models in Machine Learning Tasks. CEUR-WS. 2023. Vol. 3426, pp. 25-35. URL: CEUR-WS.org/Vol-3426/paper2.pdf.</p>
      <p>[24] V. Hamolia, V. Melnyk, P. Zhezhnych, and A. Shilinh. Intrusion detection in computer networks using latent space representation and machine learning. International Journal of Computing. 2020. Vol. 19(3), pp. 442-448. DOI: 10.47839/ijc.19.3.1893.</p>
      <p>[25] V. Turchenko, E. Chalmers, and A. Luczak. A deep convolutional auto-encoder with pooling-unpooling layers in Caffe. International Journal of Computing. 2019. Vol. 18(1), pp. 8-31. DOI: 10.47839/ijc.18.1.1270.</p>
      <p>[26] Y. Bodyanskiy, A. Deineko, V. Skorik, and F. Brodetskyi. Deep Neural Network with Adaptive Parametric Rectified Linear Units and its Fast Learning. International Journal of Computing. 2022. Vol. 21(1), pp. 11-18. DOI: 10.47839/ijc.21.1.2512.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Damage, Loss and Needs Assessment. Guidance Notes, 2010. URL: https://documents1.worldbank.org/curated/en/617521468335985769/pdf/880860v10WP0Bo0PUBLIC00TTL0Vol10WEB.pdf.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Rashidian V., Baise L.G., Koch M., Moaveni B. Detecting Demolished Buildings after a Natural Hazard Using High Resolution RGB Satellite Imagery and Modified U-Net Convolutional Neural Networks. Remote Sensing. 2021. Vol. 13(11): 2176. DOI: 10.3390/rs13112176.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Takhtkeshha, Narges, Ali Mohammadzadeh, and Bahram Salehi. A Rapid Self-Supervised Deep-Learning-Based Method for Post-Earthquake Damage Detection Using UAV Data (Case Study: Sarpol-e Zahab, Iran). Remote Sensing. 2023. Vol. 15, 1: 123. DOI: 10.3390/rs15010123.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Li, Yundong, Wei Hu, Han Dong, and Xueyan Zhang. Building Damage Detection from Post-Event Aerial Imagery Using Single Shot Multibox Detector. Applied Sciences. 2019. Vol. 9, 6: 1128. DOI: 10.3390/app9061128.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Ghandour, Ali J., and Abedelkarim A. Jezzini. Post-War Building Damage Detection. Proceedings. 2018. Vol. 2, 7: 359. DOI: 10.3390/ecrs-2-05172.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>