<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Assessing neural network accuracy algorithm in graphic content recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andii Sahun</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladyslav Khaidurov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valerii Lakhno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CQPC-2024: Classic, Quantum, and Post-Quantum Cryptography</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”</institution>
          ,
          <addr-line>37 Peremohy ave., 03056 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National University of Life and Environmental Sciences of Ukraine</institution>
          ,
          <addr-line>15 Heroyiv Oborony str., 03041 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>19</fpage>
      <lpage>24</lpage>
      <abstract>
        <p>The study presents the results of applying the main known metrics used to evaluate the performance and accuracy of algorithms and neural network models on different classes in the task of graphic content recognition. For the analysis, different classes of images processed by the neural network algorithm were compared. To evaluate the quality of the algorithm's training based on the results of graphical pattern recognition, nine different metrics were applied across five computational experiments on correct classification. The sample used in the research, the CamVid benchmark video dataset for training the neural network model, shows different training results for different recognition classes, with this indicator ranging from 38.15% to 97.07% when using the VGG-16 function. At the same time, the highest standard deviation of accuracy, 0.030351419, was recorded for the “Pavement” class. This indicates the imperfection of the CamVid training dataset: it should be modified to improve recognition quality by increasing the size and number of test images.</p>
      </abstract>
      <kwd-group>
        <kwd>distance metrics</kwd>
        <kwd>neural network</kwd>
        <kwd>classifier</kwd>
        <kwd>algorithm's quality evaluation</kwd>
        <kwd>image recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Machine learning and neural networks are closely
related, as neural networks are one of the primary
technologies in the field of machine learning [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1–3</xref>
        ]. In
machine learning, several key metrics are used to
evaluate model performance [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. These metrics help
to understand how well the model is performing the
given task and to identify areas where it can be
improved. There are several metrics for evaluating
different neural network algorithms [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. All of them
are used to analyze the recognition of various properties
and characteristics of neural network recognition
algorithms [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. These are useful for creating an
optimal model of a graphic information recognition
system. The most important ones are the metrics for
evaluating the quality of learning [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Therefore, it is of particular interest to understand
whether there is a correlation between the weight
coefficient of the presence of a particular classification
object in graphic object recognition and the accuracy of
such recognition. For example, in the works [
        <xref ref-type="bibr" rid="ref11 ref12 ref13 ref14">11–14</xref>
        ], the use of distance metrics is considered, while the research [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] considers the use of the Euclidean distance.
However, the formulation of the task differs from the
identification of graphical objects. At the same time, [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
emphasizes that the accuracy of identification
(recognition) was 96.38% as the maximum value. In
another research related to practical tasks of recognition
and identification of graphical images, the average
recognition (identification) accuracy is reported at 76.78%
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>Therefore, it is important to assess how accurately
graphical patterns are recognized in a specific practical
task.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Main part</title>
      <p>Graphic content recognition algorithms are now mostly part of more complex practical application systems, known as Image Identification and Recognition Systems (IIRS). IIRS are often used both for detecting defects on parts within quality control systems according to ISO-9000 standards and for detecting and recognizing the values of vehicle license plates. A relevant area of application for graphic information recognition systems is machine vision systems. The common principle of construction for all such systems comprises:</p>
      <p>1. The technical part of acquiring and initial processing of the image.</p>
      <p>2. The technical or software part for analyzing and classifying image elements.</p>
      <p>3. The subsystem for registration/identification and summarization of recognition data.</p>
      <p>
        For those practical tasks where IIRS are now mostly used, a mathematical apparatus based on neural networks with different types of training is applied [
        <xref ref-type="bibr" rid="ref16 ref17 ref18 ref19 ref20">16–20</xref>
        ]. The choice of the type of neural network training model is not the subject of this study; the aspects related to this choice are described, in particular, in [
        <xref ref-type="bibr" rid="ref15 ref16 ref17 ref18 ref19 ref20">15–20</xref>
        ] and in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>ORCID: 0000-0002-5151-9203 (A. Sahun); 0000-0002-4805-8880 (V. Khaidurov); 0000-0001-9695-4543 (V. Lakhno). © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        The test model chosen is the neural network model
described in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This model has several layers of
neurons (Fig. 1).
      </p>
      <p>
        Given the practice of using neural network-based
algorithms in recognition and identification systems, a
deep-learning neural network model was chosen. This is
due to several existing advantages of such models for
graphic identification/recognition tasks [
        <xref ref-type="bibr" rid="ref1 ref3">1, 3</xref>
        ].
      </p>
      <p>The main goal of the study is the evaluation of the
accuracy of a neural network algorithm in the task of
recognizing graphic content.</p>
      <p>The neural network diagram of the IIRS shown in
Fig. 1 operates with the Haar feature. This approach is
most effective when using a deep-learning neural
network.</p>
      <p>
        In the basic model described in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the input layer of neurons receives initial data, such as the intensity of each pixel and Haar features for various graphical objects to be identified (bushes, trees, cars, roads, sky, sidewalk elements, fences, pedestrians, etc.).
      </p>
      <p>3. Applying distance metrics for neural networks</p>
      <p>In the MATLAB environment, there is a built-in function vgg16() which implements the architecture of a deep neural network. There is also a function analogous to it, vgg19(). The first function operates with 16 convolutional and fully connected layers of neurons, including 13 convolutional and 3 fully connected layers. This function is used for image classification in the process of pattern recognition. The vgg16() function in MATLAB returns a neural network object but does not provide a specific method for computing distances (metrics) between feature vectors of processed images.</p>
      <p>The vgg19() function also implements the architecture of a deep neural network and has an input size of 224×224×3. The vgg19 network is trained and fine-tuned on a dataset of graphical data containing over 1,000,000 images and 1000 classes. This allows the neural network to have more powerful capabilities for feature extraction in images. To define metrics based on VGG19 in MATLAB, we first need to load and prepare the VGG19 model and extract image features from a specific layer of the neural network. After this, for both the vgg16 and vgg19 functions, separate external metrics must be used to compare these features. That is, neither function has built-in distance metric determination.</p>
      <p>To use distance metrics with feature vectors extracted from the VGG16 model in MATLAB, we have to follow these steps:</p>
      <p>1. Loading and preparing the VGG16 model (use the pre-trained VGG16 model to extract feature vectors from images).</p>
      <p>2. Extracting feature vectors (feed your images through the VGG16 model to get the feature vectors).</p>
      <p>3. Computing distance metrics (use different distance metrics to compare the feature vectors).</p>
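      <p>The comparison step above is language-agnostic. Below is a minimal sketch in Python rather than MATLAB, with short hypothetical vectors standing in for VGG16 feature activations, showing two commonly used distance metrics:</p>

```python
import math

def euclidean_distance(a, b):
    # L2 distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    # 1 minus cosine similarity; 0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Hypothetical stand-ins for feature vectors extracted from a VGG16 layer
f1 = [0.2, 0.8, 0.1, 0.5]
f2 = [0.1, 0.9, 0.0, 0.4]
print(euclidean_distance(f1, f2))
print(cosine_distance(f1, f2))
```

      <p>Smaller distances indicate that the two images lie closer together in the feature space of the network.</p>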
      <p>
        Below are the main known metrics used to evaluate
the performance of algorithms and neural network
models on different classes of graphic content
recognition. These metrics are used in machine learning
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Accuracy metric in machine learning. Accuracy
shows the proportion of correctly classified objects
among all objects. This metric is well suited for tasks
where classes are balanced. The expression below
provides an example of obtaining the accuracy metric in
machine learning algorithms [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]:
Accuracy = (TP + TN) / (TP + TN + FP + FN), (1)
      </p>
      <p>where:</p>
      <p>TP (True Positive) is the number of correct positive
classifications.</p>
      <p>TN (True Negative) is the number of correct negative
classifications.</p>
      <p>FP (False Positive) is the number of incorrect positive
classifications.</p>
      <p>FN (False Negative) is the number of incorrect
negative classifications.</p>
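      <p>Expression (1) maps directly onto code. A minimal Python illustration with hypothetical counts (the counts are not taken from the experiments in this paper):</p>

```python
def accuracy(tp, tn, fp, fn):
    # Expression (1): correct classifications over all classifications
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for a two-class experiment
print(accuracy(tp=45, tn=40, fp=10, fn=5))  # 0.85
```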
      <sec id="sec-2-1">
        <title>Precision metric in machine learning.</title>
        <p>Precision measures the proportion of correctly classified positive objects among all objects classified as positive. This metric is important when the cost of false positive results is high. The expression for computing the precision metric is given below:
Precision = TP / (TP + FP). (2)</p>
      </sec>
      <sec id="sec-2-2">
        <title>False Positive Rate (FPR).</title>
        <p>The FPR measures the proportion of false positive results among all negative examples during training:
FPR = FP / (FP + TN). (8)</p>
        <p>False Negative Rate (FNR). The FNR measures the proportion of false negative results among all positive examples during training:
FNR = FN / (FN + TP). (9)</p>
      </sec>
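      <p>The two rates can be sketched as follows; a minimal Python illustration with hypothetical counts:</p>

```python
def false_positive_rate(fp, tn):
    # Share of false positives among all actual negatives
    return fp / (fp + tn)

def false_negative_rate(fn, tp):
    # Share of false negatives among all actual positives
    return fn / (fn + tp)

# Hypothetical counts for a two-class experiment
print(false_positive_rate(fp=10, tn=40))  # 0.2
print(false_negative_rate(fn=5, tp=45))   # 0.1
```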
      <sec id="sec-2-3">
        <title>Recall metric in machine learning.</title>
        <p>Recall measures the proportion of correctly classified positive objects among all actual positive objects. This metric is important when the cost of false negative results is high. The following expression is used to compute this metric:
Recall = TP / (TP + FN). (3)</p>
        <p>The F1-score metric. The F1-score is the harmonic mean between precision and recall. It is useful when balancing these two metrics is necessary. It is calculated according to the expression provided below:
F1 = 2 ⋅ (Precision ⋅ Recall) / (Precision + Recall). (4)</p>
      </sec>
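      <p>Precision, recall, and the F1-score of expression (4) can be sketched together; a minimal Python illustration using hypothetical counts:</p>

```python
def precision(tp, fp):
    # Correct positives among all predicted positives
    return tp / (tp + fp)

def recall(tp, fn):
    # Correct positives among all actual positives
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Expression (4): harmonic mean of precision and recall
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical counts for a two-class experiment
print(precision(tp=45, fp=10))
print(recall(tp=45, fn=5))
print(f1_score(tp=45, fp=10, fn=5))
```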
      </sec>
      <sec id="sec-2-4">
        <title>Intersection over Union metric.</title>
        <p>IoU is used to evaluate the quality of segmentation and object detection by measuring the ratio of the intersection area of predicted and ground truth objects to their union area. It is calculated according to the expression provided below:
IoU = Area of Intersection / Area of Union. (5)</p>
      </sec>
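      <p>For axis-aligned bounding boxes, expression (5) reduces to a few lines. A minimal Python sketch with hypothetical box coordinates:</p>

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2); expression (5): intersection over union
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Two hypothetical boxes overlapping in one unit square
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```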
      </sec>
      <sec id="sec-2-7">
        <title>Mean Average Precision metric.</title>
        <p>Average precision (AP) is calculated for each category and then averaged across all categories. This metric is often used for object detection tasks. Such a metric is particularly relevant for evaluating the training quality of this neural network-based model. The metric value can be determined using the expression provided below:
mAP = (1/N) ⋅ Σ APᵢ, (6)
where N is the number of categories and APᵢ is the average precision for category i.</p>
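        <p>Expression (6) is a plain average of per-category AP values. A minimal Python illustration with hypothetical AP values (not taken from this study):</p>

```python
def mean_average_precision(ap_per_category):
    # Expression (6): mean of AP over the N categories
    n = len(ap_per_category)
    return sum(ap_per_category) / n

# Hypothetical per-category average precision values
print(mean_average_precision([0.5, 0.75, 1.0]))  # 0.75
```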
        <p>Confusion matrix. This matrix shows the number
of correct and incorrect classifications for each class. It
includes TP, FP, TN, and FN for each category.</p>
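        <p>Building the per-class counts of such a matrix can be sketched as follows; a minimal Python illustration with hypothetical labels reminiscent of the CamVid classes (one-vs-rest view for a single target class):</p>

```python
from collections import Counter

def confusion_counts(actual, predicted, positive):
    # TP/FP/TN/FN counts for one target class
    counts = Counter(TP=0, FP=0, TN=0, FN=0)
    for a, p in zip(actual, predicted):
        if p == positive:
            counts["TP" if a == positive else "FP"] += 1
        else:
            counts["FN" if a == positive else "TN"] += 1
    return dict(counts)

# Hypothetical ground-truth and predicted labels
actual    = ["Car", "Sky", "Car", "Tree", "Car"]
predicted = ["Car", "Car", "Car", "Tree", "Sky"]
print(confusion_counts(actual, predicted, positive="Car"))
```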
        <p>Area under the ROC curve. The ROC curve shows the relationship between TPR and FPR at different thresholds. The area under the curve (AUC) measures the model's ability to distinguish between classes; it is computed as the integral of TPR over FPR:
AUC = ∫ TPR d(FPR). (7)</p>
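        <p>Given a handful of (FPR, TPR) points from different thresholds, the AUC of (7) can be approximated with the trapezoidal rule. A minimal Python sketch with hypothetical ROC points:</p>

```python
def auc_trapezoid(roc_points):
    # roc_points: (FPR, TPR) pairs sorted by increasing FPR
    area = 0.0
    for (x1, y1), (x2, y2) in zip(roc_points, roc_points[1:]):
        area += (x2 - x1) * (y1 + y2) / 2.0  # trapezoid between adjacent points
    return area

# Hypothetical ROC points from three classification thresholds
points = [(0.0, 0.0), (0.2, 0.7), (0.5, 0.9), (1.0, 1.0)]
print(auc_trapezoid(points))
```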
        <p>The above-mentioned metrics help objectively assess the quality and effectiveness of the model for identifying graphical objects in a video surveillance system based on neural networks, as well as to choose the most efficient algorithm for specific conditions and training tasks.</p>
        <p>In this research, all the evaluation metrics (1)–(9)
listed above were used to assess the quality of model
experiments (calculated result of correct classification of
objects for all classes).</p>
        <p>The significance of the Intersection over Union (IoU)
metric, calculated for each of the semantic classes, lies in
its ability to measure the accuracy of the neural
network’s recognition performance. IoU assesses how
well the predicted segmentation overlaps with the
ground truth segmentation for each class. Higher IoU
values indicate better performance, meaning the
predicted areas closely match the actual areas. This
metric is crucial for evaluating the effectiveness and
reliability of the neural network in accurately
recognizing and segmenting different semantic classes
within the graphical content. In Fig. 3 we can see the
values of the IoU Accuracy evaluation metric.</p>
        <p>As can be understood from the above, the most important and resultant indicator of model training quality is the IoU (Intersection over Union) metric. The results of correct classification of objects for each class in the 5 conducted computational experiments, for the different detection classes, are presented in Figs. 4–6.</p>
        <p>Considering that the model was trained on 421 images, it can be considered that its training level may be sufficient for the graphical identification task at hand. However, we see that the training quality, even for the same semantic classes, varies significantly across the 5 experiments.</p>
        <p>The smallest value of such a deviation is observed for objects of the “Bicyclist” class, at 0.76%, and the largest for objects of the “Fence” class, at 25.25%.</p>
        <p>Such a difference can be explained by various factors, for example, the imperfection of the algorithm or the insufficient quality or size of the training data sample.</p>
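        <p>The per-class spread across the 5 experiments can be quantified with the mean and sample standard deviation; a minimal Python illustration with hypothetical per-experiment accuracies (not the actual values from Table 1):</p>

```python
import statistics

# Hypothetical IoU accuracies of one semantic class over 5 experiments
runs = [0.55, 0.62, 0.48, 0.70, 0.51]

mean_acc = statistics.mean(runs)   # average training quality for the class
std_acc = statistics.stdev(runs)   # sample standard deviation across runs
print(round(mean_acc, 4), round(std_acc, 4))
```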
        <p>As shown by the calculations presented in Table 1, the most accurate results of the neural network learning algorithm were obtained for the classes: “Road”—97.06%, “Sky”—94.46%, and “Car”—94.16% accuracy of correct recognitions.</p>
        <p>At the same time, the recognition quality of images
of the type “SignSymbol” was 38.15%, and “Tree” had
45.19% accuracy of correct recognition. The average
learning quality of this algorithm on the test fragments
was 75.42%.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusions</title>
      <p>
        Analyzing the data presented and visualized in Table 1 and Figs. 4–6, it can be said that the quality of the learning algorithm described in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] significantly depends on the quality of training. The accuracy of image recognition in neural network-based algorithms depends on several key factors: training data quality; training data quantity; preprocessing; algorithm complexity; and the training process. The sample used in the study [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]—the CamVid benchmark video dataset for training the neural network model—shows different training results for different recognition classes. This indicator ranges from 38.15% to 97.07% when using the VGG-16 function. It can be noted that all the provided training quality metrics yield approximately the same accuracy values on the same recognition classes, while the variance (standard deviation) indicator is highest for the “Pavement” class, amounting to 0.030351419.
      </p>
      <p>The obtained average recognition accuracy of graphical objects, 75.42%, is noticeably lower than the reference recognition rate of 98.7%. This indicates insufficient training quality due to the shortcomings of the training dataset.</p>
      <p>It can be assumed that the simplest way to improve recognition accuracy could be to use a more complex neural network algorithm; one such algorithm available in MATLAB is VGG-19. Also, to improve the quality of graphic content recognition, it is necessary either to use another, higher-quality training dataset that contains a larger number of relevant graphic samples, or to create an improved CamVid benchmark video dataset.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          , T. Ahmad,
          <source>Image Mining Based on Deep Belief Neural Network and Feature Matching Approach Using Manhattan Distance, Comput. Assisted Meth. Eng. Sci</source>
          .
          <volume>28</volume>
          (
          <issue>2</issue>
          ) (
          <year>2021</year>
          )
          <fpage>139</fpage>
          -
          <lpage>167</lpage>
          . doi:
          <volume>10</volume>
          .24423/cames.323.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sahun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Khaidurov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bobkov</surname>
          </string-name>
          ,
          <article-title>Model of Graphic Object Identification in a Video Surveillance System based on a Neural Network</article-title>
          ,
          <source>in: Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>3654</volume>
          (
          <year>2024</year>
          ),
          <fpage>361</fpage>
          -
          <lpage>367</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Minimum Euclidean Distance Evaluation Using Deep Neural Networks</article-title>
          ,
          <source>AEU-Int. J. Electron. Commun</source>
          .
          <volume>112</volume>
          (
          <year>2019</year>
          ). doi:
          <volume>10</volume>
          .1016/j.aeue.
          <year>2019</year>
          .
          <volume>152964</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Buhas</surname>
          </string-name>
          , et al.,
          <article-title>Using Machine Learning Techniques to Increase the Effectiveness of Cybersecurity</article-title>
          ,
          <source>in: Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>3188</volume>
          , no.
          <issue>2</issue>
          (
          <year>2021</year>
          )
          <fpage>273</fpage>
          -
          <lpage>281</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zhebka</surname>
          </string-name>
          , et al.,
          <article-title>Optimization of Machine Learning Method to Improve the Management Efficiency of Heterogeneous Telecommunication Network</article-title>
          ,
          <source>in: Workshop on Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>3288</volume>
          (
          <year>2022</year>
          )
          <fpage>149</fpage>
          -
          <lpage>155</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Bebeshko</surname>
          </string-name>
          , et al.,
          <source>Application of Game Theory, Fuzzy Logic and Neural Networks for Assessing Risks and Forecasting Rates of Digital Currency, J. Theor. Appl. Inf. Technol</source>
          .
          <volume>100</volume>
          (
          <issue>24</issue>
          ) (
          <year>2022</year>
          )
          <fpage>7390</fpage>
          -
          <lpage>7404</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Khorolska</surname>
          </string-name>
          , et al.,
          <article-title>Application of a Convolutional Neural Network with a Module of Elementary Graphic Primitive Classifiers in the Problems of Recognition of Drawing Documentation and Transformation of 2D to 3D Models</article-title>
          ,
          <source>J. Theor. Appl. Inf. Technol.</source>
          <volume>100</volume>
          (
          <issue>24</issue>
          ) (
          <year>2022</year>
          )
          <fpage>7426</fpage>
          -
          <lpage>7437</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z. B.</given-names>
            <surname>Hu</surname>
          </string-name>
          , et al.,
          <article-title>Authentication System by Human Brainwaves Using Machine Learning and Artificial Intelligence</article-title>
          ,
          <source>in: Advances in Computer Science for Engineering and Education IV</source>
          (
          <year>2021</year>
          )
          <fpage>374</fpage>
          -
          <lpage>388</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -80472-5_
          <fpage>31</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Zhebka</surname>
          </string-name>
          , et al.,
          <article-title>Methodology for Predicting Failures in a Smart Home based on Machine Learning Methods</article-title>
          ,
          <source>in: Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>3654</volume>
          (
          <year>2024</year>
          )
          <fpage>322</fpage>
          -
          <lpage>332</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hossin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Sulaiman</surname>
          </string-name>
          .
          <article-title>A Review on Evaluation Metrics for Data Classification Evaluations</article-title>
          ,
          <source>Int. J. Data Mining Knowledge Manag. Process. 5</source>
          . (
          <year>2015</year>
          ).
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . doi:
          <volume>10</volume>
          .5121/ijdkp.
          <year>2015</year>
          .
          <volume>5201</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Herasina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. I.</given-names>
            <surname>Korniienko</surname>
          </string-name>
          ,
          <article-title>Global and Local Optimization Algorithms in the Problem of Identification of Complex Dynamic Systems</article-title>
          , Inf. Process. Syst.
          <volume>6</volume>
          (
          <year>2010</year>
          )
          <fpage>73</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lakhno</surname>
          </string-name>
          , et al.,
          <article-title>Information Security Audit Method Based on the Use of a Neuro-Fuzzy System</article-title>
          , Software Engineering Application in Informatics, LNNS
          <volume>232</volume>
          (
          <year>2021</year>
          )
          <fpage>171</fpage>
          -
          <lpage>184</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -90318-3_
          <fpage>17</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H. G.</given-names>
            <surname>Schuster</surname>
          </string-name>
          ,
          <article-title>Deterministic Chaos: Introduction and Recent Results</article-title>
          ,
          <source>Nonlinear Dynamics in Solids</source>
          (
          <year>1992</year>
          )
          <fpage>22</fpage>
          -
          <lpage>30</lpage>
          . doi: 10.1007/978-3-642-95650-8_2.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pérez-Romero</surname>
          </string-name>
          , et al.,
          <source>Evaluation of Artificial Intelligence-Based Models for Classifying Defective Photovoltaic Cells, Appl. Sci</source>
          .
          <volume>11</volume>
          (
          <year>2021</year>
          )
          <article-title>4226</article-title>
          . doi:
          <volume>10</volume>
          .3390/app11094226.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ljung</surname>
          </string-name>
          , et al.,
          <source>Deep Learning and System Identification, IFAC-PapersOnLine</source>
          <volume>53</volume>
          (
          <issue>2</issue>
          ) (
          <year>2020</year>
          )
          <fpage>1175</fpage>
          -
          <lpage>1181</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.ifacol.
          <year>2020</year>
          .
          <volume>12</volume>
          .1329
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lakhno</surname>
          </string-name>
          , et al.,
          <article-title>Development Strategy Model of the Informational Management Logistic System of a Commercial Enterprise by Neural Network Apparatus</article-title>
          ,
          <source>in: Cybersecurity Providing in Information and Telecommunication Systems</source>
          , vol.
          <volume>2746</volume>
          (
          <year>2020</year>
          )
          <fpage>87</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          , Deep Learning, The MIT Press (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Kandamali</surname>
          </string-name>
          , et al.,
          <source>Machine Learning Methods for Identification and Classification of Events in ϕ- OTDR Systems: a review</source>
          ,
          <source>Applied Optics</source>
          <volume>61</volume>
          (
          <issue>11</issue>
          ) (
          <year>2022</year>
          )
          <article-title>2975</article-title>
          . doi:
          <volume>10</volume>
          .1364/ao.444811.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bickler</surname>
          </string-name>
          ,
          <source>Machine Learning Identification and Classification of Historic Ceramics, Archaeology in New Zealand</source>
          <volume>61</volume>
          (
          <year>2018</year>
          )
          <fpage>48</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>O.</given-names>
            <surname>Rainio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Teuho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klén</surname>
          </string-name>
          ,
          <article-title>Evaluation Metrics and Statistical Tests for Machine Learning</article-title>
          ,
          <source>Sci. Rep</source>
          .
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <article-title>6086</article-title>
          . doi:
          <volume>10</volume>
          .1038/s41598-024- 56706-x.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Orozco Arias</surname>
          </string-name>
          , et al.,
          <source>Measuring Performance Metrics of Machine Learning Algorithms for Detecting and Classifying Transposable Elements, Processes</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          . doi:
          <volume>10</volume>
          .3390/pr8060638.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>