<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A YOLO-based Method for Object Contour Detection and Recognition in Video Sequences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mariia Nazarkevych</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maryna Kostiak</string-name>
          <email>maryna.y.kostiak@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nazar Oleksiv</string-name>
          <email>nazar.oleksiv.mnsa.2020@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victoria Vysotska</string-name>
          <email>victoria.a.vysotska@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii-Taras Shvahuliak</string-name>
          <email>andrii-taras.shvahuliak@lnu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Ivan Franko National University</institution>
          ,
          <addr-line>1 Universytetska str., Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>12 Stepan Bandera str., Lviv, 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>49</fpage>
      <lpage>58</lpage>
      <abstract>
<p>A method for recognizing the contours of objects in a video data stream is proposed. The data are captured with a video camera, and objects are recognized in real time using YOLO, a method for real-time object identification and recognition. Recognized objects are recorded in a video sequence that shows their contours. The proposed approach synthesizes methods of artificial intelligence, computer vision theory, and pattern recognition; it makes it possible to obtain control actions and mathematical functions for decision-making at every moment, to analyze the influence of external factors, and to forecast how processes unfold, and it relates to the fundamental problems of mathematical modeling of real processes. The installation of the neural network is shown in detail, along with its characteristics and capabilities. Computer vision approaches to object extraction are reviewed: region-growing methods, clustering-based methods, contour selection, and histogram-based methods. The work envisages building a system for rapid identification of combat vehicles based on the latest image filtering methods developed with deep learning. Thanks to the developed information technology for detecting objects under rapidly changing information, the time spent on machine identification is expected to be 10-20% shorter.</p>
      </abstract>
      <kwd-group>
<kwd>Artificial intelligence</kwd>
        <kwd>tracking</kwd>
        <kwd>selection of objects</kwd>
        <kwd>image recognition</kwd>
        <kwd>YOLO</kwd>
        <kwd>segmentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Video surveillance is a common means of
solving problems related to security and event
monitoring [1–3]. One of the main tasks arising
from video surveillance is detection [4],
tracking [5] and identification [6] of moving
objects. Video cameras are all around us and record data about us, so there is a need to recognize the data and the objects they capture. Recognition first requires a preprocessing stage that improves visual quality: increasing contrast, sharpening boundaries, removing blur, and filtering. Then comes the preparation of the graphic images: selection of objects, segmentation, and selection of contours.</p>
      <p>Tracking is determining the location of a
moving object [7] or several objects over time
using a video camera (Fig. 1). The algorithm
analyzes video frames and outputs the position
of moving objects relative to the frame.</p>
      <p>[Fig. 1. Video stream preprocessing: contrast enhancement, blur reduction, selection of objects, segmentation.]</p>
      <p>
        The main tracking problem is matching the positions of the target object in successive frames, especially if the object moves fast relative to the frame rate. Tracking systems therefore usually use a motion model [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] that describes how the image of the target object can change during various movements (Fig. 2).
      </p>
      <p>Examples of such simple movement patterns are flat object tracking: an affine transformation or a homography of the object image [9].</p>
      <p>If the target is a rigid three-dimensional object, the motion model determines its appearance depending on its position and orientation in space.</p>
      <p>For video compression, keyframes are divided into macroblocks. The motion model is then a transformation of a keyframe in which each macroblock is displaced by its motion vector.</p>
      <p>The image of a deformable object can be
covered with a grid and the movement of the
object is determined by the position of the
vertices of this grid.</p>
      <p>When an object is to be searched for and matched against a given one, a new set of key points is extracted from the test image; the two sets are matched, and a similarity score is calculated.</p>
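      <p>As an illustration of this matching step, here is a minimal sketch using OpenCV's ORB key points and a brute-force matcher; the detector choice, the file names, and the 0.75 ratio threshold are our illustrative assumptions, not prescribed by the text:
import cv2
# load the reference and test images in grayscale (placeholder file names)
reference_img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
test_img = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
# detect and describe key points in both images
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(reference_img, None)
kp2, des2 = orb.detectAndCompute(test_img, None)
# match the two descriptor sets and keep only sufficiently close pairs
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance &lt; 0.75 * pair[1].distance:
        good.append(pair[0])  # Lowe's ratio test
# similarity score: the share of reference key points with a good match
similarity = len(good) / max(len(kp1), 1)</p>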
    </sec>
    <sec id="sec-2">
      <title>2. Review of Literature</title>
      <sec id="sec-2-1">
        <title>2.1. Object Selection</title>
        <p>
          For selecting an object from a video stream, there are pixel-by-pixel methods, block-by-block methods, and methods based on minimization of an energy functional [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] (Fig. 3).
        </p>
        <p>Pixel-by-pixel methods of object selection
process all points of the image. These methods
are highly accurate, but they are sensitive to
noise.</p>
        <p>[Fig. 3. Selecting an object from a video stream: pixel-by-pixel, block-by-block, minimization of the energy functional.]</p>
        <p>
          Grayscale methods perform segmentation—
dividing a digital image into several sets of
pixels [
          <xref ref-type="bibr" rid="ref65">11</xref>
          ]. Image segmentation is commonly
used to highlight objects and boundaries. More
precisely, image segmentation is the process of
assigning such labels to each pixel of an image
so that pixels with the same labels share visual
characteristics.
        </p>
        <p>
          Block-based methods do not process
individual pixels [12], but groups of pixels
combined into blocks. If the block contains a
boundary, then in such areas the boundary of
the object is determined inaccurately [
          <xref ref-type="bibr" rid="ref45">13, 14</xref>
          ].
        </p>
        <p>The disadvantage of methods based on the energy functional [15] is their low speed of operation.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Methods of Expanding Regions</title>
        <p>The methods of this group are based on the use of local features of the image [16]. The idea of the region-growing method is to analyze first a starting point and then its neighboring points, assigning the analyzed points to one group or another according to a homogeneity criterion. In more effective variants of the method, the starting point is not an individual pixel but a division of the image into several small regions. Each region is then checked for homogeneity, and if the result of the test is negative, the corresponding region is divided into smaller sections.</p>
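        <p>A minimal region-growing sketch in Python follows; the brightness threshold T and the 4-connected neighborhood are illustrative assumptions:
import numpy as np
def grow_region(img, seed, T=10):
    # collect pixels connected to the seed whose brightness stays within T of it
    h, w = img.shape
    seed_val = int(img[seed])
    region = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if 0 &lt;= y &lt; h and 0 &lt;= x &lt; w and not region[y, x] \
                and abs(int(img[y, x]) - seed_val) &lt;= T:
            region[y, x] = True
            # schedule the 4-connected neighbors for inspection
            stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return region</p>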
        <p>Threshold segmentation and segmentation
[17] according to the homogeneity criterion
based on average brightness (Fig. 4) often do
not give the desired results.</p>
        <p>[Fig. 4. Methods of expanding regions: threshold segmentation, the homogeneity criterion based on brightness, the texture-based homogeneity criterion.]</p>
        <p>Such segmentation usually results in a large number of small regions. The most effective results are given by segmentation based on the texture-based homogeneity criterion [18].</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Selection of Contours</title>
        <p>In the video, heterogeneous objects are often
observed, so you have to face the task of finding
perimeters, curvature, form factors, specific
surface area of objects, etc. All these tasks are
related to the analysis of contour elements of
objects.</p>
        <p>Methods for highlighting contours in an
image can be divided into three main classes:
1. High-frequency filtering methods [19].
2. Methods of spatial differentiation [20].
3. Methods of functional approximation
[21] (Fig. 5).</p>
        <p>Common to all these methods is treating the boundary as a region of a sharp drop in the image brightness function f(x, y), which is distinguished by the introduced mathematical contour model.</p>
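        <p>For instance, spatial differentiation can be sketched with OpenCV's Sobel and Canny operators; the kernel size, the file name, and the 100/200 hysteresis thresholds are illustrative assumptions:
import cv2
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# gradient components approximate the drop in the brightness function f(x, y)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
# Canny adds thinning and hysteresis, giving thin and mostly closed contours
edges = cv2.Canny(img, 100, 200)</p>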
        <p>[Fig. 5. Selection of contours: methods of high-frequency filtering, methods of spatial differentiation, methods of functional approximation.]</p>
        <p>By their tasks, contour selection algorithms are subject to requirements: the selected contours must be thin, without gaps, and closed. The process of selecting contours is complicated by the need to apply algorithms for thinning and for eliminating gaps; even then, the contours are often not closed and are unsuitable for analysis procedures.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Methods Based on Clustering</title>
        <p>The k-means method is an iterative method used to divide an image into k clusters. The basic algorithm is given below [22]:</p>
        <p>Step 1. Choose k cluster centers, randomly or based on some heuristics.</p>
        <p>Step 2. Place each image pixel in a cluster
whose center is closest to that pixel.</p>
        <p>Step 3. Recalculate the cluster centers by
averaging all the pixels in the cluster.</p>
        <p>Step 4. Repeat steps 2 and 3 until
convergence (for example, when the pixels
remain in the same cluster).</p>
        <p>The distance is usually taken as the sum of
squares or absolute values of the differences
between the pixel and the center of the cluster.</p>
        <p>The difference is most often based on color, brightness, texture, and pixel location, or a weighted combination of these factors, as sketched below.</p>
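        <p>Steps 1-4 can be sketched with OpenCV's built-in k-means; k = 3, the file name, and the termination criteria are illustrative assumptions:
import cv2
import numpy as np
img = cv2.imread("frame.png")
pixels = img.reshape(-1, 3).astype(np.float32)  # one row per pixel (color difference)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
# choose centers, assign pixels, recalculate, and repeat until convergence
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)</p>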
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Methods Using a Histogram</title>
        <p>Histogram-based methods [23] are very efficient compared to other image segmentation methods because they require only a single pass over the pixels.</p>
        <p>A histogram is calculated over all pixels in
the image and its minima and maxima are used
to find clusters in the image. Color or
brightness can be used when comparing.</p>
        <p>An improvement of this method is to apply it recursively to the clusters in the image, dividing them into smaller clusters. The process is repeated with smaller and smaller clusters until new clusters stop appearing altogether.</p>
        <p>Approaches based on the use of histograms can also be quickly adapted to multiple frames while retaining their single-pass speed advantage, as sketched below.</p>
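        <p>A single histogram-based split can be sketched as follows; the smoothing window and the choice of the deepest valley between the two highest peaks are illustrative assumptions:
import cv2
import numpy as np
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
hist = np.convolve(hist, np.ones(5) / 5, mode="same")  # smooth the histogram
p1, p2 = sorted(np.argsort(hist)[-2:])  # the two highest peaks
t = p1 + int(np.argmin(hist[p1:p2 + 1]))  # the deepest minimum between them
clusters = (img > t).astype(np.uint8)  # one pass over the pixels</p>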
      </sec>
      <sec id="sec-2-6">
        <title>2.6. YOLO—Object Detection</title>
        <p>You-Only-Look-Once (YOLO) [24] is a standalone video object detection system that can operate in real time at very high frame rates: the base model runs at 45 frames per second, and a lighter variant is claimed to reach up to 155 frames per second. YOLO was originally introduced in 2015 by Joseph Redmon and colleagues [24].</p>
        <p>YOLO consists of two main parts: a class
detector and a frame detector. The class
detector determines which objects are present
in the image. The frame detector determines
the location of objects in the image. The class
detector works by using a regression neural
network that learns to predict the value of a
variable. It learns to predict the probability
that a certain object is present in the image.</p>
        <p>The YOLO class detector is a regression
neural network with 24 deep layers. The input
layer of the network receives a 448×448 pixel
image. The output layer of the network
contains 84 values. Each value corresponds to
the probability that a certain object is present
in the image.</p>
      </sec>
      <sec id="sec-2-7">
        <title>2.7. Features of Video Tracking</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Statement</title>
      <p>Digital IP cameras are increasingly used in
modern video surveillance systems.
Connecting an IP camera to an already existing
local network can guarantee minimal
installation costs.</p>
      <p>Let’s consider the characteristics that must
be taken into account when choosing computer
technologies for a digital video surveillance
system.</p>
      <p>The first characteristic is the number of physical ports to which other devices can be connected. This parameter determines the maximum number of IP cameras that can be connected. For a home video surveillance system, a switch with 4 ports is often used; equipment with 8, 16, or 24 ports is used for professional systems [25].</p>
      <p>The second characteristic is bandwidth, where the bandwidth of each port is taken into account. The most common values are 10/100 Mbps and 1 Gbps. It should be taken into account that the total bandwidth of the switch can be lower than the sum over all of its ports. When choosing the bandwidth of a switch, you need to determine what data transfer rate your network can handle.</p>
      <p>The third characteristic is the speed of data
transmission, which will limit the possibility of
receiving and transmitting information.</p>
      <p>The fourth characteristic is PoE, a function that allows you to power other devices through the same cable that transmits data. This is very important for organizing video surveillance, as it eliminates unnecessary wires and simplifies the installation and the power supply of the connected devices.</p>
      <p>The fifth characteristic is management protocols. PoE switches are divided into managed and unmanaged. Managed switches are devices that support several network management and data transmission protocols (functions).</p>
      <p>To build simple and small IP surveillance systems that are physically isolated from networks carrying other critical data (telemetry, banking and financial data, video conferences, etc.), it is possible to get by with unmanaged PoE switches.</p>
      <p>Let’s set up object tracking in the video stream and examine the speed of object detection.</p>
      <p>To do this, we need to run the following command in the terminal:
pip install ultralytics
And then import the library in the code:
from ultralytics import YOLO</p>
      <p>Now everything is ready to create a neural network model:
model = YOLO("yolov8m.pt")</p>
      <p>As mentioned earlier, YOLOv8 is a group of
neural network models. These models were
built and trained using PyTorch and exported
as .pt files.</p>
      <p>The first time you run this code, it will
download the yolov8m.pt file from the
Ultralytics server to the current folder. It will
then construct a model object. You can now
train this model, detect objects with it, and
export it for use. There are convenient
methods for all these tasks:</p>
      <p>train({dataset descriptor file path})—used
to train the model on the image dataset.</p>
      <p>predict({image})—used to run prediction on the specified image, for example, to detect the bounding boxes of all objects that the model can find in the image.</p>
      <p>export({format})—used to export the model
from the default PyTorch format to the specified
format.</p>
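      <p>A brief usage sketch of these three methods; the dataset descriptor coco128.yaml, the image name, and the ONNX target are placeholder choices of ours:
from ultralytics import YOLO
model = YOLO("yolov8m.pt")
model.train(data="coco128.yaml", epochs=3)  # fine-tune on an image dataset
results = model.predict("image.jpg")  # detect objects on one image
model.export(format="onnx")  # convert from PyTorch to another format</p>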
      <p>All YOLOv8 object detection models are already pre-trained on the COCO dataset, a huge collection of images covering 80 different object classes.</p>
      <p>
        The prediction method accepts many
different types of input data, including a path
to a single image, an array of image paths, an
Image object from Python’s well-known PIL
library, and others [
        <xref ref-type="bibr" rid="ref59">26</xref>
        ].
      </p>
      <p>After running the input data through the model, it returns an array of results, one per input image. Since we provided only one image, it returns an array with a single element, which you can extract as shown below. The result contains the detected objects (Fig. 6) and convenient properties for working with them. The most important is the boxes array with information about the detected bounding boxes on the image (Fig. 7). You can determine how many objects were found by running the len function on it. After launch, “2” was received, which means that two boxes were detected: one for a mobile phone and the other for a person (Fig. 8).</p>
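      <p>A sketch of the extraction described above (the variable names are ours):
results = model.predict("image.jpg")
result = results[0]  # only one image was provided
print(len(result.boxes))  # prints 2: a mobile phone and a person</p>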
      <p>You can analyze each box either in a loop or manually. Let’s take the first object. The box object contains bounding box properties, including:</p>
      <p>xyxy—the coordinates of the box as an array [x1, y1, x2, y2];
cls—the object type identifier;
conf—the model’s confidence in this object. If it is very low, e.g. &lt;0.5, you can simply ignore the detection.</p>
      <p>Let’s display information about the first object; you will receive the information shown below. Since YOLOv8 wraps PyTorch models, their outputs are encoded as arrays of PyTorch Tensor objects, so you need to extract the first element from each of these arrays. To extract the actual values from a Tensor, use the .tolist() method for tensors holding an array and the .item() method for tensors holding a scalar value.</p>
      <p>Let’s load the data into the corresponding variables, as sketched below. Now we see the actual data: the coordinates can be rounded, and the probability can be rounded to two decimal places.</p>
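      <p>A sketch of this extraction; the rounding choices follow the text:
box = result.boxes[0]  # the first detected object
x1, y1, x2, y2 = [round(v) for v in box.xyxy[0].tolist()]
class_id = int(box.cls[0].item())
prob = round(box.conf[0].item(), 2)</p>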
      <p>All objects that a neural network can detect have numeric identifiers. In the case of the pretrained YOLOv8 model, there are 80 object classes with IDs from 0 to 79. The COCO class list is public. Additionally, the YOLOv8 result object contains a convenient names property for retrieving these classes.</p>
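      <p>For example, the identifier can be turned into a readable label through this property:
print(result.names[class_id])  # e.g. "cell phone" for COCO class 67</p>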
    </sec>
    <sec id="sec-4">
      <title>4. Data to Proposed Model</title>
      <p>The web application consists of two main files:</p>
      <p>Flaskapp.py is a file responsible for the
project itself, its appearance, and its structure.</p>
      <p>YOLO_Video.py is a file that is responsible
for the YOLO algorithm, namely for the
implementation of object recognition in the
video stream.</p>
      <p>Implementation of the Flaskapp.py file:
Configuring the Flask application:</p>
      <p>A web application is created using the Flask
class.</p>
      <p>Configuration parameters such as the secret
key and the file download folder are set.</p>
      <p>The UploadFileForm class is defined using Flask-WTF to handle file uploads.</p>
      <p>Video processing functions:</p>
      <p>The generate_frames and generate_frames_web functions are defined to generate frames based on the output of YOLO detection.</p>
      <p>These functions use the video_detection
function from the YOLO_Video.py file to perform
object detection on video frames.</p>
      <p>Routes are defined for the home page (/ and
/home), the webcam page (/webcam), and the
video download page (/FrontPage).</p>
      <p>The /video and /webapp routes are
responsible for broadcasting video frames with
object detection results.</p>
      <p>The webcam and front routes render HTML
templates for webcam pages and video uploads.</p>
      <p>The UploadFileForm class is used to handle
uploads of video files.</p>
      <p>The application runs on the development
server if the script is executed directly.</p>
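      <p>A condensed sketch of Flaskapp.py under these conventions; the secret key, upload folder, and template names are placeholders, and video_detection comes from YOLO_Video.py:
from flask import Flask, Response, render_template, session
import cv2
from YOLO_Video import video_detection

app = Flask(__name__)
app.config["SECRET_KEY"] = "change-me"
app.config["UPLOAD_FOLDER"] = "static/files"

def generate_frames(path):
    # re-encode each processed frame as JPEG for multipart streaming
    for frame in video_detection(path):
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + buf.tobytes() + b"\r\n")

@app.route("/")
@app.route("/home")
def home():
    return render_template("index.html")  # placeholder template name

@app.route("/video")
def video():
    return Response(generate_frames(session.get("video_path")),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(debug=True)</p>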
    </sec>
    <sec id="sec-5">
      <title>5. Implementation</title>
      <p>The video_detection function takes a video
path as input and performs object detection
using the YOLO model.</p>
      <p>The YOLO model from Ultralytics is loaded
from the specified checkpoint file (yolov8n.pt).</p>
      <p>Bounding boxes of the detected objects are determined on each frame, and the processed frames are returned.</p>
      <p>Class names corresponding to detected
objects are defined in the classNames list.</p>
      <p>Video capture and processing:</p>
      <p>OpenCV is used to capture video frames
from the specified path.</p>
      <p>On each frame, detected objects are drawn
along with class labels and confidence levels.</p>
      <p>The processed frames are returned for
streaming in the user’s browser.</p>
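      <p>A minimal sketch of the video_detection function under these conventions; the drawing style is simplified:
from ultralytics import YOLO
import cv2

def video_detection(path):
    model = YOLO("yolov8n.pt")  # the checkpoint named in the text
    cap = cv2.VideoCapture(path)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # draw each detected box with its class label and confidence
        for box in model.predict(frame)[0].boxes:
            x1, y1, x2, y2 = [int(v) for v in box.xyxy[0].tolist()]
            label = model.names[int(box.cls[0])]
            conf = float(box.conf[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(frame, f"{label} {conf:.2f}", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        yield frame  # the processed frame is streamed to the browser
    cap.release()</p>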
      <p>The general course of work:
• The user uploads a video file through
the interface.
• The file is saved and its path is stored
in the Flask session.
• The object detector is called from the
received video path.
• The processed video frames are
transmitted for real-time viewing
through the user’s browser.</p>
      <p>This project uses Flask for the web
application and integrates YOLO for real-time
video processing. The YOLO_Video.py file
isolates functionality related to YOLO, making
it modular and reusable.</p>
      <p>When entering the web application, we are greeted by the title page.</p>
      <p>There are two buttons on this page:</p>
      <p>The first Video button sends us to a page
where we can upload a video, press the Submit
button, and receive the processed video [27].</p>
      <p>The second LiveWebcam button sends us to
a page where the webcam is connected
automatically and displayed on the screen in a
processed format.</p>
      <p>In Figs. 9–16 we can see that the YOLOv8 algorithm is running on the webcam.</p>
      <p>Our model is based on pre-trained OSFA and is built on top of PyTorch. The training image size was 256×128, and batches of 64 randomly selected samples were fed to the network. During testing, the test images are also resized to 256×128. Our model is trained for 100 epochs. The values of α1, α2, and the learning rate are the same as those set by OSFA: 1, 0.0007, and 3.5×10⁻⁵, respectively. In SAM, the number of horizontal parts is 4. All experiments are performed on an 11th Gen Intel(R) Core(TM) i7-11800H at 2.30 GHz with an NVIDIA GeForce RTX 3060.</p>
      <p>The data were taken from [28] to form the dataset. The protected technology was developed in [29], from which the protected communication channels are taken. The structural diagram of the model was taken from [30], and the use of methods from [31]. The approaches used for image preprocessing were taken from [32] and [33]. Data processing was formed thanks to [34].</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>This study analyzed the YOLOv8 object
recognition algorithm and its differences from
other machine learning algorithms. A web
application for object recognition in a video
stream was also created, analyzed, and tested. A
clear overview of the development tools for a
web application using YOLOv8 has been
provided. The PyCharm programming
environment, the Flask framework, and its
advantages compared to other frameworks were
studied. Also explored is Ultralytics, which helps
in the development and testing of web
applications for video streaming.</p>
      <p>As a result of the work performed, a high level of understanding of the YOLOv8 algorithm, its features, and its capabilities in the field of object detection in the video stream was achieved. The developed web application is not only a practical application of this algorithm but can also serve as a basis for further developments and improvements in this direction.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>Telecommunication Systems</source>
          , vol.
          <volume>3550</volume>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          (
          <year>2023</year>
          )
          <fpage>240</fpage>
          -
          <lpage>245</lpage>
          . [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hulak</surname>
          </string-name>
          , et al.,
          <source>Dynamic Model of</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>System</surname>
          </string-name>
          , in: 2nd International Conference
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Information</given-names>
            <surname>Networks</surname>
          </string-name>
          , vol.
          <volume>3530</volume>
          (
          <year>2023</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          102-
          <fpage>111</fpage>
          . [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Grechaninov</surname>
          </string-name>
          , et al.,
          <source>Decentralized</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          3188, no.
          <issue>2</issue>
          (
          <year>2022</year>
          )
          <fpage>197</fpage>
          -
          <lpage>206</lpage>
          . [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          , et al.,
          <source>Deep Industrial Image</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Intell</surname>
          </string-name>
          . Res.
          <volume>21</volume>
          (
          <issue>1</issue>
          ) (
          <year>2024</year>
          )
          <fpage>104</fpage>
          -
          <lpage>135</lpage>
          . doi:
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          10.1007/s11633-023-1459-z. [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kruger-Marais</surname>
          </string-name>
          , Subtitling for
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Education</surname>
            ,
            <given-names>Int. J.</given-names>
          </string-name>
          <string-name>
            <surname>Lang</surname>
          </string-name>
          . Stud.
          <volume>18</volume>
          (
          <issue>2</issue>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          (
          <year>2024</year>
          )
          <fpage>129</fpage>
          -
          <lpage>150</lpage>
          . doi:
          <volume>10</volume>
          .5281/zenodo.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          10475319. [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Li</surname>
          </string-name>
          , et al.,
          <string-name>
            <surname>Efficient</surname>
          </string-name>
          Long-Short
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>tation</surname>
          </string-name>
          , Pattern Recognit.
          <volume>146</volume>
          (
          <year>2024</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          110078. doi:
          <volume>10</volume>
          .1016/j.patcog.
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          110078. [7]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kawamura</surname>
          </string-name>
          , et al.,
          <source>Ground-Based</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          2024 Forum (
          <year>2024</year>
          ). doi:
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          10.2514/6.2024-
          <fpage>2010</fpage>
          . [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yin</surname>
          </string-name>
          , et al.,
          <source>Numerical Modeling and</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>University-SCIENCE</surname>
            <given-names>A</given-names>
          </string-name>
          25(
          <issue>1</issue>
          ) (
          <year>2024</year>
          )
          <fpage>47</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          62. doi:
          <volume>10</volume>
          .1631/jzus.a2200014. [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fridovich-Keil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>cation</surname>
          </string-name>
          ,
          <source>AIAA SCITECH 2024 Forum</source>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          (
          <year>2024</year>
          ).
          <source>doi: 10.2514/6</source>
          .2024-
          <volume>0626</volume>
          . [10]
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <source>Quality Assessment for</source>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <given-names>Magnitude</given-names>
            <surname>Similarity</surname>
          </string-name>
          , IEEE Transactions (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          . doi:
          <volume>10</volume>
          .1007/s11042-023-
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>on Multimedia</surname>
          </string-name>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . doi: 17914-
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          10.1109/tmm.
          <year>2024</year>
          .
          <volume>3356029</volume>
          . [20]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Su</surname>
          </string-name>
          , et al.,
          <source>Enhancing Concealed Object</source>
          [11]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>A Review of retinal Detection in Active Millimeter Wave</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Analysis</surname>
          </string-name>
          ,
          <source>Eng. Appl. Artif. Intell. 128 Process</source>
          .
          <volume>216</volume>
          (
          <year>2024</year>
          )
          <article-title>109303</article-title>
          . doi:
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          (
          <year>2024</year>
          )
          <article-title>107454</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.sigpro.
          <year>2023</year>
          .
          <volume>109303</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          10.1016/j.engappai.
          <year>2023</year>
          .
          <volume>107454</volume>
          . [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bhandari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Russo</surname>
          </string-name>
          , Global Optimality [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          , et al.,
          <string-name>
            <surname>Ftpe-Bc</surname>
          </string-name>
          :
          <article-title>Fast Thumbnail- Guarantees for Policy Gradient Methods,</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <given-names>Preserving</given-names>
            <surname>Image Encryption Using Oper. Res.</surname>
          </string-name>
          (
          <year>2024</year>
          ). doi:
          <volume>10</volume>
          .1287/opre.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Block-Churning</surname>
          </string-name>
          , SSRN
          <volume>4698446</volume>
          (
          <year>2024</year>
          ).
          <year>2021</year>
          .
          <volume>0014</volume>
          . [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Vladymyrenko</surname>
          </string-name>
          , et al.,
          <source>Analysis</source>
          <volume>of</volume>
          [22]
          <string-name>
            <given-names>K.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Duan</surname>
          </string-name>
          , Optical Counting
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <source>IEEE International Scientific-Practical Optics</source>
          <volume>63</volume>
          (
          <issue>6</issue>
          ) (
          <year>2024</year>
          )
          <fpage>A7</fpage>
          -
          <lpage>A15</lpage>
          . doi:
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          Conference Problems of Infocom-
          <volume>10</volume>
          .1364/ao.502868.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <surname>munications</surname>
            , Science and Technology [23]
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Lee</surname>
          </string-name>
          , et al.,
          <string-name>
            <surname>Intensity</surname>
          </string-name>
          Histogram-Based
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          (
          <year>2019</year>
          ). doi:
          <volume>10</volume>
          .1109/picst47496.
          <year>2019</year>
          .
          <article-title>Reliable Image Analysis Method for</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          9061376.
          <string-name>
            <surname>Bead-Based Fluorescence</surname>
            <given-names>Immunoassay</given-names>
          </string-name>
          , [14]
          <string-name>
            <given-names>V.</given-names>
            <surname>Buriachok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          , P. Skladannyi,
          <string-name>
            <surname>BioChip J</surname>
          </string-name>
          . (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . doi:
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <source>Security Rating Metrics for Distributed 10.1007/s13206-023-00137-9.</source>
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <string-name>
            <given-names>Wireless</given-names>
            <surname>Systems</surname>
          </string-name>
          , in: Workshop of the [24]
          <string-name>
            <given-names>J.</given-names>
            <surname>Redmon</surname>
          </string-name>
          , et al.,
          <source>You Only Look Once:</source>
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          8th International Conference on Unified,
          <string-name>
            <surname>Real-Time Object</surname>
            <given-names>Detection</given-names>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          nologies.
          <source>Education:” Modern Machine and Pattern Recognition</source>
          (
          <year>2016</year>
          )
          <fpage>779</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <source>Learning Technologies and Data Science</source>
          ,
          <volume>788</volume>
          . doi:
          <volume>10</volume>
          .1109/CVPR.
          <year>2016</year>
          .
          <volume>91</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          vol.
          <volume>2386</volume>
          (
          <year>2019</year>
          )
          <fpage>222</fpage>
          -
          <lpage>233</lpage>
          . [25]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , et al.,
          <source>The Design and [</source>
          15]
          <string-name>
            <given-names>D.</given-names>
            <surname>Clayton‐Chubb</surname>
          </string-name>
          , et al.,
          <source>Metabolic Implementation of a Wireless Video</source>
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <string-name>
            <surname>Dysfunction‐Associated Steatotic Liver Surveillance System</surname>
          </string-name>
          , 21st Annual
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <source>with Frailty and Social Disadvantage, Computing and Networking</source>
          (
          <year>2015</year>
          )
          <fpage>426</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <given-names>Liver</given-names>
            <surname>Int</surname>
          </string-name>
          .
          <volume>44</volume>
          (
          <issue>1</issue>
          ) (
          <year>2024</year>
          )
          <fpage>39</fpage>
          -
          <lpage>51</lpage>
          . doi: 438. doi:
          <volume>10</volume>
          .1145/2789168.2790123.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          10.1111/liv. 15725. [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          , et al.,
          <string-name>
            <surname>The</surname>
            Ateb-Gabor [16]
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Mira</surname>
          </string-name>
          , et al.,
          <source>Early Diagnosis of Oral Filter for Fingerprinting</source>
          , Conference on
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          <string-name>
            <given-names>Artificial</given-names>
            <surname>Intelligence</surname>
          </string-name>
          ,
          <source>Fusion: Practice Technologies, AISC</source>
          <volume>1080</volume>
          (
          <year>2019</year>
          )
          <fpage>247</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          Appl.
          <volume>14</volume>
          (
          <issue>1</issue>
          ) (
          <year>2024</year>
          )
          <fpage>293</fpage>
          -
          <lpage>308</lpage>
          . doi: 255. doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -33695-
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          10.54216/fpa.140122. 0_
          <fpage>18</fpage>
          . [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Qiao</surname>
          </string-name>
          , et al.,
          <string-name>
            <given-names>A</given-names>
            <surname>Multi-Level Thresholding</surname>
          </string-name>
          [27]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Skladannyi</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Platonenko,
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <article-title>Hybrid Arithmetic Optimization and Unmanned Aerial Vehicles</article-title>
          , in: IEEE 41st
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          <string-name>
            <given-names>Expert</given-names>
            <surname>Syst</surname>
          </string-name>
          .
          <source>Appl</source>
          .
          <volume>241</volume>
          (
          <year>2024</year>
          )
          <fpage>122316</fpage>
          .
          <string-name>
            <surname>Nanotechnology</surname>
          </string-name>
          (
          <year>2022</year>
          )
          <fpage>473</fpage>
          -
          <lpage>477</lpage>
          . doi: [18]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          , et al.,
          <source>Fibrous Whey Protein 10.1109/ELNANO54667</source>
          .
          <year>2022</year>
          .
          <volume>9927105</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <string-name>
            <given-names>Mediated</given-names>
            <surname>Homogeneous</surname>
          </string-name>
          and Soft- [28]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          , Data Protection Based
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          <string-name>
            <surname>Curcumin</surname>
          </string-name>
          , Food Chemistry
          <volume>437</volume>
          (
          <issue>1</issue>
          ) Technical Conference Computer
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          (
          <year>2024</year>
          )
          <article-title>137850</article-title>
          . doi:
          <volume>10</volume>
          .1016/j.
          <source>Sciences and Information Technologies</source>
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <string-name>
            <surname>foodchem.</surname>
          </string-name>
          <year>2023</year>
          .
          <volume>137850</volume>
          . (
          <year>2016</year>
          )
          <fpage>30</fpage>
          -
          <lpage>32</lpage>
          . doi:
          <volume>10</volume>
          .1109/STC[19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <surname>FP-Net:</surname>
          </string-name>
          Frequency- CSIT.
          <year>2016</year>
          .
          <volume>7589861</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          <string-name>
            <given-names>Perception</given-names>
            <surname>Network with Adversarial</surname>
          </string-name>
          [29]
          <string-name>
            <given-names>M.</given-names>
            <surname>Medykovskyy</surname>
          </string-name>
          , et al.,
          <source>Methods of</source>
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          (
          <year>2015</year>
          )
          <fpage>70</fpage>
          -
          <lpage>72</lpage>
          . doi:
          <volume>10</volume>
          .1109/STC-
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <string-name>
            <surname>CSIT.</surname>
          </string-name>
          <year>2015</year>
          .
          <volume>7325434</volume>
          . [30]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sheketa</surname>
          </string-name>
          , et al.,
          <source>Formal Methods for</source>
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          <string-name>
            <surname>Technology</surname>
          </string-name>
          (
          <year>2019</year>
          )
          <fpage>29</fpage>
          -
          <lpage>34</lpage>
          . doi:
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          10.1109/PICST47496.
          <year>2019</year>
          .
          <volume>9061299</volume>
          . [31]
          <string-name>
            <given-names>V.</given-names>
            <surname>Sheketa</surname>
          </string-name>
          , et al.,
          <source>Empirical Method of</source>
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          <source>Aid Sciences and Application</source>
          (
          <year>2020</year>
          )
          <fpage>22</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          26. doi:
          <volume>10</volume>
          .1109/DASA51403.
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref60">
        <mixed-citation>
          9317218. [32]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boyko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tkachuk</surname>
          </string-name>
          , Processing of
        </mixed-citation>
      </ref>
      <ref id="ref61">
        <mixed-citation>
          <string-name>
            <surname>Hadoop</surname>
          </string-name>
          and Java MapReduce, in: 3rd
        </mixed-citation>
      </ref>
      <ref id="ref62">
        <mixed-citation>
          &amp;
          <string-name>
            <surname>Data-Driven Medicine</surname>
          </string-name>
          Vol.
          <volume>2753</volume>
        </mixed-citation>
      </ref>
      <ref id="ref63">
        <mixed-citation>
          (
          <year>2020</year>
          )
          <fpage>405</fpage>
          -
          <lpage>414</lpage>
          . [33]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boyko</surname>
          </string-name>
          , et al.,
          <source>Fractal Distribution of</source>
        </mixed-citation>
      </ref>
      <ref id="ref64">
        <mixed-citation>
          <string-name>
            <given-names>IDDM</given-names>
            <surname>Vol</surname>
          </string-name>
          .
          <volume>2448</volume>
          (
          <year>2019</year>
          )
          <fpage>307</fpage>
          -
          <lpage>318</lpage>
          . [34]
          <string-name>
            <given-names>I.</given-names>
            <surname>Tsmots</surname>
          </string-name>
          , et al.,
          <source>The Method and</source>
        </mixed-citation>
      </ref>
      <ref id="ref65">
        <mixed-citation>
          <volume>11</volume>
          (
          <issue>5</issue>
          ) (
          <year>2021</year>
          )
          <fpage>518</fpage>
          -
          <lpage>530</lpage>
          . doi:
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>