                         A YOLO-based Method for Object Contour Detection
                         and Recognition in Video Sequences
                         Mariia Nazarkevych1, Maryna Kostiak1, Nazar Oleksiv1, Victoria Vysotska1,
                         and Andrii-Taras Shvahuliak2
                         1 Lviv Polytechnic National University, 12 Stepan Bandera str., Lviv, 79013, Ukraine
                         2 Lviv Ivan Franko National University, 1 Universytetska str., Lviv, 79000, Ukraine



Abstract
A method for recognizing the contours of objects in a video data stream is proposed. The data are captured with a video camera, and objects are recognized in real time using YOLO, a method for identifying and recognizing objects in real time. Recognized objects are recorded in a video sequence showing their contours. The proposed approach combines methods of artificial intelligence and the theory of computer vision on the one hand with pattern recognition on the other. It makes it possible to obtain control actions and mathematical functions for decision-making at every moment, with the possibility of analyzing the influence of external factors and forecasting the flow of processes, and it relates to the fundamental problems of mathematical modeling of real processes. The installation of the neural network is shown in detail, along with its characteristics and capabilities, and computer-vision approaches to object extraction are presented. Well-known methods include region expansion, clustering-based methods, contour selection, and histogram-based methods. The work envisages building a system for rapid identification of combat vehicles based on the latest image filtering methods developed using deep learning. The time spent on machine identification is expected to be 10-20% shorter, thanks to the developed information technology for detecting objects under rapidly changing information.

Keywords
                                          Artificial intelligence, tracking, selection of objects, image recognition, YOLO, segmentation.

1. Introduction

Video surveillance is a common means of solving problems related to security and event monitoring [1-3]. Among the main tasks arising in video surveillance are the detection [4], tracking [5], and identification [6] of moving objects. Video cameras are all around us and record data about us, so there is a need to recognize that data and the objects in it. Recognition requires a pre-processing stage that improves visual quality: increasing the contrast, distinguishing the boundaries, removing blur, and filtering. This is followed by an operation for the preparation of graphic images: selection of objects, segmentation, and selection of contours.

Tracking is determining the location of a moving object [7], or of several objects, over time using a video camera (Fig. 1). The algorithm analyzes video frames and outputs the position of moving objects relative to the frame.

Figure 1: Scheme of preprocessing of a video sequence with object capture (contrast enhancement, blur reduction, selection of contours, filtration)
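As an illustration of the pre-processing stage described above, one of the listed operations, contrast enhancement, can be sketched as a simple min-max stretch in plain Python; the pixel values below are illustrative, not taken from the paper's data:

```python
def stretch_contrast(pixels, out_min=0, out_max=255):
    """Linearly rescale grayscale pixel values to the full output range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:  # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A dim, low-contrast row of pixels is spread over the full 0..255 range.
row = [100, 110, 120, 130]
print(stretch_contrast(row))  # [0, 85, 170, 255]
```

The same idea applied per-pixel to a whole frame is what the "contrast enhancement" block of Fig. 1 stands for; production pipelines would typically use an OpenCV equivalent instead.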

                         CPITS-2024: Cybersecurity Providing in Information and Telecommunication Systems, February 28, 2024, Kyiv, Ukraine
EMAIL: mariia.a.nazarkevych@lpnu.ua (M. Nazarkevych); maryna.y.kostiak@lpnu.ua (M. Kostiak); nazar.oleksiv.mnsa.2020@lpnu.ua
                         (N. Oleksiv); victoria.a.vysotska@lpnu.ua (V. Vysotska); andrii-taras.shvahuliak@lnu.edu.ua (A.-T. Shvahuliak)
                         ORCID: 0000-0002-6528-9867 (M. Nazarkevych); 0000-0002-6667-7693 (M. Kostiak); 0000-0001-7821-3522 (N. Oleksiv); 0000-0001-
                         6417-3689 (V. Vysotska); 0009-0002-0319-1909 (A.-T. Shvahuliak)
                                      ©️ 2024 Copyright for this paper by its authors.
                                      Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

                                      CEUR Workshop Proceedings (CEUR-WS.org)

The main tracking problem is matching the positions of the target object in successive frames, especially if the object is moving fast relative to the frame rate. Tracking systems therefore usually use a movement model [8], which describes how the image of the target object can change during various movements (Fig. 2).

Figure 2: Scheme of separation of objects (selection of objects, segmentation, selection of contours)

Examples of such simple movement patterns are flat object tracking by affine transformation or by homography of the object image [9].

The target can be a rigid three-dimensional object, in which case the motion model determines its appearance depending on its position in space and its orientation.

For video compression, keyframes are divided into macroblocks. A motion model is derived from the keyframes, where each macroblock is transformed using a motion vector.

The image of a deformable object can be covered with a grid, and the movement of the object is then determined by the positions of the vertices of this grid.

When an object is to be searched for and matched against a given one, a new set of key points is extracted from the test image, the two sets are matched, and a similarity score is calculated.

2. Review of Literature

2.1. Object Selection

For selecting an object from a video stream, there are pixel-by-pixel methods, block-by-block methods, and methods based on minimization of an energy functional [10] (Fig. 3).

Pixel-by-pixel methods of object selection process all points of the image. These methods are highly accurate, but they are sensitive to noise.

Figure 3: Methods of selecting an object from a video stream (pixel-by-pixel, block-by-block, minimization of the energy functional)

Grayscale methods perform segmentation: dividing a digital image into several sets of pixels [11]. Image segmentation is commonly used to highlight objects and boundaries. More precisely, image segmentation is the process of assigning labels to each pixel of an image so that pixels with the same label share visual characteristics.

Block-based methods process not individual pixels [12] but groups of pixels combined into blocks. If a block contains a boundary, the boundary of the object is determined inaccurately in such areas [13, 14].

The disadvantage of methods based on an energy function [15] is their low speed of operation.

2.2. Methods of Expanding Regions

The methods of this group are based on the use of local features of the image [16]. The idea of the region expansion method is to analyze first a starting point and then its neighboring points, assigning the analyzed points to one group or another according to a homogeneity criterion. In more effective variants of the method, the starting points are not individual pixels but a division of the image into several small areas. Each region is then checked for homogeneity, and if the result of the test is negative, the corresponding area is divided into smaller sections.

Threshold segmentation and segmentation according to the homogeneity criterion based on average brightness [17] (Fig. 4) often do not give the desired results.

Figure 4: Methods of expanding regions in a video stream (by the brightness-based homogeneity criterion, by the texture-based homogeneity criterion, threshold segmentation)

Such segmentation usually results in a large number of small regions. The most effective results are given by segmentation based on the texture homogeneity criterion [18].
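The region-expansion idea can be sketched in plain Python: starting from a seed pixel, neighbors are added to the region while they satisfy a brightness-homogeneity criterion. This is a toy illustration with a hypothetical tolerance threshold, not the paper's implementation:

```python
def grow_region(image, seed, tol=10):
    """Flood-fill style region growing on a 2D grayscale grid.

    A neighbor joins the region if its brightness differs from the
    seed's brightness by at most `tol` (the homogeneity criterion).
    """
    rows, cols = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - base) > tol:
            continue  # fails the homogeneity criterion
        region.add((r, c))
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

# A dark patch in the top-left corner, surrounded by brighter pixels.
img = [
    [10, 12, 200],
    [11, 13, 210],
    [90, 95, 205],
]
print(sorted(grow_region(img, (0, 0))))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The more effective variants mentioned in the text start not from single seed pixels but from small pre-split areas, merging or splitting them by the same kind of criterion.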
2.3. Selection of Contours

In video, heterogeneous objects are often observed, so one faces the task of finding perimeters, curvature, form factors, the specific surface area of objects, etc. All these tasks are related to the analysis of the contour elements of objects.

Methods for highlighting contours in an image can be divided into three main classes:
1. High-frequency filtering methods [19].
2. Methods of spatial differentiation [20].
3. Methods of functional approximation [21] (Fig. 5).

Common to all these methods is treating the boundary as a region of a sharp drop in the image brightness function f(i, j), which is distinguished by an introduced mathematical contour model.

Figure 5: Methods of highlighting contours in a video stream (high-frequency filtering, spatial differentiation, functional approximation)

The tasks impose requirements on contour selection algorithms: the selected contours must be thin, without gaps, and closed. The process of selecting contours is complicated by the need to apply algorithms for thinning and for eliminating gaps; otherwise, the contours remain open and unsuitable for analysis procedures.

2.4. Methods Based on Clustering

The K-means method is an iterative method used to divide an image into K clusters. The basic algorithm is given below [22]:
Step 1. Choose K cluster centers, randomly or based on some heuristic.
Step 2. Assign each image pixel to the cluster whose center is closest to that pixel.
Step 3. Recalculate the cluster centers by averaging all the pixels in each cluster.
Step 4. Repeat steps 2 and 3 until convergence (for example, when pixels no longer change cluster).

The distance is usually taken as the sum of squares or of absolute values of the differences between the pixel and the center of the cluster. The difference is most often based on color, brightness, texture, and pixel location, or a balanced combination of these factors.

2.5. Methods Using a Histogram

Histogram-based methods [23] are very efficient compared with other image segmentation methods because they require only one pass over the pixels.

A histogram is calculated over all pixels in the image, and its minima and maxima are used to find clusters in the image. Color or brightness can be used in the comparison.

An improvement on this approach is to apply it recursively to the clusters in the image, dividing them into smaller clusters. The process is repeated with smaller and smaller clusters until no new clusters appear.

Approaches based on histograms can also be quickly adapted for multiple frames while retaining their single-pass speed advantage.

2.6. YOLO—Object Detection

You-Only-Look-Once (YOLO) [24] is a standalone object detection system that can operate on video in real time at very high frame rates: the commonly cited figure is 45 frames per second, with a claimed useful frame rate of up to 155 frames per second for the fast variant. YOLO was first released in 2015 by Joseph Redmon and colleagues.

YOLO consists of two main parts: a class detector and a frame detector. The class detector determines which objects are present in the image; the frame detector determines the location of the objects in the image. The class detector works by using a regression neural network that learns to predict the value of a variable: it learns to predict the probability that a certain object is present in the image.

The YOLO class detector is a regression neural network with 24 deep layers. The input layer of the network receives a 448x448 pixel image. The output layer of the network contains 84 values; each value corresponds to the probability that a certain object is present in the image.
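In YOLOv8-style detection heads, the 84 output values per candidate box are commonly read as 4 bounding-box coordinates followed by 80 class scores (one per COCO class). The split below is a schematic assumption for illustration, with made-up numbers rather than real model output:

```python
def split_prediction(vector):
    """Split one 84-value prediction into box coordinates and class scores.

    Assumes the common YOLOv8-style layout: [x, y, w, h, score_0..score_79].
    """
    assert len(vector) == 84
    box, class_scores = vector[:4], vector[4:]
    best_class = max(range(80), key=lambda i: class_scores[i])
    return box, best_class, class_scores[best_class]

# Made-up prediction: a box centered at (0.5, 0.5), class 3 most confident.
pred = [0.5, 0.5, 0.2, 0.3] + [0.01] * 80
pred[4 + 3] = 0.9
box, cls_id, conf = split_prediction(pred)
print(box, cls_id, conf)  # [0.5, 0.5, 0.2, 0.3] 3 0.9
```

In practice the ultralytics library performs this decoding internally and exposes the result through the boxes property used later in the paper.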
2.7. Features of Video Tracking

Digital IP cameras are increasingly used in modern video surveillance systems. Connecting an IP camera to an already existing local network guarantees minimal installation costs.

Let us consider the characteristics that must be taken into account when choosing computer technologies for a digital video surveillance system.

The first characteristic is the number of physical ports to which other devices can be connected. This parameter determines the maximum number of IP cameras that can be connected. For a home video surveillance system, a switch with 4 ports is often used; equipment with 8, 16, or 24 ports is used in professional systems [25].

The second characteristic is bandwidth, where the bandwidth of each port is taken into account. The most common values are 10/100 Mbps and 1 Gbps. It should be kept in mind that the total bandwidth of a switch can be lower than the sum of the values of all its ports. When choosing the bandwidth of a switch, you need to determine what data transfer rate your network can handle.

The third characteristic is the speed of data transmission, which limits the possibility of receiving and transmitting information.

The fourth characteristic is PoE, the function that allows other devices to be powered through the same cable that transmits data. This is very important for the organization of video surveillance, as it eliminates unnecessary wires and simplifies the installation and the organization of the power supply of connected devices.

The fifth characteristic is management protocols: PoE switches are divided into managed and unmanaged. Managed switches are devices that support several network management and data transmission protocols (functions).

To build simple and small IP surveillance systems that are physically isolated from networks in which other critical data is transmitted (telemetry data, banking and financial data, video conferences, etc.), it is possible to make do with unmanaged PoE switches.

3. Problem Statement

Let us set up object tracking in the video stream and examine the speed of object detection. To do this, we need to run the following command in the terminal:
   pip install ultralytics
And then import it into the code:
   from ultralytics import YOLO
Now everything is ready to create a neural network model:
   model = YOLO("yolov8m.pt")
As mentioned earlier, YOLOv8 is a family of neural network models. These models were built and trained using PyTorch and exported as .pt files.

The first time you run this code, it will download the yolov8m.pt file from the Ultralytics server to the current folder and then construct a model object. You can now train this model, detect objects, and export it for use. There are convenient methods for all these tasks:
   train({dataset descriptor file path}): used to train the model on an image dataset.
   predict({image}): used to run prediction on the specified image, for example, to detect the bounding boxes of all objects that the model can find in the image.
   export({format}): used to export the model from the default PyTorch format to the specified format.

All YOLOv8 object detection models are already pre-trained on the COCO dataset, a huge collection of images of 80 different object types.

The predict method accepts many different types of input data, including a path to a single image, an array of image paths, an Image object from Python's well-known PIL library, and others [26].

After running the input data through the model, it returns an array of results, one per input image. Since we provided only one image, it returns an array with one element, which you can extract like this:
   result = results[0]

The result contains the detected objects (Fig. 6) and convenient properties for working with them. The most important is the boxes array, with information about the detected bounding boxes in the image (Fig. 7).
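The boxes array can then be scanned in a loop. The helper below sketches how detections might be filtered by a confidence threshold; it operates on plain-Python stand-in data, since actually running the model requires the ultralytics package, a downloaded checkpoint, and an input image:

```python
def filter_detections(detections, threshold=0.5):
    """Keep (class_name, confidence) pairs at or above the threshold."""
    return [(name, conf) for name, conf in detections if conf >= threshold]

# Stand-in for model output: class names and confidences of detected boxes.
detections = [("person", 0.91), ("chair", 0.34), ("cell phone", 0.77)]
print(filter_detections(detections))  # [('person', 0.91), ('cell phone', 0.77)]
```

With real ultralytics output, the same pairs would be built from each box's cls and conf properties, which are described next.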
Figure 6: Video capture of an object and "person" recognition

Figure 7: Capturing the object on video and recognizing the "chair"

You can determine how many objects were found by running the len function:
   print(len(result.boxes))
After launch, "2" was received, which means that two boxes were detected: one for a mobile phone and the other for a person (Fig. 8).

Figure 8: Video capture of two objects and recognition of "person" and "cell phone"

You can analyze each box either in a loop or manually. Let's take the first object:
   box = result.boxes[0]
The box object contains the bounding box properties, including:
   xyxy: coordinates of the box in the form of an array [x1, y1, x2, y2];
   cls: the object type identifier;
   conf: the model's confidence level regarding this object. If it is very low, say below 0.5, you can simply ignore the detection.

Let's display information about the object:
   print(box.xyxy)
   print(box.cls)
   print(box.conf)
For the first object, you will receive this information as raw tensors. Since YOLOv8 is built on PyTorch models, their outputs are encoded as arrays of PyTorch Tensor objects, so you need to extract the first element from each of these arrays:
   print(box.xyxy[0])
   print(box.cls[0])
   print(box.conf[0])
Now we see the data as Tensor objects. To extract the actual values from a Tensor, use the .tolist() method for tensors that contain an array and the .item() method for tensors that contain scalar values.

Let's load the data into the corresponding variables:
   cords = box.xyxy[0].tolist()
   class_id = box.cls[0].item()
   conf = box.conf[0].item()
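The values extracted this way are plain Python numbers, and the rounding of coordinates and confidence described in the text can be wrapped in a small helper. The function name and the sample numbers here are hypothetical, chosen only to illustrate the conversion:

```python
def tidy_box(cords, conf):
    """Round box coordinates to integers and confidence to two decimals."""
    return [round(c) for c in cords], round(conf, 2)

# Hypothetical raw values of the kind .tolist()/.item() would return.
cords, conf = tidy_box([189.54, 402.17, 506.88, 709.03], 0.86713)
print(cords, conf)  # [190, 402, 507, 709] 0.87
```

Integer pixel coordinates are convenient for drawing rectangles with OpenCV, and a two-decimal confidence is enough for on-screen labels.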
Now we see the actual data. The coordinates can be rounded, and the probability can likewise be rounded to two decimal places.

All objects that a neural network can detect have numeric identifiers. In the case of the pre-trained YOLOv8 model, there are 80 object types with IDs from 0 to 79. The COCO object classes are public. Additionally, the YOLOv8 result object contains a convenient names property for retrieving these classes.

4. Data to Proposed Model

The web application consists of two main files:
   Flaskapp.py is the file responsible for the project itself, its appearance, and its structure.
   YOLO_Video.py is the file responsible for the YOLO algorithm, namely for the implementation of object recognition in the video stream.

Implementation of the Flaskapp.py file. Configuring the Flask application:
   A web application is created using the Flask class.
   Configuration parameters such as the secret key and the file upload folder are set.
   The UploadFileForm class is defined using Flask-WTF to handle file uploads.

Video processing functions:
   The generate_frames and generate_frames_web functions are defined to generate frames based on the output of YOLO detection.
   These functions use the video_detection function from the YOLO_Video.py file to perform object detection on video frames.
   Routes are defined for the home page (/ and /home), the webcam page (/webcam), and the video upload page (/FrontPage).
   The /video and /webapp routes are responsible for broadcasting video frames with object detection results.
   The webcam and front routes render the HTML templates for the webcam and video upload pages.
   The UploadFileForm class is used to handle uploads of video files.
   The application runs on the development server if the script is executed directly.

5. Implementation

The video_detection function takes a video path as input and performs object detection using the YOLO model. The YOLO model from Ultralytics is loaded from the specified checkpoint file (yolov8n.pt). The bounding boxes of the detected objects in each frame are determined, and the processed frames are returned. Class names corresponding to the detected objects are defined in the classNames list.

Video capture and processing: OpenCV is used to capture video frames from the specified path. On each frame, the detected objects are drawn along with class labels and confidence levels. The processed frames are returned for streaming in the user's browser.

The general course of work:
   • The user uploads a video file through the interface.
   • The file is saved and its path is stored in the Flask session.
   • The object detector is called with the path of the received video.
   • The processed video frames are transmitted for real-time viewing through the user's browser.

This project uses Flask for the web application and integrates YOLO for real-time video processing. The YOLO_Video.py file isolates the functionality related to YOLO, making it modular and reusable.

When entering the web application, we are greeted by the title page, which contains two buttons. The first button, Video, leads to a page where we can upload a video, press the Submit button, and receive the processed video [27]. The second button, LiveWebcam, leads to a page where the webcam is connected automatically and displayed on the screen in processed form.

In Figs. 9-16 we can see the YOLOv8 algorithm running on the webcam.

Our model is based on pre-trained OSFA and is built on top of PyTorch. The training image size was up to 256x128, and batches of 64 randomly selected samples were fed to the network. During testing, the test images are also resized to 256x128. Our model is trained for 100 epochs. The values of α1, α2 and the learning rate are the
                                                     54
same as those set by OSFA. α1, α2 and learning
rate are set to 1, 0.0007, 3.5×10−5, respectively.
In SAM, the number of horizontal parts is 4. All
experiments are performed with a hardware
environment of 11th Gen Intel(R) Core(TM) i7-
11800H at 2.30 GHz and NVIDIA GeForce RTX
3060.



Figure 9: Results of recognition and identification of two people and a truck in the Yolo video sequence

Figure 10: Results of recognition and identification of a person in the Yolo video sequence

Figure 11: Results of object recognition and identification in the Yolo video sequence

Figure 12: Human and tank recognition results in the Yolo video sequence

Figure 13: Recognition results of two people and a tank in the Yolo video sequence

Figure 14: Results of object recognition and identification in the Yolo video sequence

Figure 15: Results of recognition and identification of three objects in the Yolo video sequence
Figure 16: Results of recognition and identification of three objects in the Yolo video sequence

   The data were taken from [28] to form the dataset. The protection technology was developed in [29], from which the protected communication channels are taken. The structural diagram of the model was taken from [30], and the methods used from [31]. The approaches used for image preprocessing were taken from [32] and [33]. Data processing was formed thanks to [34].

6. Conclusions

This study analyzed the YOLOv8 object recognition algorithm and its differences from other machine learning algorithms. A web application for object recognition in a video stream was also created, analyzed, and tested. A clear overview of the development tools for a web application using YOLOv8 has been provided. The PyCharm programming environment, the Flask framework, and its advantages compared to other frameworks were studied. Ultralytics, which helps in the development and testing of web applications for video streaming, was also explored.
   As a result of the work performed, a high level of understanding of the YOLOv8 algorithm, its features, and its capabilities in the field of object detection in a video stream was achieved. The developed web application is not only a practical application of this algorithm but can also serve as a basis for further developments and improvements in this direction.

References

[1] P. Anakhov, et al., Protecting Objects of Critical Information Infrastructure from Wartime Cyber Attacks by Decentralizing the Telecommunications Network, in: Workshop on Cybersecurity Providing in Information and Telecommunication Systems, vol. 3550 (2023) 240–245.
[2] H. Hulak, et al., Dynamic Model of Guarantee Capacity and Cyber Security Management in the Critical Automated System, in: 2nd International Conference on Conflict Management in Global Information Networks, vol. 3530 (2023) 102–111.
[3] V. Grechaninov, et al., Decentralized Access Demarcation System Construction in Situational Center Network, in: Workshop on Cybersecurity Providing in Information and Telecommunication Systems II, vol. 3188, no. 2 (2022) 197–206.
[4] J. Liu, et al., Deep Industrial Image Anomaly Detection: A Survey, Mach. Intell. Res. 21(1) (2024) 104–135. doi: 10.1007/s11633-023-1459-z.
[5] E. Kruger-Marais, Subtitling for Language Acquisition: Eye Tracking as Predictor of Attention Allocation in Education, Int. J. Lang. Stud. 18(2) (2024) 129–150. doi: 10.5281/zenodo.10475319.
[6] P. Li, et al., Efficient Long-Short Temporal Attention Network for Unsupervised Video Object Segmentation, Pattern Recognit. 146 (2024) 110078. doi: 10.1016/j.patcog.2023.110078.
[7] E. Kawamura, et al., Ground-Based Vision Tracker for Advanced Air Mobility and Urban Air Mobility, AIAA SciTech 2024 Forum (2024). doi: 10.2514/6.2024-2010.
[8] Z. Yin, et al., Numerical Modeling and Experimental Investigation of a Two-Phase Sink Vortex and its Fluid-Solid Vibration Characteristics, J. Zhejiang University-SCIENCE A 25(1) (2024) 47–62. doi: 10.1631/jzus.a2200014.
[9] T. Wolf, D. Fridovich-Keil, B. Jones, Mutual Information-Based Trajectory Planning for Cislunar Space Object Tracking using Successive Convexification, AIAA SciTech 2024 Forum (2024). doi: 10.2514/6.2024-0626.
[10] H. Zhang, et al., Quality Assessment for DIBR-synthesized Views based on Wavelet Transform and Gradient
Magnitude Similarity, IEEE Transactions on Multimedia (2024) 1–14. doi: 10.1109/tmm.2024.3356029.
[11] Q. Qin, Y. Chen, A Review of Retinal Vessel Segmentation for Fundus Image Analysis, Eng. Appl. Artif. Intell. 128 (2024) 107454. doi: 10.1016/j.engappai.2023.107454.
[12] M. Li, et al., Ftpe-Bc: Fast Thumbnail-Preserving Image Encryption Using Block-Churning, SSRN 4698446 (2024).
[13] M. Vladymyrenko, et al., Analysis of Implementation Results of the Distributed Access Control System, in: IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (2019). doi: 10.1109/picst47496.2019.9061376.
[14] V. Buriachok, V. Sokolov, P. Skladannyi, Security Rating Metrics for Distributed Wireless Systems, in: Workshop of the 8th International Conference on "Mathematics. Information Technologies. Education:" Modern Machine Learning Technologies and Data Science, vol. 2386 (2019) 222–233.
[15] D. Clayton-Chubb, et al., Metabolic Dysfunction-Associated Steatotic Liver Disease in Older Adults is Associated with Frailty and Social Disadvantage, Liver Int. 44(1) (2024) 39–51. doi: 10.1111/liv.15725.
[16] E. Mira, et al., Early Diagnosis of Oral Cancer using Image Processing and Artificial Intelligence, Fusion: Practice Appl. 14(1) (2024) 293–308. doi: 10.54216/fpa.140122.
[17] L. Qiao, et al., A Multi-Level Thresholding Image Segmentation Method Using Hybrid Arithmetic Optimization and Harris Hawks Optimizer Algorithms, Expert Syst. Appl. 241 (2024) 122316.
[18] Z. Wang, et al., Fibrous Whey Protein Mediated Homogeneous and Soft-Textured Emulsion Gels for Elderly: Enhancement of Bioaccessibility for Curcumin, Food Chemistry 437(1) (2024) 137850. doi: 10.1016/j.foodchem.2023.137850.
[19] J. Gao, Y. Huang, FP-Net: Frequency-Perception Network with Adversarial Training for Image Manipulation Localization, Multimed. Tools Appl. (2024) 1–19. doi: 10.1007/s11042-023-17914-1.
[20] Y. Su, et al., Enhancing Concealed Object Detection in Active Millimeter Wave Images Using Wavelet Transform, Signal Process. 216 (2024) 109303. doi: 10.1016/j.sigpro.2023.109303.
[21] J. Bhandari, D. Russo, Global Optimality Guarantees for Policy Gradient Methods, Oper. Res. (2024). doi: 10.1287/opre.2021.0014.
[22] K. Qian, H. Duan, Optical Counting Platform of Shrimp Larvae Using Masked K-Means and a Side Window Filter, Appl. Optics 63(6) (2024) A7–A15. doi: 10.1364/ao.502868.
[23] S. Lee, et al., Intensity Histogram-Based Reliable Image Analysis Method for Bead-Based Fluorescence Immunoassay, BioChip J. (2024) 1–9. doi: 10.1007/s13206-023-00137-9.
[24] J. Redmon, et al., You Only Look Once: Unified, Real-Time Object Detection, IEEE Conference on Computer Vision and Pattern Recognition (2016) 779–788. doi: 10.1109/CVPR.2016.91.
[25] T. Zhang, et al., The Design and Implementation of a Wireless Video Surveillance System, 21st Annual International Conference on Mobile Computing and Networking (2015) 426–438. doi: 10.1145/2789168.2790123.
[26] M. Nazarkevych, et al., The Ateb-Gabor Filter for Fingerprinting, Conference on Computer Science and Information Technologies, AISC 1080 (2019) 247–255. doi: 10.1007/978-3-030-33695-0_18.
[27] V. Sokolov, P. Skladannyi, A. Platonenko, Video Channel Suppression Method of Unmanned Aerial Vehicles, in: IEEE 41st International Conf. on Electronics and Nanotechnology (2022) 473–477. doi: 10.1109/ELNANO54667.2022.9927105.
[28] M. Nazarkevych, Data Protection Based on Encryption Using Ateb-Functions, XIth International Scientific and Technical Conference Computer Sciences and Information Technologies (2016) 30–32. doi: 10.1109/STC-CSIT.2016.7589861.
[29] M. Medykovskyy, et al., Methods of Protection Document Formed from Latent Element Located by Fractals, Xth
International Scientific and Technical Conference "Computer Sciences and Information Technologies" (CSIT) (2015) 70–72. doi: 10.1109/STC-CSIT.2015.7325434.
[30] V. Sheketa, et al., Formal Methods for Solving Technological Problems in the Infocommunications Routines of Intelligent Decisions Making for Drilling Control, IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (2019) 29–34. doi: 10.1109/PICST47496.2019.9061299.
[31] V. Sheketa, et al., Empirical Method of Evaluating the Numerical Values of Metrics in the Process of Medical Software Quality Determination, International Conference on Decision Aid Sciences and Application (2020) 22–26. doi: 10.1109/DASA51403.2020.9317218.
[32] N. Boyko, N. Tkachuk, Processing of Medical Different Types of Data Using Hadoop and Java MapReduce, in: 3rd International Conference on Informatics & Data-Driven Medicine, vol. 2753 (2020) 405–414.
[33] N. Boyko, et al., Fractal Distribution of Medical Data in Neural Network, in: IDDM, vol. 2448 (2019) 307–318.
[34] I. Tsmots, et al., The Method and Simulation Model of Element Base Selection for Protection System Synthesis and Data Transmission, Int. J. Sensors Wireless Commun. Control 11(5) (2021) 518–530. doi: 10.2174/2210327910999201022194630.



