=Paper= {{Paper |id=Vol-3909/Paper_34.pdf |storemode=property |title=Segmentation of the Chorus of the Eye Fund in A Digital Image |pdfUrl=https://ceur-ws.org/Vol-3909/Paper_34.pdf |volume=Vol-3909 |authors=Inna Balanovych,Maksim Tkachenko,Volodymyr Petrivskyi,Oleksii Ivanchenko |dblpUrl=https://dblp.org/rec/conf/iti2/BalanovychTPI24 }} ==Segmentation of the Chorus of the Eye Fund in A Digital Image== https://ceur-ws.org/Vol-3909/Paper_34.pdf
Segmentation of the choroid of the eye fundus in a digital image⋆
Inna Balanovych1, Maksim Tkachenko1, Volodymyr Petrivskyi1,* and Oleksii Ivanchenko1

1 Taras Shevchenko National University of Kyiv, Bohdan Hawrylyshyn str. 24, Kyiv, 01001, Ukraine



Abstract
In this article, an algorithm for segmentation of the choroid of the eye fundus in a digital image is presented. The developed algorithm is based on image conversion using bitwise conjunction. Results of the developed approach, together with an estimation of the measurement error, are presented.

Keywords
eye fundus, segmentation, image recognition, clustering



                                1. Introduction
                                Vision is one of the physiological functions of the sensory system, through which a person receives
                                80-90% of information about the world around him or her. This information is necessary not only for
                                a person's full-fledged existence and orientation but also for aesthetic perception of the world.
                                   According to the WHO, about 285 million people worldwide suffer from visual impairment, of
                                which 39 million are affected by blindness.
                                   Almost a third of the blind and visually impaired are among the disabled with fundus pathology
                                (27.6%). This is due to the increasing prevalence of vascular diseases and diabetes mellitus in Ukraine,
                                which lead to severe changes in the retina (age-related retinal degeneration, diabetic retinopathy,
                                etc.) [1].
                                   The fundus is the inner surface of the eye lined with the retina. The fundus is examined using
                                ophthalmoscopy. This examination method is one of the most popular in modern ophthalmology.
   Ophthalmoscopy is a non-invasive diagnostic method that consists of directing a beam of light through the pupil onto the retina, making all changes in the fundus visible. During ophthalmoscopy, an ophthalmologist sees the optic nerve disc, the macula (the area of greatest visual acuity), the vitreous body, the retinal vessels, and the retinal periphery (Fig. 1). There are different methods of ophthalmoscopy: direct, indirect (non-contact), biomicroscopy, and ophthalmochromoscopy.
   The ophthalmoscopy method makes it possible to diagnose such pathologies as retinal vein thrombosis, cataracts, retinal neoplasms, optic nerve pathologies, retinal detachment, eye melanoma, diabetic retinopathy, retinitis, etc. The procedure is also used to diagnose secondary changes in the
                                fundus in such systemic pathologies as hypertension, tuberculosis, diabetes mellitus, and infectious
                                diseases.




                                Information Technology and Implementation (IT&I-2024), November 20-21, 2024, Kyiv, Ukraine
                                 Corresponding author.
                                 These authors contributed equally.
                                   inna.balanovych@gmail.com (Inna Balanovych); maksim.tkachenko@knu.ua (Maksim Tkachenko);
                                vovapetrivskyi@gmail.com (Volodymyr Petrivskyi); ivanchenko.oleksii@gmail.com (O. Ivanchenko)
                                    0000-0003-2929-3495 (M. Tkachenko); 0000-0001-9298-8244 (V. Petrivskyi); 0000-0002-8526-8211 (O. Ivanchenko)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).



   Figure 1: The fundus of the eye: 1 - yellow spot (macula), 2 - central fossa of the yellow spot, 3 -
blind spot (optic disc), 4 - blood vessels. Image borrowed from [2].

   To improve the diagnostic result, segmentation of the eye vessels is used in the analysis of images.
This is the process of highlighting the structures of the vascular system in the eye on medical images.
   Therefore, research in the field of fundus segmentation is necessary, as the condition of the
choroidal vessels indicates signs that contribute to the detection of diseases. There are methods for
solving the problem of automated segmentation, but there is still room for improvement and
development. The defined task is only the first step in the analysis of retinal images. Based on the
data obtained, the study can have the following areas of development: measuring the length of blood
vessels, their thickness, shape, position, distance, and many other indicators to identify other signs
inherent in diseases, deviations, and norms.


2. Literature Overview
The studies and analog systems used for vessel segmentation include:
   1. Automated Detection of Diabetic Retinopathy and Macular Edema in Digital Fundus Images.
This automated system is designed to analyze digital color retinal images for important signs of
nonproliferative diabetic retinopathy (NPDR). The paper discusses color image preprocessing
methods, recursive segmentation algorithms, and growing segmentation algorithms, combined with
the use of a new technique called the "moat operator". The system achieved a sensitivity of 88.5% and a specificity of 99.7% for exudate detection, compared to an ophthalmologist's annotations. Haemorrhages and microaneurysms (HMAs) were present in 14 retinal images; for HMA detection the algorithm achieved a sensitivity of 77.5% and a specificity of 88.7% [3].
   2. Semi-automated Vessel Segmentation in Retinal Images Using a Combination of Image
Processing Techniques. The methods used in this work are a combination of thresholding segmentation algorithms and a watershed-based algorithm. The disadvantages include the fact that the method is semi-automatic and requires human intervention, which can lead to errors in determining the boundaries of blood vessels. As for the results, an accuracy of approximately 0.87 can be achieved [4].
Eye vessel segmentation as a popular area of computer vision has been widely studied both using
traditional image processing algorithms, such as clustering-based segmentation, and using popular
modern deep learning architectures, such as PSPNet, FPN, U-Net, SegNet, etc. [5].

3. Segmentation of the choroid of the fundus of the eye
3.1. Clustering
Clustering belongs to the field of computer vision and is a method of unsupervised machine learning.
It is used to solve the problem of biomedical data segmentation.
    Its advantages include the ability to divide an image into clusters - groups of pixels that share
certain properties. The method does not require expert training.


   In the field of vascular lining segmentation, the clustering method can divide the content into two
groups of pixels - those belonging to blood vessels and all others.
   In addition, the algorithm can be used in combination with other methods, such as neural
networks, threshold segmentation, and others, which ultimately improves the final result. This is the
technology that was implemented.

3.2. Segmentation using neural networks
Another method considered is segmentation using a neural network model, as neural networks are widely used and effective in this domain. The advantages of using a model for segmentation of the fundus vasculature include:
   • Processing speed. After the training and model generation stage, the processing of any new image takes place in a matter of seconds, meaning that the research result can be obtained promptly, which is important in the medical field.
   • A properly trained neural network model can detect fine and small details, such as small blood vessels, that are time-consuming and difficult to identify manually.
   • With the right training sample processing, proper layer settings, and deep learning properties, the model can maintain segmentation accuracy even when data is limited.

    The selected model is SA-UNet, one of the best models for retinal vessel segmentation. It has properties that provide better results than other methods: compared with models such as U-Net, SA-UNet has fewer parameters, which means it can be trained on less data. This feature is very important because retinal datasets contain small amounts of data. In addition, the spatial attention map allows the network to reinforce important features, such as vascular features, and suppress unimportant ones [6].
    The developed technology is a composite software module containing four key components:
segmentation of retinal vessels by clustering methods, segmentation of retinal vessels by applying a
selected neural network system model, consolidation of these algorithms, and a software module for
determining the similarity of data to a set of standardized retinal images.
    The development and research stages were accompanied by the use of the DRIVE dataset (Digital
Retinal Images for Vessel Extraction) [7]. This dataset was developed specifically for the study of the
problem of vessel segmentation in retinal images. The DRIVE database consists of 40 retinal images,
of which 33 images are healthy and the remaining 7 images show signs of mild early diabetic
retinopathy. The fundus camera used to capture these images has a field of view of 45 degrees, which
is an advantage over other sets that have a 30-degree field of view. The resolution of the images in
this database is 565Γ—584 pixels [7]. In addition, each retinal image from the test set is supplemented
with a manually processed segmented image, which can be used for verification and evaluation in
the development of algorithms and training of neural models.
    Segmentation using clustering. The purpose of applying clustering to the image is to highlight the most prominent areas of blood vessels, so that the result can be overlaid on the original images to improve the final result of vascular membrane segmentation, i.e., vessel extraction. The clustering result is also used as a filter to check the similarity of the uploaded image to a relatively standard retinal image.
    It is worth noting that this method includes several stages in which pre-processing is performed and other computer vision algorithms are applied. These algorithms are an important component of improving the segmentation of the vascular membrane. Thus, the following order of key stages can be distinguished:




3.2.1. Conversion to grayscale
One of the first steps of the algorithm is to convert the image to grayscale, i.e., to reduce the three RGB color components to a single gray value. The values of red (R), green (G), and blue (B) are converted to a single gray-scale value:

                                      x_{i,j} = (R + G + B) / 3,                                       (1)

where x_{i,j} is the new pixel value in gray scale; R, G, B are the pixel color values before the transformation, obtained from the RGB format; and the three in the denominator is the number of color channels in the selected format.
   In general, for any number of channels, the formula is:

                                      x_{i,j} = (1/N) Σ_{n=1}^{N} C(n),                                (2)

where N is the number of channels and C(n) is the value of channel n for pixel x_{i,j}.
   In the task of vessel extraction, converting the input image (Fig. 2) to grayscale (Fig. 3) improves the quality and area of detection of the study object in the k-means and threshold segmentation methods.
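As a minimal sketch of formulas (1)-(2), assuming the paper uses a plain channel average (not a weighted luminance conversion), the transformation can be written in NumPy:

```python
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Average the channels of an (H, W, C) image, as in formulas (1)-(2)."""
    return rgb.mean(axis=2)

# A 1x2 RGB image: one pure red pixel and one white pixel.
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.float64)
gray = to_grayscale(img)
print(gray)  # [[ 85. 255.]]
```

In OpenCV the comparable call is cv2.cvtColor with COLOR_BGR2GRAY, which instead uses weighted luminance coefficients.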




   Figure 2: Input image.                             Figure 3: Convert to grayscale.

   Apply Gaussian blur. To smooth an image, Gaussian blur is used: a low-pass filter that removes high-frequency content, such as noise, from the image.
   The kernel of the Gaussian blur is a two-dimensional matrix calculated from the standard deviation. The function calculates and returns a one-dimensional column matrix (ksize×1) of Gaussian filter coefficients:

                                      G_i = α · exp( −(i − (ksize − 1)/2)² / (2·sigma²) ),             (3)

where ksize is the odd, positive kernel size, i = 0, …, ksize − 1, and α is a scale factor chosen so that Σ_i G_i = 1 [9].
    In the task of segmenting the vascular membrane, Gaussian blur helps to smooth colors, reduce
and combine them, which will help to separate the vessels, as well as to blur the background, which
is necessary for their selection and modification.
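The coefficients of formula (3) can be computed directly; a minimal NumPy sketch (in OpenCV the equivalent is cv2.getGaussianKernel):

```python
import numpy as np

def gaussian_kernel(ksize: int, sigma: float) -> np.ndarray:
    """1-D Gaussian filter coefficients per formula (3); alpha is chosen so the
    coefficients sum to 1, matching the cv2.getGaussianKernel convention."""
    i = np.arange(ksize)
    g = np.exp(-((i - (ksize - 1) / 2) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

k = gaussian_kernel(5, 1.0)
# Blurring an image amounts to convolving with k along rows and then along columns.
```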

3.2.2. Background conversion
An important stage of image preprocessing was finding the image mask and using it to change the
background color to white. To do this, we used a binary threshold segmentation algorithm, whose
main task is to divide pixels into only two categories. The threshold number is the number of division
into two groups, with smaller values belonging to the first group and larger values to the second.
The following mathematical formulation of the problem is used for this purpose:
   Let the input image I have height H and width W, and let I(r, c) denote the gray value at row r and column c of the image I, with 0 ≤ r < H and 0 ≤ c < W. Then the output binary image O is:

                             O(r, c) = { 1, if I(r, c) > thresh;  0, if I(r, c) ≤ thresh }.            (4)
   To solve the problem, the threshold value is set to 0, which corresponds to black. With this setting, applying the method to the image (Fig. 3) separates the background from the retina image (Fig. 4).




   Figure 4: Extract the background from a Figure 5: Threshold segmentation with deleted
 smoothed image (Fig. 3).                       contours.
   Thresholding has also been used as another method of vessel selection. In this case, Otsu binarization is used, which avoids the need to select a threshold value manually, as it is determined automatically. The operation is performed on the image with the removed fields (see the section on contour extraction below), whose values are distorted and negatively affect the segmentation result (Fig. 5).
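Formula (4) is a simple element-wise comparison; a minimal NumPy sketch (note that cv2.threshold with THRESH_BINARY writes the chosen maxval, typically 255, instead of 1):

```python
import numpy as np

def binary_threshold(img: np.ndarray, thresh: float) -> np.ndarray:
    """Formula (4): 1 where I(r, c) > thresh, 0 otherwise."""
    return (img > thresh).astype(np.uint8)

img = np.array([[0, 40], [200, 255]], dtype=np.uint8)
mask = binary_threshold(img, 0)   # thresh = 0 separates black background from retina
print(mask)  # [[0 1]
             #  [1 1]]
```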

3.2.3. Contour extraction
This operation is performed due to the fact that the retinal contours in the image have a certain
deformation in color, which negatively affects the clustering result. Therefore, the edges are searched
for and removed from the digital image. The contour detection algorithm works with binary images,
so you must first apply binary threshold segmentation.
   Mathematically, contours in an image can be represented as curves connecting points of equal
intensity. The algorithm starts by finding the first point of the contour (x,y), which is the point with
the smallest x and y coordinates in the image. Then the contour is tracked by following the object
boundary and searching for the next contour point in a clockwise or counterclockwise direction.
Such contours form closed curves and can be approximated using various mathematical functions or
models, such as Fourier series, depending on the specific application.
   The contour of a circle can be represented mathematically as:

                                    π‘₯ = 𝑐π‘₯ + π‘Ÿπ‘π‘œπ‘ (𝛼)
                                   {                    ,                                            (5)
                                     𝑦 = 𝑐𝑦 + π‘Ÿπ‘ π‘–π‘›(𝛼)
where cx and cy are the coordinates of the center of the circle, r is the radius,     is the angle around
the circle.
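Sampling the parametrization (5) makes it easy to check that every generated contour point lies exactly on the circle; a small NumPy sketch:

```python
import numpy as np

# Sample formula (5): points of a circular contour with centre (cx, cy) and radius r.
cx, cy, r = 100.0, 80.0, 25.0
alpha = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x = cx + r * np.cos(alpha)
y = cy + r * np.sin(alpha)

# Every sampled point lies at distance r from the centre.
dist = np.hypot(x - cx, y - cy)
```

For real images, contour tracing on the binary mask is typically done with cv2.findContours rather than an analytic model.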

3.2.4. Cluster selection
The core of segmentation is cluster analysis implemented using k-means technology, which is one
of the unsupervised clustering algorithms used to cluster data into k clusters. The algorithm
iteratively assigns data points to one of the k clusters depending on how close the data point is to
the cluster centroid.
    Let's assume that there are input data points x1,x2,x3,...,xn and the value k is the number of required
clusters. Then the k-means algorithm will have the following steps:

   1. Select k points from the dataset as the initial centroids, either randomly or as the first k points.
   2. Find the Euclidean distance of each point in the dataset with the identified k points as cluster
      centroids. Calculate the Euclidean distance between two points p and q in space:

                            𝑑(𝑝, π‘ž) = √(π‘ž1 βˆ’ 𝑝1 )2 + (π‘ž2 βˆ’ 𝑝2 )2 ,                                   (6)
where 𝑝 = (𝑝1 , 𝑝2 ), π‘ž = (π‘ž1 , π‘ž2 ).
   3. Assign each data point to the nearest centroid using the distance found in the previous step.
      Let each cluster centroid be denoted c_i ∈ C; then each data point x is assigned to a cluster
      based on the function:

                                    argmin_{c_i ∈ C} dist(c_i, x)²,                                  (7)

where dist is the Euclidean distance (6).

   4. Find new centroid:

                                        c_i = (1/|S_i|) · Σ_{x_j ∈ S_i} x_j,                         (8)

where S_i is the set of points currently assigned to cluster i.
   5. Repeat steps 2 to 4 for a fixed number of iterations or until the centroids no longer change [11].

    In the task of vessel extraction, the described k-means method and the filtering algorithm for the first two clusters are applied to the image shown in Fig. 6, and the result is an image of the type shown in Fig. 7.




   Figure 6: Change the background color Figure 7: Segmentation result of the k-means
 through a mask (Fig. 4)                             algorithm
   The number of clusters into which pixels are divided is an important parameter that affects the quality of selection of the desired object. In this case, 10 clusters were selected empirically and the 2 darkest ones were kept. These values are tuned for the task of vessel detection and work best for this particular application.
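The steps above can be sketched for scalar gray intensities as follows; this is a toy NumPy implementation with deterministic "first k" initialization (production code would typically use cv2.kmeans or scikit-learn), shown here on a handful of pixels where dark values play the role of vessels:

```python
import numpy as np

def kmeans_1d(values: np.ndarray, k: int, iters: int = 20) -> tuple:
    """Steps 1-5 of the k-means algorithm on scalar pixel intensities."""
    centroids = values[:k].astype(float).copy()             # step 1: first k points
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):                                  # step 5: repeat
        d = np.abs(values[:, None] - centroids[None, :])    # step 2: 1-D distances
        labels = d.argmin(axis=1)                           # step 3: nearest centroid
        for i in range(k):                                  # step 4: new centroids
            if np.any(labels == i):
                centroids[i] = values[labels == i].mean()
    return labels, centroids

# Toy "image": dark vessel pixels near 20, bright background near 200.
pixels = np.array([18.0, 22.0, 20.0, 198.0, 202.0, 200.0])
labels, centroids = kmeans_1d(pixels, k=2)
darkest = centroids.argmin()
vessel_mask = labels == darkest   # keep the darkest cluster(s), as in the text
```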

3.2.5. The final stage of fusion
The algorithm creates two objects: an image segmented using k-means (Fig. 7) and an image segmented using threshold segmentation with contour extraction (Fig. 5). At this stage, it remains to combine them using the intersection method to obtain the best properties from each of the results. The final output is a single image. To combine the data, a bitwise conjunction is applied, which keeps the intersecting bits. Suppose there are two image arrays src1 and src2 of the same size. Then the element-wise bitwise conjunction is:

                             dst(I) = src1(I) ∧ src2(I),  if mask(I) ≠ 0.                            (9)
   The result of applying the algorithm is shown in Figure 8.
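On binary masks, formula (9) reduces to an element-wise AND; a minimal NumPy sketch (cv2.bitwise_and is the OpenCV equivalent):

```python
import numpy as np

# Formula (9): fuse two binary masks by bitwise conjunction (their intersection).
kmeans_mask = np.array([[0, 255], [255, 255]], dtype=np.uint8)
thresh_mask = np.array([[255, 255], [0, 255]], dtype=np.uint8)
fused = np.bitwise_and(kmeans_mask, thresh_mask)
print(fused)  # [[  0 255]
              #  [  0 255]]
```

Only pixels marked as vessel by both methods survive, which is exactly the intersection behaviour described above.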




   Figure 8: The final result of segmentation (merging the images of Fig. 5 and Fig. 7).

3.2.6. Segmentation through model selection
The second method involved in solving the segmentation problem is the use of a convolutional neural
network.
    The SA-UNet model [8] is an extension of the U-Net network, which is based on the typical structure of a downsampling encoder and an upsampling decoder with "skip connections" between them. It combines local and global contextual information through the encoding and decoding process.
    At each stage, the encoder contains a convolutional extraction block and a 2×2 pooling operation with a doubling of the number of channels. This block is followed by a DropBlock operation, normalization, and a rectified linear unit (ReLU). After encoding, a spatial attention module is added.
    The next stages of the decoder include a 2×2 transposed convolution operation to upsample, halving the number of feature channels, and concatenation. The last convolution layer uses a sigmoid activation function to obtain the output segmentation map [8].

3.2.7. Convolutional block
Convolution is often used for image processing and can be described by the following formula:

                        (f ∗ g)[m, n] = Σ_{k,l} f[m − k, n − l] · g[k, l],                           (10)

where f is the original image matrix and g is the convolution kernel (matrix).
   The convolutional layer implements the idea that each output neuron is connected only to a specific (small) area of the input matrix (Fig. 9), thus simulating some features of human vision:

                N(x_new, y_new) = x_1·core_1 + x_2·core_2 + x_3·core_3 + x_4·core_4,                (11)

where x_1, …, x_4 are the input values under the kernel and core_1, …, core_4 are the kernel coefficients.




   Figure 9: Image convolution.
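Formula (10) can be sketched directly in NumPy; note that the kernel flip below makes it a true convolution, whereas CNN layers usually omit the flip and compute cross-correlation (the two coincide for symmetric kernels):

```python
import numpy as np

def conv2d(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """'Valid' 2-D convolution per formula (10)."""
    kh, kw = g.shape
    gf = g[::-1, ::-1]                      # flip the kernel -> true convolution
    out = np.zeros((f.shape[0] - kh + 1, f.shape[1] - kw + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(f[m:m + kh, n:n + kw] * gf)
    return out

f = np.arange(1.0, 10.0).reshape(3, 3)      # [[1,2,3],[4,5,6],[7,8,9]]
g = np.array([[1.0, 0.0], [0.0, 0.0]])      # kernel that selects one input pixel
out = conv2d(f, g)
print(out)  # [[5. 6.]
            #  [8. 9.]]
```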


3.2.8. The resulting method
The considered algorithms of segmentation through clustering and segmentation through model
selection provide the necessary data for the formation of the final method of segmentation of the
fundus vasculature, which results in an image with selected vessels and removed other retinal
elements present in the image. The main task of the method is to combine the predefined results. It
is implemented using a bitwise disjunction operation [12]:

                      π‘Ž              𝑏              π‘Ž            𝑏
           π‘Ž ∨ 𝑏 = ((| 𝑛 | π‘šπ‘œπ‘‘2) + (| 𝑛 | π‘šπ‘œπ‘‘2) βˆ’ (| 𝑛 | π‘šπ‘œπ‘‘2) (| 𝑛 | π‘šπ‘œπ‘‘2))                     (12)
                      2              2              2            2
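Formula (12) is the arithmetic identity for OR on a single bit (p + q − p·q with p, q ∈ {0, 1}); a short sketch verifying it against the built-in bitwise OR:

```python
def or_bit(a: int, b: int, n: int) -> int:
    """Formula (12) for a single bit position n."""
    p = (a >> n) & 1        # floor(a / 2^n) mod 2
    q = (b >> n) & 1        # floor(b / 2^n) mod 2
    return p + q - p * q

def bitwise_or(a: int, b: int, bits: int = 8) -> int:
    """Assemble the full disjunction bit by bit."""
    return sum(or_bit(a, b, n) << n for n in range(bits))

print(bitwise_or(0b1010, 0b0110))  # 14, identical to 0b1010 | 0b0110
```

In practice the merge is of course done with a vectorized OR (e.g. np.bitwise_or or cv2.bitwise_or) over whole mask arrays.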
   After applying the described method to the corresponding image, the following result was
obtained:



                                                                                                   438
    Figure 10: Comparison of segmentation methods by clustering (a) and segmentation by model
selection (b) and their combination (c), where (a.1), (b.1), (c.1), (a.2), (b.2), (c.2) are enlarged segments
(a), (b) and (c), respectively.
    As can be seen from Figure 10, merging visually improves the segmentation result.

4. Theoretical and experimental research
The research was conducted on two datasets: the DRIVE dataset and a dataset composed of random retinal photos. First, the fundus vascular segmentation was studied on the DRIVE dataset, which contains 20 manually annotated images. Figure 11 shows the results of processing the instance numbered 32.




                  a)                              b)                             c)
    Figure 11: Segmentation of an image from the DRIVE dataset: where (a) is the original image as
input, (b) is the combination of segmentation by clustering and segmentation by model selection, (c)
is the intersection of the result with manually annotated images.
    The image (Fig. 11 c) was compared to the image that was segmented manually. According to this comparison, the segmentation for this example was improved by 7.75%. The figure is the arithmetic mean of the results of two comparison methods: hash comparison and absolute pixel-by-pixel verification. The combination of the two methods was chosen to improve the reliability of the results.
The general evaluation formula is as follows:

                (β„Žπ‘Žπ‘ β„Ž(π‘–π‘šπ‘Žπ‘”π‘’2 ) βˆ’ β„Žπ‘Žπ‘ β„Ž(π‘–π‘šπ‘Žπ‘”π‘’1 )) + |π‘–π‘šπ‘Žπ‘”π‘’π‘ 2 βˆ’ π‘–π‘šπ‘Žπ‘”π‘’1 |                           (13)
                                             2
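The paper does not specify which hash method is used; the sketch below uses a toy difference hash as a hypothetical stand-in, purely to illustrate how a score in the spirit of (13) averages a hash-based and a pixel-by-pixel comparison:

```python
import numpy as np

def dhash_bits(img: np.ndarray) -> np.ndarray:
    """Toy difference hash (hypothetical stand-in for the unspecified hash method):
    compare each pixel with its right-hand neighbour."""
    return (img[:, 1:] > img[:, :-1]).ravel()

def similarity_score(img1: np.ndarray, img2: np.ndarray) -> float:
    """Average of hash agreement and pixel-by-pixel agreement, as in (13)."""
    hash_match = float(np.mean(dhash_bits(img1) == dhash_bits(img2)))
    pixel_match = float(np.mean(img1 == img2))
    return (hash_match + pixel_match) / 2

a = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(similarity_score(a, a))        # 1.0 for identical masks
print(similarity_score(a, 255 - a))  # 0.0 for fully inverted masks
```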
   This study was conducted for all annotated images. The result is shown in Table 1, which also compares the proposed segmentation (columns 4, 5, 6) with the existing segmentation model (columns 2, 3). From the results obtained, an average segmentation improvement of 4.38% can be established.
   According to the formula for expressing the error, the result is obtained for the total amount x of research data:
   According to the formula for expressing the error, the result is obtained for the total amount x of
research data:

                                   βˆ†= π‘₯ βˆ’ π‘₯𝑖𝑠𝑑 = 20 βˆ’ 16 = 4                                           (14)


                                                                                                         439
                                   Ξ”             4
                             𝛿=        100% =       100% = 25%                                        (15)
                                  π‘₯𝑖𝑠𝑑           16
   Equation (14) is absolute error and (15) is relative error.

   Table 1
   The result of the segmentation study on all available instances and its comparison

   Compliance   Matching       Compliance      Matching hash        Together        Similarity
                hash, units    after merge     after merge, units   after merge
   65%          3              74% (+9)        3                    4.5%            87%
   59%          5              74% (+15)       3 (-2)               13.75%          84%
   76%          10             83% (+7)        7 (-3)               12.88%          73%
   57%          7              69% (+12)       6 (-1)               9.13%           96%
   70%          6              62% (-8)        5 (-1)               -0.88%          64%
   59%          6              64% (+5)        4 (-2)               8.75%           88%
   66%          2              28% (-38)       3                    -19%            80%
   56%          5              75% (+19)       6 (+1)               6.88%           61%
   64%          6              77% (+13)       4 (-2)               7.75%           91%
   71%          7              85% (+14)       5 (-2)               13.25%          91%
   62%          6              86% (+24)       6                    12%             55%
   74%          3              85% (+11)       5 (+1)               2.36%           87%
   63%          5              69% (+6)        5                    3%              88%
   75%          5              89% (+14)       4 (-1)               10.13%          89%
   60%          7              74% (+14)       7                    7%              81%
   79%          3              70% (-9)        3                    -4.5%           84%




   Figure 12: The result of the program execution as a representation of a segmented image.

   Figure 12 shows the result of segmentation of a retinal image taken during ophthalmoscopy. The
selected image belongs to the DRIVE dataset and, according to the description, it shows background
diabetic retinopathy, which manifests itself in the retinal image in the form of microaneurysms,
intraretinal hemorrhages, exudates, retinal detachments, and other signs [7, 13]. Around the optic
disc, segmentation revealed a sign of soft exudate [14]. The similarity index indicates the possible
presence of an abnormality.

5. Conclusions
Segmentation is an important diagnostic method in the medical field. Its application in the field of
eye examination is a modern and effective auxiliary method in diagnosing various diseases.
However, segmentation of this kind is a complex scientific and technical task that requires the
involvement of specialists or complex software solutions. The paper presents a solution for automating the segmentation of the choroid, proposes methods for improving the result, and, for the first time, puts forward an approach for detecting abnormalities in a fundus image relative to the

typical retinal appearance. Using this methodology, the result of the segmentation of the fundus
vessels was obtained, and the features of the combined segmentation method were determined. The
process, analysis, and results of the study are presented. A numerical indicator of the improvement, approximately 4%, is determined. Assumptions about a technology for detecting deviations in image content are developed and put forward.

Declaration on Generative AI
The authors have not employed any Generative AI tools.

References
[1] World Vision Day, 2019. URL: http://khocz.com.ua/10-zhovtnja-2019-roku-vsesvitnij-den-
     zahistu-zoru/.
[2] T.-J. Wang et al., A review on revolutionizing ophthalmic therapy: Unveiling the potential of
     chitosan, hyaluronic acid, cellulose, cyclodextrin, and poloxamer in eye disease treatments,
     International Journal of Biological Macromolecules, volume 273, 2024, p. 132700. URL:
     https://doi.org/10.1016/j.ijbiomac.2024.132700.
[3] M. A. Iftakher Mahmood, N. Aktar, M. Fazlul Kader, A hybrid approach for diagnosing diabetic
     retinopathy from fundus image exploiting deep features, Heliyon, 2023, p. e19625. URL:
     https://doi.org/10.1016/j.heliyon.2023.e19625.
[4] S. Dash et al., Curvelet Transform Based on Edge Preserving Filter for Retinal Blood Vessel
     Segmentation, Computers, Materials & Continua, volume 71, 2022, pp. 2459–2476. URL:
     https://doi.org/10.32604/cmc.2022.020904.
[5] V. Martsenyuk et al., Exploring Image Unified Space for Improving Information Technology for
     Person Identification, IEEE Access, 2023, p. 1. URL: https://doi.org/10.1109/access.2023.3297488.
[6] O. Bychkov et al., Using Neural Networks Application for the Font Recognition Task Solution,
     55th International Scientific Conference on Information, Communication and Energy Systems
     and Technologies (ICEST), 10–12 September 2020. URL:
     https://doi.org/10.1109/icest49890.2020.9232788.
[7] DRIVE - Digital Retinal Images for Vessel Extraction. grand-challenge.org. URL:
     https://drive.grand-challenge.org.
[8] G. Dimitrov et al., Increasing the Classification Accuracy of EEG based Brain-computer Interface
     Signals, 10th International Conference on Advanced Computer Information Technologies
     (ACIT), Deggendorf, Germany, 2020, pp. 386-390, doi: 10.1109/ACIT49673.2020.9208944.
[9] OpenCV:          Image        Filtering.     OpenCV         documentation        index.      URL:
     https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html#gaabe8c836e97159a9193fb0b1
     1ac52cf1.
[10] Z. Niu, H. Li, Research and analysis of threshold segmentation algorithms in image processing,
     Journal of Physics: Conference Series, volume 1237, 2019, p. 022122. URL:
     https://doi.org/10.1088/1742-6596/1237/2/022122.
[11] M. Muthukrishnan, Mathematics behind K-Mean Clustering algorithm, AI, Computer Vision
     and Mathematics. URL: https://muthu.co/mathematics-behind-k-mean-clustering-algorithm.
[12] V. Petrivskyi et al., A Method for Maximum Coverage of the Territory by Sensors with
     Minimization of Cost and Assessment of Survivability, Applied Sciences, volume 12, 2022, p.
     3059. URL: https://doi.org/10.3390/app12063059.
[13] S. Mehta, Diabetic Retinopathy, Eye disorders, MSD Manual Professional Edition. URL:
     https://www.msdmanuals.com/professional/eye-disorders/retinal-disorders/diabetic-
     retinopathy.
[14] Details 300 background diabetic retinopathy - Abzlocal.mx. URL: https://abzlocal.mx/details-
     300-background-diabetic-retinopathy/.



