<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Segmentation of the choroid of the eye fundus in a digital image</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Bohdan Hawrylyshyn str. 24, Kyiv, 01001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>432</fpage>
      <lpage>441</lpage>
      <abstract>
        <p>This article presents an algorithm for segmentation of the choroid of the eye fundus in a digital image. The developed algorithm is based on image transformations combined using a bitwise conjunction. Results of the developed approach, together with an estimate of the measurement error, are presented.</p>
      </abstract>
      <kwd-group>
        <kwd>eye fundus</kwd>
        <kwd>segmentation</kwd>
        <kwd>image recognition</kwd>
        <kwd>clustering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Vision is one of the physiological functions of the sensory system, through which a person receives
80-90% of information about the world around him or her. This information is necessary not only for
a person's full-fledged existence and orientation but also for aesthetic perception of the world.</p>
      <p>According to the WHO, about 285 million people worldwide suffer from visual impairment, of
which 39 million are affected by blindness.</p>
      <p>Almost a third of the people disabled by blindness and visual impairment have fundus pathology
(27.6%). This is due to the increasing prevalence of vascular diseases and diabetes mellitus in Ukraine,
which lead to severe changes in the retina (age-related retinal degeneration, diabetic retinopathy,
etc.) [1].</p>
      <p>The fundus is the inner surface of the eye lined with the retina. The fundus is examined using
ophthalmoscopy. This examination method is one of the most popular in modern ophthalmology.</p>
      <p>Ophthalmoscopy is a non-invasive diagnostic method that consists in directing a beam of light
through the pupil to the retina, while seeing all the changes in the fundus. During ophthalmoscopy,
an ophthalmologist sees the healthy nerve disc, macula (area of greatest vision), vitreous, retinal
vessels, and retinal periphery (Fig. 1). There are different methods of ophthalmoscopy: direct, indirect
(non-contact), biomicroscopy, ophthalmochromoscopy.</p>
      <p>The ophthalmoscopy method makes it possible to diagnose such pathologies as retinal vein
thrombosis, cataracts, retinal neoplasms, optic nerve pathologies, retinal detachment, eye melanoma,
diabetic retinopathy, retinitis, etc. The procedure is also used to diagnose secondary changes in the
fundus in such systemic pathologies as hypertension, tuberculosis, diabetes mellitus, and infectious
diseases.</p>
      <p>To improve the diagnostic result, segmentation of the eye vessels is used in the analysis of images.
This is the process of highlighting the structures of the vascular system in the eye on medical images.</p>
      <p>Therefore, research in the field of fundus segmentation is necessary, as the condition of the
choroidal vessels indicates signs that contribute to the detection of diseases. There are methods for
solving the problem of automated segmentation, but there is still room for improvement and
development. The defined task is only the first step in the analysis of retinal images. Based on the
data obtained, the study can have the following areas of development: measuring the length of blood
vessels, their thickness, shape, position, distance, and many other indicators to identify other signs
inherent in diseases, deviations, and norms.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Overview</title>
      <p>The studies and analog systems used for vessel segmentation include:</p>
      <p>1. Automated Detection of Diabetic Retinopathy and Macular Edema in Digital Fundus Images.
This automated system is designed to analyze digital color retinal images for important signs of
nonproliferative diabetic retinopathy (NPDR). The paper discusses color image preprocessing
methods, recursive segmentation algorithms, and growing segmentation algorithms, combined with
the use of a new technique called the "moat operator". The system achieved a sensitivity of 88.5% and
a specificity of 99.7% for the exudate detection task, compared to the ophthalmologist's annotations.
Haemorrhages and microaneurysms (HMAs) were present in 14 retinal images. The algorithm achieved
a sensitivity of 77.5% and a specificity of 88.7% for HMA detection [3].</p>
      <p>2. Semi-automated Vessel Segmentation in Retinal Images Using a Combination of Image
Processing Techniques. The methods used in this work are a combination of thresholding
segmentation algorithms and a watershed-based algorithm. The disadvantages include the
fact that the method is semi-automatic and requires human intervention, which can lead to errors in
determining the boundaries of blood vessels. As for the results, an accuracy of approximately 0.87
can be achieved [4].
      <p>Eye vessel segmentation as a popular area of computer vision has been widely studied both using
traditional image processing algorithms, such as clustering-based segmentation, and using popular
modern deep learning architectures, such as PSPNet, FPN, U-Net, SegNet, etc. [5].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Segmentation of the choroid of the fundus of the eye</title>
      <sec id="sec-3-1">
        <title>3.1. Clustering</title>
        <p>Clustering belongs to the field of computer vision and is a method of unsupervised machine learning.
It is used to solve the problem of biomedical data segmentation.</p>
        <p>Its advantages include the ability to divide an image into clusters - groups of pixels that share
certain properties. The method does not require expert training.</p>
        <p>In the field of vascular lining segmentation, the clustering method can divide the content into two
groups of pixels - those belonging to blood vessels and all others.</p>
        <p>In addition, the algorithm can be used in combination with other methods, such as neural
networks, threshold segmentation, and others, which ultimately improves the final result. This is the
technology that was implemented.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Segmentation using neural networks</title>
        <p>Another method considered is segmentation through model selection, as neural networks are widely
used and effective in this field. The advantages of using a model for segmentation
of the fundus vasculature include:</p>
        <p>Processing speed. After the training and model generation stage, the processing of any new
image takes place in a matter of seconds, meaning that the research result can be obtained
promptly, which is important in the medical field.</p>
        <p>A properly trained neural network model can detect fine and small details, such as small
blood vessels, that are time-consuming and difficult to identify manually.</p>
        <p>With the right training sample processing, proper layer settings, and deep learning
properties, the model can maintain segmentation accuracy even when data is limited.</p>
        <p>The selected model is SA-UNet, one of the best models for retinal vessel segmentation. It
has unique properties that provide better results compared to other methods; in particular, compared
with models such as U-Net, SA-UNet has fewer parameters, which means that the model can be trained
on less data. This feature is very important because retinal datasets contain a small amount of data.
Also, the spatial attention map allows the network to reinforce important features, such as vascular
features, and suppress unimportant ones [6].</p>
        <p>The developed technology is a composite software module containing four key components:
segmentation of retinal vessels by clustering methods, segmentation of retinal vessels by applying a
selected neural network system model, consolidation of these algorithms, and a software module for
determining the similarity of data to a set of standardized retinal images.</p>
        <p>The development and research stages were accompanied by the use of the DRIVE dataset (Digital
Retinal Images for Vessel Extraction) [7]. This dataset was developed specifically for the study of the
problem of vessel segmentation in retinal images. The DRIVE database consists of 40 retinal images,
of which 33 images are healthy and the remaining 7 images show signs of mild early diabetic
retinopathy. The fundus camera used to capture these images has a field of view of 45 degrees, which
is an advantage over other sets that have a 30-degree field of view. The resolution of the images in
this database is 565×584 pixels [7]. In addition, each retinal image from the test set is supplemented
with a manually processed segmented image, which can be used for verification and evaluation in
the development of algorithms and training of neural models.</p>
        <p>Segmentation using clustering. The purpose of applying clustering to the image is to highlight
the most prominent areas of blood vessels, so that the result can be overlaid on the original
image to improve the final result of vascular membrane segmentation, i.e., vessel extraction. The
clustering result is also used as a filter to check the similarity of the uploaded image to a
typical retinal image.</p>
        <p>It is worth noting that this method consists of several stages in which pre-processing is performed
and other computer vision algorithms are applied. The use of these algorithms is an important
component of improving the segmentation of the vascular membrane. Thus, we
can distinguish the following order of key stages:</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.2.1. Conversion to grayscale</title>
        <p>One of the first steps of the algorithm is to convert the image to grayscale, i.e., reduce the RGB color
components from three to one gray channel. The values of red (R), green (G), and blue (B) are converted to a
single value on the gray scale:
I(r, c) = (R + G + B) / 3, (1)
where I(r, c) is the new pixel value on the gray scale, R, G, B are the pixel color values before
transformation obtained from the RGB format, and the three in the denominator indicates the
number of color channels in the selected format.</p>
        <p>In general, for any number of channels, the formula will look like:
I(r, c) = (1 / n) · Σ_{i=0}^{n−1} C_i(r, c), (2)
where n is the number of channels and C_i(r, c) is the value of channel i for pixel (r, c).</p>
        <p>In the task of vessel extraction, the conversion of the input image (Fig. 2) to grayscale (Fig. 3)
contributes to the quality and area of detection of the study object in the k-means and threshold
segmentation methods.</p>
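        <p>As an illustration, the channel-averaging conversion described above can be sketched in a few lines of Python with NumPy. This is a sketch under the assumption of an H×W×3 array layout, not the implementation used in the paper:</p>
        <preformat>
```python
import numpy as np

def to_grayscale(rgb):
    """Average the color channels into a single gray value per pixel.

    rgb: H x W x n array; dividing by n (here 3) corresponds to the
    number of channels in the denominator of the averaging formula."""
    rgb = rgb.astype(np.float64)
    return rgb.sum(axis=2) / rgb.shape[2]

img = np.array([[[30, 60, 90]]], dtype=np.uint8)  # a single RGB pixel
print(to_grayscale(img))  # the gray value is (30 + 60 + 90) / 3 = 60
```
        </preformat>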
        <p>The grayscale image is then smoothed with a Gaussian blur. The coefficients of the one-dimensional
Gaussian kernel are computed as:
G_i = α · exp(−(i − (ksize − 1) / 2)² / (2σ²)), i = 0, …, ksize − 1, (3)
where ksize is the odd and positive kernel size, σ is the standard deviation, and α is a scale factor
chosen so that the coefficients sum to one.</p>
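        <p>A minimal sketch of computing these kernel coefficients in NumPy, assuming the common normalization in which the coefficients sum to one:</p>
        <preformat>
```python
import numpy as np

def gaussian_kernel(ksize, sigma):
    """1-D Gaussian kernel: G_i = alpha * exp(-(i-(ksize-1)/2)^2 / (2*sigma^2)).

    ksize is the odd, positive kernel size; alpha normalizes the sum to 1."""
    i = np.arange(ksize)
    g = np.exp(-((i - (ksize - 1) / 2.0) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

k = gaussian_kernel(5, 1.0)
print(round(k.sum(), 6))   # → 1.0 (normalized)
print(bool(k[0] == k[4]))  # → True (the kernel is symmetric about its center)
```
        </preformat>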
        <p>In the task of segmenting the vascular membrane, Gaussian blur helps to smooth colors, reduce
and combine them, which will help to separate the vessels, as well as to blur the background, which
is necessary for their selection and modification.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.2.2. Background conversion</title>
        <p>An important stage of image preprocessing was finding the image mask and using it to change the
background color to white. To do this, we used a binary threshold segmentation algorithm, whose
main task is to divide pixels into only two categories. The threshold number is the number of division
into two groups, with smaller values belonging to the first group and larger values to the second.
The following mathematical formulation of the problem is used for this purpose:</p>
        <p>
          Let the input image I be of height H and width W, and let I(r, c) denote the gray value at
row r and column c of the image I, 0 ≤ r &lt; H, 0 ≤ c &lt; W. The output image after global segmentation is
O, where O(r, c) denotes the gray value at row r and column c of the image O [10]. The binary
threshold rule is:
O(r, c) = 255 if I(r, c) &gt; thresh, and O(r, c) = 0 if I(r, c) ≤ thresh. (4)
        </p>
        <p>To solve the problem, the threshold value is set to 0, which corresponds to black. This setting and
the application of the method to the image (Fig. 3) separate the background from the retina
image (Fig. 4).</p>
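        <p>A minimal sketch of this binary threshold and background-whitening step, assuming a NumPy grayscale array in which the background is pure black (value 0):</p>
        <preformat>
```python
import numpy as np

def global_threshold(img, thresh):
    """Binary threshold: 255 where the gray value exceeds thresh, else 0."""
    return np.where(img > thresh, 255, 0).astype(np.uint8)

gray = np.array([[0, 120], [0, 200]], dtype=np.uint8)
mask = global_threshold(gray, 0)  # thresh = 0 keeps only non-black pixels
white_bg = gray.copy()
white_bg[mask == 0] = 255         # paint the background white
# background pixels stay 0 in the mask; retina pixels become 255
print(mask)
print(white_bg)
```
        </preformat>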
      </sec>
      <sec id="sec-3-5">
        <title>3.2.3. Contour extraction</title>
        <p>This operation is performed due to the fact that the retinal contours in the image have a certain
deformation in color, which negatively affects the clustering result. Therefore, the edges are searched
for and removed from the digital image. The contour detection algorithm works with binary images,
so you must first apply binary threshold segmentation.</p>
        <p>Mathematically, contours in an image can be represented as curves connecting points of equal
intensity. The algorithm starts by finding the first point of the contour (x,y), which is the point with
the smallest x and y coordinates in the image. Then the contour is tracked by following the object
boundary and searching for the next contour point in a clockwise or counterclockwise direction.
Such contours form closed curves and can be approximated using various mathematical functions or
models, such as Fourier series, depending on the specific application.</p>
        <p>The contour of a circle can be represented mathematically as:</p>
        <p>
          x = cx + r·cos(θ), y = cy + r·sin(θ), (5)
where cx and cy are the coordinates of the center of the circle, r is the radius, and θ is the angle around
the circle.
        </p>
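        <p>The parametric form above can be sampled directly; a sketch with hypothetical center and radius values:</p>
        <preformat>
```python
import math

def circle_contour(cx, cy, r, n=360):
    """Sample n points of the contour x = cx + r*cos(t), y = cy + r*sin(t)."""
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

pts = circle_contour(10.0, 10.0, 2.0, n=360)
# every sampled point lies at distance r from the center
dists = [math.hypot(x - 10.0, y - 10.0) for x, y in pts]
print(min(dists), max(dists))  # both are (numerically) 2.0
```
        </preformat>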
      </sec>
      <sec id="sec-3-6">
        <title>3.2.4. Cluster selection</title>
        <p>The core of segmentation is cluster analysis implemented using k-means technology, which is one
of the unsupervised clustering algorithms used to cluster data into k clusters. The algorithm
iteratively assigns data points to one of the k clusters depending on how close the data point is to
the cluster centroid.</p>
        <p>
          Let us assume that there are input data points x1, x2, x3, ..., xn and that the value k is the number of required
clusters. Then the k-means algorithm has the following steps:
1. Select k points as initial centroids from the dataset, either randomly or the first k.
2. Find the Euclidean distance of each point in the dataset to each of the k cluster
centroids. The Euclidean distance between two points p and q in the plane is
d(p, q) = √((q1 − p1)² + (q2 − p2)²), (6)
where p = (p1, p2), q = (q1, q2).
3. Assign each data point to the nearest centroid using the distance found in the previous step.
Let each cluster centroid be denoted as ci ∈ C; then each data point x is assigned to a cluster
based on the function
argmin over ci ∈ C of dist(ci, x)², (7)
where dist is the Euclidean distance (6).
4. Find the new centroid of each cluster Si as the mean of its points:
ci = (1 / |Si|) · Σ over xi ∈ Si of xi. (8)
5. Repeat steps 2 to 4 for a fixed number of iterations or until the centroids no longer change [11].
        </p>
        <p>In the task of vessel extraction, the described k-means method and the filtering algorithm for the
first two clusters are applied to the image shown in Fig. 7, and the result is an image of the type
shown in Fig. 8.</p>
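        <p>The k-means steps described above can be sketched in plain NumPy. This is a toy illustration with hypothetical 2-D points, not the image-pixel clustering used in the paper:</p>
        <preformat>
```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to the nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = np.random.default_rng(seed)
    # step 1: pick k distinct points as initial centroids
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # step 2: Euclidean distance from every point to every centroid
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)  # step 3: nearest centroid wins
        # step 4: new centroid = mean of the points assigned to it
        centroids = np.array([points[labels == j].mean(axis=0)
                              for j in range(k)])
    return labels, centroids

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, cents = kmeans(pts, k=2)
print(labels[0] == labels[1], labels[2] == labels[3])  # → True True
```
        </preformat>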
      </sec>
      <sec id="sec-3-7">
        <title>3.2.5. The final stage of fusion</title>
        <p>The algorithm creates two objects: a segmented image using k-means (Fig. 5) and a segmented image
using threshold segmentation with contour extraction (Fig. 8). At this stage, it remains to combine
them using the intersection method to obtain the best properties from each of the results. The final
output is a single image. To combine the data, we apply a bitwise operation using a conjunction,
which keeps only the intersecting bits. Suppose there are two image arrays src1 and src2 of the
same size. Then the element-wise bitwise conjunction of these arrays is:
dst(I) = src1(I) ∧ src2(I), if mask(I) ≠ 0. (9)
The result of applying the algorithm is shown in Figure 8.</p>
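        <p>A sketch of the fusion step using NumPy's bitwise conjunction, with hypothetical 2×2 masks standing in for the two segmented images:</p>
        <preformat>
```python
import numpy as np

# two binary segmentation masks of the same size (255 = vessel, 0 = background)
src1 = np.array([[255, 255], [0, 255]], dtype=np.uint8)
src2 = np.array([[255, 0], [0, 255]], dtype=np.uint8)

# the bitwise conjunction keeps only pixels marked as vessel in BOTH masks
dst = np.bitwise_and(src1, src2)
print(dst[0, 0], dst[0, 1])  # → 255 0
```
        </preformat>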
      </sec>
      <sec id="sec-3-8">
        <title>3.2.6. Segmentation through model selection</title>
        <p>The second method involved in solving the segmentation problem is the use of a convolutional neural network.</p>
        <p>The model used, SA-UNet [8], is an extension of the U-Net network, which is based on the typical
structure of a downsampling encoder and upsampling decoder and a "skip connection" between
them. It combines local and global contextual information through an encoding and decoding
structure.</p>
        <p>At each stage, the encoder contains a convolutional extraction block and a 2×2 pooling
operation with a doubling of the number of channels. This block is followed by a DropBlock
operation, normalization, and a rectified linear unit (ReLU). After encoding, a spatial attention
module is applied.</p>
        <p>The next stages of the decoder include a 2×2 transpose convolution operation to upsample, halve
the number of feature channels, and combine. The last convolution layer uses a sigmoidal activation
function to obtain the output segmentation map [8].</p>
      </sec>
      <sec id="sec-3-9">
        <title>3.2.7. Convolutional block</title>
        <p>Convolution is often used for image processing and can be described by the following formula:
(f ∗ g)[m, n] = Σ_{k,l} f[m − k, n − l] · g(k, l), (10)
where f is the original image matrix and g is the convolution kernel (matrix).</p>
        <p>The convolutional layer implements the idea that each output neuron is associated only with a
specific (small) area of the input matrix (Fig. 9) and thus simulates some features of human vision:
y = x₁w₁ + x₂w₂ + x₃w₃ + x₄w₄, (11)
where x₁, …, x₄ are the inputs of the neuron's receptive field and w₁, …, w₄ are the kernel weights.</p>
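        <p>A minimal direct implementation of the discrete convolution formula, computed over valid positions only, with the kernel flipped as the f[m − k, n − l] indexing requires. This is an illustrative sketch, not the network's optimized convolution:</p>
        <preformat>
```python
import numpy as np

def conv2d(f, g):
    """(f*g)[m,n] = sum_{k,l} f[m-k, n-l] * g[k,l], 'valid' region only."""
    gh, gw = g.shape
    gf = g[::-1, ::-1]  # flipping the kernel realizes the f[m-k, n-l] indexing
    out = np.zeros((f.shape[0] - gh + 1, f.shape[1] - gw + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = (f[m:m + gh, n:n + gw] * gf).sum()
    return out

f = np.array([[1.0, 2.0], [3.0, 4.0]])
g = np.ones((2, 2))  # 2x2 kernel with all weights equal to 1
print(conv2d(f, g))  # one output neuron: 1*1 + 2*1 + 3*1 + 4*1 = 10
```
        </preformat>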
      </sec>
      <sec id="sec-3-10">
        <title>3.2.8. The resulting method</title>
        <p>
          The considered algorithms of segmentation through clustering and segmentation through model
selection provide the necessary data for the formation of the final method of segmentation of the
fundus vasculature, which results in an image in which the vessels are selected and the other retinal
elements present in the image are removed. The main task of the method is to combine the predefined
results. It is implemented using a bitwise disjunction operation [12], which for a single pair of bits a
and b can be written as:
a ∨ b = (a mod 2) + (b mod 2) − (a mod 2) · (b mod 2), (12)
applied to each pair of corresponding bits of the two images.
        </p>
        <p>
          After applying the described method to the corresponding image, the following result was
obtained.
        </p>
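        <p>A sketch of this final disjunction step, with hypothetical 1×2 masks standing in for the clustering and neural-network results, together with a check of the per-bit identity a ∨ b = a + b − a·b:</p>
        <preformat>
```python
import numpy as np

# per-bit check of the disjunction identity
for a in (0, 1):
    for b in (0, 1):
        assert (a | b) == a + b - a * b

m1 = np.array([[255, 0]], dtype=np.uint8)  # e.g. the clustering-based mask
m2 = np.array([[0, 255]], dtype=np.uint8)  # e.g. the neural-network mask
union = np.bitwise_or(m1, m2)              # vessel if present in EITHER result
print(union[0, 0], union[0, 1])  # → 255 255
```
        </preformat>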
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Theoretical and experimental research</title>
      <p>The research was conducted on two datasets: the DRIVE dataset and a dataset composed of random
retinal photos. The first study concerned fundus vascular segmentation on the DRIVE dataset. This dataset
contains 20 files of manually annotated images. Figure 12 shows the results of processing the instance
numbered 32. (Per-image accuracy values ranged from 56% to 79%.)</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Segmentation is an important diagnostic method in the medical field. Its application in the field of
eye examination is a modern and effective auxiliary method in diagnosing various diseases.
However, segmentation of this kind is a complex scientific and technical task that requires the
involvement of specialists or complex software solutions. The paper presents a solution for
automating the segmentation of the choroid, proposes methods for improving the result, and for the
first time puts forward an approach for detecting abnormalities in a fundus image relative to the
typical retinal appearance. Using this methodology, the result of the segmentation of the fundus
vessels was obtained, and the features of the combined segmentation method were determined. The
process, analysis, and results of the study are presented. A numerical indicator of the improvement
achieved by the technology, namely 4%, was determined. Assumptions about a technology for detecting
deviations in image content were developed and put forward.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] World Vision Day,
          <year>2019</year>
          . URL: http://khocz.com.ua/10-zhovtnja-2019-roku-vsesvitnij-denzahistu-zoru/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>T.-J. Wang</string-name>
          et al.,
          <article-title>A review on revolutionizing ophthalmic therapy: Unveiling the potential of chitosan, hyaluronic acid, cellulose, cyclodextrin, and poloxamer in eye disease treatments</article-title>
          ,
          <source>International Journal of Biological Macromolecules</source>
          , volume
          <volume>273</volume>
          ,
          <year>2024</year>
          , p.
          <fpage>132700</fpage>
          . URL: https://doi.org/10.1016/j.ijbiomac.2024.132700.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>Iftakher Mahmood M. A.</string-name>
          ,
          <string-name>Aktar N.</string-name>
          ,
          <string-name>Fazlul Kader M.</string-name>
          ,
          <article-title>A hybrid approach for diagnosing diabetic retinopathy from fundus image exploiting deep features</article-title>
          ,
          <source>Heliyon</source>
          ,
          <year>2023</year>
          , p.
          <fpage>e19625</fpage>
          . URL: https://doi.org/10.1016/j.heliyon.2023.e19625.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>S. Dash</string-name>
          et al.,
          <article-title>Curvelet Transform Based on Edge Preserving Filter for Retinal Blood Vessel Segmentation</article-title>
          ,
          <source>Computers, Materials &amp; Continua</source>
          , volume
          <volume>71</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>2459</fpage>
          -
          <lpage>2476</lpage>
          . URL: https://doi.org/10.32604/cmc.2022.020904.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>V. Martsenyuk</string-name>
          et al.,
          <article-title>Exploring Image Unified Space for Improving Information Technology for Person Identification</article-title>
          ,
          <source>IEEE Access</source>
          ,
          <year>2023</year>
          , p.
          <fpage>1</fpage>
          . URL: https://doi.org/10.1109/access.2023.3297488.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>O. Bychkov</string-name>
          et al.,
          <article-title>Using Neural Networks Application for the Font Recognition Task Solution</article-title>
          ,
          <source>55th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST)</source>
          , 10-12 September
          <year>2020</year>
          . URL: https://doi.org/10.1109/icest49890.2020.9232788.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] DRIVE - Digital Retinal Images for Vessel Extraction, grand-challenge.org. URL: https://drive.grand-challenge.org.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>G. Dimitrov</string-name>
          et al.,
          <article-title>Increasing the Classification Accuracy of EEG based Brain-computer Interface Signals</article-title>
          ,
          <source>10th International Conference on Advanced Computer Information Technologies (ACIT)</source>
          , Deggendorf, Germany,
          <year>2020</year>
          , pp.
          <fpage>386</fpage>
          -
          <lpage>390</lpage>
          . doi: 10.1109/ACIT49673.2020.9208944.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] OpenCV: Image Filtering, OpenCV documentation. URL: https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html#gaabe8c836e97159a9193fb0b11ac52cf1.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>Niu Z.</string-name>
          ,
          <string-name>Li H.</string-name>
          ,
          <article-title>Research and analysis of threshold segmentation algorithms in image processing</article-title>
          ,
          <source>Journal of Physics: Conference Series</source>
          , volume
          <volume>1237</volume>
          ,
          <year>2019</year>
          , p.
          <fpage>022122</fpage>
          . URL: https://doi.org/10.1088/1742-6596/1237/2/022122.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>Muthukrishnan M.</string-name>
          ,
          <article-title>Mathematics behind K-Mean Clustering algorithm</article-title>
          ,
          <source>AI, Computer Vision and Mathematics</source>
          . URL: https://muthu.co/mathematics-behind-k-mean-clustering-algorithm.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>V. Petrivskyi</string-name>
          et al.,
          <article-title>A Method for Maximum Coverage of the Territory by Sensors with Minimization of Cost and Assessment of Survivability</article-title>
          ,
          <source>Applied Sciences</source>
          , volume
          <volume>12</volume>
          ,
          <year>2022</year>
          , p.
          <fpage>3059</fpage>
          . URL: https://doi.org/10.3390/app12063059.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>Mehta S.</string-name>
          ,
          <article-title>Diabetic Retinopathy. Eye disorders</article-title>
          ,
          <source>MSD Manual Professional Edition</source>
          . URL: https://www.msdmanuals.com/professional/eye-disorders/retinal-disorders/diabeticretinopathy.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] Details 300 background diabetic retinopathy, Abzlocal.mx. URL: https://abzlocal.mx/details300-background-diabetic-retinopathy/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>