<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Information and measurement system for analyzing images of fundus tissue using computer-integrated technologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roman Hrudetskij</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Povstianoi</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ivan Franko National University of Lviv</institution>
          ,
          <addr-line>1, Universytetska St., Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lutsk National Technical University</institution>
          ,
          <addr-line>75, Lvivska St., Lutsk, 43018</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <volume>59</volume>
      <fpage>572</fpage>
      <lpage>576</lpage>
      <abstract>
        <p>The paper presents the results of research and development of a modern information and measurement system (IMS) for automated analysis of fundus images, intended for diagnosing and monitoring ophthalmic pathologies. The proposed system combines methods of digital medical image processing, artificial intelligence, and computer vision, ensuring high accuracy and objectivity of the diagnostic process. Particular attention is paid to the use of deep learning algorithms for segmenting the anatomical structures of the retina and analyzing fluorescent images. A comprehensive method of photodynamic diagnostics and therapy has been developed, implemented on the basis of a modified slit lamp with a laser illuminator. Effective methods of image registration and processing are proposed, in particular their synchronization and spectral analysis, which allow quantitative assessment of the state of tissues. The results of the study can be used in ophthalmological practice to increase the accuracy of early detection of pathologies, as well as in telemedicine and personalized medicine systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Information and measurement system</kwd>
        <kwd>ophthalmology</kwd>
        <kwd>fundus imaging</kwd>
        <kwd>deep learning</kwd>
        <kwd>fluorescence diagnostics</kwd>
        <kwd>photodynamic therapy</kwd>
        <kwd>automated diagnostics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The development of information and measurement systems (IMS) for the analysis of fundus tissue
images is a strategic direction in medical technology, combining the achievements of
ophthalmology, digital imaging, artificial intelligence, and telemedicine. Such systems are
evolving from auxiliary tools into independent diagnostic platforms capable of screening,
detecting pathologies at early stages, and quantitatively monitoring the dynamics of
treatment [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>
        Modern ophthalmology is characterized by the rapid development of precision intervention
technologies, among which laser ophthalmological complexes play a leading role. Their adoption is
driven by increased requirements for accuracy, safety, and effectiveness in treating a wide range of
eye diseases, including glaucoma, diabetic retinopathy, age-related macular degeneration,
secondary cataract, myopia, and astigmatism [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
      <p>ORCID: 0000-0002-1416-225X (O. Povstianoi); 0000-0002-9414-2477 (O. Vynokurova); 0000-0001-9268-5489 (V. Denysiuk); 0000-0001-7374-4770 (Y. Lapchenko); 0000-0001-5747-4168 (R. Hrudetskij)</p>
      <p>Laser technologies provide:
– non-contact and minimally traumatic intervention with minimal risk of infectious complications;
– high spatial accuracy of beam focusing, which allows it to act only on pathological areas of tissue;
– controlled depth of penetration adapted to the individual anatomical features of the patient;
– a reduced postoperative period and minimized pain syndrome;
– integration with diagnostic systems for pre-procedure planning and post-procedure monitoring.</p>
      <p>
        The use of laser ophthalmological systems, such as femtosecond lasers, YAG lasers, excimer
devices, allows for high-precision manipulations in both the anterior and posterior segments of the
eye. Moreover, automation of settings and compatibility with navigation and visualization systems
significantly increase the efficiency of interventions and reduce the likelihood of medical errors [
        <xref ref-type="bibr" rid="ref5 ref6">5,
6</xref>
        ].
      </p>
      <p>
        Due to the global increase in the number of patients with diabetes mellitus, age-related
degenerative changes of the retina, and chronic ophthalmological diseases, the introduction of laser
complexes is becoming a key factor in improving the quality of ophthalmological care [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Therefore, laser ophthalmological complexes should be equipped not only with modern laser
emitters, but also use the capabilities of modern computer technology to create intelligent
ophthalmoscopy systems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In photodynamic ophthalmoscopy, the physician is forced
to assess the condition of the fundus tissues visually when making a diagnosis. This is becoming
increasingly inadequate given the growing need for intelligent information systems whose
capabilities approach those of a human expert. In developed countries there has been a marked
rise in publications and funding aimed at closing this gap. Systems for automated input of
ophthalmoscopic information through various types of scanners, as well as digital photo and video
cameras, are becoming increasingly widespread; in resolution, such input systems are quite close
to human vision (the CCD matrix of a digital camera provides up to 3 million pixels per frame).
Nevertheless, the capabilities of computer-based intelligent image analysis leave much to be
desired. The need for advanced processing and recognition requires at least two new areas of
research:
      </p>
      <p>monitoring changes in the condition of the fundus tissues during laser surgery and therapy;
and diagnostic systems.</p>
      <p>
        Intelligent information systems equipped with computer vision will make it possible to
determine laser radiation doses during therapy or surgery relatively quickly, identify the areas
requiring laser action, register changes in the tissues of the fundus in real time, monitor the
long-term consequences of laser exposure, and provide diagnostics and identification of
pathological formations. Expert systems built on databases of images of pathological formations
require fast and reliable analysis of digitized video information in specialized image archives of
ophthalmological centers or in Internet databases in order to search for and recognize pathologies
[
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Research method using an information-measuring system for analyzing images of fundus tissue</title>
      <p>
        To analyze the morphological state of the fundus tissues, an information and measurement system
(IMS) designed for automated processing and interpretation of digital images was used. The
method is implemented as a sequential algorithm, which includes the following stages [
        <xref ref-type="bibr" rid="ref11">11, 12, 13</xref>
        ].
1. Image acquisition. Digital images of the fundus are collected using a fundus camera or
other high-precision ophthalmic imaging equipment. The images are saved in formats
compatible with the processing system (e.g., DICOM or TIFF).
2. Pre-processing. At this stage, the image undergoes standard filtering procedures, including:
– noise reduction (median or Gaussian filtering);
– brightness and contrast adjustment;
– elimination of background artifacts.
3. Segmentation of anatomical structures. Computer vision algorithms are used to highlight
relevant areas (vascular network, optic disc, macula). The efficiency of segmentation is
increased by applying deep learning methods (e.g., U-Net or ResNet architectures).
4. Extraction of diagnostic features. Using morphometric analysis, quantitative parameters are
calculated:
– width and branching of blood vessels;
– the contour of the optic disc;
– the presence of microaneurysms or hemorrhages;
– texture characteristics of the image.
      </p>
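<p>The pre-processing stage (step 2 above) can be illustrated with a minimal NumPy sketch; the 3x3 filter window and the 0-255 intensity range are illustrative assumptions, not values taken from the paper.</p>

```python
# Illustrative pre-processing sketch: median filtering for noise
# reduction, then linear contrast stretching. Window size and intensity
# range are assumed values, not taken from the paper.
import numpy as np

def median_filter(img, size=3):
    """Naive median filter over a size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Linearly rescale intensities to the [lo, hi] range."""
    mn, mx = img.min(), img.max()
    if mx == mn:
        return np.full_like(img, lo, dtype=np.float64)
    return (img - mn) / (mx - mn) * (hi - lo) + lo

# Tiny synthetic patch with one impulsive noise pixel in the centre.
patch = np.array([[10.0, 10.0, 10.0],
                  [10.0, 200.0, 10.0],
                  [10.0, 10.0, 10.0]])
denoised = median_filter(patch)       # the impulse is removed
enhanced = stretch_contrast(denoised)
```

In a production pipeline these operations would normally come from a tested library (e.g. SciPy's `median_filter`) rather than the naive loops above.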
      <p>5. Classification of pathological conditions. The extracted features are fed to the input of a
classification model (SVM, random forest, neural networks) trained on a sample with clinically
confirmed diagnoses. This allows for the detection of signs of diabetic retinopathy, glaucoma,
age-related macular degeneration, etc.</p>
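<p>The classification stage can be shown with a deliberately simple stand-in. The paper names SVM, random forest, and neural networks; the nearest-centroid model and the two-feature vectors below are assumptions used only to illustrate the feature-to-diagnosis data flow.</p>

```python
# Minimal stand-in classifier: assign the class of the nearest class
# centroid in feature space. Feature names and values are hypothetical.
import numpy as np

class NearestCentroid:
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        y = np.asarray(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # Distance from every sample to every centroid, then argmin.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Hypothetical morphometric features: [vessel width, microaneurysm count]
X = np.array([[1.0, 0.0], [1.2, 1.0], [3.0, 8.0], [3.2, 9.0]])
y = ["healthy", "healthy", "retinopathy", "retinopathy"]
model = NearestCentroid().fit(X, y)
pred = model.predict(np.array([[3.1, 7.5]]))
```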
      <p>6. Reporting. The IMS generates a visualized result with the detected areas of interest,
statistical indicators, and a predicted diagnosis. The results can be integrated with the
patient's electronic medical record.</p>
      <p>The developed model of the information and measurement system (IMS) is based on the
principles of digital medical image processing, machine vision algorithms, and methods of
intelligent data analysis. It is intended for automated diagnostics and monitoring of ophthalmic
diseases based on the analysis of fundus images. The IMS model includes the following main
functional blocks (fig. 1).</p>
      <p>Currently, new treatment methods based on the photodynamic effect are being introduced into
medical practice. The essence of the effect is the action of light radiation on the photosensitized area
of the pathology, which damages newly formed vessels through photochemical
processes [14, 15].</p>
      <p>Low power densities and selective drug accumulation provide a non-thermal effect on the
affected areas of the retina, while preserving the surrounding healthy tissues. The results of
photodynamic therapy are monitored using repeated administration of a fluorescent agent,
sometimes in combination with confocal or optical coherence tomography [16, 17]. These
procedures are performed on different equipment, which also requires additional experience of the
doctor. An intensive search is underway for drugs that have both the ability to fluoresce and a high
photodynamic effect, which would allow monitoring the treatment process during the treatment
procedure. In the method created by Akira Obana, a photosensitizer was selected for photodynamic
therapy, the fluorescence of which can be used to monitor the treatment and diagnose the focus of
pathology [18]. The developed installation for the implementation of photodynamic therapy is
made on the basis of a camera, additionally equipped with an illuminator, a laser, an optical adapter
focusing laser radiation onto pathological areas. The system also includes a microscope with a
built-in video channel, comprising a color video camera and an optical system that transfers the
image of the studied area of the eye to this camera. A filter that blocks the laser's optical radiation
is installed in front of the highly sensitive video camera used to register fluorescence. Thus, during
the treatment procedure, the doctor has a fluorescent picture of the fundus by which to control the
process of photodynamic therapy, as well as accurate information about the location of the focused
excitation spot relative to the focus of pathology. However, ingress of the long-wave component
into the highly sensitive camera rules out performing the diagnostic and therapeutic procedures
simultaneously: the device supports these modes sequentially, not at the same time [19, 20].</p>
      <p>Therefore, a modern ophthalmoscope should provide effective examination of the fundus with a
narrow pupil, since the procedure of dilating the patient's pupil (and the associated additional
diagnostic tests, medication, and time) is not always possible and desirable. Currently, there are
several main ways to improve the quality of the ophthalmoscope, which are followed worldwide:
1. transition to halogen or laser illuminators. Compared with conventional vacuum incandescent
lamps, halogen lighting triples the illumination intensity and doubles the service life of the device.
Halogen light is closer in spectral composition to sunlight, more intense and higher-temperature,
giving the researcher even, bright, "white" illumination of the studied area of the intraocular
cavity. A sufficient level of illumination also permits the use of various filters, including
polarizing ones;
2. laser illuminators form better-quality color images at lower illumination intensity, with better
control of the ratio of color components, at a safer light level for patients, and with minimal
phototoxic effects;
3. use of the coaxial principle of retinal illumination: coaxial illumination reduces vignetting
to a minimum and therefore significantly eases the examination procedure, especially with a
constricted pupil;
4. the presence of light filters, first of all a green ("red-free") filter. By absorbing radiation in the
red range of the spectrum, the green filter enhances contrast and makes it easier to detect
disorders in the vascular (including capillary) system of the eye, small hemorrhages and exudates,
depletion of neural tissue and, importantly, initial, barely noticeable changes in the macula;
5. the presence of a red filter: red light reveals, first of all, pigmentary anomalies on the fundus.
Dyspigmentation of various origins, especially in the macular area, most often precedes serious
disease;
6. the presence of polarizing filters that provide maximum glare suppression;
7. the use of a glass fiber-optic cable, which makes it possible to obtain images under laser
illumination.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Experimental approach</title>
      <p>When creating the equipment that allows for fluorescent studies and photodynamic treatment of
eye pathologies, a standard slit lamp was used, which was adapted for photodynamic diagnostics
and therapy [21, 22]. The slit lamp was equipped with a system for supplying laser radiation to
pathological foci. The ability to focus laser radiation on the surface of the fundus into a spot with a
diameter of 100-1000 microns was provided, as well as a video adapter that includes channels for
forming color and fluorescent images of the studied surface. The structural diagram is presented in
figure 2.</p>
      <p>The main unit of a standard slit lamp is a microscope, which allows for magnification of the
examined surface in the range from 5 to 36 times. Illumination of the examined area is carried out
using a standard illuminator with a built-in halogen source. The lamp radiation is formed by a
condenser, a special slit diaphragm and a lens and directed into the patient's eye using a prism.
Rigid positioning of the illuminator relative to the microscope, slit lamp and centering of the light
slit relative to its linear field make it possible to illuminate the area observed in the slit lamp
eyepieces. The microscope-illuminator system can be moved by the doctor using a standard
manipulator, due to which smooth guidance to the areas of pathology is carried out during the
research.</p>
      <p>This structure of the system makes it possible to control the irradiated area, both using the CCD
matrices of the video adapter and through the standard slit lamp eyepiece. The color and
fluorescent images of the examined surface formed in the channels of the video adapter are
transmitted to a specially designed image processing and output system, which operates in real
time on a PC.</p>
      <p>The method of conducting photodynamic diagnostics (PD) and photodynamic therapy (PDT) on
the developed complex is as follows. In the diagnostic mode, the illuminator irradiates the patient's
fundus, and the color video camera records the radiation scattered from it. The corresponding
image is displayed on the PC monitor. In this case, the radiation of the illuminator of the
ophthalmic device, due to the presence of a filter in it, does not contain components that interfere
with the observation of fluorescence. At the same time, radiation comes out of the adapter, which
passes the beam expander and irradiates the patient's fundus, causing fluorescence of the
photosensitizer. The fluorescent signal through the microscope enters the video analyzer
containing a filter and is fed to a highly sensitive CCD matrix. The filter ensures the transmission
of only fluorescence to the highly sensitive video camera. In this case, neither laser radiation
scattered from the fundus nor radiation of extraneous glare passes through this filter. At the same
time, due to the presence of a special filter in the system, the laser radiation reflected from the eye
is visible to the color matrix. Due to this, the color matrix forms a color image of the examined area
of the patient's eye in the video channel, and a highly sensitive video camera forms a fluorescent
image of this area, the brightness distribution of which corresponds to the distribution of the
photosensitizer in the tissues and vessels of the eye. Comparing these two images, the
ophthalmologist concludes that there are pathological changes in the examined area, their
localization, and size.</p>
      <p>One of the most informative methods for analyzing the blood supply processes of the fundus
tissues is fluorescent angiography [17]. The method consists in injecting a fluorescent drug into the
patient, which, passing through the blood vessels of the fundus tissues, fluoresces under the
influence of exciting radiation [16].</p>
      <p>Since the exciting radiation has a shorter wavelength, appropriate bandpass filters allow only
the fluorescence radiation to be observed. The fluorescence changes over time as the drug passes
through the vessels. First comes the arterial phase, when the fluorescent drug enters the arteries:
first the main ones, then smaller ones, down to the arterioles. In the second phase, the drug begins
to be excreted through the venous system, and the fluorescence intensity in the arteries and veins
equalizes. Finally, in the post-venous phase, fluorescence comes mainly from tissues in which, for
one reason or another, the drug has accumulated.</p>
      <p>Observing the dynamics of the drug passage, and accordingly the change in the pattern of tissue
fluorescence, an experienced ophthalmologist concludes about the presence or absence of
anomalies in the blood supply system of the fundus tissues. Angiographic images of the fluorescent
tissues of the fundus are recorded by monochrome cameras sequentially at certain intervals for
further analysis.</p>
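<p>Before the angiographic series is analyzed quantitatively, sequential frames must be brought into register. The paper does not specify its registration algorithm, so the sketch below uses one common technique, FFT phase correlation, to estimate a purely translational shift between two frames; the frames are hypothetical.</p>

```python
# Estimate the integer-pixel translation between two frames via phase
# correlation: the normalised cross-power spectrum of the two FFTs has
# an inverse transform that peaks at the shift.
import numpy as np

def estimate_shift(ref, mov):
    """Return (dy, dx) such that shifting `mov` by (dy, dx) aligns it with `ref`."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12        # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                       # unwrap negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

ref = np.zeros((32, 32)); ref[10, 12] = 1.0
mov = np.zeros((32, 32)); mov[13, 14] = 1.0   # same feature, moved by (+3, +2)
shift = estimate_shift(ref, mov)              # (-3, -2): move it back to align
```

Subpixel variants of the same idea are available in libraries such as scikit-image (`registration.phase_cross_correlation`).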
      <p>The procedure for analyzing the obtained series of fluorescent images to record the processes of
passage of the fluorescent drug through the circulatory system should include the procedures for
combining the obtained angiographic images and quantifying the change in the spatial distribution
of fluorescence intensity. The most informative, in this case, may be a synthesized color image
composed of fluorescent images obtained in the arterial, venous and post-venous phases. In this
case, the ratio of R, G, B components will allow us to judge the speed of passage of the fluorescent
drug through the circulatory system. In the synthesized color image, areas with the same
blood-supply characteristics will appear in the same color, while areas of pathological tissue
where the blood supply is impaired will be highlighted.</p>
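<p>The color synthesis described above can be sketched as follows: the arterial, venous, and post-venous fluorescence frames become the R, G, and B channels of one image, so regions with identical fill dynamics share a color. The frame values and the normalization to [0, 1] are illustrative assumptions.</p>

```python
# Combine three single-channel phase images into one HxWx3 colour image.
import numpy as np

def synthesize_rgb(arterial, venous, postvenous):
    """Arterial phase -> R, venous -> G, post-venous -> B."""
    rgb = np.stack([arterial, venous, postvenous], axis=-1).astype(np.float64)
    peak = rgb.max()
    return rgb / peak if peak > 0 else rgb   # normalise to [0, 1]

# 2x2 toy frames: pixel (0, 0) fills early, pixel (1, 1) fills late.
art  = np.array([[200.0, 0.0], [0.0,  10.0]])
ven  = np.array([[100.0, 0.0], [0.0, 100.0]])
post = np.array([[ 10.0, 0.0], [0.0, 200.0]])
img = synthesize_rgb(art, ven, post)
# Early-filling pixel (0, 0) comes out reddish; late-filling (1, 1) bluish.
```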
      <p>Figures 3-6 show examples of color image synthesis from three angiographic images recorded
sequentially in time.</p>
      <p>3D image synthesis can be used to monitor the dynamics of treatment of pathologies of the
fundus tissues. The method consists in obtaining a digital image of the fundus for patients with an
established diagnosis by taking photographs, localizing the area of study on the fundus, measuring
the brightness distribution of the three primary colors (red, green, and blue) in the localized area,
and determining the depth of penetration of blue radiation relative to the depth of penetration of
red radiation in the tissues of the fundus of the patient under study. Then a new 3D image is
synthesized that reflects the depth of light penetration for each point of the study area [23]. If a
similar procedure is performed on the same area of study after treatment, then the effectiveness of
the treatment is judged by the change in the depth of light penetration.</p>
      <p>The result is achieved by comparative evaluation of digital images of the fundus before and after
treatment, by obtaining indicators of penetration depth in the tissues of the fundus. Figures 7-8
show the original image of the fundus of patient F, which shows the area of localization of the
optic disc and the selected area of research (in this case, the area of edema) for in-depth analysis.</p>
      <p>In the study area, the brightness distribution of the three primary colors (red, green, and blue)
is measured from the computer-coded color model in the R, G, B component system (0 to 255). The
depth of penetration of blue radiation relative to the depth of penetration of red radiation into the
fundus tissue of the patient being studied is then determined over a time-sequential series of
images: the B component is subtracted from the R component, and from the resulting values a new
image is constructed that reflects the depth of light penetration at each point of the study area.
A graph of the ratio of blue radiation penetration to red is then plotted. A similar procedure is
performed for fundus images of the same patient during (Fig. 9) and after treatment (Fig. 10).</p>
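<p>The per-pixel subtraction just described can be sketched directly. The input planes below are hypothetical 8-bit values; a signed intermediate type is used so that negative differences are preserved.</p>

```python
# Relative penetration-depth map: subtract the B component from the R
# component at every pixel of an HxWx3 image with 0..255 components.
import numpy as np

def penetration_map(rgb):
    r = rgb[..., 0].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    return r - b          # larger values: red penetrates relatively deeper

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = [[120, 80], [200, 60]]    # hypothetical R plane
rgb[..., 2] = [[40, 90], [100, 60]]     # hypothetical B plane
depth = penetration_map(rgb)            # [[80, -10], [100, 0]]
```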
      <p>By comparing the obtained images and graphs of the depth of light penetration before, during
and after treatment, the dynamics of treatment of pathologies of the ocular fundus tissues are
monitored.</p>
      <p>It is also possible to set not one but three independent reference points on the original color
image, calculate the degree of correlation for each point, build three new images, and then
synthesize a color image from them. Correlation analysis of spectral-zonal images offers greater
informational capability than existing methods of analysis and segmentation. Figure 11 shows an
example of synthesis of a correlation color image.</p>
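<p>A minimal sketch of the correlation synthesis, under the assumption that "degree of correlation" means the cosine correlation between each pixel's mean-centered (R, G, B) signature and the signatures at three chosen reference points; the image and points below are hypothetical.</p>

```python
# Build a correlation colour image: each output channel is the
# correlation of every pixel's spectral signature with one reference
# pixel's signature, remapped from [-1, 1] to [0, 1].
import numpy as np

def correlation_image(img, points):
    """img: HxWx3 array; points: three (y, x) reference pixels."""
    sig = img.reshape(-1, 3).astype(np.float64)
    sig = sig - sig.mean(axis=1, keepdims=True)       # centre each signature
    norm = np.linalg.norm(sig, axis=1, keepdims=True)
    sig = sig / np.where(norm == 0, 1.0, norm)        # unit length (or zero)
    h, w = img.shape[:2]
    chans = [(sig @ sig[y * w + x]).reshape(h, w) for (y, x) in points]
    return (np.stack(chans, axis=-1) + 1.0) / 2.0

img = np.zeros((2, 2, 3))
img[0, 0] = img[0, 1] = [1.0, 0.0, 0.0]   # two "red" pixels
img[1, 0] = [0.0, 0.0, 1.0]               # a "blue" pixel
img[1, 1] = [0.0, 1.0, 0.0]               # a "green" pixel
out = correlation_image(img, [(0, 0), (1, 1), (1, 0)])
# Pixel (0, 1) matches reference (0, 0) exactly, so its first channel is 1.
```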
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>The study substantiated the concept of and implemented a new-generation information and
measurement system (IMS) for the analysis of fundus images, automated diagnostics, and
monitoring of pathological changes in the tissues of the fundus of the eye. The proposed system
combines modern achievements in computer vision, machine learning, and digital processing of
medical images, significantly increasing the accuracy, objectivity, and speed of diagnostic
procedures in ophthalmology.</p>
      <p>The main results of the study are:</p>
      <p>A functional model of the IMS has been created, covering the stages of image input,
preprocessing, segmentation, feature extraction, pathology classification, and formation of the
diagnostic conclusion. The model is focused on the use of deep neural networks and methods of
intelligent data analysis.</p>
      <p>Deep learning algorithms (e.g., U-Net, ResNet) are integrated, providing high-accuracy
segmentation of anatomical structures of the fundus, such as the optic disc, macula, and
vascular network, which is critically important for detecting ophthalmopathologies at early
stages.</p>
      <p>A comprehensive method of photodynamic diagnostics (PD) and therapy (PDT) has been
developed, which is implemented on the basis of an adapted slit lamp equipped with a video
adapter, laser illuminator and highly sensitive cameras. This allows for visualization and
irradiation of pathological areas in real time.</p>
      <p>Effective methods for registration and analysis of fluorescent images are proposed,
including image fusion using correlation functions and color image synthesis for
quantitative assessment of tissue blood supply. This increases the diagnostic
informativeness of the procedures.</p>
      <p>The feasibility of widespread use of fluorescent drugs and laser radiation for non-invasive
diagnostics and therapy is substantiated. Special attention is paid to the spectral
characteristics of the interaction of light with biotissues, which allows adapting the
irradiation parameters to the specifics of the pathological process.</p>
      <p>The potential of automated IMS in providing precision ophthalmology is shown, in
particular within the framework of screening programs, personalized treatment,
telemedicine, and the creation of expert diagnostic systems that integrate with electronic
medical systems.</p>
      <p>This study makes a significant contribution to the development of intelligent medical
technologies, confirms the effectiveness of combining the IMS with laser ophthalmic surgery, and
opens up prospects for further work on the automated analysis of biomedical images, in particular
in the tasks of fluorescent angiography, therapy control, and multispectral imaging.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Gulshan</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coram</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stumpe</surname>
            ,
            <given-names>M. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narayanaswamy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , ... &amp;
          <string-name>
            <surname>Webster</surname>
            ,
            <given-names>D. R.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs</article-title>
          .
          <source>JAMA</source>
          ,
          <volume>316</volume>
          (
          <issue>22</issue>
          ),
          <fpage>2402</fpage>
          -
          <lpage>2410</lpage>
          . https://doi.org/10.1001/jama.2016.17216
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Abramoff</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lavin</surname>
            ,
            <given-names>P. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Birch</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Folk</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices</article-title>
          .
          <source>NPJ Digital Medicine</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ), 39. https://doi.org/10.1038/s41746-018-0040-6
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Ting</surname>
            ,
            <given-names>D. S. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheung</surname>
            ,
            <given-names>C. Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>G. S. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Quang</surname>
            ,
            <given-names>N. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , ... &amp;
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>T. Y.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes</article-title>
          .
          <source>JAMA</source>
          ,
          <volume>318</volume>
          (
          <issue>22</issue>
          ),
          <fpage>2211</fpage>
          -
          <lpage>2223</lpage>
          . https://doi.org/10.1001/jama.2017.18152
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Ronneberger</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Brox</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>U-Net: Convolutional networks for biomedical image segmentation</article-title>
          .
          <source>In International Conference on Medical Image Computing and Computer-Assisted Intervention</source>
          (pp.
          <fpage>234</fpage>
          -
          <lpage>241</lpage>
          ). Springer. https://doi.org/10.1007/978-3-319-24574-4_28
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>He</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Deep learning for retinal image segmentation: A review</article-title>
          .
          <source>IEEE Journal of Biomedical and Health Informatics</source>
          ,
          <volume>23</volume>
          (
          <issue>6</issue>
          ),
          <fpage>2427</fpage>
          -
          <lpage>2436</lpage>
          . https://doi.org/10.1109/JBHI.2018.2879566
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Draelos</surname>
            ,
            <given-names>R. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Carin</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Machine learning and medical imaging</article-title>
          .
          <source>Annual Review of Biomedical Data Science</source>
          ,
          <volume>3</volume>
          ,
          <fpage>161</fpage>
          -
          <lpage>185</lpage>
          . https://doi.org/10.1146/annurev-biodatasci-030320-043602
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Pawar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Raskar</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Multispectral imaging for early diagnosis of retinal disorders</article-title>
          .
          <source>Biomedical Optics Express</source>
          ,
          <volume>12</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1568</fpage>
          -
          <lpage>1585</lpage>
          . https://doi.org/10.1364/BOE.415570
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Schmidt-Erfurth</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bogunovic</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sadeghipour</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schlegl</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Langs</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerendas</surname>
            ,
            <given-names>B. S.</given-names>
          </string-name>
          , ... &amp;
          <string-name>
            <surname>Waldstein</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Artificial intelligence in retina</article-title>
          .
          <source>Progress in Retinal and Eye Research</source>
          ,
          <volume>67</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>29</lpage>
          . https://doi.org/10.1016/j.preteyeres.2018.07.004
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Nowroozizadeh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Medeiros</surname>
            ,
            <given-names>F. A.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Integrating structural and functional measurements using artificial intelligence for glaucoma diagnosis and progression detection</article-title>
          .
          <source>Current Opinion in Ophthalmology</source>
          ,
          <volume>30</volume>
          (
          <issue>2</issue>
          ),
          <fpage>106</fpage>
          -
          <lpage>114</lpage>
          . https://doi.org/10.1097/ICU.0000000000000559
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Zaleta</surname>
            <given-names>O. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Povstyanoy</surname>
            <given-names>O. Yu.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ribeiro</surname>
            <given-names>L. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Redko</surname>
            <given-names>R. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bozhko</surname>
            <given-names>T. Ye.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chetverzhuk</surname>
            <given-names>T. I.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Automation of optimization synthesis for modular technological equipment</article-title>
          .
          <source>Journal of Engineering Sciences</source>
          ,
          <volume>10</volume>
          (
          <issue>1</issue>
          ),
          <fpage>A6</fpage>
          -
          <lpage>A14</lpage>
          . https://doi.org/10.21272/jes.2023.10(1).a2
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>C. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tyring</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deruyter</surname>
            ,
            <given-names>N. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rokem</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>A. Y.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Deep-learning based, automated segmentation of macular edema in optical coherence tomography</article-title>
          .
          <source>Biomedical Optics Express</source>
          ,
          <volume>8</volume>
          (
          <issue>7</issue>
          ),
          <fpage>3440</fpage>
          -
          <lpage>3448</lpage>
          . https://doi.org/10.1364/BOE.8.003440
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>