<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>FedDDR: A Federated Improved DenseNet for Classification of Diabetic Retinopathy</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Akansha Singh</string-name>
          <email>akanshasing@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Krishna Kant Singh</string-name>
          <email>krishnaiitr2011@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Delhi Technical Campus</institution>
          ,
          <addr-line>Greater Noida</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SCSET, Bennett University</institution>
          ,
          <addr-line>Greater Noida</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <fpage>4</fpage>
      <lpage>14</lpage>
      <abstract>
<p>Diabetic retinopathy (DR) is a complication of diabetes that damages the retina and other blood vessels of the eye. Affected individuals may develop retinal clots, lesions, or hemorrhages, and exudates and lesions in the retina may cause vision loss. Early identification of diabetic retinopathy is therefore essential for effective patient care. This research proposes a federated version of an enhanced DenseNet deep learning model for the detection and classification of diabetic retinopathy in retinal fundus images. With its dense blocks performing feature concatenation, the enhanced DenseNet model improves feature utilization efficiency. The model is trained using a federated learning algorithm, which enables distributed training on remotely hosted datasets without gathering the data at a central location. This overcomes the limitations posed by data silos and takes full advantage of the existing medical data. The proposed model improves performance and preserves patient privacy, since no central dataset is assembled. The federated averaging algorithm is used to train the model, with a Maximum Probability Based Cross Entropy (MPCE) loss function. The proposed method's outcomes are evaluated and contrasted with those of similar approaches, and the comparison demonstrates that the proposed technique is superior in terms of accuracy, precision, and recall when applied to the classification of retinal images.</p>
      </abstract>
      <kwd-group>
<kwd>diabetic retinopathy</kwd>
        <kwd>deep learning</kwd>
        <kwd>Federated Learning</kwd>
        <kwd>DenseNet</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Deep learning has emerged as a promising strategy for automated clinical diagnosis. The most prevalent complications of diabetes are well known to the general population. Many incidences of avoidable blindness are caused by diabetic retinopathy, an eye condition to which diabetics are prone but which is not as well recognized as other diabetes complications. Diabetic retinopathy affects around 60% of people with type 2 diabetes and nearly all of those with type 1. The illness progresses through four distinct phases, with the first two being the most manageable thanks to early detection and subsequent preventive care. High blood sugar levels may damage blood vessels in the retina, leading to diabetic retinopathy, as described by the American Academy of Ophthalmology. There is a risk of a blockage and subsequent lack of blood flow if the afflicted blood vessels expand and leak or seal up completely. Diabetic retinopathy may be very damaging to a person's eyes if left untreated, so finding it early is crucial, and possessing a reliable method for early detection of the illness is essential. The hazards associated with each stage of diabetic retinopathy are discussed here, as well as the symptoms experienced at each stage and the medical interventions available to prevent further progression of the disease. To diagnose and treat diabetic retinopathy in its earliest stages, it is essential to take preventative measures, such as arranging yearly diabetic retinal exams.</p>
<p>These vital retinal examinations may discover hazardous problems before they cause significant vision loss, giving the patient and their doctors time to devise a treatment strategy. Patients and doctors may use this action plan as a road map to better comprehend and address the far-reaching effects of diabetes on a person's health. Photos of a healthy eye and a DR-affected eye are shown side-by-side in Figure 1.</p>
      <p>Prediction tasks such as adverse medication responses have recently been incorporated into federated learning models in the healthcare domain [11]. However, most applications of federated learning in healthcare outcome prediction used relatively small datasets and partitioned the data artificially (randomly) to simulate the properties of real federated data. In this research, we apply our framework to the Health Facts data by using the information provided by the healthcare systems for each individual patient.</p>
<p>Most healthcare federated learning applications employed classification methods such as logistic regression, artificial neural networks, multi-layer perceptrons, support vector machines, and random forests to construct federated predictive models. Existing methods for predicting complications from diabetes, such as retinopathy (eye disease), neuropathy (peripheral nerve disorder), and nephropathy (kidney disease), rely on centralized machine learning algorithms trained on small datasets from the US population, which contain fewer than ideal numbers of complication cases and less than ideal patient information. In this research, we used a federated learning architecture to develop three machine learning models for binary classification of the occurrence of three diabetes-related complications: those affecting the eyes, the kidneys, and the peripheral nerves.</p>
<p>The existing deep learning and machine learning models have several limitations, including limited medical data availability, patient privacy issues, and training overhead at a centralized location. Therefore, in this paper a modified federated learning DenseNet model is proposed for the classification of diabetic retinopathy. In this architecture, several sites may work together to train a single global model. With federated learning, a global model is built by combining training results from many locations without the need to share datasets, so the confidentiality of the patients is protected. The global model's detection skills are further enhanced by the additional supervision received from the findings of collaborating locations; when training AI models with little data, this solves the problem of inadequate supervision. Thus, in this paper the abovementioned limitations are removed with the proposed federated learning DenseNet model. The initial model on the central server is initialized and its parameters are shared with the connected devices. The results are simulated using the TensorFlow Federated learning module. More than 5,000 retinal pictures from APTOS 2019, the third biggest dataset of its kind, are used for the simulated testing. The DenseNet model used overcomes the vanishing gradient problem and strengthens feature propagation, as features are concatenated at each stage.</p>
<p>This paper is organized into five sections. In the first section, the introduction to the problem and a literature review are discussed. The second section discusses the proposed methodology, followed by the results and discussion section. The last section gives the overall conclusion of the work presented in this paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Proposed Method</title>
<p>The traditional machine learning models that are trained centrally on one device pose some serious challenges when used for healthcare applications. The limited availability of data, due to multiple constraints on data privacy and sharing, is a major issue. Therefore, in this paper a DenseNet model with a federated learning approach is presented. Data privacy, data security, data access rights, and access to heterogeneous data may all be addressed by using federated learning, which allows several hospitals to construct a shared, robust machine learning model without sharing data. Federated learning models may therefore collect data from several sources (e.g., hospitals, electronic health record databases) to give more diverse data. In this section the steps involved in the proposed methodology are discussed in detail. In figure 2 the proposed methodology is shown. The central model is trained using N connected devices. Each device uses its own dataset for training and transmits the updates to the model weights.</p>
      <sec id="sec-2-1">
        <title>The general training mechanism is shown in figure 3.</title>
        <p>1. Initial Model Configuration : The DenseNet model [12] is initialized at the central server device.</p>
<p>The training of the central model is done using the APTOS 2019 dataset. The initial parameters of the model are then transmitted to each of the connected devices.</p>
        <p>2. Training at connected devices: A copy of the model is available at each of the connected devices, and it uses the parameters broadcast by the server. The following steps are followed at each connected device.</p>
        <p>3. Input Retinal Images: The retinal pictures are fundus images captured under a variety of lighting conditions and camera angles. A doctor assigns each picture a score from zero to four, across five categories, reflecting the severity of diabetic retinopathy. The model is tested and trained using these pictures.</p>
        <p>4. Pre-processing of images: The photos are shot in a variety of environments with varying levels of illumination, so they need preprocessing before they can be used for model training. Due to the lack of contrast in retinal pictures, CLAHE is used to equalize the histograms [13]. The underlying intensity remapping can be written as
I' = (I − I_min) / (I_max − I_min) (1)
where I_min and I_max are the minimum and maximum intensities of the image; CLAHE applies this kind of equalization over local tiles with a clip limit on the histogram.</p>
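<p>The remapping performed by histogram equalization can be demonstrated in plain NumPy. This global version is a simplified stand-in for CLAHE proper, which (e.g., via OpenCV's cv2.createCLAHE) additionally works on local tiles and clips the histogram:

```python
import numpy as np

def equalize_hist(img, n_bins=256):
    """Global histogram equalization via the image's CDF. CLAHE adds
    tile-wise histograms and a clip limit; only the core remapping is
    shown here."""
    hist, _ = np.histogram(img.flatten(), bins=n_bins, range=(0, n_bins))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each intensity so the output histogram is approximately flat.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (n_bins - 1))
    lut = lut.astype(np.uint8)
    return lut[img]

# Low-contrast synthetic patch: values squeezed into [100, 120].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64)).astype(np.uint8)
out = equalize_hist(img)
print(int(out.min()), int(out.max()))   # spread to the full 0..255 range
```
</p>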
        <p>The proposed deep learning network receives its input data from the pre-processed images acquired in this
stage.</p>
<p>5. Image Resizing: The images at the different connected devices are of different sizes, so they need to be resized before being fed to the deep network. All images are resized to 224 × 224 pixels.
6. Image Standardization: Image standardization is a data transformation technique. Standardization rescales the image features so that the mean is 0 and the standard deviation is 1. This improves the optimization and consequently the accuracy of the model.</p>
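<p>Step 6 can be sketched as follows; the input array here is a placeholder for a decoded fundus photo already resized to 224 × 224 (in practice the resize of step 5 would use a library call such as tf.image.resize):

```python
import numpy as np

def standardize(img):
    """Step 6: rescale features to zero mean and unit standard
    deviation, x' = (x - mean) / std."""
    img = img.astype(np.float32)
    return (img - img.mean()) / img.std()

# Placeholder for a resized 224x224 RGB fundus image.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(224, 224, 3))
z = standardize(img)
print(z.shape, round(float(z.mean()), 3), round(float(z.std()), 3))
```
</p>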
<p>x' = (x − μ) / σ (2)</p>
        <p>where μ is the mean and σ is the standard deviation.
7. DenseNet Model: A DenseNet model is used for the classification of the retinal images for identification of diabetic retinopathy. A DenseNet is a type of convolutional neural network that makes use of dense connections between layers through Dense Blocks, in which all layers with matching feature-map sizes are directly linked with each other. To maintain the feed-forward structure, each layer takes in the feature maps of all preceding layers and passes its own feature maps to all subsequent layers. DenseNets have outperformed traditional CNNs and ResNets on a wide variety of benchmark datasets, and their smaller model size is a consequence of this feature reuse. The architecture of the DenseNet 121 used is as follows:</p>
<p>In each layer, the feature maps of the preceding layers are concatenated as input. Due to this concatenation, features are reused rather than relearned, and redundant features are avoided. Each lth layer receives the feature maps of all previous layers.</p>
<p>x_l = H_l([x_0, x_1, …, x_{l−1}]) (3)
where [ ] denotes the concatenation operation and H_l is a composite function comprising batch normalization (BN), a rectified linear unit (ReLU), and a convolution (Conv).</p>
<p>DenseBlocks are the building blocks of DenseNet; the size of the feature maps stays the same inside a block, but the number of filters varies. A transition layer between each pair of blocks reduces the number of channels, typically cutting it in half.</p>
<p>The amount of information added in each layer is controlled by the growth rate (k) of DenseNet. Thus, the number of feature maps entering the lth layer can be computed as:</p>
        <p>k_l = k_0 + k × (l − 1) (4)
where k_0 is the number of channels in the input layer.</p>
<p>Maximum probability based cross entropy loss: For the sake of fine-tuning the model-learning process, an MPCE loss function is implemented [20]. Because of this, convergence is accelerated and the back-propagated error is minimized. MPCE extends the categorical cross-entropy loss L_CE = − Σ_{i=1}^{m} y_i log(p_i) by involving the class u with the maximum predicted probability among the m classes, as in eq. (5) of [20]. Here y is the one-hot vector of the true class (its uth coordinate is 1 when the maximum-probability class is the true class), p_i is the predicted probability of the ith class, and p_u is the probability of the maximum-probability class.</p>
<p>Adam Optimization: In order to maximize efficiency, the Adam optimizer combines the benefits of both the Momentum and Root Mean Square propagation methods [14]. Adam maintains first- and second-moment estimates of the gradient, m_t = β1 m_{t−1} + (1 − β1) g_t and v_t = β2 v_{t−1} + (1 − β2) g_t², and updates the weights as w_{t+1} = w_t − η m̂_t / (√v̂_t + ε), where m̂_t and v̂_t are the bias-corrected moments. (6)
When the gradient approaches its global minimum, Adam slows the pace of descent so that there is little oscillation.
10. Federated Averaging Learning: The federated averaging algorithm uses an averaging method to combine the updates at the central server [15]. Consider a network of N devices available at N different hospitals, indexed k ∈ {1, 2, …, N}. Each device or hospital has its own dataset of retinal images, denoted D_k, and each D_k comprises input vectors x and outcome variables y. The model is trained using this network of devices. Thus,</p>
<p>ŷ_j = f(x_j; w) (7)
where x_j is the input feature vector and ŷ_j is the predicted output using the weight vector w, with per-sample loss f_j(w).
The local loss at each device can be computed as,</p>
<p>F_k(w) = (1 / |D_k|) Σ_{j ∈ D_k} f_j(w) (8)
The assumption in this problem is,</p>
        <p>|D_k| = n / N ∀k (9)</p>
        <p>Thus, the optimization is the average over the F_k(w). The objective is to find the w that minimizes F(w) over the combined data D.</p>
        <p>min_w F(w), where F(w) ≔ (1 / N) Σ_{k=1}^{N} F_k(w) (10)
In case |D_k| ≠ n / N, the weight 1/N can be replaced with p_k = |D_k| ⁄ |D|.</p>
<p>The complete algorithm for the training process is as follows:
Algorithm: FedDDR learning
Input: K (number of hospitals/devices), T (epochs), w_0 (initial weight vector), η_c (learning rate of client), η_s (learning rate of server)
Start</p>
<p>The server broadcasts w_t to the K devices.</p>
<p>For t = 0, …, T − 1:
For each device k = 1, …, K: compute the local update Δ_k^{t+1}
Each device sends Δ_k^{t+1} back to the server
The server averages the updates and updates w as
w_{t+1} = w_t + η_s (1/K) Σ_{k=1}^{K} Δ_k^{t+1}</p>
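<p>The training loop above can be simulated end-to-end in NumPy. Here linear least-squares regression stands in for the DenseNet at each of four hypothetical hospitals, and the server simply averages the locally trained weights each round:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """Client-side step: a few gradient-descent epochs on a linear
    least-squares model, standing in for local DenseNet training."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(w0, clients, rounds=50):
    """Server loop of federated averaging: broadcast the current
    weights, collect each client's locally trained weights, and
    average them into the next global model."""
    w = w0.copy()
    for _ in range(rounds):
        w = np.mean([local_update(w, X, y) for X, y in clients], axis=0)
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []                     # four hypothetical hospitals
for _ in range(4):
    X = rng.standard_normal((50, 2))
    clients.append((X, X @ w_true))
w = fedavg(np.zeros(2), clients)
print(np.round(w, 3))   # approaches w_true without pooling the data
```
</p>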
<p>The final model parameters w_T are output.
11. Termination Condition: The training is terminated when the number of iterations is complete or the model has converged to the optimal solution.
12. Grad-CAM Visualization: The Gradient-based Class Activation Map (Grad-CAM) is a class-discriminative localization map that draws attention to important parts of an image by calculating the gradient of the class score y^c for class c with respect to the activations A^k of a convolutional layer's feature map, ∂y^c/∂A^k. The neuron-importance weights α_k^c are obtained by global-average-pooling these gradients [16].</p>
<p>α_k^c = (1/Z) Σ_i Σ_j ∂y^c / ∂A_ij^k (11)</p>
<p>Grad-CAM is basically a weighted combination of forward activation maps followed by a ReLU operation, as follows:
L^c_{Grad-CAM} = ReLU(Σ_k α_k^c A^k) (12)</p>
<p>With the help of the Grad-CAM visualization heatmap, we can see how the three categories our model predicts for test photos are distributed among a set of representative examples. The Grad-CAM heatmap draws attention to the key pixel clusters used by the model's last convolution layer to make class distinctions. Here, we see how the Grad-CAM visualization distinguishes between normal and DR photos by highlighting them in different ways. Class activation maps for normal and DR photos show that the centre of the image is emphasised more strongly in the former case, while the top part of the image is illuminated more densely in the latter. Important visual features utilised by the model to make the prediction are highlighted in the class activation map. Figure 5 displays several example Grad-CAM representations of retinal images.</p>
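<p>The two Grad-CAM steps of eqs. (11) and (12), global-average-pooling the gradients and then taking the ReLU of the weighted sum of activation maps, reduce to a few lines of NumPy; the feature maps and gradients below are random placeholders for a real backward pass:

```python
import numpy as np

def grad_cam(activations, grads):
    """Grad-CAM: pool the class-score gradients over each feature map
    to get the weights alpha_k, then ReLU the weighted sum of maps."""
    # activations, grads: (H, W, K) feature maps and dy^c / dA^k.
    alphas = grads.mean(axis=(0, 1))                       # pooling step
    cam = np.maximum((activations * alphas).sum(-1), 0.0)  # ReLU step
    return cam

rng = np.random.default_rng(0)
A = rng.random((7, 7, 16))              # hypothetical last-conv maps
dYdA = rng.standard_normal((7, 7, 16))  # hypothetical gradients
cam = grad_cam(A, dYdA)
print(cam.shape, float(cam.min()) >= 0.0)
```
</p>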
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
<p>The proposed method is implemented in Python and its performance is measured experimentally. The dataset is large, so GPU acceleration is used for the simulations. The Keras module in Python is used for developing the deep network and TensorFlow Federated is used for training. The initial base model is trained using the following dataset; thereafter, each client uses its own dataset. For simulation purposes the dataset was divided among different clients. The dataset used is APTOS 2019, a dataset on diabetic retinopathy (https://www.kaggle.com/c/aptos2019-blindness-detection). Aravind Eye Hospital collected the data in rural India so that it may be used to create AI for DR detection. All the images fall into one of five categories: No DR, Mild, Moderate, Severe, and Proliferative DR. The severity, location, and frequency of lesions are taken into account when assigning a grade from 0 to 4.</p>
<p>The dataset comprises a total of 3662 images. For the experiments, the images are split into training and testing sets in the ratio 80:20. Thus, 2930 images are used for training and 732 images are used for testing. Sample images from the database are shown in figure 6.</p>
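<p>The 80:20 split can be reproduced with a shuffled index split; indices stand in for the actual image files, and the seed is arbitrary:

```python
import numpy as np

# 80:20 split of the 3662 APTOS 2019 images.
rng = np.random.default_rng(42)
idx = rng.permutation(3662)
n_train = round(0.8 * len(idx))   # 2930, matching the paper's split
train_idx, test_idx = idx[:n_train], idx[n_train:]
print(len(train_idx), len(test_idx))   # 2930 732
```
</p>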
<p>We use the following metrics to measure how well the proposed technique performs:</p>
      <p>Precision indicates how many of the predicted positives are correct. It can be computed as
Precision = TP / (TP + FP) (13)</p>
      <p>For each given model, recall indicates how many of the true positives it recovers. Eq. (14) gives the formula for calculating the recall:</p>
      <p>Recall = TP / (TP + FN) (14)
where TP, FP, and FN represent the true positives, false positives, and false negatives, respectively.
The F1-score can be computed using eq. (16):</p>
      <p>F1 = 2 × (Precision × Recall) / (Precision + Recall) (16)</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
<p>In this section the results obtained from the proposed method and a comparative analysis are presented. The results are obtained by performing simulations in which the data was split amongst different clients. A federated data set, i.e., a collection of data from multiple users, is required for demonstrating the proposed method; thus, to facilitate experimentation, the data set was split amongst five users. Due to individual differences in data consumption behaviors [21, 22], federated data is often not identically distributed among users. Due to data scarcity on the device, some clients may have fewer training instances than others, while other clients may have more than enough. Because this is a simulated environment, we have access to all the data required for such a comprehensive examination of a client's data; in a fully operational federated setting, it is impossible to see the data of a single client.</p>
<p>An extremely large number of user devices may be involved in a typical federated training scenario, yet only a subset of these devices may be available for training at any one moment. For instance, when the client devices are mobile phones, they can typically only take part in the training when they are charged, idle, and connected to an unmetered network. Since this is a simulation, all the information we need is already on hand, so in our simulations we choose a new group of clients to train with in each round.</p>
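<p>Per-round client selection in such a simulation reduces to sampling without replacement; the client names and cohort size here are illustrative only:

```python
import numpy as np

# Only a subset of the simulated clients participates in each round,
# mimicking devices that must be idle and charged to join training.
rng = np.random.default_rng(7)
all_clients = [f"client_{i}" for i in range(5)]   # five simulated users
round_clients = rng.choice(all_clients, size=3, replace=False)
print(sorted(round_clients))
```
</p>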
<p>The parameters used for simulating the federated learning environment are as follows:</p>
      <sec id="sec-4-1">
        <title>Based on the confusion matrix the following metrics are computed</title>
        <p>732 test photos from a range of grading levels were used in the analysis. Table 4 displays the distribution of
photos by grade level.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future work</title>
<p>In this paper, a federated deep learning model for the detection of diabetic retinopathy from retinal images is presented. The retinal fundus images are classified into five classes. A modified DenseNet model with a federated learning approach is proposed in this work. Federated learning makes the training process distributed and hence improves the overall performance of the classifier. Patient privacy is also kept intact, as the data remains on each patient's device. Likewise, the limitation of scarce medical data for training is overcome, as data from multiple devices is used. The simulations are done using the TensorFlow Federated learning module. The results show that the proposed method achieves 94.275% overall accuracy, and the class-wise accuracy is also high. The comparison with other state-of-the-art methods reveals that the proposed method outperforms them.</p>
      <p>6. References
[12] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[13] Goyal, L., Dhull, A., Singh, A., Kukreja, S., &amp; Singh, K. K. (2023). VGG-COVIDNet: A Novel model for
COVID detection from X-Ray and CT Scan images. Procedia computer science, 218, 1926-1935.
[14] Z. Zhang, “Improved adam optimizer for Deep Neural Networks,” 2018 IEEE/ACM 26th International
Symposium on Quality of Service (IWQoS), 2018.
[15] Konečný, J., McMahan, H.B., Yu, F.X., Richtárik, P., Suresh, A.T. and Bacon, D., 2016. Federated learning:
Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492.
[16] N. Sikder, M. Masud, A. K. Bairagi, A. S. Arif, A.-A. Nahid, and H. A. Alhumyani, “Severity classification
of diabetic retinopathy using an ensemble learning algorithm through analyzing retinal images,” Symmetry, vol.
13, no. 4, p. 670, 2021.
[17] G. Kumar, S. Chatterjee, and C. Chattopadhyay, “Dristi: A hybrid deep neural network for diabetic
retinopathy diagnosis,” Signal, Image and Video Processing, vol. 15, no. 8, pp. 1679–1686, 2021.
[18] A. Sugeno, Y. Ishikawa, T. Ohshima, and R. Muramatsu, “Simple methods for the lesion detection and
severity grading of diabetic retinopathy by Image Processing and Transfer Learning,” Computers in Biology and
Medicine, vol. 137, p. 104795, 2021.
[19] G. Kumar, S. Chatterjee, and C. Chattopadhyay, “Dristi: A hybrid deep neural network for diabetic
retinopathy diagnosis,” Signal, Image and Video Processing, vol. 15, no. 8, pp. 1679–1686, 2021.
[20] Y. Zhou, X. Wang, M. Zhang, J. Zhu, R. Zheng, and Q. Wu, “MPCE: A maximum probability based cross
entropy loss function for neural network classification,” IEEE Access, vol. 7, pp. 146331–146341, 2019.
[21] Y. Tolstyak and M. Havryliuk, ‘An Assessment of the Transplant’s Survival Level for Recipients after
Kidney Transplantations using Cox Proportional-Hazards Model’, CEUR-WS.org, vol. 3302, pp. 260–265, 2022.
[22] Y. Tolstyak, V. Chopyak, and M. Havryliuk, ‘An investigation of the primary immunosuppressive therapy’s
influence on kidney transplant survival at one month after transplantation’, Transplant Immunology, vol. 78, p.
101832, Jun. 2023, doi: 10.1016/j.trim.2023.101832.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          <string-name>
            <surname>NC</surname>
          </string-name>
          and R. Y, “
          <article-title>Optimized maximum principal curvatures based segmentation of blood vessels from retinal images</article-title>
          ,
          <source>” Biomedical Research</source>
          , vol.
          <volume>30</volume>
          , no.
          <issue>2</issue>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>El-Bendary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Hassanien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fahmy</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. Abullah M.</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and V.</given-names>
            <surname>Snasel</surname>
          </string-name>
          , “
          <article-title>Retinal blood vessel segmentation approach based on mathematical morphology,” Procedia Computer Science</article-title>
          , vol.
          <volume>65</volume>
          , pp.
          <fpage>612</fpage>
          -
          <lpage>622</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Mondal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mandal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          , “
          <article-title>Blood vessel detection from retinal fundas images using GIFKCN classifier,” Procedia Computer Science</article-title>
          , vol.
          <volume>167</volume>
          , pp.
          <fpage>2060</fpage>
          -
          <lpage>2069</lpage>
          ,
          <year>2020</year>
          . [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Reguant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brunak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          , “
          <article-title>Understanding inherent image features in CNN-based assessment of diabetic retinopathy,” Scientific Reports</article-title>
          , vol.
          <volume>11</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Benson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Maynard</surname>
          </string-name>
          , G. Zamora,
          <string-name>
            <given-names>H.</given-names>
            <surname>Carrillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wigdahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nemeth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Barriga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Estrada</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Soliz</surname>
          </string-name>
          , “
          <article-title>Transfer learning for diabetic retinopathy</article-title>
          ,
          <source>” Medical Imaging</source>
          <year>2018</year>
          : Image Processing
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>I.</given-names>
            <surname>Kandel</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Castelli</surname>
          </string-name>
          , “
          <article-title>Transfer learning with convolutional neural networks for diabetic retinopathy image classification. A Review,”</article-title>
          <source>Applied Sciences</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>6</issue>
          , p.
          <fpage>2021</fpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sikder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Masud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Bairagi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Arif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-A.</given-names>
            <surname>Nahid</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Alhumyani</surname>
          </string-name>
          , “
          <article-title>Severity classification of diabetic retinopathy using an ensemble learning algorithm through analyzing retinal images</article-title>
          ,”
          <source>Symmetry</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>4</issue>
          , p.
          <fpage>670</fpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Lin</surname>
          </string-name>
          , “
          <article-title>Diabetic retinopathy prediction by ensemble learning based on biochemical and physical data</article-title>
          ,”
          <source>Sensors</source>
          , vol.
          <volume>21</volume>
          , no.
          <issue>11</issue>
          , p.
          <fpage>3663</fpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G. T.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Siva</given-names>
            <surname>Ramakrishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Chowdhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hakak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kaluri</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Praveen Kumar Reddy</surname>
          </string-name>
          , “
          <article-title>An ensemble based machine learning model for diabetic retinopathy classification</article-title>
          ,”
          <source>2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          , “
          <article-title>Federated learning</article-title>
          ,”
          <source>Synthesis Lectures on Artificial Intelligence and Machine Learning</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>207</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Rieke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hancox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Milletarì</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. R.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Albarqouni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bakas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Galtier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Landman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Maier-Hein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ourselin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sheller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Summers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Trask</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Baust</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Cardoso</surname>
          </string-name>
          , “
          <article-title>The future of digital health with federated learning</article-title>
          ,”
          <source>npj Digital Medicine</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Chollet</surname>
          </string-name>
          , “
          <article-title>Xception: Deep learning with depthwise separable convolutions</article-title>
          ,” 2017 IEEE Conference on
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>