<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>International Journal of Engineering and Advanced Technology (IJEAT)</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">2249-8958</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>AMD-Network: Automatic Macular Diagnoses of disease in OCT scan images through Neural Network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Praveen Mittal</string-name>
          <email>praveen.mittal@gla.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Charul Bhatnagar</string-name>
          <email>charul@gla.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>GLA University</institution>
          ,
          <addr-line>Mathura, Uttar Pradesh</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>8</volume>
      <issue>3</issue>
      <fpage>1</fpage>
      <lpage>2</lpage>
      <abstract>
        <p>Retinal optical coherence tomography (OCT) scan images are used to diagnose retinal diseases with a convolutional neural network. One of the main benefits of using OCT scan images is that the imaging is non-invasive. Most ophthalmologists use OCT images when treating retinal disorders of the human eye, but owing to the high cost of imaging, not every patient can afford this modality. Convolutional neural networks nowadays offer many opportunities to classify various classes of images automatically. The proposed method first removes noise from the images, which is induced at the time of image capture. Gaussian noise is easily introduced during the transmission of images from one device to another, while speckle noise can be removed with an averaging filter with a deviation of 0.7. The convolutional neural network is then trained using activation functions such as the rectified linear unit. The proposed method achieves 98.8% accuracy on a dataset of 50,000 images.</p>
      </abstract>
      <kwd-group>
        <kwd>Choroidal Neovascularization</kwd>
        <kwd>Diabetic retinopathy</kwd>
        <kwd>Diabetic Macular Edema</kwd>
        <kwd>Glaucoma</kwd>
        <kwd>Human Retinal Disease</kwd>
        <kwd>Residual Network</kwd>
        <kwd>VGG16</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>The human eye plays a vital role in training the human brain. Retinal diseases such as diabetic macular edema, glaucoma, diabetic retinopathy, and choroidal neovascularization not only impair the vision of the human eye but also greatly affect the learning of visual objects' behavior [1]. Optical coherence tomography (OCT) has become a useful imaging modality nowadays because it is a non-invasive medical imaging method [2]. Retinal OCT allows ophthalmologists to diagnose retinal diseases such as age-related macular degeneration and glaucoma [3].</p>
      <p>Processing optical coherence tomography images is not an easy task because of the speckle noise introduced into the images at the time of scanning [4]. Many research scholars [5] have proposed techniques to remove this speckle noise from optical coherence tomography images, but a Gaussian filter with a standard deviation of 0.7 removes it effectively.</p>
      <p>Denoising is usually one of the major steps in pre-processing for classification [11]. There are many convolutional neural network (CNN) based algorithms, such as the one in [12] applied to a chest X-ray dataset to classify medical images for pneumonia. We have settled on OCT images for this work because of their great relevance in generating a schematic that delineates the multiple tissues of the eye that form the retina [13]. OCT is also used for examination, by imaging the eyes of patients with various conditions such as diabetic macular edema and diabetic retinopathy [14]. Noise makes its way into OCT images during acquisition; the sensor and circuitry of a scanner or digital camera can also introduce noise into the image [15]. Film grain can be a further source of image noise. Although never intended to be present, noise degrades the quality of an image, and processing the image therefore becomes necessary [16]. Image processing is the procedure of improving the quality and information content of the original data; image enhancement and restoration are among the approaches used to improve image quality. The latest research in [17] describes the application of deep learning to medical image processing.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Related Work</title>
      <p>A number of classification works have been carried out on OCT images to date, but they classify a diseased eye from a normal eye. The following section describes the various works on the classification of retinal Spectral Domain OCT (SD-OCT) images.</p>
      <p>M. Treder et al. proposed a model in [5], [18] based on machine learning for categorizing retinal SD-OCT images for age-related macular degeneration, using a dataset obtained from Heidelberg. The authors use the Inception-v3 model as a deep convolutional neural network, in which the early layers are trained on ImageNet and the final layer is trained on the chosen dataset. Their work showed good results for age-related macular degeneration, but it was designed only for age-related macular degeneration and normal eye images.</p>
      <p>The next step is segmentation. Segmentation is the process of dividing an image into regions and is a mid-level processing technique. It aims to segment the OCT images so that further work can build on the results obtained [19]. Here, the technique aims at diagnosing the various diseases that affect the retina of the eye. After finding the different layers, the method needs to measure the thickness of each layer in order to examine the eye for various diseases, since different layers may exhibit different diseases that need to be diagnosed. What adds to the problem is that the gradient between the layers is weak, so segregating the layers becomes a tedious task. In this paper, AMD-Net is used, which works with cross-validation to perform the desired classification or clustering of images.</p>
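      <p>The transfer-learning recipe attributed to Treder et al. (pretrained early layers, newly trained final layer) can be sketched in Keras. This is an illustration of the general strategy, not the authors' exact configuration; the input size, four-class head, and optimizer are assumptions:</p>
      <preformat>
```python
import tensorflow as tf

# Pretrained backbone as in the Inception-v3 transfer-learning recipe:
# early layers frozen, only the new final layer is trained.
base = tf.keras.applications.InceptionV3(
    weights=None,          # "imagenet" in practice; None here avoids a download
    include_top=False,     # drop the original 1000-class head
    input_shape=(299, 299, 3),
    pooling="avg",
)
base.trainable = False     # freeze the pretrained feature extractor

# New classification head for the target OCT classes (4 classes assumed).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 4)
```
      </preformat>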
      <sec id="sec-2-1">
        <title>3 Dataset</title>
        <p>The dataset of optical coherence tomography retinal images used in this work is freely available in the Kaggle repository. The dataset contains four classes of images, namely diabetic macular edema, diabetic retinopathy, glaucoma, and healthy eye, for a total of 50,000 images, on which the convolutional neural network has been trained.</p>
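        <p>Loading such a four-class image folder for training can be sketched as follows. The directory layout, class folder names, and image size below are illustrative assumptions, not the actual Kaggle structure; a tiny synthetic tree stands in for the 50,000-scan dataset:</p>
        <preformat>
```python
import pathlib
import numpy as np
import tensorflow as tf

# Build a tiny synthetic stand-in for a one-folder-per-class layout
# (DME, DR, Glaucoma, Healthy); the real dataset has 50,000 scans.
root = pathlib.Path("oct_demo")
for cls in ["DME", "DR", "Glaucoma", "Healthy"]:
    d = root / cls
    d.mkdir(parents=True, exist_ok=True)
    for i in range(2):
        img = np.random.randint(0, 255, (64, 64, 1), dtype=np.uint8)
        tf.keras.utils.save_img(d / f"{i}.png", img, scale=False)

# One-hot labels inferred from the sub-directory names.
ds = tf.keras.utils.image_dataset_from_directory(
    root,
    label_mode="categorical",
    color_mode="grayscale",
    image_size=(64, 64),
    batch_size=4,
    shuffle=True,
    seed=42,
)
print(ds.class_names)  # ['DME', 'DR', 'Glaucoma', 'Healthy']
```
        </preformat>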
      </sec>
      <sec id="sec-2-2">
        <title>4 Proposed Work</title>
        <p>The proposed work achieves classification of retinal images with an accuracy of 98.8%. This work uses a neural network to train the neurons in the network and produce a feature vector for the identification of the features of a particular disease. The network is specified as follows:</p>
        <p>Convolutn_two_dim(submatrics=16, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit', input_shape=size)
Convolutn_two_dim(submatrics=16, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')
MxPoolingLayer_two_dim(pool_sz=(4,4))
Convolutn_two_dim(submatrics=32, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')
Convolutn_two_dim(submatrics=32, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')
MxPoolingLayer_two_dim(pool_sz=(4,4))
Convolutn_two_dim(submatrics=64, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')
Convolutn_two_dim(submatrics=64, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')</p>
        <p>MxPoolingLayer_two_dim(pool_sz=(4,4))</p>
        <p>Convolutn_two_dim(submatrics=128, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')
Convolutn_two_dim(submatrics=128, mask_shape=(5,5), S_rate=1, stuffing='valid', act_funtn='relunit')
MxPoolingLayer_two_dim(pool_sz=(4,4))</p>
        <p>Layer summary of the experimented network:
Layer (type)                         Output shape           Param #
Maxpoollayer@1 (MaxPoolingtwodim)    (Null, 126, 126, 8)    0
Maxpoollayer@2 (MaxPoolingtwodim)    (Null, 61, 61, 16)     0
Convolutn@5 (Convtn2Dm)              (Null, 59, 59, 32)
Convolutn@6 (Convtn2Dm)              (Null, 57, 57, 32)
Maxpoollayer@3 (MaxPoolingtwodim)    (Null, 28, 28, 32)     0
Maxpoollayer@4 (MaxPoolingtwodim)    (Null, 12, 12, 64)     0
loosingvalues@1 (Dropout)            (Null, 12, 12, 64)     0
flattenofcurvature@1 (Flatten)       (Null, 9216)           0
compact_1 (Compact)                  (Null, 128)            1179776
compact_2 (Compact)                  (Null, 4)              516
Convolution layer parameter counts: 80, 584, 1168, 2320, 9248, 18496, 36928
Overall attributes: 2,153,756
Learnable attributes: 2,153,756
Non-learnable attributes: 0</p>
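        <p>Rendered in standard Keras terms, the listed stack would look like the sketch below. This is our reading of the listing (submatrics as filters, mask_shape as kernel size, S_rate as stride, stuffing as padding, relunit as ReLU), not the authors' code, and the input size is an assumption chosen to make the shapes work out:</p>
        <preformat>
```python
import tensorflow as tf
from tensorflow.keras import layers

# The listing read as standard Keras layers. Taken literally (5x5 valid
# convolutions, 4x4 pooling), the stack needs a large input; 1024x1024 is
# an assumed value that keeps every layer's output non-empty.
model = tf.keras.Sequential([
    layers.Input(shape=(1024, 1024, 1)),
    layers.Conv2D(16, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.Conv2D(16, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.MaxPooling2D(pool_size=(4, 4)),
    layers.Conv2D(32, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.Conv2D(32, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.MaxPooling2D(pool_size=(4, 4)),
    layers.Conv2D(64, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.Conv2D(64, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.MaxPooling2D(pool_size=(4, 4)),
    layers.Conv2D(128, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.Conv2D(128, (5, 5), strides=1, padding="valid", activation="relu"),
    layers.MaxPooling2D(pool_size=(4, 4)),
])
print(model.output_shape)  # (None, 1, 1, 128)
```
        </preformat>
        <p>Note that the layer summary above is instead consistent with smaller kernels and 2x2 pooling (a 256-pixel input with 3x3 valid convolutions yields the 126x126 and 61x61 pool outputs), so the exact hyperparameters should be read from the summary rather than from this sketch.</p>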
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5 Results and Analysis</title>
      <p>Implementation results are elaborated in Table 2 for the hyperparameters of the experimented network.</p>
    </sec>
    <sec id="sec-4">
      <title>6 Conclusion</title>
      <p>The table shown above reports the processing time of the experimented convolutional neural network as 0.112 s, which is less than the processing times reported in previous research to date. Further, the classification time of the experimented network is 1.061 s, which is 19 s less than the time taken by ResNet50 on the same set of retinal images. The total time taken to process the retinal OCT images for classification is therefore 1.173 s, which is less than the time taken by ImageNet and ResNet50 on the same retinal OCT images.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Keel</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            <given-names>PY</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scheetz</surname>
            <given-names>J</given-names>
          </string-name>
          , et al.
          <article-title>Feasibility and patient acceptability of a novel artificial intelligence-based screening model for diabetic retinopathy at endocrinology outpatient services: a pilot study</article-title>
          .
          <source>Sci Rep</source>
          <year>2018</year>
          ;
          <volume>8</volume>
          :
          <fpage>4330</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Chen</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oguz</surname>
            <given-names>I</given-names>
          </string-name>
          , et al.
          <article-title>Automated segmentation of the choroid in EDI-OCT images with retinal pathology using convolution neural networks</article-title>
          .
          <source>Fetal Infant Ophthalmic Med Image Anal</source>
          <year>2017</year>
          ;
          <volume>10554</volume>
          :
          <fpage>177</fpage>
          -
          <lpage>84</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Worrall</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            <given-names>CM</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brostow</surname>
            <given-names>GJ</given-names>
          </string-name>
          .
          <source>Automated retinopathy of prematurity case detection with convolutional neural networks</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Brown</surname>
            <given-names>JM</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Campbell</surname>
            <given-names>JP</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beers</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>Fully automated disease severity assessment and treatment monitoring in retinopathy of prematurity using deep learning</article-title>
          .
          <source>Proceedings Volume 10579, Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Pai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hussain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Hebri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Lootah</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Dekhain</surname>
          </string-name>
          ,
          <article-title>"Volcano-like pattern in optical coherence tomography in chronic diabetic macular edema,"</article-title>
          <source>Saudi Journal of Ophthalmology</source>
          , vol.
          <volume>28</volume>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>159</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Abelian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. P. A.</given-names>
            <surname>Coolen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Crossman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.R.</given-names>
            <surname>Masegosa</surname>
          </string-name>
          ,
          <article-title>"Classification With binary classifications from a nonparametric predictive inference perspective,"</article-title>
          <source>Computational Statistics &amp; Data Analysis</source>
          , vol.
          <volume>71</volume>
          , pp.
          <fpage>789</fpage>
          -
          <lpage>802</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Sidibè</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sankar</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>LemaõÃtre</surname>
            <given-names>G</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rastgoo</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Massich</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheung</surname>
            <given-names>CY</given-names>
          </string-name>
          , et al.
          <article-title>An anomaly detection approach for the identification of DME patients using spectral-domain optical coherence tomography images</article-title>
          .
          <source>Computer Methods</source>
          and Programs in Biomedicine.
          <year>2017</year>
          ;
          <volume>139</volume>
          :
          <fpage>109</fpage>
          -
          <lpage>117</lpage>
          . https://doi.org/10.1016/j.cmpb.2016.11.001. PMID: 28187882.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Hussain</surname>
            <given-names>MA</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhuiyan</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ishikawa</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            <given-names>RT</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schuman</surname>
            <given-names>JS</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kotagiri</surname>
            <given-names>R.</given-names>
          </string-name>
          <article-title>An automated method for choroidal thickness measurement from Enhanced Depth Imaging Optical Coherence Tomography images</article-title>
          .
          <source>Computerized Medical Imaging and Graphics</source>
          .
          <year>2018</year>
          ;
          <volume>63</volume>
          :
          <fpage>41</fpage>
          -
          <lpage>51</lpage>
          . https://doi.org/10.1016/j.compmedimag.2018.01.001. PMID: 29366655.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ngo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Yih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ji</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Han</surname>
          </string-name>
          :
          <article-title>A study on automated segmentation of retinal layers in Deep Learning Methods</article-title>
          ,
          <source>Multimedia Tools and Applications (MTAP)</source>
          ,
          <volume>80</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>32</lpage>
          . https://doi.org/10.1007/s11042-020-09818-1.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Gurpreet</given-names>
            <surname>Kaur</surname>
          </string-name>
          , Prateek Agrawal, “
          <article-title>Optimisation of Image Fusion using Feature Matching Based on SIFT</article-title>
          and RANSAC”,
          <source>Indian Journal of Science and Technology</source>
          ,
          <volume>9</volume>
          (
          <issue>47</issue>
          ), pp
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Abhishek</given-names>
            <surname>Sharma</surname>
          </string-name>
          , Prateek Agrawal, Vishu Madaan and Shubham Goyal,
          <article-title>“Prediction on Diabetes Patient's Hospital Readmission Rates”</article-title>
          ,
          <source>3rd International Conference on Advances Informatics on Computing Research (ICAICR'19)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          ,
          <year>Jul 2019</year>
          ,
          ACM-ICPS.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bhatnagar</surname>
          </string-name>
          ,
          <article-title>Automatic classification of retinal pathology in optical coherence tomography scan images using convolutional neural network</article-title>
          ,
          <source>Journal of Advanced Research in Dynamical and Control Systems</source>
          <volume>12</volume>
          (
          <issue>3</issue>
          ),
          <fpage>936</fpage>
          -
          <lpage>942</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bhatnagar</surname>
          </string-name>
          ,
          <article-title>Detecting outer edges in retinal OCT images of diseased eyes using graph cut Method with weighted edges</article-title>
          ,
          <source>Journal of Advanced Research in Dynamical and Control Systems</source>
          <volume>12</volume>
          (
          <issue>3</issue>
          ),
          <fpage>943</fpage>
          -
          <lpage>950</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>P.</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <article-title>Automatic segmentation of pathological retinal layer using eikonal equation</article-title>
          ,
          <source>11th International Conference on Advances in Computing, Control, and Telecommunication Technologies</source>
          ,
          (ACT)
          <year>2020</year>
          , pp.
          <fpage>43</fpage>
          -
          <lpage>49</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>