<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Advanced Deep Learning Methodologies for Skin Cancer Classification in Prodromal Stages</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Muhammad Ali Farooq</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Asma Khatoon</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viktor Varkarakis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter Corcoran</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>National University of Ireland Galway (NUIG), H91 CF50</institution>
          ,
          <country country="IE">IRELAND</country>
        </aff>
      </contrib-group>
      <fpage>6</fpage>
      <lpage>17</lpage>
      <abstract>
<p>Technology-assisted platforms provide reliable solutions in almost every field these days. One such important application in the medical field is skin cancer classification in the preliminary stages, which requires sensitive and precise data analysis. For the proposed study the Kaggle skin cancer dataset is utilized. The proposed study consists of two main phases. In the first phase, the images are preprocessed to remove clutter, thus producing a refined version of the training images. To achieve this, a sharpening filter is applied, followed by a hair removal algorithm. Different image quality measurement metrics including Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), Maximum Absolute Squared Deviation (MXERR) and Energy Ratio/Ratio of Squared Norms (L2RAT) are used to compare the overall image quality before and after applying the preprocessing operations. The results from these image quality metrics show that image quality is not compromised but rather improved by applying the preprocessing operations. The second phase of the proposed research work incorporates deep learning methodologies, which play an imperative role in the accurate, precise and robust classification of the lesion mole. This is demonstrated using two state-of-the-art deep learning models: Inception-v3 and MobileNet. The experimental results demonstrate a notable improvement in training and validation accuracy for both networks when using the refined version of the images; however, the Inception-v3 network achieved better validation accuracy and was therefore selected for evaluation on the test data. The final test accuracy using the state-of-the-art Inception-v3 network was 86%.</p>
      </abstract>
      <kwd-group>
        <kwd>Melanoma</kwd>
        <kwd>CNN</kwd>
        <kwd>DNN</kwd>
        <kwd>Dermoscopy</kwd>
        <kwd>Inception-v3</kwd>
        <kwd>MobileNet</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
Cancer is nowadays one of the fastest growing groups of diseases throughout the
world, and skin cancer is among the most common of them. According to published
statistics, the incidence of skin cancer is increasing at an alarming rate each year [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Modern
medical science and treatment procedures show that if skin cancer is detected
in its initial phase, it is treatable by appropriate medical measures, which
include laser surgery or excision of the affected part of the skin, and this could ultimately save a
patient’s life. Skin cancer has two main forms, malignancy and
melanoma, of which melanoma is fatal and carries the highest risk. In most cases, a
malignant mole is clearly visible on the patient’s skin and is often identified by the
patients themselves.
      </p>
      <p>
Dermoscopic diagnosis refers to a non-invasive skin imaging method, which has
become a core tool in the diagnosis of melanoma and other pigmented skin lesions.
However, performing dermoscopy using conventional methods may lower the
diagnostic accuracy and increase the chance of error. These errors are
generally caused by the complexity of lesion structures and the subjectivity of visual
interpretation [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>A Computer-Aided Diagnosis (CAD) system is a digitized platform based on
advanced computer vision, deep learning, and pattern recognition techniques, here applied to skin
cancer classification. For the proposed study we have designed a CAD system for skin
cancer classification by utilizing advanced deep neural networks. The system consists
of the following steps. First, the digital images are preprocessed: clutter such as hair is
removed from the part of the skin where the pigmented mole is
present, and a sharpening filter is applied to make that area clearer and more visible, thus
minimizing the chance of error. The next essential step comprises the feature extraction
and classification process, which uses deep learning techniques to produce results for the cases
under consideration. Section 2 presents the background and related work
and highlights the medical aspects of skin cancer. Section 3 describes the
detailed methodology of the proposed system, whereas Section 4 presents the
implementation and experimental results of the proposed study. Section 5 draws the overall
conclusion of the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Background/ Related Work</title>
      <p>
The human skin is the largest organ of the human body. It covers all other organs
of the body, guards the entire body against microbes, bacteria, and ultraviolet radiation,
helps to regulate body temperature, and permits the sensations of touch, heat, and cold
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>Skin Moles and Skin Cancer</title>
        <p>
A mole or nevus on human skin can be described as a dark, raised spot composed of
skin cells that grow in a group rather than individually. These cells are generally
known as melanocytes and are responsible for producing melanin, the pigment that colours
our skin. Moles develop on human skin predominantly
because of direct sun exposure or extreme injury. The fair-skinned population
has a greater ratio of skin moles due to the lower quantity of melanin (natural pigment)
in their skin [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. There are three different kinds of malignant skin growth:
Basal Cell Carcinoma (BCC), Squamous Cell Carcinoma (SCC), and Melanoma.
Malignancy is a description of the “stage” of cancer. All of these malignant growths are
critical; however, melanoma carries the highest risk level and is discovered more
frequently in individuals under 50 years of age for men and over 50 years for women
[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Related Work/ Previous Studies</title>
        <p>
The study by Simon Kalouche [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] utilizes computer-vision-based deep
learning methods to detect skin cancer, and more specifically melanoma. The dataset was
trained on three different learning models: a logistic regression model, a fine-tuned
VGG-16, and a multi-layer perceptron deep neural network, achieving a significant
level of classification accuracy. Their results show that the algorithm’s ability to
segment moles and classify skin lesions reaches 70% to 78%. Md Zahangir Alom et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] presented
the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) method
for breast cancer classification on BreakHis and other publicly available datasets.
They compared their experimental results against existing machine learning techniques
in terms of patch-based, image-based, patient-level and image-level classification.
Their IRRCNN model provides efficient classification in terms of Area Under the
Curve (AUC), global accuracy and the ROC curve. Andre Esteva et al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] demonstrated
classification of skin cancer using a single CNN, trained end to
end directly on image data using only pixels and disease labels as inputs. In their work,
they utilized a large dataset of clinical images covering several diseases.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Methodology</title>
      <p>In the proposed study, an efficient skin cancer diagnosis system has been implemented
for precise classification between malignant melanoma and benign cases. The complete
algorithm consists of several steps, from the input phase of applying image
preprocessing through to the analysis of the case under consideration in the form of the
probability of lesion malignancy. Fig. 1 shows the complete workflow of the proposed
algorithm.</p>
      <sec id="sec-3-1">
        <title>Image Preprocessing</title>
        <p>
For the proposed study, the Kaggle skin cancer dataset [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] consisting of processed skin
cancer images from the ISIC Archive [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] has been utilized. The dataset has a total of 2637
training images and 660 testing images with a resolution of 224 x 224. It consists of
two main classes, melanoma and benign cases. For image preprocessing,
two major operations have been applied: an initial sharpening filter
followed by a hair removal filter using the DullRazor software [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. These were selected in order
to remove clutter. The results of the image preprocessing operations on two random
sample cases are shown in Fig. 2.
        </p>
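        <p>The DullRazor tool itself is standalone software; as an illustration of the two preprocessing steps, a minimal Python/OpenCV sketch in the DullRazor style is given below. The kernel sizes, threshold and inpainting radius are illustrative assumptions, not values taken from the paper.</p>
        <preformat>
import cv2
import numpy as np

def preprocess(image_bgr):
    """Sharpen a lesion image, then remove hair in a DullRazor-like way."""
    # Step 1: sharpening with a standard 3x3 sharpening kernel.
    sharpen_kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(image_bgr, -1, sharpen_kernel)

    # Step 2: hair removal. A morphological black-hat highlights thin dark
    # structures (hairs); thresholding yields a hair mask; inpainting then
    # fills the masked pixels from the surrounding skin.
    gray = cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(sharpened, hair_mask, 3, cv2.INPAINT_TELEA)
        </preformat>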
        <p>It is noteworthy that the image quality is refined after applying the image preprocessing
operations. This is shown in Section 4 of the paper, where the results from four different
image quality metrics, Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE),
Maximum Absolute Squared Deviation (MXERR) and Ratio of Squared
Norms (L2RAT), on both the ground truth images and the preprocessed images are presented.</p>
        <p>Fig. 1. Workflow diagram of the proposed method</p>
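        <p>These four metrics correspond to the outputs of MATLAB's measerr function. A small NumPy sketch of the same quantities, assuming 8-bit images (peak value 255) and a nonzero MSE, might look as follows:</p>
        <preformat>
import numpy as np

def quality_metrics(reference, processed):
    """PSNR, MSE, MXERR and L2RAT between a reference and a processed image."""
    ref = reference.astype(np.float64)
    proc = processed.astype(np.float64)
    err = ref - proc
    mse = np.mean(err ** 2)                       # Mean Squared Error
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)      # Peak Signal to Noise Ratio, in dB
    mxerr = np.max(np.abs(err))                   # maximum absolute deviation (MXERR)
    l2rat = np.sum(proc ** 2) / np.sum(ref ** 2)  # ratio of squared norms
    return psnr, mse, mxerr, l2rat
        </preformat>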
      </sec>
      <sec id="sec-3-2">
        <title>Feature Extraction and Classification</title>
        <p>
In the next step, the processed images are fed to state-of-the-art deep neural networks
in order to perform the feature extraction and classification steps. In this work, the
Inception-v3 and MobileNet deep learning architectures are utilized. These architectures
play a vital role by extracting feature values from raw pixel images. Inception-v3
achieves state-of-the-art performance in classification tasks. It is made up of 48 layers
stacked on top of each other [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. The Inception-v3 model was initially trained on
1.2 million images from ImageNet [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] covering 1000 different categories. These pre-trained
layers have strong generalization power and are able to find and summarize
information that helps to classify most images from real-world environments.
For the proposed study we have utilized this network for our custom classification task
by retraining the final layer of the network, thus updating and fine-tuning the softmax
layer through transfer learning. This was preferred because the amount of
data available for this task is limited and training Inception-v3 from scratch
would require considerable time and computational resources. Therefore, by fine-tuning the
Inception-v3 model, we take advantage of its powerful pre-trained layers and are thus
able to provide satisfactory accuracy even with a limited amount of data.</p>
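          <p>A minimal TensorFlow/Keras sketch of this transfer-learning setup is shown below: an ImageNet-pretrained Inception-v3 is frozen and only a new two-class softmax layer is trained. The paper's exact retraining script is not specified, so the choices besides the final softmax layer and the 0.005 learning rate are assumptions.</p>
          <preformat>
import tensorflow as tf

# Inception-v3 pretrained on ImageNet, without its original 1000-class head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained feature extractor frozen

# New softmax classification layer for the two classes (benign, malignant).
outputs = tf.keras.layers.Dense(2, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.005),
    loss="categorical_crossentropy",
    metrics=["accuracy"])
          </preformat>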
          <p>MobileNet is another of the finest deep learning architectures, proposed by Howard et
al. in 2017 [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] and specifically designed for mobile and embedded vision applications.
MobileNet counts as a lightweight deep learning architecture. It uses depthwise
separable convolutions, which means it performs a single convolution on each colour channel
rather than combining all three and flattening them. This has the effect of filtering the input
channels. For our experiments, the networks were trained with two different types of
data: the original images, and the images after
applying the preprocessing operations to them. The training and validation accuracy
were examined in order to study the effect of training the networks with the two
different types of data. Finally, the accuracy on the test set is calculated in order to
evaluate the overall performance of the classifiers.</p>
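          <p>As an illustration of this building block, the sketch below factors a standard convolution into a per-channel (depthwise) 3 x 3 filter followed by a 1 x 1 pointwise convolution that mixes the filtered channels, which is what makes MobileNet lightweight. The layer sizes are illustrative, not taken from the paper.</p>
          <preformat>
import tensorflow as tf

def depthwise_separable_block(x, filters, stride=1):
    """One MobileNet-style block: depthwise conv, then 1x1 pointwise conv."""
    # Depthwise: one 3x3 filter per input channel, no cross-channel mixing.
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    # Pointwise: a 1x1 convolution combines the filtered channels.
    x = tf.keras.layers.Conv2D(filters, 1, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)
          </preformat>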
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Implementation and Experimental Results</title>
      <p>
The overall algorithm was implemented using Matlab R2018a for computing the image
quality metrics and TensorFlow [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] for training the classifiers. The system was trained
and tested on a sixth-generation Core i7 machine equipped with an NVIDIA RTX 2080
Graphics Processing Unit (GPU) with 8 GB of dedicated graphics memory. The first
part of the experimental results presents the image quality metrics measured for both
benign and malignant melanoma cases before and after applying the image
preprocessing operations; these are displayed in Table 1.
      </p>
      <p>Table 1. Image quality metrics (PSNR, MSE, MXERR and L2RAT) measured on 224 x 224 sample images before and after the preprocessing operations.</p>
      <p>The experimental results show clearly that image quality is not compromised but rather
improved, which is evident from the higher PSNR values and the other metrics after applying the
image preprocessing operations, especially the hair removal filter. The image quality
metrics were computed on more than fifty images and the same observations were
made. The second part of the experiments comprises the training of the classifiers
using the two state-of-the-art deep learning networks, i.e. Inception-v3 and MobileNet.
For Inception-v3 the data was resized to 299 x 299, since the network has an image
input size of 299 by 299. The classifiers were trained on both sets of images, i.e. the original
(ground truth) images and the images after applying the preprocessing operations to them.
Both networks were trained using the same hyperparameters. The learning rate was
set to 0.005 with a batch size of 32, and the total number of iterations was set to 5000. The training
data was split in the ratio of 75% and 25% for training and validation images
respectively. Fig. 3 and Fig. 4 display the training and validation accuracy graphs along with
the error rate (cross-entropy) graphs of the MobileNet and Inception-v3 networks.
The accuracy graphs in Fig. 3 show that training and validation accuracy before
applying the image preprocessing were 86% and 79.8%, and they increased to 89% and
85.9% by using the refined version of the images obtained after applying the image
preprocessing operations. Similarly, the validation error rate decreased from 61% to
32% by using the refined version of the images.</p>
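      <p>A hedged sketch of this training configuration (75/25 train/validation split, batch size 32, 5000 iterations) is given below, reusing the fine-tuned model from the sketch in Section 3.2. The directory layout and generator settings are assumptions, as the paper does not describe the exact input pipeline.</p>
      <preformat>
import tensorflow as tf

# 75/25 split of the training images into training and validation subsets.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, validation_split=0.25)

train = datagen.flow_from_directory(
    "data/train", target_size=(299, 299), batch_size=32, subset="training")
val = datagen.flow_from_directory(
    "data/train", target_size=(299, 299), batch_size=32, subset="validation")

# 5000 iterations at batch size 32, expressed as 5 epochs of 1000 steps;
# `model` is the fine-tuned Inception-v3 from the earlier sketch.
model.fit(train, validation_data=val, epochs=5, steps_per_epoch=1000)
      </preformat>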
      <p>The accuracy graphs in Fig. 4 show that training and validation accuracy before
applying the image preprocessing were 88.3% and 84.2%. Using the refined version of the
images, the training accuracy tends to remain the same, though the validation accuracy
increased to 86.1%. Similarly, the validation error rate decreased from 36% to
32.3% by using the refined version of the images.</p>
      <p>Overall, in both networks, significant improvements were measured after using the
refined version of the images. The experimental results show that the Inception-v3 network
was able to achieve better validation accuracy using the refined version of the training data,
i.e. 86.1%, thus the Inception-v3 network was selected for evaluation on the test
data. For evaluating the classifier on the test data, we picked numerous cases
from the test set from both classes, benign and malignant melanoma, among which
visually complex and challenging test cases were selected for the proposed research work.
It is pertinent to mention that the network was tested using the original images
(unrefined version) to test the overall effectiveness of the classifier. Fig. 5 shows some of
the results predicted correctly on test images. Table 2 illustrates the complete results on the
visually complex test cases selected for the proposed study, which are further used
to evaluate the overall testing accuracy, sensitivity (true positive rate), specificity (true
negative rate) and precision metrics. The rows highlighted in red indicate the
misclassified test cases when compared with the ground truth results.</p>
      <table-wrap id="tbl2">
        <label>Table 2</label>
        <caption>
          <p>Classification results (with confidence scores) of the Inception-v3 network trained on the original images and on the processed images, against the ground truth labels, for the visually complex test cases.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Predicted results using Inception-v3 network trained on original images</th>
              <th>Predicted results using Inception-v3 network trained on processed images</th>
              <th>Ground truth results</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>Benign – Low risk – 97.8%</td><td>Benign – Low risk – 84.6%</td><td>Low risk</td></tr>
            <tr><td>Malignant – High risk – 91.8%</td><td>Malignant – High risk – 89.1%</td><td>High risk</td></tr>
            <tr><td>Benign – Low risk – 98.4%</td><td>Benign – Low risk – 96.9%</td><td>Low risk</td></tr>
            <tr><td>Benign – Low risk – 98.4%</td><td>Benign – Low risk – 98.4%</td><td>Low risk</td></tr>
            <tr><td>Malignant – High risk – 96.4%</td><td>Malignant – High risk – 95.7%</td><td>Low risk</td></tr>
            <tr><td>Malignant – High risk – 98.7%</td><td>Malignant – High risk – 96.2%</td><td>High risk</td></tr>
            <tr><td>Malignant – High risk – 98.8%</td><td>Malignant – High risk – 97.8%</td><td>High risk</td></tr>
            <tr><td>Malignant – High risk – 99.4%</td><td>Malignant – High risk – 99.3%</td><td>High risk</td></tr>
            <tr><td>Benign – Low risk – 71.2%</td><td>Benign – Low risk – 60.7%</td><td>Low risk</td></tr>
            <tr><td>Malignant – High risk – 85.9%</td><td>Malignant – High risk – 76.8%</td><td>Low risk</td></tr>
            <tr><td>Malignant – High risk – 99.5%</td><td>Malignant – High risk – 99.3%</td><td>High risk</td></tr>
            <tr><td>Malignant – High risk – 98.5%</td><td>Malignant – High risk – 99.2%</td><td>High risk</td></tr>
            <tr><td>Malignant – High risk – 70.9%</td><td>Malignant – High risk – 87.4%</td><td>High risk</td></tr>
            <tr><td>Benign – Low risk – 74.8%</td><td>Malignant – High risk – 92.3%</td><td>High risk</td></tr>
            <tr><td>Malignant – High risk – 97.5%</td><td>Malignant – High risk – 98.7%</td><td>High risk</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>
        The overall performance of the Inception-v3 network on test data has been evaluated
using five quantitative measures: accuracy, sensitivity, specificity, precision and F1
score [
        <xref ref-type="bibr" rid="ref15 ref19">15,19</xref>
        ]. These measures are computed using the following forms:
      </p>
      <disp-formula id="eq1"><tex-math>ACC = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \qquad (1)</tex-math></disp-formula>
      <disp-formula id="eq2"><tex-math>TPR = \frac{TP}{TP + FN} \times 100 \qquad (2)</tex-math></disp-formula>
      <disp-formula id="eq3"><tex-math>TNR = \frac{TN}{TN + FP} \times 100 \qquad (3)</tex-math></disp-formula>
      <disp-formula id="eq4"><tex-math>PPV = \frac{TP}{TP + FP} \times 100 \qquad (4)</tex-math></disp-formula>
      <disp-formula id="eq5"><tex-math>F1 = 2 \times \frac{PPV \times TPR}{PPV + TPR} \qquad (5)</tex-math></disp-formula>
      <p>where TP, FP, FN and TN refer to true positives, false positives, false negatives and true
negatives. ACC in (1) denotes the overall testing accuracy, TPR in (2) the true positive
rate, TNR in (3) the true negative rate, and PPV in (4) the positive predictive value.</p>
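      <p>A short sketch of these computations from the confusion-matrix counts (the function and variable names are illustrative):</p>
      <preformat>
def evaluate(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, precision (as percentages) and F1."""
    acc = (tp + tn) / (tp + tn + fp + fn) * 100  # Eq. (1), overall accuracy
    tpr = tp / (tp + fn) * 100                   # Eq. (2), sensitivity / recall
    tnr = tn / (tn + fp) * 100                   # Eq. (3), specificity
    ppv = tp / (tp + fp) * 100                   # Eq. (4), precision
    f1 = 2 * ppv * tpr / (ppv + tpr)             # Eq. (5), F1 score
    return acc, tpr, tnr, ppv, f1
      </preformat>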
      <p>Table 3 illustrates the results of four of these quantitative measures, accuracy,
sensitivity, specificity and precision, for the Inception-v3 network before and after using the
image preprocessing operations on the test data. It can be observed that the testing accuracy
increased to 86% by training the classifier using the refined version of the images.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>The main purpose of the proposed study was to improve the overall accuracy of
two state-of-the-art deep learning networks, Inception-v3 and MobileNet, by
using the refined version of the skin cancer images obtained after applying image
preprocessing operations. The experiments were conducted on the Kaggle Skin Cancer
Dataset by applying an initial sharpening filter and a hair removal algorithm. Initially, we
applied these algorithms as image preprocessing mechanisms to remove the clutter, thus
producing the refined version of the images. Different image quality metrics including Peak
Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), Maximum Absolute Squared
Deviation (MXERR) and Energy Ratio/Ratio of Squared Norms (L2RAT) were used to
compare the image quality before and after applying the preprocessing techniques.
These metrics show that image quality was improved after applying the sharpening filter
and hair removal algorithm. In the second phase of the experimental results, we observed
substantial improvement in training, validation and test accuracy after applying the image
preprocessing operations. Thus, we achieved an overall test accuracy of 86% using the
state-of-the-art Inception-v3 network by fine-tuning the last layer of the network with the
refined version of the Kaggle skin cancer training dataset.</p>
        <p>
For future work, more image preprocessing techniques, such as neural-network-based
super-resolution algorithms, could be used to improve the image
quality further. Moreover, other state-of-the-art deep neural networks such as
ResNet-101 [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] and Xception [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] could be utilized in order to improve the accuracy
levels.
        </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Skin</given-names>
            <surname>Cancer</surname>
          </string-name>
          Facts and Figures, Web Link: http://www.skincancer.
          <article-title>org/skin-cancer-information/skin-cancer-facts, (</article-title>
          <source>Last accessed on 04th Oct</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. Organs of the Body, Web Link: http://hubpages.com/education/Human-Skin-
          <article-title>The-largestorgan-of-the-</article-title>
          <string-name>
            <surname>Integumentary-System</surname>
          </string-name>
          ,
          <source>(Last accessed on 05th Oct</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. Types of Skin Moles, Web Link: https://skinvision.com/en/articles/types
          <article-title>-of-skin-molesand-how-to-know-if-they-re-safe, (</article-title>
          <source>Last accessed on 01st Oct</source>
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Skin</given-names>
            <surname>Cancer Risk Factors</surname>
          </string-name>
          . Web Link: http://www.cancer.
          <article-title>org/cancer/skincancer-melanoma/detailedguide/melanoma-skin-cancer-risk-factors, (</article-title>
          <source>Last Accessed on 01st Oct</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kalouche</surname>
            , Simon,
            <given-names>Andrew</given-names>
          </string-name>
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>and John</given-names>
          </string-name>
          <string-name>
            <surname>Duchi</surname>
          </string-name>
          .
          <article-title>"Vision-based classification of skin cancer using deep learning." 2015, conducted on Stanfords Machine Learning course (CS 229) taught (</article-title>
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Alom</surname>
            ,
            <given-names>Md</given-names>
          </string-name>
          <string-name>
            <surname>Zahangir</surname>
          </string-name>
          , et al.
          <article-title>"Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network." Journal of digital imaging (</article-title>
          <year>2019</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Esteva</surname>
          </string-name>
          ,
          <string-name>
            <surname>Andre</surname>
          </string-name>
          , et al.
          <article-title>"Dermatologist-level classification of skin cancer with deep neural networks</article-title>
          .
          <source>" Nature</source>
          <volume>542</volume>
          .7639 (
          <year>2017</year>
          ):
          <fpage>115</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Kaggle</given-names>
            <surname>Skin Cancer Dataset</surname>
          </string-name>
          , Web Link: https://www.kaggle.com/fanconic/skin
          <article-title>-cancer-malignant-vs-benign, (</article-title>
          <source>Last accessed on 24th September</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>ISIC</given-names>
            <surname>Archive</surname>
          </string-name>
          <string-name>
            <surname>Dataset</surname>
          </string-name>
          ,Web Link: https:// www.isic archive.com,
          <source>(Last accessed on 24th September</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Lee</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gallagher</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coldman</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McLean D. DullRazor</surname>
          </string-name>
          <article-title>: “A software approach to hair removal from images” Published in the Computers in Biology and Medicine, 1997</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Szegedy</surname>
          </string-name>
          ,
          <string-name>
            <surname>Christian</surname>
          </string-name>
          , et al.
          <article-title>"Rethinking the inception architecture for computer vision</article-title>
          .
          <source>" Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          .
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Krizhevsky</surname>
            , Alex,
            <given-names>Ilya</given-names>
          </string-name>
          <string-name>
            <surname>Sutskever</surname>
            , and
            <given-names>Geoffrey E.</given-names>
          </string-name>
          <string-name>
            <surname>Hinton</surname>
          </string-name>
          .
          <article-title>"Imagenet classification with deep convolutional neural networks</article-title>
          .
          <source>" Advances in neural information processing systems</source>
          .
          <source>2012</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Howard</surname>
          </string-name>
          , Andrew G., et al.
          <article-title>"Mobilenets: Efficient convolutional neural networks for mobile vision applications</article-title>
          .
          <source>" arXiv preprint arXiv:1704.04861</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. TensorFlow Deep Learning Platform, Web Site: https:// www.tensorflow.org,
          <source>(Last accessed on 27th September</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. M.
          <article-title>Stojanovi et</article-title>
          .al., “
          <article-title>Understanding sensitivity, specificity, and predictive values”</article-title>
          ,
          <source>Vojnosanit Pregl</source>
          , vol.
          <volume>71</volume>
          ,
          <issue>no11</issue>
          , pp.
          <fpage>1062</fpage>
          -
          <lpage>1065</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <surname>Kaiming</surname>
          </string-name>
          , et al.
          <article-title>"Deep residual learning for image recognition</article-title>
          .
          <source>" Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          .
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Chollet</surname>
          </string-name>
          , François.
          <article-title>"Xception: Deep learning with depthwise separable convolutions</article-title>
          .
          <source>" Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          .
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Doi</surname>
          </string-name>
          , Kunio.
          <article-title>"Computer-aided diagnosis in medical imaging: historical review, current status, and future potential." Published in Computerized medical imaging</article-title>
          and
          <source>graphics 31.4</source>
          (
          <year>2007</year>
          ):
          <fpage>198</fpage>
          -
          <lpage>211</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <article-title>F1 Score in Machine Learning</article-title>
          , Web Link: https://towardsdatascience.com
          <article-title>/multi-class-metrics-made-simple-part-ii-the-</article-title>
          <string-name>
            <surname>f1-</surname>
          </string-name>
          score-ebe8b2c2ca1,
          <source>(Last accessed on 29th Oct</source>
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>