<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Advanced Deep Learning Methodologies for Skin Cancer Classification in Prodromal Stages</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Muhammad</forename><forename type="middle">Ali</forename><surname>Farooq</surname></persName>
							<email>m.farooq3@nuigalway.ie</email>
							<affiliation key="aff0">
								<orgName type="institution">National University of Ireland (NUIG</orgName>
								<address>
									<postCode>H91CF50</postCode>
									<settlement>Galway</settlement>
									<country key="IE">IRELAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Asma</forename><surname>Khatoon</surname></persName>
							<email>a.khatoon1@nuigalway.ie</email>
							<affiliation key="aff0">
								<orgName type="institution">National University of Ireland (NUIG</orgName>
								<address>
									<postCode>H91CF50</postCode>
									<settlement>Galway</settlement>
									<country key="IE">IRELAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Viktor</forename><surname>Varkarakis</surname></persName>
							<email>v.varkarakis1@nuigalway.ie</email>
							<affiliation key="aff0">
								<orgName type="institution">National University of Ireland (NUIG</orgName>
								<address>
									<postCode>H91CF50</postCode>
									<settlement>Galway</settlement>
									<country key="IE">IRELAND</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Peter</forename><surname>Corcoran</surname></persName>
							<email>peter.corcoran@nuigalway.ie</email>
							<affiliation key="aff0">
								<orgName type="institution">National University of Ireland (NUIG</orgName>
								<address>
									<postCode>H91CF50</postCode>
									<settlement>Galway</settlement>
									<country key="IE">IRELAND</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Advanced Deep Learning Methodologies for Skin Cancer Classification in Prodromal Stages</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">657735AC5417D5F75D8BA5328E08668E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T23:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Melanoma</term>
					<term>CNN</term>
					<term>DNN</term>
					<term>Dermoscopy</term>
					<term>Inception-v3</term>
					<term>MobileNet</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Technology-assisted platforms provide reliable solutions in almost every field these days. One such important application in the medical field is the skin cancer classification in preliminary stages that need sensitive and precise data analysis. For the proposed study the Kaggle skin cancer dataset is utilized. The proposed study consists of two main phases. In the first phase, the images are preprocessed to remove the clutters thus producing a refined version of training images. To achieve that, a sharpening filter is applied followed by a hair removal algorithm. Different image quality measurement metrics including Peak Signal to Noise (PSNR), Mean Square Error (MSE), Maximum Absolute Squared Deviation (MXERR) and Energy Ratio/ Ratio of Squared Norms (L2RAT) are used to compare the overall image quality before and after applying preprocessing operations. The results from the aforementioned image quality metrics prove that image quality is not compromised however it is upgraded by applying the preprocessing operations. The second phase of the proposed research work incorporates deep learning methodologies that play an imperative role in accurate, precise and robust classification of the lesion mole. This has been reflected by using two state of the art deep learning models: Inception-v3 and MobileNet. The experimental results demonstrate notable improvement in train and validation accuracy by using the refined version of images of both the networks, however, the Inception-v3 network was able to achieve better validation accuracy thus it was finally selected to evaluate it on test data. The final test accuracy using state of art Inception-v3 network was 86%.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Cancer nowadays is one of the greatest growing groups of diseases throughout the world, among which skin cancer is most common of them. According to stats and figures, the annual rate of skin cancer is increasing at an alarming rate each year <ref type="bibr" target="#b0">[1]</ref>. The modern medical science and treatment procedures prove that if skin cancer is detected in its initial phase then it is treatable by using appropriate medical measures which includes laser surgery or removing that part of the skin which ultimately could save a patient's life. Skin cancer has two main stages which include malignancy and melanoma among which melanoma is fatal and comes with the highest risk. In most cases, malignant mole is clearly visible on the patient's skin which is often identified by the patients themselves.</p><p>Dermoscopic diagnosis refers to a non-invasive skin imaging method, which has become a core tool in the diagnosis of melanoma and other pigmented skin lesions. However, performing dermoscopy using conventional methods may lower down the diagnostic accuracy which can lead to more chances of errors. These errors are generally caused by the complexity of lesion structures and the subjectivity of visual interpretations <ref type="bibr" target="#b17">[18]</ref>.</p><p>Computer-Aided Diagnosis (CAD) system is a type of digitized platform based on advanced computer vision, deep learning, and pattern recognition techniques for skin cancer classification. For the proposed study we have designed a CAD system for skin cancer classification by utilizing advanced deep neural networks. The system consists of the following steps: Firstly, a preprocessing of the digital images which includes removing clutter such as hair from that part of the skin where the pigmented mole is present and applying a sharpening filter to make that area more clear and visible thus minimizing the chances of error. The next essential step includes the feature extraction and classification process to extract the results for the cases under consideration by utilizing deep learning techniques. Section 2 presents the background and related study and highlights the medical aspects regarding skin cancer. Section 3 describes the detailed methodology of the proposed system whereas Section 4 presents the implementation and experimental results of the proposed study. Section 5 draws the overall conclusion of the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background/ Related Work</head><p>The human skin is the largest organ of the overall human body. It covers all other organs of the body. It guards the entire body from microbes, bacterium, ultraviolet radiation, helps to regulate body temperature and permits the sensations of touch, heat, and cold <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Skin Moles and Skin Cancer</head><p>Mole or nevus on human skin can be described as a dark, erected spot comprised of skin cells that are grown in a group rather than individually. These cells are generally known as melanocytes which are responsible for producing melanin, the pigment color in our skin. The main reason behind mole development on human skin is predominantly because of direct sun exposure and any kind of extreme injury. The fair skin population has a greater ratio of skin moles due to the lower quantity of melanin (natural pigments) in their skins <ref type="bibr" target="#b2">[3]</ref>. There are three different kinds of skin malignant growth, which include Basal Cell Carcinoma (BCC), Squamous Cell Carcinoma (SCC), and Melanoma. Malignancy is a description of the "stage" of cancer. These malignant growths are critical however, Melanoma comes with the highest risk level and it is discovered more frequently in individuals maturing under 50 years for men and over 50 years for women <ref type="bibr" target="#b3">[4]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Related Work/ Previous Studies</head><p>The study proposed by Simon Kalouche utilizes <ref type="bibr" target="#b4">[5]</ref>  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Methodology</head><p>In the proposed study, an efficient skin cancer diagnosis system has been implemented for precise classification between malignant melanoma and benign cases. The complete algorithm consists of several steps starting from the input phase of applying image preprocessing ranging to the analysis of the case under consideration in the form of the probability of lesion Malignancy. Fig. <ref type="figure" target="#fig_0">1</ref> shows the complete workflow of the proposed algorithm.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Image Preprocessing</head><p>For the proposed study, the Kaggle skin cancer dataset <ref type="bibr" target="#b7">[8]</ref> consisting of processed skin cancer images of ISIC Archive <ref type="bibr" target="#b8">[9]</ref> has been utilized. The dataset has a total of 2637 training images and 660 testing images with a resolution of 224 x 224. It is consists of two main classes which include melanoma and benign cases. For image preprocessing two major operations have been applied which includes an initial sharpening filter followed by hair removal filter using dull razor software <ref type="bibr" target="#b9">[10]</ref>. These were selected in order to remove the clutter. The results of the image preprocessing operations on two random sample cases are shown in Fig. <ref type="figure" target="#fig_1">2</ref>.</p><p>It is noteworthy that image quality is refined after applying the image preprocessing operations. This is shown in Section 4 of the paper where the results from four different image quality metrics Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), Maximum Absolute Squared Deviation (MXERR) (MXERR) and Ratio of Squared Norms (L2RAT) on both ground truth images, and preprocessed images are presented.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Feature Extraction and Classification</head><p>In the next step, the processed images are fed to state-of-the-art deep neural networks in order to perform the feature extraction and classification steps. In this work, the Inception-v3 and MobileNet deep learning architectures are utilized. These architectures play a vital role by extracting feature values from raw pixel images. The Inception-v3 has state of the art performance in the classification task. It is made up of 48 layers stacked on top of each other <ref type="bibr" target="#b10">[11]</ref>. The Inception-v3 model was initially trained using 1.2 million images from Imagenet <ref type="bibr" target="#b11">[12]</ref> of 1000 different categories. These pre-trained layers have a strong generalization power and they are able to find and summarize information that will help to classify most of the images from the real-world environment.</p><p>For the proposed study we have utilized this network for our custom classification task by retraining the final layer of the network thus updating and finetuning the softmax layer, by applying the method of transfer learning. This was preferred as the amount of data available for this task is limited and training the Inception-v3 from the beginning would require a lot of time and computational resources. Therefore, by fine-tuning the inception v3 model, we take advantage of its powerful pre-trained layers and thus being able to provide satisfying accuracy results even with a limited amount of data. Mo-bileNet is one of the other finest deep learning architectures proposed by Howard et al. 2017 <ref type="bibr" target="#b12">[13]</ref> specifically designed for mobile and embedded vision applications. Mo-bileNet is counted as a lightweight deep learning architecture. It uses depth-wise separable convolutions that means it performs a single convolution on each color channel rather than combining all three and flattening it. This has the effect of filtering the input channels. For our experiments, the networks were trained with two different types of data. The networks were trained with the original images and also with the images after applying the preprocessing operations to them. The training and validation accuracy were examined in order to study the effect of the training on the networks with the two different types of data. Finally, the accuracy on the test set is calculated in order to evaluate the overall performance of the classifiers</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Implementation and Experimental Result</head><p>The overall algorithm was implemented using Matlab R2018a for computing image quality metrics and TensorFlow <ref type="bibr" target="#b13">[14]</ref> for training the classifiers. The system was trained and tested on a Core I7 sixth-generation machine equipped with NVIDIA RTX 2080 Graphical Processing Unit (GPU) having 8GB of dedicated graphic memory. The first part of the experimental results displays the image quality metrics measured for both benign and malignant melanoma cases before and after applying the image preprocessing operations. It is displayed in Table <ref type="table" target="#tab_1">1</ref>. The experimental results show clearly that image quality is not comprised however it is upgraded which is evident from high PSNR values and other metrics after applying image preprocessing operations especially the hair removal filter. The image quality metrics were carried out on more than fifty images and the same observations were  The accuracy graphs in Fig. <ref type="figure" target="#fig_3">3</ref> show that training and validation accuracy before applying the image preprocessing was 86% and 79.8% and it was increased to 89% and 85.9% by using a refined version of images obtained after applying the image preprocessing operations. Similarly, the validation error rate was also decreased from 61% to 32% by using the refined version of images. The accuracy graphs in Fig. <ref type="figure" target="#fig_4">4</ref> show that training and validation accuracy before applying the image preprocessing was 88.3% and 84.2%. By using the refined version of images training accuracy tends to remain the same thought the validation accuracy was increased to 86.1%. Similarly, the validation error rate was also decreased from 36% to 32.3% by using the refined version of images. Overall, in both networks, significant improvements were measured after using the refined version of images. The experimental results show that the Inception-v3 network was able to achieve better validation accuracy using a refined version of training data i.e. 86.1 % thus we will be using the Inception-v3 network for evaluating it on the test data. For evaluating the classifiers on the test data, we have picked numerous cases from the test set from both classes, benign and malignant melanoma among which visually complex and challenging test cases were selected for the proposed research work. It is pertinent to mention that the network was tested using the original images (unrefined version) to test the overall effectiveness of the classifier. Fig. <ref type="figure" target="#fig_6">5</ref> shows some of the results predicted correctly on test images. Table <ref type="table" target="#tab_2">2</ref> illustrates the complete results on visually complex test cases selected for the proposed study which will be further used    <ref type="formula">4</ref>) is an abbreviation for positive prediction value.</p><p>Table <ref type="table" target="#tab_3">3</ref> illustrates the results of all the four quantitative measures: Accuracy, sensitivity, specificity, and precision of the Inception-v3 network before and after using the image preprocessing operations on test data. It can be observed that testing accuracy is increased to 86% by training the classifier using the refined version of images. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Future work</head><p>The main purpose of the proposed study was to improve the overall accuracy level of two state of art deep learning networks which include Inception-v3 and MobileNet by using the refined version of skin cancer images obtained after applying image preprocessing operations. The experiments were conducted using the Kaggle Skin Cancer Dataset by applying initial sharpening filter and hair removal algorithms. Initially, we applied these algorithms as image pre-processing mechanisms to remove the clutters thus producing the refined version of images. Different image quality metrics including Peak Signal to Noise (PSNR), Mean Square Error (MSE), Maximum Absolute Squared Deviation (MXERR) and Energy Ratio/ Ratio of Squared Norms (L2RAT) were used to compare the image quality before and after applying the pre-processing techniques. These metrics prove that image quality was upgraded after applying sharpening filter and hair removal algorithms. In the next phase of experimental results, we have seen substantial improvement in training, validation and test accuracy after applying image pre-processing operation. Thus, we have achieved an overall test accuracy of 86% using state of the art Inception-v3 network by fine-tuning the last layer of the network with a refined version of kaggle skin cancer training dataset.</p><p>For future work, more image pre-processing techniques like neural networks based super image algorithms and other such techniques could be used to improve the image quality to a better extent. Moreover, other state of the art deep neural networks such as ResNet-101 <ref type="bibr" target="#b15">[16]</ref>, Xception <ref type="bibr" target="#b16">[17]</ref> could be utilized in order to improve the accuracy levels.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig 1 .</head><label>1</label><figDesc>Fig 1. Workflow diagram of the proposed method</figDesc><graphic coords="4,229.44,132.48,175.56,300.36" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Image preprocessing operations a) original image, b) initial Matlab sharpening filter, c) hair removal using dull razor software<ref type="bibr" target="#b9">[10]</ref> </figDesc><graphic coords="4,340.80,538.08,62.16,65.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>measured. The second part of the experiments includes the training of the classifiers using the two state of the art deep learning networks i.e. Inception-v3 and MobileNet. For Inception-V3 the data was resized to 299 x 299 since the network has an image input size of 299 by 299. The classifiers were trained on both sets of images i.e. original (ground truth) images and images after applying the preprocessing operations to them. Both the networks were trained using the same hyperparameters. The learning rate was set to 0.005 with a batch size of 32 and total iterations were set to 5000. The training data was split in the ratio of 75% and 25% for training and validations images respectively. Fig.3and Fig.4display the training and validation accuracy graphs along with the error rate (cross-entropy) graph of MobileNet and Inception-v3 networks.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Accuracy and loss graph of MobileNet network a), training and validation accuracy before applying image preprocessing operations b), training and validation accuracy after applying image preprocessing operations c), training and validation loss before applying image preprocessing operations and d) training and validation loss after applying image preprocessing operations</figDesc><graphic coords="7,142.56,385.32,159.12,104.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Accuracy and loss graph of Inception-v3 network a), training and validation accuracy before applying image preprocessing operations b), training and validation accuracy after applying image preprocessing operations c), training and validation loss before applying image preprocessing operations and d) training and validation loss after applying image preprocessing operations</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head></head><label></label><figDesc>testing accuracy, sensitivity (true positive rate), specificity (true negative rate) and precision metrics. The rows highlighted with red color indicates the misclassified test cases when compared with ground truth results.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. Test case results on two random cases using Inception-v3 network a) case 4 -(benign = Low risk = 98.4% confidence level), b) Case 16 -(malignant melanoma = high risk = 97.8% confidence level).</figDesc><graphic coords="9,342.48,179.76,64.08,63.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 .</head><label>1</label><figDesc>Image Quality Metrics</figDesc><table><row><cell>Image</cell><cell>PSNR</cell><cell>MSE</cell><cell cols="3">MAXERR L2RAT Dimension</cell></row><row><cell></cell><cell>19.4205</cell><cell>743.0656</cell><cell>99</cell><cell>0.9657</cell><cell>224 x 224</cell></row><row><cell></cell><cell>21.5481</cell><cell>655.2738</cell><cell>99</cell><cell>0.9801</cell><cell>224 x 224</cell></row><row><cell></cell><cell>22.1285</cell><cell>398.3229</cell><cell>99</cell><cell>0.9868</cell><cell>224 x 224</cell></row><row><cell></cell><cell>23.2953</cell><cell>304.4737</cell><cell>99</cell><cell>0.9902</cell><cell>224 x 224</cell></row><row><cell></cell><cell>22.4291</cell><cell>371.6785</cell><cell>99</cell><cell>0.9852</cell><cell>224 x 224</cell></row><row><cell></cell><cell>24.0840</cell><cell>329.9128</cell><cell>99</cell><cell>0.9903</cell><cell>224 x 224</cell></row><row><cell></cell><cell>18.6732</cell><cell>882.5930</cell><cell>99</cell><cell>0.9681</cell><cell>224 x 224</cell></row><row><cell></cell><cell>19.3975</cell><cell>847.0221</cell><cell>99</cell><cell>0.9747</cell><cell>224 x 224</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 .</head><label>2</label><figDesc>Individual Test Case ResultsWhere 𝑡𝑝,𝑓𝑝, 𝑓𝑛, 𝑎𝑛𝑑 𝑡𝑛 refer to true positive, false positive, false negative, and true negative. ACC in (1) means overall testing accuracy, TPR in (2) means true positive rate, TNR in (3) refers to true negative rate while PPV in (</figDesc><table><row><cell></cell><cell>a</cell><cell>b</cell><cell></cell></row><row><cell>Test</cell><cell>Predicted results using</cell><cell>Predicted results using</cell><cell>Ground</cell></row><row><cell>Case</cell><cell>Inception-v3 Network trained</cell><cell>Inception-v3 Network trained</cell><cell>truth</cell></row><row><cell></cell><cell>on original images</cell><cell>on processed images</cell><cell>Results</cell></row><row><cell>1</cell><cell>Benign -Low risk -97.8%</cell><cell>Benign -Low risk -84.6%</cell><cell>Low risk</cell></row><row><cell>2</cell><cell cols="3">Malignant-High risk -91.8% Malignant -High risk -89.1% High risk</cell></row><row><cell>3</cell><cell>Benign -Low risk -98.4%</cell><cell>Benign -Low risk -96.9%</cell><cell>Low risk</cell></row><row><cell>4</cell><cell>Benign -Low risk -98.4%</cell><cell>Benign -Low risk -98.4%</cell><cell>Low risk</cell></row><row><cell cols="4">5 Malignant -High risk -96.4% Malignant -High risk -95.7% Low risk</cell></row><row><cell cols="4">6 Malignant -High risk -98.7% Malignant -High risk -96.2% High risk</cell></row><row><cell cols="4">7 Malignant -High risk -98.8% Malignant -High risk -97.8% High risk</cell></row><row><cell cols="4">8 Malignant -High risk -99.4% Malignant -High risk -99.3% High risk</cell></row><row><cell>9</cell><cell>Benign -Low risk -71.2%</cell><cell>Benign -Low risk -60.7%</cell><cell>Low risk</cell></row><row><cell cols="4">10 Malignant -High risk -85.9% Malignant -High risk -76.8% Low risk</cell></row><row><cell cols="4">11 Malignant -High risk -99.5% Malignant -High risk -99.3% High risk</cell></row><row><cell cols="4">12 Malignant -High risk -98.5% Malignant -High risk -99.2% High risk</cell></row><row><cell cols="4">13 Malignant -High risk -70.9% Malignant -High risk -87.4% High risk</cell></row><row><cell>14</cell><cell>Benign-Low risk -74.8%</cell><cell cols="2">Malignant -High risk -92.3 % High risk</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 .</head><label>3</label><figDesc>Overall Quantitative Metrics Results on Test Data</figDesc><table><row><cell>Quantitative</cell><cell>Inception-v3 network</cell><cell>Inception-v3 Network trained</cell></row><row><cell>measures</cell><cell>trained on original images</cell><cell>on refined (processed) images</cell></row><row><cell>Accuracy</cell><cell>81%</cell><cell>86%</cell></row><row><cell>Sensitivity</cell><cell>87.5%</cell><cell>89%</cell></row><row><cell>Specificity</cell><cell>77%</cell><cell>83%</cell></row><row><cell>Precision</cell><cell>70%</cell><cell>80%</cell></row><row><cell>F1 Score</cell><cell>77%</cell><cell>84%</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="http://www.skincancer.org/skin-cancer-infor-mation/skin-cancer-facts" />
		<title level="m">Skin Cancer Facts and Figures</title>
				<imprint>
			<date type="published" when="2019-10-04">on 04 th Oct 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="http://hubpages.com/education/Human-Skin-The-largest-organ-of-the-Integumentary-System" />
		<title level="m">Organs of the Body</title>
				<imprint>
			<date type="published" when="2019-10-05">on 05 th Oct 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="https://skinvision.com/en/articles/types-of-skin-moles-and-how-to-know-if-they-re-safe" />
		<title level="m">Types of Skin Moles</title>
				<imprint>
			<date type="published" when="2019-10-01">on 01 st Oct 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<ptr target="http://www.cancer.org/cancer/skincancer-mela-noma/detailedguide/melanoma-skin-cancer-risk-factors" />
		<title level="m">Skin Cancer Risk Factors</title>
				<imprint>
			<date type="published" when="2019-10-01">Accessed on 01 st Oct 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Vision-based classification of skin cancer using deep learning</title>
		<author>
			<persName><forename type="first">Simon</forename><surname>Kalouche</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">conducted on Stanfords Machine Learning course (CS</title>
				<meeting><address><addrLine>taught</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015. 2016</date>
			<biblScope unit="volume">229</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network</title>
		<author>
			<persName><forename type="first">Md</forename><forename type="middle">Zahangir</forename><surname>Alom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of digital imaging</title>
		<imprint>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Dermatologist-level classification of skin cancer with deep neural networks</title>
		<author>
			<persName><forename type="first">Andre</forename><surname>Esteva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">542</biblScope>
			<biblScope unit="page">115</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://www.kaggle.com/fanconic/skin-cancer-ma-lignant-vs-benign" />
		<title level="m">Kaggle Skin Cancer Dataset</title>
				<imprint>
			<date type="published" when="2019-09-24">on 24 th September 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="https://www.isicarchive.com" />
		<title level="m">ISIC Archive Dataset</title>
				<imprint>
			<date type="published" when="2019-09-24">on 24 th September 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">DullRazor: &quot;A software approach to hair removal from images</title>
		<author>
			<persName><forename type="first">T</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gallagher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Coldman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mclean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Published in the Computers in Biology and Medicine</title>
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Rethinking the inception architecture for computer vision</title>
		<author>
			<persName><forename type="first">Christian</forename><surname>Szegedy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Imagenet classification with deep convolutional neural networks</title>
		<author>
			<persName><forename type="first">Alex</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geoffrey</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in neural information processing systems</title>
				<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Mobilenets: Efficient convolutional neural networks for mobile vision applications</title>
		<author>
			<persName><forename type="first">Andrew</forename><forename type="middle">G</forename><surname>Howard</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1704.04861</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<ptr target="https://www.tensorflow.org" />
		<title level="m">TensorFlow Deep Learning Platform</title>
				<imprint>
			<date type="published" when="2019-09-27">Last accessed on 27 th September 2019</date>
		</imprint>
	</monogr>
	<note>Web Site</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Understanding sensitivity, specificity, and predictive values</title>
		<author>
			<persName><forename type="first">M</forename><surname>Stojanovi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Vojnosanit Pregl</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="issue">no11</biblScope>
			<biblScope unit="page" from="1062" to="1065" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Deep residual learning for image recognition</title>
		<author>
			<persName><forename type="first">Kaiming</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Xception: Deep learning with depthwise separable convolutions</title>
		<author>
			<persName><forename type="first">François</forename><surname>Chollet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE conference on computer vision and pattern recognition</title>
				<meeting>the IEEE conference on computer vision and pattern recognition</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Computer-aided diagnosis in medical imaging: historical review, current status, and future potential</title>
		<author>
			<persName><forename type="first">Kunio</forename><surname>Doi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Published in Computerized medical imaging and graphics</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="198" to="211" />
			<date type="published" when="2007">2007. 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<ptr target="https://towardsdatascience.com/multi-class-met-rics-made-simple-part-ii-the-f1-score-ebe8b2c2ca1" />
		<title level="m">F1 Score in Machine Learning</title>
				<imprint>
			<date type="published" when="2019-10-29">on 29 th Oct 2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
