<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <fpage>78</fpage>
      <lpage>90</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>CEUR Workshop Proceedings, ceur-ws.org, ISSN 1613-0073</p>
      <p>
The rate of death is 0.3% and the rate of acquiring the condition is 2.9%. Melanoma patients who receive therapy at an early stage, on the other hand, have a 99% probability of surviving the disease. An estimated one in five adults in the United States will develop skin cancer at some point in their life. Skin cancer comes in two main varieties, non-melanoma and melanoma, with melanoma being the deadlier of the two. Basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) are the two most frequently diagnosed sub-types of non-melanoma skin cancer. While basal cell carcinoma is by far the most common kind of skin cancer and seldom results in death, it has the potential to leave a person badly disfigured. Squamous cell carcinoma (SCC) is the second most frequently occurring form of skin cancer. Together, BCC and SCC are responsible for approximately 95% of non-melanoma skin cancers [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Figure 1 provides illustrations of a variety of skin malignancies, including some that are more prevalent than others, such as Merkel cell carcinoma, Kaposi's sarcoma, and basal cell carcinoma.
      </p>
      <p>
        Dermoscopy is a non-invasive procedure used by medical professionals to inspect the skin and detect any abnormalities. During this stage of the process, medical specialists examine the skin lesion in question for melanoma warning signs such as the lesion's colour, texture, irregular border, form, and size. Melanoma is notoriously difficult to diagnose accurately, necessitating the services of an experienced dermatologist with a significant amount of both education and practical expertise. There are dangers in depending exclusively on visual examination, as research indicates that even among qualified dermatologists the accuracy rate is only between 50% and 60% [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This is because skin lesions often exhibit a broad variety of sizes, shapes, and boundary characteristics; they frequently lack contrast with the surrounding skin; and background noise such as skin hair, lubricants, and air bubbles is often present. Therefore, developing a reliable CAD system for the early detection and diagnosis of melanoma is an immediate need. Melanoma will have a lower mortality rate as a result of two factors: an increase in the rate at which it is detected and an improvement in the ability to identify the disease at an earlier stage [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The majority of earlier strategies require a significant amount of time and are challenging to apply in clinical settings, both of which reduce their overall utility and generalizability. Deep learning-based approaches, and in particular convolutional neural networks (CNNs), have become commonplace in object recognition tasks [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>
        ] in recent years, replacing systems that depend on manually-crafted features. The most important advantage offered by a CNN is the powerful visual representation capability it provides to any given classification or detection task, depending on the data it was trained on [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This paper describes the methods that the researchers used to gather and analyse their data, as well as the models and procedures used to train, test, and evaluate the model in the context of melanoma detection.
      </p>
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>
        A deep neural network design based on transfer learning techniques has been presented by Jaisakthi S M et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] for accurately classifying different types of skin cancer into melanoma and non-melanoma categories. The authors employed the EfficientNet design to automatically scale the network's depth, breadth, and resolution in order to learn more complicated and fine-grained patterns from lesion images. In addition, they augmented their dataset to address the issue of class imbalance, and they made use of metadata information to refine the classification outcomes. The authors carried out a number of studies in which a variety of transfer models were utilised, and they discovered that EfficientNet variants performed better than other topologies. They used the area under the ROC curve (AUC-ROC) to analyse the performance of the suggested system. The result was a score of 0.9681, achieved with optimal fine-tuning of EfficientNet-B6 using the Ranger optimizer. Zhen Yu and colleagues [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] established a framework for automating the early diagnosis of melanoma by employing successive dermoscopic pictures. There are three main components to the proposed method: the lesion position module, which aligns lesion images from different times into the same coordinate system to determine the lesion progress region; the spatio-temporal networks, which learn spatio-temporal characteristics from the aligned successive images by using a closely linked two-stream network; and the early identification module, which achieves early melanoma diagnosis by using the acquired knowledge. A supervised machine learning method for determining the presence of melanoma has been presented by Malik Bader Alazzam et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Deep learning principles in dermoscopy images and sampling balancing techniques are used in this technique. This research aims to provide medical practitioners with a second opinion on melanoma diagnoses by assessing the efficacy of using machine learning algorithms in conjunction with imbalanced-base training approaches. The study makes use of two hundred dermoscopy images from which patterns of skin lesions were retrieved by applying the ABCD rule in conjunction with the VGG19, VGG16, Inception, and ResNet convolutional neural network architectures. After selecting attributes with GS and balancing the training data using the Synthetic Minority Oversampling Technique and the Edited Nearest Neighbour rule, the sensitivity of the random forest classifier was close to 93% and its kappa index (k index) was close to 78%. Using YOLOv4-DarkNet and Active Contour, Saleh Albahli et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] offer a method for the detection and segmentation of melanomas in the skin. This method involves cleaning dermoscopic pictures of artefacts such as hairs, gel bubbles, and clinical marks; sharpening image regions; and utilising the YOLOv4 object detector to differentiate between infected and non-infected areas. After that, the infected melanoma regions are retrieved using active contour segmentation. The developed strategy obtains a Jaccard coefficient of 0.989 and an average dice score of 1. The method is evaluated using the ISIC2018 and ISIC2016 datasets and is compared to the most recent and cutting-edge methods for melanoma identification and segmentation. A three-step methodology is used by Qaiser Abbas et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which involves preprocessing and data augmentation as the first stage, feature extraction as the second stage, and classification and prediction as the third stage. Dermoscopy image data were obtained from a hospital affiliated with a university in South Korea, and a number of preprocessing techniques were utilised to eliminate dermoscopy artefacts. The deep learning models were then trained using the preprocessed dataset. The methodology is laid out as a flowchart in the paper. Table 1 shows the comparison of the literature review.
      </p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Comparison of the literature review.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Reference No.</th><th>Methodology Used</th><th>Advantages</th><th>Disadvantages</th></tr>
          </thead>
          <tbody>
            <tr><td>[<xref ref-type="bibr" rid="ref10">10</xref>]</td><td>Deep Neural Network (Transfer Learning - EfficientNet)</td><td>Accurate classification of skin cancer into melanoma and non-melanoma categories.</td><td>Requires a large dataset for optimal performance.</td></tr>
            <tr><td>[<xref ref-type="bibr" rid="ref11">11</xref>]</td><td>Framework for Early Diagnosis of Melanoma</td><td>Automated early diagnosis of melanoma using successive dermoscopic pictures.</td><td>Successive dermoscopic images needed for accurate diagnosis.</td></tr>
            <tr><td>[<xref ref-type="bibr" rid="ref12">12</xref>]</td><td>Supervised Machine Learning (Random Forest Classifier)</td><td>Second opinion for melanoma diagnoses using machine learning algorithms.</td><td>Relies on dermoscopy images and may not cover other diagnostic methods.</td></tr>
            <tr><td>[<xref ref-type="bibr" rid="ref13">13</xref>]</td><td>YOLOv4-DarkNet and Active Contour for Detection</td><td>Detection and segmentation of melanomas with high accuracy.</td><td>Requires cleaning dermoscopic images of artefacts and sharpening image regions.</td></tr>
            <tr><td>[<xref ref-type="bibr" rid="ref14">14</xref>]</td><td>Three-Step Methodology (Preprocessing, Feature Extraction, Classification)</td><td>Obtained dermoscopy images from a hospital in South Korea.</td><td>Dependent on the specific preprocessing techniques used and their effectiveness.</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-3">
      <title>3. Techniques for Diagnosis</title>
      <p>This part of the paper discusses the two methods used to determine whether melanoma symptoms are present: the first is physical diagnosis, and the second is computer-aided diagnosis.</p>
      <sec id="sec-3-1">
        <title>3.1. Physical Diagnostic</title>
        <p>During a physical examination, any pigmented lesion that exhibits the criteria outlined in the "ABCDE" mnemonic, shown in figure 2, should raise suspicion for melanoma. Asymmetry, Border irregularity, Colour variegation, Diameter greater than 6 mm, and Evolution (the timing of the lesion's growth) are the five warning characteristics of melanoma that the ABCDE technique was designed to help doctors and patients notice.</p>
        <p>If such a lesion is discovered, a comprehensive examination of the surrounding area must be performed to look for satellite lesions or further metastatic foci. After a worrisome lesion has been thoroughly analyzed, the rest of the patient's skin, including the scalp, perineum, interdigital spaces, genitalia, and subungual regions, should be examined as soon as possible for any further suspect lesions. Every lesion that appears to be benign needs to be noted, and the lymph node basins need to be palpated for any signs of lymphadenopathy.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Computer-Aided Diagnostic</title>
        <p>This section provides an overview of the sequential processes involved in image processing with deep learning algorithms. The steps involved in determining whether melanoma symptoms are present using medical dermoscopy are outlined in Figure 3.</p>
        <p>
          Image Acquisition: Image acquisition is the procedure of capturing a visual representation of an object in digital format [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Put more simply, it is the action of obtaining a digital representation of the object of interest.
        </p>
        <p>Image Pre-Processing: The actions conducted before actually working on a picture, in order to improve the image information in accordance with the requirements, are referred to as "image pre-processing".</p>
        <p>Image Augmentation: Image augmentation refers to the technique of modifying an image, typically to compensate for a shortage of readily available data. During this stage of the process, the picture is manipulated, for example by altering its orientation or its colour scheme, so that the model effectively sees an entirely new image.</p>
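As a minimal sketch of the orientation-based augmentations just described (the function name and the fourfold expansion factor are illustrative, not taken from the surveyed works):

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate simple orientation-based variants of one image.

    `image` is an H x W x C array; each variant keeps the label of
    the original lesion image, so the labelled dataset grows 4x.
    """
    return [
        image,             # original
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree rotation
    ]

lesion = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder image
variants = augment(lesion)
print(len(variants))  # 4 training images produced from 1
```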
        <p>Feature Extraction: Feature extraction is a procedure used to reduce the total number of features in a data set. To accomplish this, the original features are discarded and new ones are created from them; the vast majority of the information originally contained in the old features is condensed into the new ones.</p>
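One standard instance of this idea, not specific to any of the surveyed papers, is principal component analysis (PCA), which replaces the original features with a smaller set that preserves most of their variance:

```python
import numpy as np

def pca_features(X: np.ndarray, k: int) -> np.ndarray:
    """Condense d original features into k new ones via PCA.

    X is an (n_samples, d) matrix; the result has shape
    (n_samples, k), retaining most of the variance of X.
    """
    Xc = X - X.mean(axis=0)                   # centre each feature
    # principal directions via SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                      # project onto top-k components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                 # 100 samples, 8 raw features
Z = pca_features(X, k=3)
print(Z.shape)  # (100, 3)
```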
        <p>Image Classification: The field of object recognition is one in which deep learning has at last reached its full potential. Deep learning is implemented with many layers of artificially trained neural networks, each of which is responsible for extracting a distinct collection of features from the image that is to be classified.</p>
        <p>With the support of modern computer technologies, a diagnosis of the symptoms and signs of skin cancer may be made rapidly, without difficulty, and at a price within most people's budgets. Whether the symptoms are brought on by melanoma or another kind of skin cancer can be determined in a number of different ways, none of which involve invasive procedures.</p>
        <p>Figure 4 depicts the standard procedure for identifying skin cancer, which includes multiple processes such as image acquisition, preliminary processing, augmentation, feature extraction, and classification. Recent years have seen a revolutionary shift in machine learning thanks to the advent of deep learning. Methods that utilise artificial neural networks are at the centre of the machine learning subfield usually recognised as the most cutting-edge, and the investigation of how the human brain performs its functions served as inspiration for the development of these algorithms.</p>
        <p>In these situations, the performance of systems based on deep learning has been superior to that of more traditional machine learning approaches. In recent years, numerous deep learning strategies have been implemented in the creation of automated cancer detection systems. In this work, deep learning-based approaches for detecting skin cancer were investigated, and both their benefits and drawbacks were thoroughly discussed. To accomplish this goal, this paper presents a systematic overview of prior work on using deep learning techniques to identify skin cancer. Some examples of these techniques are convolutional neural networks, generative adversarial networks, Kohonen self-organizing neural networks, and artificial neural networks.</p>
      </sec>
      <sec id="sec-3-3">
        <title>4.1. Dataset</title>
        <p>
          A significant barrier to the effective use of deep learning is the absence of a dataset adequate for the task at hand. This poses significant challenges, as any learning algorithm requires a sizeable amount of training data in order to assess its effectiveness. A determined effort is being made to construct archives that will someday store the largest collection of medical images in the world, but at the same time it is imperative that the privacy of patients be maintained. Researchers currently turn to images gathered from hospitals and institutes that specialise in cancer research so that they may put their computer models into practice. Researchers typically work with a small data collection, which can introduce some degree of bias into their conclusions; to combat this issue, they sometimes turn to pre-processing. To boost the total amount of data acquired, researchers increasingly use data augmentation techniques such as scaling, rotation, flipping, and illumination correction. Two of the most popular databases are available online: ISIC and PH2. Expanding the use of digital skin imaging to help reduce the death rate from skin cancer is the goal of the International Skin Imaging Collaboration (ISIC), a scientific and business partnership. To facilitate the testing and evaluation of proposed standards, ISIC has built and maintained a public, open-source database of skin pictures. The purpose of this collection is to serve as a repository for diagnostic images that can be put to use in the education, investigation, and assessment of automated diagnostic systems.
        </p>
        <table-wrap id="tab2">
          <label>Table 4.1</label>
          <caption>
            <p>Publicly available data sets for skin lesion analysis.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Data Set</th><th>Training Data</th><th>Testing Data</th><th>Reference</th></tr>
            </thead>
            <tbody>
              <tr><td>ISIC 2016</td><td>900</td><td></td><td>[<xref ref-type="bibr" rid="ref16">16</xref>]</td></tr>
              <tr><td>ISIC 2017</td><td>2000</td><td></td><td>[<xref ref-type="bibr" rid="ref17">17</xref>]</td></tr>
              <tr><td>ISIC 2018</td><td></td><td></td><td>[<xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>]</td></tr>
              <tr><td>ISIC 2019</td><td></td><td></td><td>[<xref ref-type="bibr" rid="ref20">20</xref>]</td></tr>
              <tr><td>ISIC 2020</td><td></td><td></td><td>[<xref ref-type="bibr" rid="ref21">21</xref>]</td></tr>
              <tr><td>PH2</td><td></td><td></td><td>[<xref ref-type="bibr" rid="ref22">22</xref>]</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>The Dermatology Department at the Hospital Pedro Hispano in Matosinhos, Portugal maintains a database known as PH2, containing dermoscopy images supplied by patients. The PH2 dataset was created for scientific study and standardised practice, to facilitate comparative studies of dermoscopy image segmentation and classification methods. It contains 200 pictures depicting different melanocyte-induced skin lesions: 160 benign moles (80 typical nevi and 80 atypical nevi) and 40 melanomas. The number of photos used for each task and the data sets used are detailed in Table 4.1. From 2016 through 2020, the International Skin Imaging Collaboration (ISIC) was the group leading a competition at the International Symposium on Biomedical Imaging (ISBI).</p>
      </sec>
      <sec id="sec-3-4">
        <title>4.2. Evaluation Standards</title>
        <p>When constructing a model using deep learning, accuracy should at all times be a main concern. However, when approaching a classification problem, it is essential to take into account both the accuracy and the frequency of any misclassifications that may have occurred. Because of this, it is essential to have a technique that can determine what percentage of classifications are correct and what percentage are not. The confusion matrix is a tool that assists with this: an N-by-N matrix that scores how successfully a model handles a classification problem. Figure 5 depicts the confusion matrix for a scenario involving two distinct classes. Accuracy, Sensitivity, and Specificity are calculated from the confusion matrix as follows: Accuracy = (TP + TN) / (TP + TN + FP + FN), Sensitivity = TP / (TP + FN), and Specificity = TN / (TN + FP).</p>
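For illustration, the three measures can be computed directly from the four confusion-matrix counts; the counts below are hypothetical, not results from any of the surveyed papers:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, sensitivity, and specificity from the counts of a
    two-class confusion matrix (true/false positives and negatives)."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "specificity": tn / (tn + fp),  # true negative rate
    }

# hypothetical counts for a melanoma-vs-benign classifier
m = metrics(tp=80, tn=90, fp=10, fn=20)
print(m)  # accuracy 0.85, sensitivity 0.80, specificity 0.90
```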
      </sec>
      <sec id="sec-3-5">
        <title>4.3. Comparison of Results</title>
        <p>The current state of the art in skin lesion segmentation is compared in terms of accuracy,
sensitivity, and specificity in this section.</p>
        <p>Advantages and disadvantages reported in the literature include the following:</p>
        <list list-type="bullet">
          <list-item><p>Due to the proportion of publicly available dermatological data sets, which are typically small and contain obstructions, its applicability to dermatology is limited.</p></list-item>
          <list-item><p>To achieve optimal segmentation, the network makes accurate predictions for each pixel, but it suffers from a lack of sensitivity to data-sensitivity metrics.</p></list-item>
          <list-item><p>There is no need for additional post-processing procedures because the provided model accurately captures the lesion region.</p></list-item>
          <list-item><p>The proposed LFN shows a remarkable ability to meet the challenge by obtaining dermatoscopy features with the highest average accuracy and sensitivity.</p></list-item>
          <list-item><p>The technique achieved a segmentation accuracy of over 90% despite the presence of artefacts like hair and air/oil bubbles.</p></list-item>
          <list-item><p>It aids in raising the overall segmentation accuracy and gives consistent results, but could not obtain optimal results in terms of sensitivity and lacks sufficient edge precision.</p></list-item>
          <list-item><p>The suggested segmentation method yields more accurate results than other methods in the literature.</p></list-item>
          <list-item><p>The implementation was more accurate, sensitive, and specific during the training and testing phases compared to state-of-the-art methods.</p></list-item>
          <list-item><p>The approach was highly effective in its broad application, and it excelled at the finer points where precision was most important.</p></list-item>
          <list-item><p>It provided a more precise segmentation, which will aid in automatically identifying the location of the skin lesion for subsequent analysis by dermatologists [<xref ref-type="bibr" rid="ref23">23</xref>].</p></list-item>
        </list>
        <p>Method: DCGAN [<xref ref-type="bibr" rid="ref23">23</xref>]; dataset: ISIC 2017.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>In this research, a literature review is conducted on the subject of using neural networks to detect and classify skin malignancies. These procedures do not cause any discomfort and are not harmful in any way. In skin cancer diagnosis, the required tasks include preparing data, dividing images into segments, extracting features, and categorising them. The primary focus of this research was on the categorization of lesion pictures using ANNs, CNNs, KNNs, and RBFNs as the respective network types. Every algorithm has both positive and negative aspects, and the success or failure of a project hinges almost entirely on the classification strategy used. The Convolutional Neural Network (CNN), however, yields better results when classifying image data since it is more closely connected to computer vision than other types of neural networks. The majority of studies on skin cancer diagnosis have concentrated on determining whether or not a specific lesion image contains malignant cells. However, current research cannot answer patients' concerns regarding whether a specific skin cancer symptom occurs on only one side of the body. The studies done up to this point have all dealt with the specific issue of single-image classification. To investigate this frequently asked question, future studies may make use of full-body photographs; autonomous full-body photography will automate and speed up the process of taking pictures. The idea of auto-organization is relatively new to the field of deep learning. Auto-organization is a sort of unsupervised learning that searches for characteristics and finds relations or patterns in the image samples contained inside a dataset. The enhanced feature representation that may be recovered by expert systems is a direct result of the utilisation of auto-organizational methods in convolutional neural networks. The auto-organization paradigm has not yet moved past the testing and prototyping stage, but a thorough grasp of it can assist in the development of more accurate image processing systems in the future. This is especially important in the field of medical imaging, where a thorough examination of even the most minute details is essential to arriving at an accurate diagnosis.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] Cancer statistics center, https://cancerstatisticscenter.cancer.org,
          <year>2023</year>
          . Accessed: 2024-5-3.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Alwakid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Gouda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Humayun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. U.</given-names>
            <surname>Sama</surname>
          </string-name>
          ,
          <article-title>Melanoma detection using deep learningbased classifications</article-title>
          ,
          <source>Healthcare (Basel) 10</source>
          (
          <year>2022</year>
          )
          <fpage>2481</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Soni</surname>
          </string-name>
          ,
          <article-title>Detection of melanoma in dermoscopic images by integrating features extracted using handcrafted and deep learning models</article-title>
          ,
          <source>Comput. Ind. Eng</source>
          .
          <volume>168</volume>
          (
          <year>2022</year>
          )
          <fpage>108060</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Adegun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Viriri</surname>
          </string-name>
          ,
          <article-title>Deep learning-based system for automatic melanoma detection</article-title>
          ,
          <source>IEEE Access 8</source>
          (
          <year>2020</year>
          )
          <fpage>7160</fpage>
          -
          <lpage>7172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>Bank cheque validation using image processing</article-title>
          , in: Advanced Informatics for Computing Research: Third International Conference, ICAICR 2019, Shimla, India, June 15-16,
          <year>2019</year>
          ,
          Revised Selected Papers
          ,
          <source>Part I 3</source>
          , Springer,
          <year>2019</year>
          , pp.
          <fpage>148</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bologa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Prodan</surname>
          </string-name>
          ,
          <article-title>Xcovnet: chest x-ray image classification for covid-19 early detection using convolutional neural networks</article-title>
          ,
          <source>New Generation Computing</source>
          <volume>39</volume>
          (
          <year>2021</year>
          )
          <fpage>583</fpage>
          -
          <lpage>597</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mohod</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>Yolov4 vs yolov5: Object detection on surveillance videos</article-title>
          ,
          <source>in: International Conference on Advanced Network Technologies and Intelligent Computing</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>654</fpage>
          -
          <lpage>665</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Mohod</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>A novel approach for surveillance compression using neural network technique</article-title>
          ,
          <source>International Research Journal of Multidisciplinary Technovation</source>
          <volume>6</volume>
          (
          <year>2024</year>
          )
          <fpage>77</fpage>
          -
          <lpage>89</lpage>
          . URL: https://journals.asianresassoc.org/index.php/irjmt/article/view/1607. doi:10.54392/irjmt2436.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Melanoma recognition in dermoscopy images via aggregated deep convolutional features</article-title>
          ,
          <source>IEEE Trans. Biomed. Eng</source>
          .
          <volume>66</volume>
          (
          <year>2019</year>
          )
          <fpage>1006</fpage>
          -
          <lpage>1016</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Jaisakthi</surname>
          </string-name>
          ,
          <string-name>
            <surname>Mirunalini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Aravindan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Appavu</surname>
          </string-name>
          ,
          <article-title>Classification of skin cancer from dermoscopic images using deep neural network architectures</article-title>
          ,
          <source>Multimed. Tools Appl</source>
          .
          <volume>82</volume>
          (
          <year>2023</year>
          )
          <fpage>15763</fpage>
          -
          <lpage>15778</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mclean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bonnington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <article-title>Early melanoma diagnosis with sequential dermoscopic images</article-title>
          ,
          <source>IEEE Trans. Med. Imaging</source>
          <volume>41</volume>
          (
          <year>2022</year>
          )
          <fpage>633</fpage>
          -
          <lpage>646</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M. B.</given-names>
            <surname>Alazzam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alassery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Almulihi</surname>
          </string-name>
          ,
          <article-title>Diagnosis of melanoma using deep learning</article-title>
          ,
          <source>Math. Probl. Eng</source>
          .
          <year>2021</year>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Albahli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Nida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Irtaza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Yousaf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          ,
          <article-title>Melanoma lesion detection and segmentation using YOLOv4-DarkNet and active contour</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>198403</fpage>
          -
          <lpage>198414</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Abbas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ramzan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. U.</given-names>
            <surname>Ghani</surname>
          </string-name>
          ,
          <article-title>Acral melanoma detection using dermoscopic images and convolutional neural networks</article-title>
          ,
          <source>Vis. Comput. Ind. Biomed. Art</source>
          <volume>4</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Madaan</surname>
          </string-name>
          ,
          <article-title>Classification of uterine cervical cancer histology image using active contour region based segmentation</article-title>
          ,
          <source>International Journal of Control Theory and Applications</source>
          <volume>9</volume>
          (
          <year>2016</year>
          )
          <fpage>31</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>N. C. F.</given-names>
            <surname>Codella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gutman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Celebi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Helba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Marchetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Dusza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalloo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liopyris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kittler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <article-title>Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC)</article-title>
          ,
          <source>in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)</source>
          , IEEE,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>N. C. F.</given-names>
            <surname>Codella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gutman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Celebi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Helba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Marchetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Dusza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalloo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liopyris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kittler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <article-title>Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC)</article-title>
          ,
          <source>in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)</source>
          , IEEE,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>N.</given-names>
            <surname>Codella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rotemberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tschandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Celebi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dusza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gutman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Helba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kalloo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liopyris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marchetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kittler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <article-title>Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC)</article-title>
          ,
          <year>2019</year>
          . arXiv:1902.03368.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tschandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rosendahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kittler</surname>
          </string-name>
          ,
          <article-title>The HAM10000 dataset, a large collection of multisource dermatoscopic images of common pigmented skin lesions</article-title>
          ,
          <source>Sci. Data</source>
          <volume>5</volume>
          (
          <year>2018</year>
          )
          <fpage>180161</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tschandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rosendahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kittler</surname>
          </string-name>
          ,
          <article-title>The HAM10000 dataset, a large collection of multisource dermatoscopic images of common pigmented skin lesions</article-title>
          ,
          <source>Sci. Data</source>
          <volume>5</volume>
          (
          <year>2018</year>
          )
          <fpage>180161</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>V.</given-names>
            <surname>Rotemberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kurtansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Betz-Stablein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cafery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Chousakos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Codella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Combalia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dusza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Guitera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gutman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Helba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kittler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Langer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lioprys</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Malvehy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Musthaq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nanda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Reiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Shih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stratigos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tschandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Soyer</surname>
          </string-name>
          ,
          <article-title>A patient-centric dataset of images and metadata for identifying melanomas using clinical context</article-title>
          ,
          <source>Sci. Data</source>
          <volume>8</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mendonca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Marques</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R. S.</given-names>
            <surname>Marcal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rozeira</surname>
          </string-name>
          ,
          <article-title>PH2 - a dermoscopic image database for research and benchmarking</article-title>
          ,
          <source>in: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</source>
          , IEEE,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bisla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Choromanska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Berman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Polsky</surname>
          </string-name>
          ,
          <article-title>Towards automated melanoma detection with deep learning: Data purification and augmentation</article-title>
          ,
          <source>in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</source>
          , IEEE,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kaul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Manandhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Pears</surname>
          </string-name>
          ,
          <article-title>Focusnet: An attention-based fully convolutional network for medical image segmentation</article-title>
          ,
          <source>in: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)</source>
          , IEEE,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Venkatesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. G.</given-names>
            <surname>Naresh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Little</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. E.</given-names>
            <surname>O'Connor</surname>
          </string-name>
          ,
          <article-title>A deep residual architecture for skin lesion segmentation</article-title>
          ,
          <source>Lecture Notes in Computer Science</source>
          <volume>11041</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <article-title>Skin lesion analysis towards melanoma detection using deep learning network</article-title>
          ,
          <source>Sensors (Basel)</source>
          <volume>18</volume>
          (
          <year>2018</year>
          )
          <fpage>556</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Youssef</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Bloisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Muscio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pennisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Facchiano</surname>
          </string-name>
          ,
          <article-title>Deep convolutional pixel-wise labeling for skin lesion image segmentation</article-title>
          ,
          <source>in: 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA)</source>
          , IEEE,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Segmentation of dermoscopy image using adversarial networks</article-title>
          ,
          <source>Multimed. Tools Appl</source>
          .
          <volume>78</volume>
          (
          <year>2019</year>
          )
          <fpage>10965</fpage>
          -
          <lpage>10981</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>L.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Janghel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Sahu</surname>
          </string-name>
          ,
          <article-title>Designing a retrieval-based diagnostic aid using effective features to classify skin lesion in dermoscopic images</article-title>
          ,
          <source>Procedia Comput. Sci</source>
          .
          <volume>167</volume>
          (
          <year>2020</year>
          )
          <fpage>2172</fpage>
          -
          <lpage>2180</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>J.-A.</given-names>
            <surname>Almaraz-Damian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponomaryov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sadovnychiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Castillejos-Fernandez</surname>
          </string-name>
          ,
          <article-title>Melanoma and nevus skin lesion classification using handcraft and deep learning feature fusion via mutual information measures</article-title>
          ,
          <source>Entropy</source>
          <volume>22</volume>
          (????).
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guaragnella</surname>
          </string-name>
          ,
          <article-title>Skin lesion segmentation using image bit-plane multilayer approach</article-title>
          ,
          <source>Appl. Sci. (Basel)</source>
          <volume>10</volume>
          (
          <year>2020</year>
          )
          <fpage>3045</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>