The rate of death is 0.3%, while the rate of acquiring the condition is 2.9%. Melanoma patients who receive therapy at an early stage, on the other hand, have a 99% probability of surviving the disease. According to estimates, one in five adults in the United States will develop skin cancer at some point in their life. Skin cancer comes in two main varieties, non-melanoma and melanoma, with melanoma being the deadlier of the two. The two most frequently diagnosed sub-types of non-melanoma skin cancer are basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). Basal cell carcinoma is by far the most common kind of skin cancer; it seldom results in death, but it can leave a person severely disfigured. Squamous cell carcinoma (SCC) is the second most frequent form of skin cancer. Together, BCC and SCC account for approximately 95% of all non-melanoma skin cancers [2]. Figure 1 provides illustrations of a variety of skin malignancies, some more prevalent than others, including Merkel cell carcinoma, Kaposi's sarcoma, and basal cell carcinoma.

Figure 1: The histology of melanoma, basal cell carcinoma, squamous cell carcinoma, and other subtypes of skin cancer.

Dermoscopy is a non-invasive procedure used by medical professionals to inspect the skin and detect abnormalities. During this examination, specialists look for melanoma warning signs in the lesion, such as its colour, texture, irregular border, shape, and size. Melanoma is notoriously difficult to diagnose accurately, requiring an experienced dermatologist with extensive education and practical expertise. Relying exclusively on visual examination carries risks: research indicates that even among qualified dermatologists, the accuracy rate is only between 50% and 60% [3]. This is because skin lesions exhibit a broad variety of sizes, shapes, and boundary characteristics; they frequently lack contrast with the surrounding skin; and background noise such as skin hair, lubricants, air, and bubbles is often present. Developing a reliable computer-aided diagnosis (CAD) system for the early detection and diagnosis of melanoma is therefore an immediate need. Melanoma mortality will fall as a result of two factors: an increase in the rate at which the disease is detected and an improvement in the ability to identify it at an earlier stage [4]. Most earlier strategies require a significant amount of time and are challenging to apply in clinical settings, both of which reduce their overall utility and generalizability. In recent years, deep learning-based approaches, and in particular convolutional neural networks (CNNs), have become commonplace in object recognition tasks [5, 6, 7, 8], replacing systems that depend on manually crafted features. The most important advantage offered by a CNN is the powerful visual feature representation it learns for a given classification or detection task from the data on which it was trained [9].
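To make the role of a CNN in this setting concrete, the following is a minimal, illustrative sketch of a small convolutional classifier that maps an RGB dermoscopy image to melanoma/non-melanoma logits. It is not taken from any of the cited works, and all layer sizes are arbitrary assumptions.

```python
# Illustrative sketch only: a small CNN for binary (melanoma vs. non-melanoma)
# classification of dermoscopy images. Layer sizes are arbitrary assumptions,
# not taken from any of the reviewed papers.
import torch
import torch.nn as nn

class SmallLesionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Stacked convolutional feature extractors, each halving the spatial size.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global pooling followed by a linear classifier head.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of four 224x224 RGB dermoscopy images produces one logit per class.
logits = SmallLesionCNN()(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```

The systems reviewed below use far deeper architectures, but the principle of stacked convolutional feature extractors followed by a classification head is the same.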
This paper describes the methods used to gather and analyse the data, as well as the models and procedures used to train, test, and evaluate performance in the context of melanoma detection.

2. Literature Review

Jaisakthi S M et al. [10] presented a deep neural network design based on transfer learning techniques for accurately classifying skin lesions into melanoma and non-melanoma categories. The authors employed the EfficientNet design, which automatically scales the network's depth, width, and resolution, in order to learn more complicated and fine-grained patterns from lesion images. In addition, they augmented their dataset to address class imbalance and made use of metadata information to refine the classification outcomes. They carried out a number of experiments with a variety of transfer-learning models and found that EfficientNet variants performed better than other architectures. They analysed the performance of the suggested system using the area under the ROC curve (AUC-ROC), obtaining a score of 0.9681 with optimal fine-tuning of EfficientNet-B6 using the Ranger optimizer.

Zhen Yu and colleagues [11] established a framework for automating the early diagnosis of melanoma from successive dermoscopic pictures. The proposed method has three main components: a lesion-position module, which aligns lesion images taken at different times into the same coordinate system to determine the lesion progress region; spatio-temporal networks, which learn spatio-temporal characteristics from the aligned successive images using a densely connected two-stream network; and an early identification module, which achieves early melanoma diagnosis using the acquired knowledge.

Malik Bader Alazzam et al. [12] presented a supervised machine learning method for determining the presence of melanoma. The technique applies deep learning principles to dermoscopy images together with sample-balancing techniques. The research aims to provide medical practitioners with a second opinion on melanoma diagnoses by assessing the efficacy of machine learning algorithms combined with imbalanced-base training approaches. The study uses two hundred dermoscopy images, from which patterns of skin lesions were retrieved by applying the ABCD rule in conjunction with the VGG19, VGG16, Inception, and ResNet convolutional neural network architectures. After attribute selection with GS and training-data balancing using the Synthetic Minority Oversampling Technique (SMOTE) and the Edited Nearest Neighbour rule, the sensitivity of the random forest classifier was close to 93% and its kappa index was close to 78%.
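As a rough illustration of the class-rebalancing idea described for [12], the sketch below combines SMOTE with Edited Nearest Neighbours (via the imbalanced-learn library) before training a random forest. The synthetic feature matrix and all hyper-parameters are placeholders, not the study's actual configuration.

```python
# Illustrative sketch: SMOTE + Edited Nearest Neighbours before a random forest.
# The features X and labels y are synthetic placeholders, not the study's data.
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                                # e.g. ABCD-style lesion features
y = np.r_[np.zeros(170, dtype=int), np.ones(30, dtype=int)]   # imbalanced labels (30 melanoma)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class (SMOTE) and clean noisy samples (ENN).
X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("held-out accuracy:", clf.score(X_te, y_te))
```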
Using YOLOv4-DarkNet and active contour, Saleh Albahli et al. [13] offer a method for the detection and segmentation of melanomas in the skin. The method involves cleaning dermoscopic pictures of artefacts such as hairs, gel bubbles, and clinical marks; sharpening image regions; and utilising the YOLOv4 object detector to differentiate between infected and non-infected areas. The infected parts of the melanoma are then retrieved using active contour segmentation. The strategy obtains a Jaccard coefficient of 0.989 and an average dice score of 1. It is evaluated on the ISIC2018 and ISIC2016 datasets and compared with the most recent state-of-the-art methods for melanoma identification and segmentation.

Qaiser Abbas et al. [14] use a three-step methodology, with preprocessing and data augmentation as the first stage, feature extraction as the second stage, and classification and prediction as the third stage. Dermoscopy images were obtained from a university-affiliated hospital in South Korea, and a number of preprocessing techniques were then applied to eliminate dermoscopy artefacts. The deep learning models were subsequently trained on the preprocessed dataset. The methodology is laid out as a flowchart in the paper. Table 1 shows a comparison of the reviewed literature.

Table 1: Comparison of the reviewed literature.

Reference | Methodology Used | Advantages | Disadvantages
[10] | Deep neural network (transfer learning, EfficientNet) | Accurate classification of skin cancer into melanoma and non-melanoma categories. | Requires a large dataset for optimal performance.
[11] | Framework for early diagnosis of melanoma | Automated early diagnosis of melanoma using successive dermoscopic pictures. | Successive dermoscopic images are needed for accurate diagnosis.
[12] | Supervised machine learning (random forest classifier) | Second opinion for melanoma diagnoses using machine learning algorithms. | Relies on dermoscopy images and may not cover other diagnostic methods.
[13] | YOLOv4-DarkNet and active contour for detection | Detection and segmentation of melanomas with high accuracy. | Requires cleaning dermoscopic images of artefacts and sharpening image regions.
[14] | Three-step methodology (preprocessing, feature extraction, and classification) | Obtained dermoscopy images from a hospital in South Korea. | Dependent on the specific preprocessing techniques used and their effectiveness.

3. Techniques for Diagnosis

This part of the paper discusses the two methods used to determine whether melanoma symptoms are present: the first is a physical diagnostic, and the second is a computer-aided diagnostic.

3.1. Physical Diagnostic

During a physical examination, any pigmented lesion that exhibits the criteria outlined in the "ABCDE" mnemonic should raise suspicion of melanoma, as shown in Figure 2. Asymmetry, Border irregularity, Colour variegation, Diameter greater than 6 mm, and Evolution (the timing of the lesion's growth) are the five characteristics of melanoma that the ABCDE technique is intended to help doctors and patients notice.

Figure 2: Melanoma diagnosis using the ABCDE system.

If a lesion of this kind is discovered, a comprehensive examination of the surrounding area must be performed to look for further metastatic foci or satellite lesions. After a worrisome lesion has been thoroughly analysed, the rest of the patient's skin, including the scalp, perineum, interdigital spaces, genitalia, and subungual regions, should be examined as soon as possible for any additional suspect lesions. Every lesion that appears to be benign needs to be noted, and the lymph node basins need to be palpated for any signs of lymphadenopathy.
3.2. Computer-Aided Diagnostic

This section provides an overview of the sequential processes involved in image processing with deep learning algorithms. The steps involved in determining whether melanoma symptoms are present in medical dermoscopy images are outlined in Figure 3.

Figure 3: Stages involved in image processing.

Image Acquisition: Image acquisition is the procedure of capturing a visual representation of an object in digital format [15]; put simply, it is the action of obtaining a digital representation of the object of interest.

Image Pre-Processing: The actions carried out before actually working on a picture, in order to improve the image information according to the requirements, are referred to as image pre-processing.

Image Augmentation: Image augmentation refers to the technique of modifying an image, typically to compensate for a shortage of available data. In this stage the picture is manipulated, for example by altering its orientation or colour scheme, so that the model effectively sees a new image; a code sketch of this stage is given below.

Feature Extraction: Feature extraction is a procedure used to reduce the total number of features in a data set. Obsolete features are removed and new ones are created from them, so that the vast majority of the information originally contained in the old features is condensed into the new ones.

Image Classification: Object recognition is a field in which deep learning has reached its full potential, because deep learning is implemented with many layers of trained artificial neural networks, each of which extracts a distinct collection of features from the image to be classified.

With the support of modern computer technologies, a diagnosis of the signs and symptoms of skin cancer can be made rapidly, without difficulty, and at a price within most people's budgets. Whether the symptoms are caused by melanoma or another kind of skin cancer can be determined in a number of ways, none of which involve invasive procedures.

Figure 4: Melanoma detection using deep learning models.

Figure 4 depicts the standard procedure for identifying skin cancer, which includes multiple steps such as image acquisition, preliminary processing, augmentation, feature extraction, and classification. Recent years have seen a revolutionary shift in machine learning thanks to the advent of deep learning. Methods based on artificial neural networks are at the centre of what is usually recognised as the most cutting-edge subfield of machine learning; the investigation of how the human brain performs its functions served as inspiration for these algorithms. In these tasks, the performance of systems based on deep learning has been superior to that of more traditional machine learning approaches.
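The image augmentation stage described above can be sketched, for example, with torchvision transforms. The particular operations and parameters below are illustrative assumptions rather than the pipeline of any reviewed paper.

```python
# Illustrative augmentation pipeline for dermoscopy images using torchvision.
# Transform choices and parameters are assumptions, not a reviewed paper's setup.
from PIL import Image
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),                       # lesions have no canonical orientation
    transforms.RandomRotation(degrees=20),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),    # mild illumination changes
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],         # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Applied to a PIL image of a skin lesion, each call yields a slightly different
# tensor, effectively enlarging the training set.
img = Image.new("RGB", (600, 450), color=(180, 120, 100))    # stand-in for a lesion photo
x = train_transforms(img)
print(x.shape)  # torch.Size([3, 224, 224])
```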
In recent years, numerous deep learning strategies have been implemented in the creation of automated cancer detection systems. In this work, deep learning-based approaches for detecting skin cancer were investigated, and both their benefits and drawbacks were thoroughly discussed. To this end, this paper presents a systematic overview of prior work on using deep learning techniques to identify skin cancer; examples of these techniques are convolutional neural networks, generative adversarial networks, Kohonen self-organizing neural networks, and artificial neural networks.

4. Data Set, Evaluation Standards and Comparison of Results

4.1. Dataset

A significant barrier to the effective use of deep learning is the absence of a dataset adequate for the task at hand. Any learning algorithm requires a sizeable amount of training data in order to assess its effectiveness. A determined effort is being made to construct archives that will someday store the largest collection of medical images in the world, while at the same time maintaining patient privacy. Researchers currently turn to images gathered from hospitals and institutes that specialise in cancer research in order to put their computer models into practice. Researchers typically work with small data collections, which can introduce some degree of bias into their conclusions; to combat this issue, they sometimes turn to pre-processing. To boost the total amount of data acquired, researchers increasingly use data augmentation techniques such as scaling, rotation, flipping, and illumination correction.

Two of the most popular databases are available online: ISIC and PH2. Expanding the use of digital skin imaging to help reduce the death rate from skin cancer is the goal of the International Skin Imaging Collaboration (ISIC), a scientific and business partnership. To facilitate testing and evaluation of proposed standards, ISIC has built and maintains a public, open-source database of skin pictures. The purpose of this collection is to serve as a repository of diagnostic images that can be used in the education, investigation, and assessment of automated diagnostic systems.

The Dermatology Department at the Hospital Pedro Hispano in Matosinhos, Portugal maintains a database known as PH2 containing dermoscopy images supplied by patients. The PH2 dataset was created for scientific study and standardised practice, to facilitate comparison studies of dermoscopy image segmentation and classification methods. It contains 200 pictures depicting different melanocyte-induced skin lesions: 160 benign moles (80 typical nevi and 80 atypical nevi) and 40 melanomas. The number of images used for each task and the corresponding dataset are detailed in Table 4.1. It has been reported that from 2016 through 2020, the International Skin Imaging Collaboration (ISIC) led a competition at the International Symposium on Biomedical Imaging (ISBI).

Table 4.1: Training and testing splits of the public datasets.

Reference | Data Set | Training Data | Testing Data
[16] | ISIC 2016 | 900 | 379
[17] | ISIC 2017 | 2000 | 150
[18, 19] | ISIC 2018 | 2594 | 1000
[20] | ISIC 2019 | 25331 | 8238
[21] | ISIC 2020 | 33126 | 10982
[22] | PH2 | 200 | -
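As an illustration of how such a dataset is typically consumed in a deep learning framework, the following hypothetical PyTorch loader reads a folder of lesion JPEGs together with a CSV of labels. The file layout and the column names image_name and target are assumptions based on how the ISIC challenge data are commonly distributed, not a prescribed format.

```python
# Hypothetical loader for a dermoscopy dataset laid out as a folder of JPEG
# images plus a CSV of image names and labels. Paths and column names below
# (image_name, target) are assumptions for illustration only.
import pandas as pd
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class DermoscopyDataset(Dataset):
    def __init__(self, csv_path: str, image_dir: str, transform=None):
        self.labels = pd.read_csv(csv_path)        # expected columns: image_name, target
        self.image_dir = Path(image_dir)
        self.transform = transform

    def __len__(self) -> int:
        return len(self.labels)

    def __getitem__(self, idx: int):
        row = self.labels.iloc[idx]
        image = Image.open(self.image_dir / f"{row['image_name']}.jpg").convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, int(row["target"])

# Example (hypothetical paths): DermoscopyDataset("train_labels.csv", "train_images/",
#                                                 transform=train_transforms)
```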
4.2. Evaluation Standards

When constructing a model using deep learning, accuracy should always be a central concern. However, when approaching a classification problem, it is essential to take into account both the accuracy and the frequency of any misclassifications. It is therefore essential to have a technique that can determine what percentage of predictions are correct and what percentage are not. The confusion matrix is a tool that can assist with this: it is an N-by-N matrix used to evaluate how successfully a model handles a classification problem, and the higher the scores derived from it, the more accurate the model. Figure 5 depicts the confusion matrix for a scenario that involves two distinct classes. Accuracy, sensitivity, and specificity are calculated from the confusion matrix using the following equations.

Figure 5: Confusion Matrix.

Accuracy = (TP + TN) / (TP + FP + FN + TN)    (1)

Sensitivity = TP / (TP + FN)    (2)

Specificity = TN / (TN + FP)    (3)

4.3. Comparison of Results

This section compares the current state of the art in skin lesion segmentation in terms of accuracy, sensitivity, and specificity.

Reference | Method | Dataset | Accuracy (%) | Advantages and Disadvantages
[23] | DCGAN | ISIC 2017 | 76.15 | Applicability to dermatology is limited because publicly available dermatological datasets are typically small and contain obstructions.
[24] | FocusNet | ISIC 2017 | 92.14 | To achieve optimal segmentation, the network makes accurate predictions for each pixel; however, it suffers from a lack of sensitivity to data-sensitivity metrics.
[24] | U-Net | ISIC 2017 | 93.60 | No additional post-processing is needed because the model accurately captures the lesion region.
[25] | LIN | ISIC 2017 | 95.00 | Shows a remarkable ability to meet the challenge by obtaining dermatoscopy features with the highest average accuracy and sensitivity.
[26] | Pixel-wise | PH2 | 86.90 | The technique achieved a segmentation accuracy of over 90% despite the presence of artefacts such as hair and air/oil bubbles.
[27] | Adversarial network | ISIC 2016 | 97.00 | Helps raise overall segmentation accuracy with consistent results, but sensitivity was not optimal and edge precision is insufficient.
[28] | SVM | PH2 | 92.50 | The suggested segmentation method yields more accurate results than other methods in the literature.
[29] | CNN | ISIC 2018 | 92.40 | More accurate, sensitive, and specific during training and testing than state-of-the-art methods.
[30] | DFCN | PH2 | 95.37 | Highly effective in broad application and excelled at the finer points where precision was most important.
[31] | MRF and stochastic region merging | PH2 | 91.51 | Provides more precise segmentation, which helps automatically locate the skin lesion for subsequent analysis by dermatologists.
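For completeness, the metrics defined in Equations (1)-(3) can be computed directly from a binary confusion matrix. The sketch below uses scikit-learn and toy labels purely as an illustration, not results from any reviewed study.

```python
# Illustrative computation of accuracy, sensitivity, and specificity
# (Equations (1)-(3)) from a binary confusion matrix using scikit-learn.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = melanoma, 0 = benign (toy labels)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy    = (tp + tn) / (tp + fp + fn + tn)
sensitivity = tp / (tp + fn)         # true positive rate (recall for melanoma)
specificity = tn / (tn + fp)         # true negative rate

print(accuracy, sensitivity, specificity)
```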
5. Conclusion

In this research, a literature review is conducted on the use of neural networks to detect and classify skin malignancies. These procedures cause no discomfort and are not harmful in any way. Skin cancer diagnosis involves tasks such as preparing data, dividing images into segments, extracting features, and categorising them. The primary focus of this research was the categorization of lesion pictures using ANNs, CNNs, KNNs, and RBFNs as the respective network types. Every algorithm has both positive and negative aspects, and the success or failure of a project hinges almost entirely on the classification strategy used. The convolutional neural network (CNN), however, yields better results when classifying image data, since it is more closely connected to computer vision than other types of neural networks.

The majority of studies on skin cancer diagnosis have concentrated on determining whether or not a specific lesion image contained malignant cells. However, current research cannot answer patients' concerns about whether a specific skin cancer symptom occurs on only one side of the body; the studies done so far have all dealt with the classification of single lesion images. To investigate this frequently asked question, future studies may make use of full-body photographs, and autonomous full-body photography will automate and speed up the process of taking pictures.

The idea of auto-organization is relatively new to the field of deep learning. Auto-organization is a form of unsupervised learning that searches for characteristics and finds relations or patterns in the image samples contained in a dataset. The enhanced feature representations that can be recovered by expert systems are a direct result of auto-organizational methods, which are realised as convolutional neural networks. The auto-organization paradigm has not yet moved past the testing and prototyping stage; however, a thorough grasp of it can assist the development of more accurate image processing systems in the future. This is especially important in medical imaging, where a thorough examination of even the most minute details is essential to arriving at an accurate diagnosis.

References

[1] Cancer statistics center, https://cancerstatisticscenter.cancer.org, 2023. Accessed: 2024-05-03.
[2] G. Alwakid, W. Gouda, M. Humayun, N. U. Sama, Melanoma detection using deep learning-based classifications, Healthcare (Basel) 10 (2022) 2481.
[3] P. Bansal, R. Garg, P. Soni, Detection of melanoma in dermoscopic images by integrating features extracted using handcrafted and deep learning models, Comput. Ind. Eng. 168 (2022) 108060.
[4] A. A. Adegun, S. Viriri, Deep learning-based system for automatic melanoma detection, IEEE Access 8 (2020) 7160–7172.
[5] D. Chaudhary, P. Agrawal, V. Madaan, Bank cheque validation using image processing, in: Advanced Informatics for Computing Research: Third International Conference, ICAICR 2019, Shimla, India, June 15–16, 2019, Revised Selected Papers, Part I 3, Springer, 2019, pp. 148–159.
[6] V. Madaan, A. Roy, C. Gupta, P. Agrawal, A. Sharma, C. Bologa, R. Prodan, XCovNet: chest X-ray image classification for COVID-19 early detection using convolutional neural networks, New Generation Computing 39 (2021) 583–597.
[7] N. Mohod, P. Agrawal, V. Madaan, YOLOv4 vs YOLOv5: object detection on surveillance videos, in: International Conference on Advanced Network Technologies and Intelligent Computing, Springer, 2022, pp. 654–665.
[8] N. Mohod, P. Agrawal, V. Madaan, A novel approach for surveillance compression using neural network technique, International Research Journal of Multidisciplinary Technovation 6 (2024) 77–89. URL: https://journals.asianresassoc.org/index.php/irjmt/article/view/1607. doi:10.54392/irjmt2436.
[9] Z. Yu, X. Jiang, F. Zhou, J. Qin, D. Ni, S. Chen, B. Lei, T. Wang, Melanoma recognition in dermoscopy images via aggregated deep convolutional features, IEEE Trans. Biomed. Eng. 66 (2019) 1006–1016.
[10] S. M. Jaisakthi, Mirunalini, C. Aravindan, R. Appavu, Classification of skin cancer from dermoscopic images using deep neural network architectures, Multimed. Tools Appl. 82 (2023) 15763–15778.
[11] Z. Yu, J. Nguyen, T. D. Nguyen, J. Kelly, C. Mclean, P. Bonnington, L. Zhang, V. Mar, Z. Ge, Early melanoma diagnosis with sequential dermoscopic images, IEEE Trans. Med. Imaging 41 (2022) 633–646.
[12] M. B. Alazzam, F. Alassery, A. Almulihi, Diagnosis of melanoma using deep learning, Math. Probl. Eng. 2021 (2021) 1–9.
[13] S. Albahli, N. Nida, A. Irtaza, M. H. Yousaf, M. T. Mahmood, Melanoma lesion detection and segmentation using YOLOv4-DarkNet and active contour, IEEE Access 8 (2020) 198403–198414.
[14] Q. Abbas, F. Ramzan, M. U. Ghani, Acral melanoma detection using dermoscopic images and convolutional neural networks, Vis. Comput. Ind. Biomed. Art 4 (2021).
[15] M. Sharma, S. K. Singh, P. Agrawal, V. Madaan, Classification of uterine cervical cancer histology image using active contour region based segmentation, International Journal of Control Theory and Applications 9 (2016) 31–40.
[16] N. C. F. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, A. Halpern, Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC), in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, 2018.
[17] N. C. F. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, A. Halpern, Skin lesion analysis toward melanoma detection: A challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC), in: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, 2018.
[18] N. Codella, V. Rotemberg, P. Tschandl, M. E. Celebi, S. Dusza, D. Gutman, B. Helba, A. Kalloo, K. Liopyris, M. Marchetti, H. Kittler, A. Halpern, Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC), 2019. arXiv:1902.03368.
[19] P. Tschandl, C. Rosendahl, H. Kittler, The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, Sci. Data 5 (2018) 180161.
[20] P. Tschandl, C. Rosendahl, H. Kittler, The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions, Sci. Data 5 (2018) 180161.
[21] V. Rotemberg, N. Kurtansky, B. Betz-Stablein, L. Caffery, E. Chousakos, N. Codella, M. Combalia, S. Dusza, P. Guitera, D. Gutman, A. Halpern, B. Helba, H. Kittler, K. Kose, S. Langer, K. Lioprys, J. Malvehy, S. Musthaq, J. Nanda, O. Reiter, G. Shih, A. Stratigos, P. Tschandl, J. Weber, H. P. Soyer, A patient-centric dataset of images and metadata for identifying melanomas using clinical context, Sci. Data 8 (2021).
[22] T. Mendonca, P. M. Ferreira, J. S. Marques, A. R. S. Marcal, J. Rozeira, PH2 - a dermoscopic image database for research and benchmarking, in: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, 2013.
[23] D. Bisla, A. Choromanska, R. S. Berman, J. A. Stein, D. Polsky, Towards automated melanoma detection with deep learning: Data purification and augmentation, in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE, 2019.
[24] C. Kaul, S. Manandhar, N. Pears, FocusNet: An attention-based fully convolutional network for medical image segmentation, in: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), IEEE, 2019.
[25] G. M. Venkatesh, Y. G. Naresh, S. Little, N. E. O'Connor, A deep residual architecture for skin lesion segmentation, Lecture Notes in Computer Science 11041 (2018).
[26] Y. Li, L. Shen, Skin lesion analysis towards melanoma detection using deep learning network, Sensors (Basel) 18 (2018) 556.
[27] A. Youssef, D. D. Bloisi, M. Muscio, A. Pennisi, D. Nardi, A. Facchiano, Deep convolutional pixel-wise labeling for skin lesion image segmentation, in: 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), IEEE, 2018.
[28] Y. Peng, N. Wang, Y. Wang, M. Wang, Segmentation of dermoscopy image using adversarial networks, Multimed. Tools Appl. 78 (2019) 10965–10981.
[29] L. Singh, R. R. Janghel, S. P. Sahu, Designing a retrieval-based diagnostic aid using effective features to classify skin lesion in dermoscopic images, Procedia Comput. Sci. 167 (2020) 2172–2180.
[30] J.-A. Almaraz-Damian, V. Ponomaryov, S. Sadovnychiy, H. Castillejos-Fernandez, Melanoma and nevus skin lesion classification using handcraft and deep learning feature fusion via mutual information measures, Entropy 22 (????).
[31] M. Rizzi, C. Guaragnella, Skin lesion segmentation using image bit-plane multilayer approach, Appl. Sci. (Basel) 10 (2020) 3045.