Artificial Intelligence-Based Method for Face Skin Diagnostic⋆
                                Olga Pavlova1,∗,†, Vitalii Alekseiko1,∗,†, Vladyslav Karabaiev1,2,† and Andrii Kuzmin1,†
1 Khmelnytskyi National University, Institutska str., 11, Khmelnytskyi, 29016, Ukraine
2 Healthy Face Clinic, Stepana Bandery str., 5a, Khmelnytskyi, 29000, Ukraine



Abstract
The skin of the face is a complex organ, the condition of which can vary due to various factors such as genetics, lifestyle, environmental conditions and age. An accurate assessment of the condition of the skin is critical for the correct selection of care and treatment methods, which stimulates the development of technologies for its analysis. Recent advances in artificial intelligence (AI) provide new opportunities for automated skin analysis, which improves the accuracy of diagnosis and the efficiency of procedures. This work focuses on the review and analysis of the application of facial skin analysis systems integrated with artificial intelligence algorithms. It is important to understand the working principles of such systems and their potential in practical application both in cosmetology and in other medical fields. The purpose of this work is to study the technical aspects of creating such skin analyzers, in particular their software part, and to evaluate their practical effectiveness based on modern machine learning algorithms.

Keywords
Artificial Intelligence (AI), facial diagnostic, decision support, IT solutions for medicine



1. Introduction
Skin health is a critical aspect of overall well-being, and early detection of skin conditions plays a vital role in effective treatment and prevention. Facial skin, in particular, is highly susceptible to various dermatological issues, such as acne, hyperpigmentation, dryness, and signs of aging. Traditional skin diagnostic methods often require clinical expertise, specialized equipment, and time-consuming procedures. In recent years, advancements in artificial intelligence (AI) have offered new opportunities to enhance skin diagnostic processes, providing efficient and accurate solutions. Artificial intelligence, in particular machine learning, is opening new horizons in skin diagnostics, allowing the creation of systems capable of analyzing skin images and making recommendations for care and treatment based on identified problems.
   At this stage of technology development, devices that assess the condition of the skin are already available; both companies and researchers offer new approaches to facial skin diagnostics. This paper proposes an AI-based method for face skin diagnostics, utilizing data obtained from a smart skin analyser. The smart skin analyser collects comprehensive facial skin data, including moisture levels, pigmentation patterns, pore size, and other relevant features. By integrating this data with AI algorithms, the proposed method aims to automate and optimize the diagnostic process, delivering reliable and consistent results.
   Our approach leverages neural networks and image processing algorithms to analyse the facial skin's characteristics. The proposed method identifies various skin conditions, provides personalized recommendations based on the analysed data, and offers a non-invasive, efficient, and accessible solution for users seeking professional-level skin analysis and care in real time.



IDDM’2024: 7th International Conference on Informatics & Data-Driven Medicine, November 14-16, 2024, Birmingham, UK.
∗ Corresponding author.
† These authors contributed equally.
pavlovao@khmnu.edu.ua (O. Pavlova); vitalii.alekseiko@gmail.com (V. Alekseiko); vladkarabaev@gmail.com (V. Karabaiev); andriy1731@gmail.com (A. Kuzmin).
0000-0003-2905-0215 (O. Pavlova); 0000-0003-1562-9154 (V. Alekseiko); 0009-0005-6489-225X (A. Kuzmin).
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


2. Related works
In the course of the study, an analysis of recent scientific publications in the field of skin
diagnostics was carried out.
   In [1] an AI-based facial skin diagnosis system (Dr. AMORE®) uses facial images of Korean women
to analyse wrinkles, pigmentation, skin pores, and red spots. The system is trained using
clinical expert evaluations and deep learning.
   The aim of [2] is to evaluate the current state of AI-based techniques used in combination with
non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM),
optical coherence tomography (OCT), and dermoscopy. It also aimed to determine whether the
application of AI-based techniques can lead to improved diagnostic accuracy of melanoma.
   The objective of [3] is to design a system that combines metaheuristic optimizers with various
AI based classifiers to detect and diagnose skin diseases. In order to accomplish this objective,
numerical and image datasets have been taken, pre-processed, and visually analysed in order to
comprehend their patterns.
   In [4] the propensity of skin cancer to metastasize highlights the importance of early detection
for successful treatment. This narrative review explores the evolving role of artificial intelligence
(AI) in diagnosing head and neck skin cancers from both radiological and pathological perspectives.
   The proposed in [5] model has the potential to aid qualified healthcare professionals in the
diagnosis of melanoma. Furthermore, the authors propose a mobile application to facilitate
melanoma detection in home environments, providing added convenience and accessibility.
   The paper [6] delves into unimodal models’ methodologies, applications, and shortcomings
while exploring how multimodal models can enhance accuracy and reliability.
   The study [7] presents an automated skin lesion detection and classification technique utilizing
optimized stacked sparse autoencoder (OSSAE) based feature extractor with backpropagation
neural network (BPNN), named the OSSAE-BPNN technique.
   The insights in [8] demonstrate the bias towards deep learning methods and the shortage of
studies on rare and precancerous skin lesions.
   In paper [9] five different algorithms of artificial intelligence have been selected and used to
skin disease dataset.
   The purpose of the study [10] was to assess the diagnostic accuracy of the teledermoscopy
method using the FotoFinder device as well as the Moleanalyzer Pro artificial intelligence (AI)
Assistant and to compare them with the face-to-face clinical examination for the diagnosis of
melanoma confirmed with histopathology.
   In [11] we propose a methodology for the consideration of civil-legal grounds in the medical
decision-making process.
   The paper [12] proposes a health recommender system for smart cities. The methodology
proposes the smart assignment of patients to the healthcare institutions located closest to
them.
   It is worth noting that, alongside scientific developments, many new devices are appearing
on the market that can measure skin parameters and even predict the result of surgical
correction in plastic surgery.
   For example, the VECTRA H2 imaging system [13] is a portable hardware skin diagnosis system
with volumetric body imaging for use in cosmetology, aesthetic medicine, and dermatology. The
features of VECTRA H2 from Canfield Scientific:
   - Automatic merging: three face or body shots are automatically merged into one 3D image
        by VECTRA software.
   - Accurate assessment of contours: the gray visualization mode allows you to evaluate the
        contours of the face and body without being distracted by color when planning and
        studying the result of corrective procedures.
   - Face and body measurement in automatic mode: volumetric visualization (3D mode) and
        digital data help your patients understand the underlying problems.
   - Breast Sculptor software application: technology for creating three-dimensional breast
        models, based on selected implants, taking into account gravity, shape and location.
   - Visual comparison: visualization of several breast augmentation surgery scenarios by
        parameters, sizes and style of implants.
  -     Visualization of expectations: illustrative display of benefits after breast augmentation
        surgery.
    - Mastopexy program. A software application for simulating lifting operations taking into
        account areas of skin excision.
    - Volumetric measurements of the body: automatic measurement of the circumference and
        volume of body contours.
    - Quantification of the subcutaneous structures of the face: Canfield's patented technology
        separates the unique red and brown color shades of facial skin. This allows you to get a
        complete picture of the skin condition and improve the quality of imaging.
    - Measurement of volume change: volume data is automatically measured with one click of
        the mouse in grayscale mode with parallel color display of changes in facial contours.
    - Markerless tracking: a dynamic assessment of changes in the surface of the facial skin is
        carried out: alignment, direction and final result.
    - Full picture of changes: the program creates a holistic picture of change, reflecting all the
        hopes and expectations of your patient.
    3D LifeViz® Mini [14] is the most compact 3D system for skin analysis and modelling, a
convenient solution for cosmetologists, dermatologists, cosmetic and plastic surgeons. It
analyses the condition of the patient's skin according to 6 parameters and reproduces the image
of the face on the screen in 3D format. The patient can see what their face could look like
after contour plastic or surgical correction. The system is based on a special type of
stereophotogrammetry, where 2D images are automatically combined into a three-dimensional
representation.
    LifeViz® technology makes it possible to quantify volume changes and capture the small
details of the skin surface with extreme accuracy. An example of an image created by 3D LifeViz®
is presented in Figure 1.




Figure 1: Example of the image created by 3D LifeViz® Mini system for skin analysis and
modelling [14]

Taking into account the relevance of applying modern information technologies in medicine,
namely the analysis of the results taken from the Smart skin analyzer [15] for facial skin
defect detection, it was decided to develop a methodology of neural network application for
solving this issue.
Therefore, the purpose of the research is:
1) to consider convolutional neural network architectures for medical image analysis;
2) to evaluate the effectiveness of models for the task of detecting and classifying skin defects;
3) to consider possible ways of improving the performance of models by changing the input
parameters.
3. Methodology
The data for the experiment were obtained using the skin analyzer AISIA [15], which is presented
in Figure 2. It provides the analysis of the skin according to the following parameters:
   - Pore size (spectral visualization of RGB pores).
   - The presence of blackheads and postacne: an analysis of all spots of a round shape with a
       color darker than usual.
   - Age changes: the degree of wrinkles and the depth of creases.
   - Skin texture: imaging changes in relief, texture of the dermis, as well as predicting the
       degree of future changes.
   - The level of secretion of sebum and the localization of black spots.
   - Pigmentation zones: an image not only of the actual state, but also of forecasting future
       formations.
   - Hydration.
   - Areas of sensitivity: determination of zonal sensitivity, its susceptibility to allergic reactions.
   - Brown zones: an image of metabolic processes in cells, areas of current recovery.
   - Injury by UV rays: an image of pigmentation at different levels of the epidermis. Fixation of
       the size and depth of such spots.
   - Diagnosis of age-related changes: a picture of the future aging of the dermis and wrinkles
       if the client does not change their care.




Figure 2: Smart portable skin analyzer AISIA [15]

During the research, a wide range of methods was used, including general scientific methods:
theoretical (modeling, analysis, synthesis), empirical (observation, comparison, experiment).
Medical diagnostic methods were also used to form the dataset, and artificial intelligence tools
were used, in particular, machine learning models for image analysis.
   As machine learning technologies demonstrate their effectiveness in the analysis of medical
images [16,17], we consider it appropriate to examine basic convolutional neural network (CNN)
models for facial skin defect detection.

3.1.    Residual Networks

The main idea of ResNet is to learn the residual mapping [22]:
                                     F(x) = H(x) − x,                                             (1)
  where:
  F(x) – residual mapping,
  H(x) – desired mapping,
  x – input.
  Thus, H(x) from formula (1) can be represented as:
                                     H(x) = F(x) + x,                                             (2)
    A typical residual block consists of two or more convolutional layers with batch normalization
(BN) and ReLU activation:
                                     y = F(x; {Wi}) + x,                                     (3)
    where:
    Wi – weights of the convolutional layers;
    y – output of the block.
    Key Properties of ResNet are Identity Mapping and Training Deep Networks. The skip
connections enable identity mapping, facilitating gradient flow and addressing the vanishing
gradient problem. ResNets can have hundreds or thousands of layers while remaining easier to
train compared to traditional deep networks.
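As an illustration of equations (1)-(3), the following minimal numpy sketch implements a residual block, with fully connected layers standing in for the convolution-BN-ReLU stack of a real ResNet; the weights are random and purely illustrative:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = F(x; {W1, W2}) + x, as in equation (3); dense layers
    stand in for the convolutional layers of a real ResNet."""
    f = relu(x @ W1) @ W2   # residual mapping F(x)
    return relu(f + x)      # skip connection adds the identity path

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
W1 = rng.normal(size=(8, 8)) * 0.1
W2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, W1, W2)
print(y.shape)  # (1, 8): the skip connection requires matching shapes
```

Note that if the weights are all zero, F(x) = 0 and the block reduces to the identity mapping (followed by ReLU), which is exactly the property that makes very deep ResNets trainable.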

3.2.   Dense Convolutional Network

In DenseNet, each layer receives inputs from all preceding layers. The output of layer l can be
computed as:
                                     xl = Hl(x0, x1, . . . , xl−1),                          (4)
   where:
   Hl – operations performed by the layer;
   xi – feature maps from all preceding layers.
   Instead of adding the inputs as in ResNet, DenseNet concatenates feature maps:
                                     xl = [xl−1, xl−2, . . . , x0],                          (5)
   Key Properties of DenseNet are Feature Reuse and Gradient Flow [18]. DenseNet emphasizes
feature reuse, which reduces the number of parameters while still maintaining high accuracy. The
dense connectivity pattern allows gradients to flow through many paths during backpropagation,
enhancing learning.
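A minimal numpy sketch of the dense connectivity in equations (4)-(5); dense layers again stand in for the BN-ReLU-Conv composite of a real DenseNet, and the growth rate controls how many new feature channels each layer adds:

```python
import numpy as np

def dense_block(x0, num_layers, growth_rate, rng):
    """Each layer H_l sees the concatenation of all preceding feature
    maps (equation 4); its output is concatenated to them rather than
    added, as in equation (5)."""
    features = x0  # running concatenation [x0, x1, ..., x_{l-1}]
    for _ in range(num_layers):
        W = rng.normal(size=(features.shape[1], growth_rate)) * 0.1
        new = np.maximum(features @ W, 0.0)  # H_l on all previous maps
        features = np.concatenate([features, new], axis=1)
    return features

rng = np.random.default_rng(1)
x0 = rng.normal(size=(1, 16))
out = dense_block(x0, num_layers=4, growth_rate=12, rng=rng)
print(out.shape)  # channels grow linearly: 16 + 4 * 12 = 64
```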

3.3.   EfficientNet

EfficientNet uses a compound scaling method, which balances the scaling of depth d, width w, and
resolution r. The scaling can be defined as:
                                     d = α^k, w = β^k, r = γ^k,                              (6)
   where:
   k – constant;
   α, β, γ – scaling coefficients.
   EfficientNet starts from a baseline model and scales it. For instance, the total number of
parameters P in the model can be expressed as:
                                        P = c ∙ w^2 ∙ r^2,                                   (7)
   where c – constant that defines the efficiency of the architecture.
   Key Properties of EfficientNet are Optimized Architecture and Efficiency. The architecture is
optimized through neural architecture search, allowing for a balance between model size and
performance. EfficientNet achieves state-of-the-art accuracy with fewer parameters compared to
previous models [19].
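The compound scaling of equation (6) can be illustrated numerically. The coefficients below are the ones reported for the EfficientNet-B0 baseline (α = 1.2, β = 1.1, γ = 1.15), chosen in the original search so that α·β²·γ² ≈ 2, i.e. computation roughly doubles with each increment of the scaling exponent:

```python
# Compound scaling: depth, width and resolution are scaled jointly by
# a single exponent phi (denoted k in equation (6)).
alpha, beta, gamma = 1.2, 1.1, 1.15  # EfficientNet-B0 coefficients

def scale(phi):
    d = alpha ** phi  # depth multiplier
    w = beta ** phi   # width multiplier
    r = gamma ** phi  # resolution multiplier
    return d, w, r

for phi in range(4):
    d, w, r = scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")

# Constraint used when searching for the coefficients:
print(round(alpha * beta ** 2 * gamma ** 2, 2))  # close to 2
```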

3.4.   MobileNet

MobileNet introduces depthwise separable convolutions, which factor the standard convolution
into two separate layers: Depthwise Convolution and Pointwise Convolution [20].
   Depthwise convolution can be represented as a single filter K applied separately to each
input channel xm:
                                     ym = K ⊙ xm,                                            (8)
   Pointwise convolution can be represented as a 1×1 convolution with weights Wp that combines
the output of the depthwise layer into the feature map z:
                              z = Pointwise(ym) = Wp ∙ ym,                                   (9)
   Key Properties of MobileNet are Efficiency and Width Multiplier. The reduction in the number
of parameters and computations compared to standard convolutions makes MobileNet highly
suitable for mobile and edge devices. MobileNet allows for a width multiplier α to reduce the
number of channels in each layer, further optimizing the model size.
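The parameter saving of the factorization in equations (8)-(9) is easy to quantify: a standard k×k convolution needs k²·c_in·c_out weights, while the depthwise-plus-pointwise pair needs only k²·c_in + c_in·c_out, a reduction factor of 1/c_out + 1/k². A small arithmetic sketch:

```python
def conv_params(k, c_in, c_out):
    """Weights of a standard k x k convolution."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One k x k depthwise filter per input channel (equation 8),
    followed by a 1 x 1 pointwise convolution (equation 9)."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = conv_params(k, c_in, c_out)                 # 73728 weights
sep = depthwise_separable_params(k, c_in, c_out)  # 8768 weights
print(std, sep, round(sep / std, 3))  # roughly 8.4x fewer parameters
```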

3.5.    Dataset Structure

The Skin Disease Classification Dataset from Kaggle [21] was used to test the performance of the
models. This dataset contains a collection of photographs of human faces divided into three distinct
classes: acne, bags under the eyes, and facial redness. To ensure a comprehensive presentation and
accurate classification, it is advisable to consider three photos for each person. These images
include a front view along with side profiles on both the right and left. This multi-angle approach
not only increases the variability of the data, but also helps the models learn to identify and
differentiate the subtle nuances of skin diseases that may be missed in a single photo. The dataset's
structured format and comprehensive documentation make it an invaluable resource for
developing, training, and fine-tuning machine learning algorithms aimed at classifying skin
diseases, ultimately contributing to advances in dermatology diagnostics and personalized skin
care solutions. All data are presented in a generalized form and used exclusively within the scope
of scientific research. The work used data from open resources, supplemented with medical data
from clinical practice. Before using medical data, permission for their use was obtained from each
of the patients. The research provides the principles of responsible artificial intelligence.
Confidentiality of information is guaranteed.
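A hedged sketch of how such a three-class image dataset can be indexed for training; the folder and class names below are assumptions for illustration and may not match the actual layout of the Kaggle dataset [21]:

```python
from pathlib import Path

# Hypothetical folder names, one per class in the dataset
CLASSES = ["acne", "eye_bags", "redness"]
LABELS = {name: idx for idx, name in enumerate(CLASSES)}

def index_dataset(root):
    """Collect (path, label) pairs; each class folder is expected to
    hold the three per-person views (front, left, right)."""
    samples = []
    for name in CLASSES:
        for img in sorted(Path(root, name).glob("*.jpg")):
            samples.append((img, LABELS[name]))
    return samples
```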

4. Models’ comparison
For detecting skin defects such as acne, redness, etc., the choice of CNN architecture will depend
on factors such as dataset size, problem complexity, and available computing resources. The most
commonly used CNN architectures include:
   – Residual Networks (ResNet);
   – Dense Convolutional Network (DenseNet);
   – EfficientNet;
   – MobileNet.
   Table 1 shows a comparison of the capabilities of the presented architectures.
   Using ResNet enables deep learning, thanks to the use of residual connections, which allows
you to train deeper networks without facing the gradient vanishing problem. This can help the
model capture more complex patterns in skin texture and blemishes. The model shows high
performance in the analysis of medical images.
   Pre-trained ResNet models (such as ResNet-50 or ResNet-101) can be fine-tuned on datasets to
improve performance, especially when using limited datasets.
   This model is recommended for complex tasks where high accuracy is required.
   DenseNet connects each layer to every other layer in a feed-forward fashion, which promotes
feature reuse and results in stronger gradient flow. This is useful when detecting subtle skin
imperfections such as acne or discoloration.
   Compared with ResNet, DenseNet achieves high accuracy using fewer parameters, which can
reduce training time while maintaining high performance.
   DenseNet is widely used in medical image classification tasks, so the model can be effective for
dermatological image analysis.
The model is recommended for medium and large data sets with a focus on achieving high
accuracy, especially in conditions of limited computing resources.
   EfficientNet has high performance while using less computation. The model systematically
scales width, depth, and resolution, making it extremely efficient in terms of both accuracy and
computational resources.
   The EfficientNet pre-trained models are very effective, when fine-tuned, for medical imaging
tasks, including skin defect detection. EfficientNet has been proven to outperform models such as
ResNet and DenseNet in various medical image classification tasks and, at the same time, is more
resource efficient.
   The use of the model is recommended for projects that require a balance between high accuracy
and efficiency, especially when working with large-scale images or in cases of limited computing
power.

Table 1
Comparison of the CNN architectures capabilities
     CNN            Key Features         Advantages             Disadvantages           Common
 Architecture                                                                         Applications
   Residual     Depth increases     Avoids vanishing         Large parameter        Image
  Networks      with stacked        gradients.               size for deeper        classification,
                residual blocks.    Deeper networks          versions.              object
                Skip connections    improve                  Computationally        detection, face
                allow gradient      performance.             intensive on larger    recognition
                flow.               Robust and widely        ResNet variants.
                                    used.
    Dense       Dense connections Reduces number of          High                   Image
Convolutional throughout the        parameters.              computational cost     classification,
  Network       network.            Efficient feature        in memory due to       medical image
                Fewer parameters reuse and learning.         dense connections.     analysis
                due to feature      Good for small           More prone to
                reuse.              datasets.                overfitting on small
                Uses growth rate                             data.
                to control new
                features at each
                layer.
 EfficientNet   Optimized for both Good accuracy with        Complex design.         Mobile vision,
                efficiency and      less computation.        Heavier versions            image
                accuracy.           Flexible                 still require           classification,
                Compound scaling architecture for            substantial            object detection
                (balance between    different                computational
                depth, width, and   constraints.             power.
                resolution).        Scalable.
  MobileNet     Depthwise           Highly efficient on      – Lower accuracy        Mobile vision,
                separable           resource-                compared to larger     real-time image
                convolutions to     constrained              networks.                processing
                reduce              devices.                 – Limited model
                computation.        Lightweight and          capacity for
                Optimized for       fast.                    complex tasks.
                mobile and edge      Suitable for mobile
                devices.            applications.

MobileNet is a lightweight and fast model optimized for efficiency and ideal for real-time detection
of skin defects on mobile or embedded devices.
    The model has low computational requirements because it uses depthwise separable convolutions,
which greatly reduce the number of parameters. Thus, the model is ideal for applications where
computing resources are limited.
    Based on the above, it can be concluded that the model is suitable for deployment on mobile
platforms and is an excellent solution for applications that detect skin defects using a smartphone
camera.

5. Experiments & Results
Testing of the proposed models revealed that the ResNet and EfficientNet models demonstrate low
accuracy on the test sample, while DenseNet and MobileNet perform the task of recognizing skin
defects with high accuracy (Figure 3).
Figure 3: Model Training Accuracy Comparison

The loss rate quantifies the error the model makes on the test set; it usually reflects how well
the predicted probabilities agree with the actual values. This indicator was the lowest for
MobileNet, with DenseNet somewhat behind (Figure 4). The loss values for ResNet and EfficientNet
turned out to be too high, so it was concluded that it is not appropriate to use these models for
the task of classifying skin defects.
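For classification models of this kind, the loss in question is typically the categorical cross-entropy; a minimal numpy sketch (the probabilities below are invented, purely for illustration):

```python
import numpy as np

def cross_entropy(p_pred, y_true, eps=1e-12):
    """Mean categorical cross-entropy: low when the probability
    assigned to the true class is close to 1."""
    p = np.clip(p_pred[np.arange(len(y_true)), y_true], eps, 1.0)
    return float(-np.mean(np.log(p)))

# Three images, three classes (acne, bags under the eyes, redness)
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
print(round(cross_entropy(probs, labels), 3))
```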
   Although the models show high accuracy and low loss on the training data set, on the
validation set the accuracy is slightly lower and the loss is higher, indicating overfitting of the
model. Thus, we propose to change the approach to the formation of the dataset, by including more
parameters, in particular, adding images made using the ultraviolet spectrum, as well as converting
images into heat maps. This will allow for a more qualitative assessment of the image and
contribute to a more accurate determination of the nature of the defect. In addition, it is advisable
to use sensors to determine skin moisture. The main parameters of the proposed dataset are listed
in Table 2.




Figure 4: Model Training Loss Comparison
Table 2
The main parameters of the proposed dataset
           Parameter                                Unit           Data type
 ID                                                 None           int
 Moisture, Sensitivity, Pigment, UV spots,
 Texture, Blackheads, Pores, Stains, Color,
 UV acne, Sebum, Sebumt, Acne, Wrinkle,
 Porphyrin                                          %              float
 Front photo, Left-side photo, Right-side
 photo, Front UV photo, Left-side UV photo,
 Right-side UV photo, Front heatmap photo,
 Left-side heatmap photo, Right-side
 heatmap photo                                      None           jpg image

6. Conclusions & Future work
Skin health is an essential component of overall well-being, and the early identification of skin
conditions is crucial for effective treatment and prevention. The facial skin, in particular, is
especially prone to various dermatological concerns, including acne, hyperpigmentation, dryness,
and signs of aging.
   In the course of this study the main models of convolutional neural networks are considered,
the peculiarities of their application in the tasks of medical image analysis, in particular for the
detection of facial skin defects, are analyzed. The advantages and disadvantages of the most widely
used architectures, as well as the features of their application, are described. The models were
tested on a dataset including medical images. The effectiveness of the models was evaluated
according to the accuracy and loss metrics. On the basis of the conducted research, the structure of
the dataset is proposed, based on a larger number of parameters, which will significantly improve
the accuracy of the models by selecting the most relevant features and analyzing them.
   For further research, there are plans to systematically identify and extract the key features that
are critical for effectively classifying skin defects through the application of Convolutional Neural
Networks (CNNs). This involves conducting an in-depth analysis of the various characteristics
present in the dataset, such as texture, color variations, and the specific patterns associated with
each skin condition. By leveraging techniques like feature extraction and selection, we aim to
isolate the most informative attributes that contribute to the accurate identification of skin defects.
This process will not only enhance the performance of the CNN models but also improve their
interpretability, allowing for better understanding and insight into how these models make
classification decisions. Furthermore, this feature extraction phase will enable us to refine the
model architectures and optimize hyperparameters, leading to more robust and generalizable
outcomes. Ultimately, this research will contribute to the development of more precise and reliable
diagnostic tools in dermatology, potentially transforming the way skin conditions are assessed and
treated in clinical practice.

7. Acknowledgements
The authors would like to thank Healthy Face Clinic and Dr. Vladyslav Karabaiev for providing the
equipment for data gathering that made the experiments and this work possible.

8. Declaration on Generative AI
ChatGPT and Microsoft Copilot were used to rephrase sentences to improve style. The text was
carefully checked by the authors after using these tools. The authors take full responsibility
for the publication.

9. References
[1] Park, H., Park, S. R., Lee, S., Hwang, J., Lee, M., Jang, S. I.& Kim, E. (2024). Development and
     application of artificial intelligence‐based facial skin image diagnosis system: Changes in facial
     skin characteristics with ageing in Korean women. International Journal of Cosmetic Science,
     46(2), 199-208.
[2] Patel, R. H., Foltz, E. A., Witkowski, A., & Ludzik, J. (2023). Analysis of artificial intelligence-
     based approaches applied to non-invasive imaging for early detection of melanoma: a
     systematic review. Cancers, 15(19), 4694.
[3] Singh, J., Sandhu, J. K., & Kumar, Y. (2024). An analysis of detection and diagnosis of different
     classes of skin diseases using artificial intelligence-based learning approaches with hyper
     parameters. Archives of Computational Methods in Engineering, 31(2), 1051-1078.
[4] Semerci, Z. M., Toru, H. S., Çobankent Aytekin, E., Tercanlı, H., Chiorean, D. M., Albayrak, Y.,
     & Cotoi, O. S. (2024). The Role of Artificial Intelligence in Early Diagnosis and Molecular
     Classification of Head and Neck Skin Cancers: A Multidisciplinary Approach. Diagnostics,
     14(14), 1477.
[5] Orhan, H., & Yavşan, E. (2023). Artificial intelligence-assisted detection model for melanoma
     diagnosis using deep learning techniques. Mathematical Modelling and Numerical Simulation
     with Applications, 3(2), 159-169.
[6] Strzelecki, M., Kociołek, M., Strąkowska, M., Kozłowski, M., Grzybowski, A., & Szczypiński, P.
     M. (2024). Artificial Intelligence in the detection of skin cancer: state of the art. Clinics in
     Dermatology.
[7] Ogudo, K. A., Surendran, R., & Khalaf, O. I. (2023). Optimal Artificial Intelligence Based
     Automated Skin Lesion Detection and Classification Model. Computer Systems Science &
     Engineering, 44(1).
[8] Rezk, E., Haggag, M., Eltorki, M., & El-Dakhakhni, W. (2023). A comprehensive review of
     artificial intelligence methods and applications in skin cancer diagnosis and treatment:
     Emerging trends and challenges. Healthcare Analytics, 100259.
[9] Pattnayak, P., Patnaik, S., Gourisaria, M. K., Singh, S., Barik, L., & Patra, S. S. (2024, August).
     Analysis and Detection of Skin Disorders using Artificial Intelligence-based learning. In 2024
     Second International Conference on Networks, Multimedia and Information Technology
     (NMITCON) (pp. 1-5). IEEE.
[10] Yazdanparast, T., Shamsipour, M., Ayatollahi, A., Delavar, S., Ahmadi, M., Samadi, A., & Firooz,
     A. (2024). Comparison of the Diagnostic Accuracy of Teledermoscopy, Face-to-Face
     Examinations and Artificial Intelligence in the Diagnosis of Melanoma. Indian Journal of
     Dermatology, 69(4), 296-300.
[11] Hnatchuk, Y., Hovorushchenko, T., & Pavlova, O. Methodology for the development and
     application of clinical decisions support information technologies with consideration of civil-
     legal grounds. Radioelectronic and Computer Systems, pp. 33-44.
[12] Bouhissi, H. E., Tagzirt, D., Bouredjioua, F., & Pavlova, O. Health Recommender System for
     Smart Cities. CEUR Workshop Proceedings, 2023, 3426, pp. 334-343.
[13] Vectra H2 official website. URL: https://beautix.com.ua/equipment/diagnostika_skin/vectra_h2
     (Last accessed September 26, 2024).
[14] Lascos Aesthetic Medicine: Lifeviz Pro Mini. URL: https://www.lascos.com.ua/apparaty/3d-
     photo-cameri/lifeviz-pro-mini-1030785657 (Last accessed September 26, 2024).
[15] Aisia 3D Smart Face Skin Analyzer. URL:
     https://medunion.com/product/rFaTtwEYXdWC/China-Aisia-3D-Smart-Face-Skin-Analyzer-
     for-Salon-Hot-Skin-Scanner-Facial-Analyzer.html (Last accessed September 26, 2024).
[16] D. R. Sarvamangala, & R. V. Kulkarni. Convolutional neural networks in medical image
     understanding: a survey. Evolutionary Intelligence, 15(1), 1-22. 2021.
     https://doi.org/10.1007/s12065-020-00540-3
[17] Q. Zhou, Z. Huang, M. Ding, & X. Zhang. Medical Image Classification Using Light-Weight
     CNN With Spiking Cortical Model Based Attention Module, in IEEE Journal of Biomedical and
     Health Informatics, vol. 27, no. 4, pp. 1991-2002, April 2023.
     https://doi.org/10.1109/JBHI.2023.3241439
[18] H. A. Ahmed, H. M. Hama, S. I. Jalal, & M. H. Ahmed. Deep learning in grapevine leaves
     varieties classification based on dense convolutional network. Journal of Image and Graphics,
     11(1), 98–103. 2023. https://doi.org/10.18178/joig.11.1.98-103
[19] M. A. Talukder, M. A. Layek, M. Kazi, M. A. Uddin, & S. Aryal. Empowering COVID-19
     detection: Optimizing performance through fine-tuned EfficientNet deep learning architecture.
     Computers in Biology and Medicine, 168, 2023, 107789.
     https://doi.org/10.1016/j.compbiomed.2023.107789
[20] M. S. Al Reshan, K. S. Gill, V. Anand, S. Gupta, H. Alshahrani, A. Sulaiman, & A. Shaikh.
     Detection of Pneumonia from Chest X-ray Images Utilizing MobileNet Model. Healthcare,
     11(11), 2023, 1561. https://doi.org/10.3390/healthcare11111561
[21] Skin Disease Classification Dataset. Kaggle. 2023, November 16. URL:
     https://www.kaggle.com/datasets/trainingdatapro/skin-defects-acne-redness-and-bags-under-
     the-eyes (Last accessed September 26, 2024).
[22] L. Abdelrahman, M. Al Ghamdi, F. Collado-Mesa, & M. Abdel-Mottaleb. Convolutional neural
     networks for breast cancer detection in mammography: A survey. Computers in Biology and
     Medicine, 131, 2021, 104248. https://doi.org/10.1016/j.compbiomed.2021.104248