<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Artificial Intelligence-based method for face skin diagnostic</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Olga</forename><surname>Pavlova</surname></persName>
							<email>pavlovao@khmnu.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<addrLine>Institutska str., 11</addrLine>
									<postCode>29016</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vitalii</forename><surname>Alekseiko</surname></persName>
							<email>vitalii.alekseiko@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<addrLine>Institutska str., 11</addrLine>
									<postCode>29016</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vladyslav</forename><surname>Karabaiev</surname></persName>
							<email>vladkarabaev@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<addrLine>Institutska str., 11</addrLine>
									<postCode>29016</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Healthy Face Clinic</orgName>
								<address>
									<addrLine>Stepana Bandery str., 5a</addrLine>
									<postCode>29000</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Kuzmin</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<addrLine>Institutska str., 11</addrLine>
									<postCode>29016</postCode>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Artificial Intelligence-based method for face skin diagnostic</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">5C623E7B68106A1558499E49F7D64661</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial Intelligence (AI)</term>
					<term>facial diagnostic</term>
					<term>decision support</term>
					<term>IT solutions for medicine</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The skin of the face is a complex organ whose condition varies with factors such as genetics, lifestyle, environmental conditions, and age. Accurate assessment of the condition of the skin is critical for the correct selection of care and treatment methods, which drives the development of technologies for its analysis. Recent advances in artificial intelligence (AI) provide new opportunities for automated skin analysis, improving the accuracy of diagnosis and the efficiency of procedures. This work focuses on the review and analysis of facial skin analysis systems integrated with artificial intelligence algorithms. It is important to understand the working principles of such systems and their potential for practical application both in cosmetology and in other medical fields. The purpose of this work is to study the technical aspects of creating such skin analyzers, in particular their software, and to evaluate their practical effectiveness based on modern machine learning algorithms.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Skin health is a critical aspect of overall well-being, and early detection of skin conditions plays a vital role in effective treatment and prevention. Facial skin, in particular, is highly susceptible to various dermatological issues, such as acne, hyperpigmentation, dryness, and signs of aging. Traditional skin diagnostic methods often require clinical expertise, specialized equipment, and time-consuming procedures. In recent years, advancements in artificial intelligence (AI) have offered new opportunities to enhance skin diagnostic processes, providing efficient and accurate solutions. Artificial intelligence, in particular machine learning, is opening new horizons in skin diagnostics, allowing the creation of systems capable of analyzing skin images and making recommendations for care and treatment based on identified problems.</p><p>At the current stage of technology development, there are already devices that can assess the condition of the skin, and both companies and researchers offer new approaches to facial skin diagnostics. This paper proposes an AI-based method for face skin diagnostics, utilizing data obtained from a smart skin analyser. The smart skin analyser collects comprehensive facial skin data, including moisture levels, pigmentation patterns, pore size, and other relevant features. By integrating this data with AI algorithms, the proposed method aims to automate and optimize the diagnostic process, delivering reliable and consistent results.</p><p>Our approach leverages neural networks and image processing algorithms to analyse facial skin characteristics. The proposed method identifies various skin conditions, provides personalized recommendations based on the analysed data, and offers a non-invasive, efficient, and accessible solution for users seeking professional-level skin analysis and care in real time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related works</head><p>In the course of the study, an analysis of the latest scientific publications in the field of skin diagnostics was carried out.</p><p>In <ref type="bibr" target="#b0">[1]</ref> an AI-based facial skin diagnosis system (Dr. AMORE®) uses facial images of Korean women to analyse wrinkles, pigmentation, skin pores, and red spots. The system is trained using clinical expert evaluations and deep learning.</p><p>The aim of <ref type="bibr" target="#b1">[2]</ref> is to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. It also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma.</p><p>The objective of <ref type="bibr" target="#b2">[3]</ref> is to design a system that combines metaheuristic optimizers with various AI-based classifiers to detect and diagnose skin diseases. To accomplish this objective, numerical and image datasets have been taken, pre-processed, and visually analysed in order to comprehend their patterns.</p><p>In <ref type="bibr" target="#b3">[4]</ref> the propensity of skin cancer to metastasize highlights the importance of early detection for successful treatment. This narrative review explores the evolving role of artificial intelligence (AI) in diagnosing head and neck skin cancers from both radiological and pathological perspectives.</p><p>The model proposed in <ref type="bibr" target="#b4">[5]</ref> has the potential to aid qualified healthcare professionals in the diagnosis of melanoma. Furthermore, the authors propose a mobile application to facilitate melanoma detection in home environments, providing added convenience and accessibility.</p><p>The paper <ref type="bibr" target="#b5">[6]</ref> delves into unimodal models' methodologies, applications, and shortcomings while exploring how multimodal models can enhance accuracy and reliability.</p><p>The study <ref type="bibr" target="#b6">[7]</ref> presents an automated skin lesion detection and classification technique utilizing an optimized stacked sparse autoencoder (OSSAE) based feature extractor with a backpropagation neural network (BPNN), named the OSSAE-BPNN technique.</p><p>The insights in <ref type="bibr" target="#b7">[8]</ref> demonstrate the bias towards deep learning methods and the shortage of studies on rare and precancerous skin lesions.</p><p>In paper <ref type="bibr" target="#b8">[9]</ref> five different artificial intelligence algorithms have been selected and applied to a skin disease dataset.</p><p>The purpose of the study <ref type="bibr" target="#b9">[10]</ref> was to assess the diagnostic accuracy of the teledermoscopy method using the FotoFinder device as well as the Moleanalyzer Pro artificial intelligence (AI) Assistant and to compare them with the face-to-face clinical examination for the diagnosis of melanoma confirmed with histopathology.</p><p>In <ref type="bibr" target="#b10">[11]</ref> we propose a methodology for the consideration of civil-legal grounds in the medical decision-making process.</p><p>The paper <ref type="bibr" target="#b11">[12]</ref> proposes a health recommender system for smart cities. The methodology proposes the smart selection of the healthcare institutions located closest to the patient.</p><p>Alongside these scientific developments, many new devices are appearing on the market that allow measuring skin parameters and even predicting the result after surgical correction in plastic surgery.</p><p>For example, the VECTRA H2 imaging system <ref type="bibr" target="#b12">[13]</ref> is a portable hardware skin diagnosis system with volumetric body imaging for use in cosmetology, aesthetic medicine, and dermatology. The features of VECTRA H2 from Canfield Scientific:</p><p>-Automatic merging: three face or body shots are automatically merged into one 3D image by VECTRA software. -Accurate assessment of contours: the gray visualization mode allows you to evaluate the contours of the face and body without being distracted by color when planning and studying the result of corrective procedures. -Face and body measurement in automatic mode: volumetric visualization (3D mode) and digital data help your patients understand the underlying problems. -Breast Sculptor software application: technology for creating three-dimensional breast models, based on selected implants, taking into account gravity, shape and location. -Visual comparison: visualization of several breast augmentation surgery scenarios by parameters, sizes and style of implants.</p><p>3D LifeViz® Mini <ref type="bibr" target="#b13">[14]</ref> is the most compact 3D system for skin analysis and modelling, a convenient solution for cosmetologists, dermatologists, and cosmetic and plastic surgeons. It analyses the condition of the patient's skin according to six parameters and renders the face on the screen in 3D. The patient can see what their face could look like after contour plastic surgery or an operation. The system is based on a special type of stereophotogrammetry, where 2D images are automatically combined into a three-dimensional representation.</p><p>LifeViz® technology allows you to quantify volume changes and resolve fine details of the skin surface with high accuracy. An example of an image created by 3D LifeViz® is presented in Figure <ref type="figure">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1:</head><p>Example of the image created by 3D LifeViz® Mini system for skin analysis and modelling <ref type="bibr" target="#b13">[14]</ref>.</p><p>Taking into account the relevance of applying modern information technologies in the medical area, namely the analysis of results obtained from the Smart skin analyzer <ref type="bibr" target="#b14">[15]</ref> for facial skin defect detection, it was decided to develop a methodology for applying neural networks to this problem. Therefore, the purpose of the research is: 1) to consider convolutional neural network architectures for medical image analysis; 2) to evaluate the effectiveness of models for the task of detecting and classifying skin defects; 3) to consider possible ways of improving the performance of models by changing the input parameters.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>The data for the experiment were obtained using the skin analyzer AISIA <ref type="bibr" target="#b14">[15]</ref>, which is presented in Figure <ref type="figure" target="#fig_2">2</ref>. It provides the analysis of the skin according to the following parameters:</p><p>-Pore size (spectral visualization of RGB pores).</p><p>-The presence of blackheads and postacne: an analysis of all spots of a round shape with a color darker than usual. -Age changes: the degree of wrinkles and the depth of creases.</p><p>-Skin texture: imaging changes in relief, texture of the dermis, as well as predicting the degree of future changes.  During the research, a wide range of methods was used, including general scientific methods: theoretical (modeling, analysis, synthesis), empirical (observation, comparison, experiment). Medical diagnostic methods were also used to form the dataset, and artificial intelligence tools were used, in particular, machine learning models for image analysis.</p><p>As machine learning technologies demonstrate their effectiveness in the analysis of medical images <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>, we consider it appropriate to consider basic convolutional neural network (CNN) models for facial skin defect detection.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Residual Networks</head><p>The main idea of ResNet is to learn the residual mapping <ref type="bibr" target="#b21">[22]</ref>:</p><formula xml:id="formula_0">F(x) = H(x) − x,<label>(1)</label></formula><p>where: F(x) -residual mapping, H(x) -desired mapping, x -input. Thus, H(x) from formula (1) can be represented as:</p><formula xml:id="formula_1">H(x) = F(x) + x,<label>(2)</label></formula><p>A typical residual block consists of two or more convolutional layers with batch normalization (BN) and ReLU activation:</p><formula xml:id="formula_2">y = F(x; {W_i}) + x,<label>(3)</label></formula></div>
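The residual mapping of formulas (1)-(3) can be sketched in a few lines of NumPy. The two-layer transform F and the vector shapes below are illustrative stand-ins (batch normalization is omitted), not the authors' implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """y = F(x; {W1, W2}) + x: a two-layer residual mapping plus a skip connection."""
    f = relu(x @ W1) @ W2   # residual mapping F(x)
    return relu(f + x)      # the identity skip connection of formula (3)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W1 = rng.standard_normal((8, 8)) * 0.01
W2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, W1, W2)
# With near-zero weights F(x) is close to 0, so the block approximates the identity
# (after ReLU) - this is why gradients flow easily through very deep stacks.
```

The skip connection is the whole point: even if the learned layers contribute nothing, the block still passes its input through, which is what makes hundreds of layers trainable.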
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Dense Convolutional Network</head><p>In DenseNet, each layer receives inputs from all preceding layers. The output of layer l can be computed as:</p><p>x_l = H_l(x_0, x_1, . . . , x_{l-1}), (4) where: H_l -operations performed by the layer; x_i -feature maps from all preceding layers. Instead of adding the inputs as in ResNet, DenseNet concatenates feature maps:</p><formula xml:id="formula_3">x_l = [x_{l-1}, x_{l-2}, . . . , x_0],<label>(5)</label></formula><p>Key Properties of DenseNet are Feature Reuse and Gradient Flow <ref type="bibr" target="#b17">[18]</ref>. DenseNet emphasizes feature reuse, which reduces the number of parameters while still maintaining high accuracy. The dense connectivity pattern allows gradients to flow through many paths during backpropagation, enhancing learning.</p></div>
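The dense connectivity of formulas (4)-(5) amounts to concatenation followed by a transform. The sketch below uses a toy linear H_l and an assumed growth rate of 4 features per layer; both are illustrative choices, not DenseNet's actual convolutional layers:

```python
import numpy as np

def dense_layer(prev_features, W):
    """H_l applied to the concatenation [x_0, ..., x_{l-1}] (formulas 4-5)."""
    concat = np.concatenate(prev_features)   # dense connectivity: reuse every earlier output
    return np.maximum(0.0, concat @ W)       # a toy H_l: linear map + ReLU

rng = np.random.default_rng(0)
growth = 4                                   # each layer contributes `growth` new features
features = [rng.standard_normal(growth)]     # x_0, the input features
for l in range(1, 4):
    W = rng.standard_normal((l * growth, growth)) * 0.1
    features.append(dense_layer(features, W))
# After 3 dense layers the concatenated representation holds 4 * growth = 16 features,
# yet each layer only had to learn `growth` new ones - the source of the parameter savings.
```

Note how layer l's weight matrix has l * growth input rows: its input width grows with depth because every earlier feature map is reused rather than recomputed.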
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">EfficientNet</head><p>EfficientNet uses a compound scaling method, which balances the scaling of depth d, width w, and resolution r. The scaling can be defined as:</p><p>d = α^φ, w = β^φ, r = γ^φ, (6) where: φ -compound scaling coefficient; α, β, γ -scaling coefficients. EfficientNet starts from a baseline model and scales it, with the coefficients constrained as:</p><formula xml:id="formula_4">α • β^2 • γ^2 ≈ c,<label>(7)</label></formula><p>where c -constant that defines the efficiency of the architecture. Key Properties of EfficientNet are Optimized Architecture and Efficiency. The architecture is optimized through neural architecture search, allowing for a balance between model size and performance. EfficientNet achieves state-of-the-art accuracy with fewer parameters compared to previous models <ref type="bibr" target="#b18">[19]</ref>.</p></div>
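As a numeric sketch of formulas (6)-(7): with the base coefficients α = 1.2, β = 1.1, γ = 1.15 reported for EfficientNet-B0, the constraint α · β² · γ² comes out close to 2, so scaling up by a compound coefficient φ roughly doubles the computational cost per step:

```python
# Compound scaling sketch: d = alpha**phi, w = beta**phi, r = gamma**phi (formula 6).
alpha, beta, gamma = 1.2, 1.1, 1.15  # EfficientNet-B0 coefficients

def scale(phi):
    """Return the depth, width, and resolution multipliers for compound coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

c = alpha * beta ** 2 * gamma ** 2   # the constraint of formula (7); ~1.92, close to 2
d, w, r = scale(2)                   # scaling two steps up the model family
# d ~ 1.44x deeper, w ~ 1.21x wider, r ~ 1.32x higher resolution, at roughly c**2 ~ 4x cost
```

The appeal of compound scaling is that a single knob φ moves all three dimensions together in the fixed ratio found once for the baseline, instead of tuning depth, width, and resolution independently.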
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">MobileNet</head><p>MobileNet introduces depthwise separable convolutions, which factor the standard convolution into two separate layers: Depthwise Convolution and Pointwise Convolution <ref type="bibr" target="#b19">[20]</ref>.</p><p>Depthwise Convolution applies a single filter K_m to each input channel X_m:</p><p>Ŷ_m = K_m ⊙ X_m, (8) Pointwise Convolution is a 1×1 convolution K_p that combines the output channels of the depthwise layer:</p><p>Y = K_p(Ŷ) = K_p • Ŷ, (9) Key Properties of MobileNet are Efficiency and Width Multiplier. The reduction in the number of parameters and computations compared to standard convolutions makes MobileNet highly suitable for mobile and edge devices. MobileNet allows for a width multiplier α to reduce the number of channels in each layer, further optimizing the model size.</p></div>
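The parameter savings from the depthwise/pointwise factorization of formulas (8)-(9) can be checked with simple arithmetic; the kernel and channel sizes below are illustrative, not taken from the paper:

```python
def conv_params(k, m, n):
    """Parameters of a standard k x k convolution with m input and n output channels."""
    return k * k * m * n

def separable_params(k, m, n):
    """Depthwise (k*k*m, formula 8) plus pointwise 1x1 (m*n, formula 9) parameters."""
    return k * k * m + m * n

k, m, n = 3, 64, 128                   # a typical mid-network layer size (assumed)
standard = conv_params(k, m, n)        # 9 * 64 * 128 = 73728
separable = separable_params(k, m, n)  # 9 * 64 + 64 * 128 = 8768
reduction = standard / separable       # ~8.4x fewer parameters
```

For 3×3 kernels the reduction factor approaches k² = 9 as the output channel count grows, which is exactly why the factorization suits mobile and embedded deployment.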
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Dataset Structure</head><p>The Skin Disease Classification Dataset from Kaggle <ref type="bibr" target="#b20">[21]</ref> was used to test the performance of the models. This dataset contains a collection of photographs of human faces divided into three distinct classes: acne, bags under the eyes, and facial redness. To ensure comprehensive coverage and accurate classification, three photos are considered for each person: a front view along with side profiles on both the right and left. This multi-angle approach not only increases the variability of the data, but also helps the models learn to identify and differentiate the subtle nuances of skin diseases that may be missed in a single photo. The dataset's structured format and comprehensive documentation make it a valuable resource for developing, training, and fine-tuning machine learning algorithms aimed at classifying skin diseases, ultimately contributing to advances in dermatology diagnostics and personalized skin care solutions. All data are presented in a generalized form and used exclusively within the scope of scientific research. The work used data from open resources, supplemented with medical data from clinical practice. Before using the medical data, permission for their use was obtained from each of the patients. The research follows the principles of responsible artificial intelligence. Confidentiality of information is guaranteed.</p></div>
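A minimal sketch of enumerating such a dataset, assuming a hypothetical one-folder-per-class layout and a `person_<id>_<view>.jpg` naming pattern (the actual Kaggle layout and file names may differ):

```python
from pathlib import Path

# Assumed layout: one folder per class, three views (front/left/right) per person.
CLASSES = ["acne", "bags_under_eyes", "redness"]
VIEWS = ["front", "left", "right"]

def expected_files(root, person_id):
    """List the three per-person image paths in every class folder (hypothetical naming)."""
    root = Path(root)
    return [root / cls / f"person_{person_id}_{view}.jpg"
            for cls in CLASSES for view in VIEWS]

paths = expected_files("dataset", 1)   # 3 classes x 3 views = 9 candidate paths
```

Grouping the three views per person also matters for evaluation: splitting train/validation by person rather than by image prevents the same face from leaking across splits.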
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Models' comparison</head><p>For detecting skin defects such as acne, redness, etc., the choice of CNN architecture will depend on factors such as dataset size, problem complexity, and available computing resources. The most commonly used CNN architectures include:</p><p>-Residual Networks (ResNet); -Dense Convolutional Network (DenseNet); -EfficientNet; -MobileNet. Table <ref type="table" target="#tab_0">1</ref> shows a comparison of the capabilities of the presented architectures. ResNet's residual connections make it possible to train deeper networks without facing the vanishing gradient problem. This can help the model capture more complex patterns in skin texture and blemishes. The model shows high performance in the analysis of medical images.</p><p>Pre-trained ResNet models (such as ResNet-50 or ResNet-101) can be fine-tuned to improve performance, especially on limited datasets. This model is recommended for complex tasks where high accuracy is required. DenseNet connects each layer to all other layers, which promotes feature reuse and results in stronger gradients. This is useful when detecting subtle skin imperfections such as acne or discoloration.</p><p>Compared with ResNet, DenseNet achieves high accuracy using fewer parameters, which can reduce training time while maintaining high performance.</p><p>DenseNet is widely used in medical image classification tasks, so the model can be effective for dermatological image analysis. The model is recommended for medium and large data sets with a focus on achieving high accuracy, especially in conditions of limited computing resources.</p><p>EfficientNet has high performance while using less computation. The model systematically scales width, depth, and resolution, making it extremely efficient in terms of both accuracy and computational resources.</p><p>The EfficientNet pre-trained models are very effective, when fine-tuned, for medical imaging tasks, including skin defect detection. EfficientNet has been shown to outperform models such as ResNet and DenseNet in various medical image classification tasks while being more resource efficient.</p><p>The use of the model is recommended for projects that require a balance between high accuracy and efficiency, especially when working with large-scale images or in cases of limited computing power.</p><p>The price of MobileNet's efficiency, in turn, is lower accuracy compared to larger networks and limited model capacity for complex tasks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>MobileNet is a lightweight and fast model optimized for efficiency and ideal for real-time detection of skin defects on mobile or embedded devices. The model has low computational requirements because it uses depthwise separable convolutions, greatly reducing the number of parameters. Thus, the model is ideal for applications where computing resources are limited.</p><p>Based on the above, it can be concluded that the model is suitable for deployment on mobile platforms and is an excellent solution for applications that detect skin defects using a smartphone camera.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experiments &amp; Results</head><p>Testing of the proposed models revealed that the ResNet and EfficientNet models demonstrate low results on the test sample, while DenseNet and MobileNet perform the task of recognizing skin defects with high accuracy (Figure <ref type="figure" target="#fig_4">3</ref>). The loss value quantifies the error the model makes on the test set and reflects how well the predicted probabilities agree with the actual labels. This indicator was the lowest for MobileNet, with DenseNet slightly behind (Figure <ref type="figure" target="#fig_5">4</ref>). The loss values for ResNet and EfficientNet turned out to be too high, so it was concluded that using these models for the task of classifying skin defects is not appropriate.</p><p>Although the models show high accuracy and low loss on the training data set, on the validation set the accuracy is slightly lower and the loss is higher, indicating overfitting. Thus, we propose changing the approach to dataset formation by including more parameters, in particular adding images captured in the ultraviolet spectrum and converting images into heat maps. This will allow for a more qualitative assessment of the image and contribute to a more accurate determination of the nature of a defect. In addition, it is advisable to use sensors to determine skin moisture. The main parameters of the proposed dataset are listed in Table <ref type="table" target="#tab_1">2</ref>.</p></div>
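The two evaluation metrics used above can be sketched as follows, assuming the loss is categorical cross-entropy (a common choice for multi-class image classification; the paper does not state the exact loss function). The probabilities and labels are toy values:

```python
import numpy as np

def accuracy(probs, labels):
    """Fraction of samples whose highest-probability class matches the true label."""
    return float(np.mean(np.argmax(probs, axis=1) == labels))

def cross_entropy(probs, labels, eps=1e-12):
    """Mean negative log-probability of the true class; lower is better."""
    p_true = probs[np.arange(len(labels)), labels]
    return float(-np.mean(np.log(p_true + eps)))

# Toy predictions for 3 samples over 3 classes (e.g. acne / bags / redness)
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
acc = accuracy(probs, labels)        # 1.0 - every argmax matches the label
loss = cross_entropy(probs, labels)  # ~0.50 despite perfect accuracy
```

The toy example also shows why both metrics are reported: the third prediction is correct but barely confident, so the loss stays noticeably above zero even at 100% accuracy, which is the pattern that exposes overfitting on a validation set.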
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions &amp; Future work</head><p>Skin health is an essential component of overall well-being, and the early identification of skin conditions is crucial for effective treatment and prevention. The facial skin, in particular, is especially prone to various dermatological concerns, including acne, hyperpigmentation, dryness, and signs of aging.</p><p>In the course of this study, the main convolutional neural network models were considered, and the peculiarities of their application to medical image analysis, in particular the detection of facial skin defects, were analyzed. The advantages and disadvantages of the most widely used architectures, as well as the features of their application, were described. The models were tested on a dataset including medical images, and their effectiveness was evaluated according to the accuracy and loss metrics. On the basis of the conducted research, a dataset structure based on a larger number of parameters is proposed, which should significantly improve the accuracy of the models by enabling selection and analysis of the most relevant features.</p><p>For further research, there are plans to systematically identify and extract the key features that are critical for effectively classifying skin defects through the application of Convolutional Neural Networks (CNNs). This involves conducting an in-depth analysis of the various characteristics present in the dataset, such as texture, color variations, and the specific patterns associated with each skin condition. By leveraging techniques like feature extraction and selection, we aim to isolate the most informative attributes that contribute to the accurate identification of skin defects. This process will not only enhance the performance of the CNN models but also improve their interpretability, allowing for better understanding and insight into how these models make classification decisions. 
Furthermore, this feature extraction phase will enable us to refine the model architectures and optimize hyperparameters, leading to more robust and generalizable outcomes. Ultimately, this research will contribute to the development of more precise and reliable diagnostic tools in dermatology, potentially transforming the way skin conditions are assessed and treated in clinical practice.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Visualization of expectations: illustrative display of benefits after breast augmentation surgery. -Mastopexy program. A software application for simulating lifting operations taking into account areas of skin excision. -Volumetric measurements of the body: automatic measurement of the circumference and volume of body contours. -Quantification of the subcutaneous structures of the face: Canfield's patented technology separates the unique color shades of red and brown facial skin. This allows you to get a complete picture of the skin condition and improve the quality of imaging. -Measurement of volume change: volume data is automatically measured with one click of the mouse in grayscale mode with parallel color display of changes in facial contours. -Markerless tracking: a dynamic assessment of changes in the surface of the facial skin is carried out: alignment, direction and final result. -Full picture of changes: the program creates a holistic picture of change, reflecting all the hopes and expectations of your patient.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>-</head><label></label><figDesc>The level of secretion of sebum and the localization of black spots. -Pigmentation zones: an image not only of the actual state, but also of forecasting future formations. -Hydration. -Areas of sensitivity: determination of zonal sensitivity, its susceptibility to allergic reactions. -Brown zones: an image of metabolic processes in cells, areas of current recovery. -Injury by UV rays: an image of pigmentation at different levels of the epidermis. Fixation of the size and depth of such spots. -Diagnosis of age-related changes: a picture of the future aging of the dermis, wrinkles, if the client does not change their care.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Smart portable skin analyzer AISIA<ref type="bibr" target="#b14">[15]</ref> </figDesc><graphic coords="4,181.80,323.64,251.28,188.28" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>where: W_i -weights of the convolutional layers; y -output of the block. Key Properties of ResNet are Identity Mapping and Training Deep Networks. The skip connections enable identity mapping, facilitating gradient flow and addressing the vanishing gradient problem. ResNets can have hundreds or thousands of layers while remaining easier to train compared to traditional deep networks.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Model Training Accuracy Comparison</figDesc><graphic coords="8,111.36,62.40,377.40,244.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Model Training Loss Comparison</figDesc><graphic coords="8,109.56,507.96,381.36,246.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Comparison of the CNN architectures capabilities</figDesc><table><row><cell>CNN Architecture</cell><cell>Key Features</cell><cell>Advantages</cell><cell>Disadvantages</cell><cell>Common Applications</cell></row><row><cell>Residual Networks</cell><cell>Depth increases with stacked residual blocks. Skip connections allow gradient flow.</cell><cell>Avoids vanishing gradients. Deeper networks improve performance. Robust and widely used.</cell><cell>Large parameter size for deeper versions. Computationally intensive on larger ResNet variants.</cell><cell>Image classification, object detection, face recognition</cell></row><row><cell>Dense Convolutional Network</cell><cell>Dense connections throughout the network. Fewer parameters due to feature reuse. Uses growth rate to control new features at each layer.</cell><cell>Reduces number of parameters. Efficient feature reuse and learning. Good for small datasets.</cell><cell>High computational cost in memory due to dense connections. More prone to overfitting on small data.</cell><cell>Image classification, medical image analysis</cell></row><row><cell>EfficientNet</cell><cell>Optimized for both efficiency and accuracy. Compound scaling (balance between depth, width, and resolution).</cell><cell>Good accuracy with less computation. Flexible architecture for different constraints. Scalable.</cell><cell>Complex design. Heavier versions still require substantial computational power.</cell><cell>Mobile vision, image classification, object detection</cell></row><row><cell>MobileNet</cell><cell>Depthwise separable convolutions to reduce computation. Optimized for mobile and edge devices.</cell><cell>Highly efficient on resource-constrained devices. Lightweight and fast. Suitable for mobile applications.</cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>The main parameters of the proposed dataset</figDesc><table><row><cell>Parameter</cell><cell>Unit</cell><cell>Data type</cell></row><row><cell>ID</cell><cell>None</cell><cell>int</cell></row><row><cell>Moisture</cell><cell>%</cell><cell>float</cell></row><row><cell>Sensitivity</cell><cell>%</cell><cell>float</cell></row><row><cell>Pigment</cell><cell>%</cell><cell>float</cell></row><row><cell>UV spots</cell><cell>%</cell><cell>float</cell></row><row><cell>Texture</cell><cell>%</cell><cell>float</cell></row><row><cell>Blackheads</cell><cell>%</cell><cell>float</cell></row><row><cell>Pores</cell><cell>%</cell><cell>float</cell></row><row><cell>Stains</cell><cell>%</cell><cell>float</cell></row><row><cell>Color</cell><cell>%</cell><cell>float</cell></row><row><cell>UV acne</cell><cell>%</cell><cell>float</cell></row><row><cell>Sebum</cell><cell>%</cell><cell>float</cell></row><row><cell>Sebumt</cell><cell>%</cell><cell>float</cell></row><row><cell>Acne</cell><cell>%</cell><cell>float</cell></row><row><cell>Wrinkle</cell><cell>%</cell><cell>float</cell></row><row><cell>Porphyrin</cell><cell>%</cell><cell>float</cell></row><row><cell>Front photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Left-side photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Right-side photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Front UV photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Left-side UV photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Right-side UV photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Front heatmap photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Left-side heatmap photo</cell><cell>None</cell><cell>jpg image</cell></row><row><cell>Right-side heatmap photo</cell><cell>None</cell><cell>jpg image</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Acknowledgements</head><p>The authors would like to thank Healthy Face Clinic and Dr. Vladyslav Karabaiev for providing the equipment for data gathering that made the experiments and this work possible.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Declaration on Generative AI</head><p>The authors used ChatGPT and Microsoft Copilot to rephrase sentences and improve the style of the text. After using these tools, the authors carefully reviewed the text and take full responsibility for the content of this publication.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Development and application of artificial intelligence-based facial skin image diagnosis system: Changes in facial skin characteristics with ageing in Korean women</title>
		<author>
			<persName><forename type="first">H</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hwang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Jang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Cosmetic Science</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="199" to="208" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Analysis of artificial intelligence-based approaches applied to non-invasive imaging for early detection of melanoma: a systematic review</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Foltz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Witkowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ludzik</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cancers</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">19</biblScope>
			<biblScope unit="page">4694</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">An analysis of detection and diagnosis of different classes of skin diseases using artificial intelligence-based learning approaches with hyper parameters</title>
		<author>
			<persName><forename type="first">J</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Sandhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Archives of Computational Methods in Engineering</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1051" to="1078" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The Role of Artificial Intelligence in Early Diagnosis and Molecular Classification of Head and Neck Skin Cancers: A Multidisciplinary Approach</title>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">M</forename><surname>Semerci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Toru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Çobankent Aytekin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Tercanlı</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Chiorean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Albayrak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">S</forename><surname>Cotoi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Diagnostics</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">14</biblScope>
			<biblScope unit="page">1477</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Artificial intelligence-assisted detection model for melanoma diagnosis using deep learning techniques</title>
		<author>
			<persName><forename type="first">H</forename><surname>Orhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Yavşan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mathematical Modelling and Numerical Simulation with Applications</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="159" to="169" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Artificial Intelligence in the detection of skin cancer: state of the art</title>
		<author>
			<persName><forename type="first">M</forename><surname>Strzelecki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kociołek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Strąkowska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kozłowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Grzybowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Szczypiński</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Clinics in Dermatology</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Optimal Artificial Intelligence Based Automated Skin Lesion Detection and Classification Model</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Ogudo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Surendran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">I</forename><surname>Khalaf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Systems Science &amp; Engineering</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A comprehensive review of artificial intelligence methods and applications in skin cancer diagnosis and treatment: Emerging trends and challenges</title>
		<author>
			<persName><forename type="first">E</forename><surname>Rezk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Haggag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Eltorki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>El-Dakhakhni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Healthcare Analytics</title>
		<imprint>
			<biblScope unit="page">100259</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Analysis and Detection of Skin Disorders using Artificial Intelligence-based learning</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pattnayak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Patnaik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Gourisaria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Barik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Patra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Second International Conference on Networks, Multimedia and Information Technology (NMITCON)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2024-08">August 2024</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Comparison of the Diagnostic Accuracy of Teledermoscopy, Face-to-Face Examinations and Artificial Intelligence in the Diagnosis of Melanoma</title>
		<author>
			<persName><forename type="first">T</forename><surname>Yazdanparast</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shamsipour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ayatollahi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Delavar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ahmadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Samadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Firooz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Indian Journal of Dermatology</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="296" to="300" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Methodology for the development and application of clinical decisions support information technologies with consideration of civil-legal grounds</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Hnatchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hovorushchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pavlova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Radioelectronic and Computer Systems</title>
		<imprint>
			<biblScope unit="page" from="33" to="44" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Health Recommender System for Smart Cities</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">E</forename><surname>Bouhissi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tagzirt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bouredjioua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pavlova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="334" to="343" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<ptr target="https://beautix.com.ua/equipment/diagnostika_skin/vectra_h2" />
		<title level="m">Vectra H2 official website</title>
				<imprint>
			<date type="published" when="2024-09-26">September 26, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<ptr target="https://www.lascos.com.ua/apparaty/3d-photo-cameri/lifeviz-pro-mini-1030785657" />
		<title level="m">Lascos Aesthetic Medicine: Lifeviz Pro Mini</title>
				<imprint>
			<date type="published" when="2024-09-26">September 26, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<ptr target="https://medunion.com/product/rFaTtwEYXdWC/China-Aisia-3D-Smart-Face-Skin-Analyzer-for-Salon-Hot-Skin-Scanner-Facial-Analyzer.html" />
		<title level="m">Aisia 3D Smart Face Skin Analyzer</title>
				<imprint>
			<date type="published" when="2024-09-26">September 26, 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Convolutional neural networks in medical image understanding: a survey</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Sarvamangala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Kulkarni</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12065-020-00540-3</idno>
		<ptr target="https://doi.org/10.1007/s12065-020-00540-3" />
	</analytic>
	<monogr>
		<title level="j">Evolutionary Intelligence</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Medical Image Classification Using Light-Weight CNN With Spiking Cortical Model Based Attention Module</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1109/JBHI.2023.3241439</idno>
		<ptr target="https://doi.org/10.1109/JBHI.2023.3241439" />
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Biomedical and Health Informatics</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1991" to="2002" />
			<date type="published" when="2023-04">April 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deep learning in grapevine leaves varieties classification based on dense convolutional network</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Ahmed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Hama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Jalal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">H</forename><surname>Ahmed</surname></persName>
		</author>
		<idno type="DOI">10.18178/joig.11.1.98-103</idno>
		<ptr target="https://doi.org/10.18178/joig.11.1.98-103" />
	</analytic>
	<monogr>
		<title level="j">Journal of Image and Graphics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="98" to="103" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Empowering COVID-19 detection: Optimizing performance through fine-tuned EfficientNet deep learning architecture</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Talukder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Layek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Uddin</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compbiomed.2023.107789</idno>
		<ptr target="https://doi.org/10.1016/j.compbiomed.2023.107789" />
	</analytic>
	<monogr>
		<title level="j">Computers in Biology and Medicine</title>
		<imprint>
			<biblScope unit="volume">168</biblScope>
			<biblScope unit="page">107789</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Detection of Pneumonia from Chest X-ray Images Utilizing MobileNet Model</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Al Reshan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Gill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Alshahrani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sulaiman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shaikh</surname></persName>
		</author>
		<idno type="DOI">10.3390/healthcare11111561</idno>
		<idno>2023. 1561</idno>
		<ptr target="https://doi.org/10.3390/healthcare11111561" />
	</analytic>
	<monogr>
		<title level="j">Healthcare</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">11</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Skin Disease Classification Dataset</title>
		<ptr target="https://www.kaggle.com/datasets/trainingdatapro/skin-defects-acne-redness-and-bags-under-the-eyes" />
	</analytic>
	<monogr>
		<title level="j">Kaggle</title>
		<imprint>
			<date type="published" when="2023-11-16">November 16, 2023 (accessed September 26, 2024)</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Convolutional neural networks for breast cancer detection in mammography: A survey</title>
		<author>
			<persName><forename type="first">L</forename><surname>Abdelrahman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Al Ghamdi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Collado-Mesa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdel-Mottaleb</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.compbiomed.2021.104248</idno>
		<ptr target="https://doi.org/10.1016/j.compbiomed.2021.104248" />
	</analytic>
	<monogr>
		<title level="j">Computers in Biology and Medicine</title>
		<imprint>
			<biblScope unit="volume">131</biblScope>
			<biblScope unit="page">104248</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
