<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Exploring Conditions of Image Samples Formation for Person Identification Information Technology</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Oleksii</forename><surname>Bychkov</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Kateryna</forename><surname>Merkulova</surname></persName>
						</author>
						<author role="corresp">
							<persName><forename type="first">Yelyzaveta</forename><surname>Zhabska</surname></persName>
							<email>y.zhabska@gmail.com</email>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>Volodymyrska str. 64/13</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Information Technology and Implementation (IT&amp;I-2023)</orgName>
								<address>
									<addrLine>November 20-21</addrLine>
									<postCode>2023</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Exploring Conditions of Image Samples Formation for Person Identification Information Technology</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">1C0295347A7AA3FCACD82865AD133E3F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Information technology</term>
					<term>face recognition</term>
					<term>biometric identification</term>
					<term>0000-0002-9378-9535 (O. Bychkov)</term>
					<term>0000-0001-6347-5191 (K. Merkulova)</term>
					<term>0000-0002-9917-3723 (Y. Zhabska)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents research on the algorithm underlying an information technology for face recognition and person identification, with the aim of improving its performance by exploring the conditions under which image samples are formed and the effect that the properties of the images contained in those samples have on the algorithm's efficiency. The researched algorithm is based on Haar features for localizing the face area in an image, Gabor wavelets for face image processing, and 1-dimensional local binary patterns together with a histogram of oriented gradients for face image feature extraction. During the experimental research, several sets of experiments were conducted on face images from several databases that contain images captured under constrained and unconstrained environmental conditions. After the first set of experiments, in which image samples were formed by extracting images with unrecognizable face areas and expanding the etalon samples of images captured under unconstrained conditions, the performance of the algorithm improved by 7.5-45%. The influence of image format and resolution on algorithm performance was also explored. The experiments established that format conversion can affect the identification accuracy rate, increasing it by 5% after converting images to the JPG format. Resolution conversion improved algorithm performance by 5-20% on the initial image samples from databases of images captured in constrained and unconstrained conditions, and by 5-35% on the expanded image samples from databases of images captured in unconstrained conditions. Overall, it was found that reforming the etalon image samples by expanding them and converting the resolution of the images have the biggest impact on algorithm performance. As a result, the highest identification accuracy of 95% was obtained on images from the SCface and FERET databases.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Face recognition technologies have emerged as a powerful tool for identifying individuals by analyzing their facial features. In recent years, face recognition has garnered substantial attention owing to its utility across diverse domains, encompassing applications such as security and surveillance, biometric authentication, human-computer interaction, healthcare, and other pertinent security-related fields <ref type="bibr" target="#b0">[1]</ref>. As with most biometric applications, changes in appearance caused by an unconstrained environment tend to cause problems for face recognition. Let us consider some problems that need to be solved in the near future <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>:</p><p>1. Occlusion. A face can be photographed in an arbitrary pose in a certain environment and without any user input, so it is possible that the image will only contain a partial face.</p><p>2. Aging of the face. As the age gap between the query image and the reference image of the same person increases, the accuracy of recognition systems usually decreases.</p><p>3. A single sample. In real-world applications (e.g., passports, immigration systems), only one sample of each person is registered in the database and available for the recognition task.</p><p>4. Video surveillance. Camera focus issues can lead to image blur, low resolution, compression errors, and blocky effects.</p><p>Face recognition technologies have become widely used in many applications <ref type="bibr" target="#b2">[3]</ref>. Some of the most common areas of application nowadays are criminal investigations and the identification of missing individuals. 
Moreover, face recognition technology can already be deployed on drones as part of special operations missions to help operators with intelligence gathering, reconnaissance, and identifying targets <ref type="bibr" target="#b3">[4]</ref>. This means that face recognition technologies are increasingly being used in sensitive areas, such as military operations, where any mistake can be crucial. Therefore, it is important to explore information technologies of face recognition and identification in detail, to reduce the possibility of failure before they are widely deployed for any purpose.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Task definition and solution methods</head><p>Our previous studies <ref type="bibr" target="#b4">[5]</ref> introduced and explored a novel information technology for face recognition and identification based on local-texture descriptors. This technology relies on an algorithm that incorporates several methods: Haar features, the Gabor wavelet transform, local binary patterns in a 1-dimensional space, and the histogram of oriented gradients. These methods are employed for face localization, image processing, and feature vector extraction, respectively. A more detailed explanation of how the algorithm works is provided below.</p><p>The algorithm takes a portrait image of a person as its input. This image is then transformed into grayscale and scanned to localize the face region. This localized face region is subsequently singled out for more targeted processing. To localize the face region, the Haar features are used <ref type="bibr" target="#b5">[6]</ref>. These features encompass a collection of elemental patterns comprising both white and black blocks. Upon training the image features fj, it becomes feasible to derive the threshold value θj and the parity value pj. The basic classifier can be described as:</p><formula xml:id="formula_0">ℎ 𝑗 (𝑥) = { 1, 𝑖𝑓 𝑝 𝑗 𝑓 𝑗 (𝑥) &lt; 𝑝 𝑗 𝜃 𝑗 , 0, 𝑒𝑙𝑠𝑒.<label>(1)</label></formula><p>The outcome of applying the classifier to identify faces within an image comprises a collection of rectangles. These rectangles essentially represent a series of coordinates, denoting the corners of the region where a human face is positioned within the image. 
Following this detection, the original image matrix undergoes a reduction, retaining only the elements corresponding to the area where the human face is positioned.</p><p>Once the face region of the image is recognized, it undergoes processing through the Gabor wavelet transform <ref type="bibr" target="#b6">[7]</ref>. This transformation is applied multiple times to the facial area image, varying certain parameters of the wavelet function. The objective is to aggregate all the outcomes to generate a comprehensive global facial representation of the person.</p><p>Within the Gabor representation <ref type="bibr" target="#b7">[8]</ref>, the arbitrary function F(x) is expanded by considering both symmetric and asymmetric elementary signals.</p><formula xml:id="formula_1">𝑆 𝑠 (𝑥) = 𝑒𝑥𝑝 [− (𝑥 − 𝑥 𝑚 ) 2 4𝜎 2 ] 𝑐𝑜𝑠[2𝜋𝑓 𝑛 (𝑥 − 𝑥 𝑚 )]<label>(2)</label></formula><formula xml:id="formula_2">𝑆 𝑎 (𝑥) = 𝑒𝑥𝑝 [− (𝑥 − 𝑥 𝑚 ) 2 4𝜎 2 ] 𝑠𝑖𝑛[2𝜋𝑓 𝑛 (𝑥 − 𝑥 𝑚 )]<label>(3)</label></formula><p>These signals are centered at the position x = xm and at the spatial frequency f = fn with a Gaussian envelope described by the standard deviation 𝜎.</p><p>The complex Gabor function in the spatial domain can be denoted as follows:</p><p>𝑔(𝑥, 𝑦) = 𝑠(𝑥, 𝑦)𝜔 𝜏 (𝑥, 𝑦),</p><p>where s(x, y) is a complex sinusoid known as the carrier and ωτ(x, y) is a 2D Gaussian function known as the envelope function. The complex sinusoid can be described as follows:</p><formula xml:id="formula_4">𝑠(𝑥, 𝑦) = exp (𝑗(2𝜋𝐹 0 (𝑥 cos 𝜔 0 + 𝑦 sin 𝜔 0 ) + 𝑃)),<label>(5)</label></formula><p>where the terms x • cos ω0 and y • sin ω0 define the spatial frequency of the sinusoid in polar coordinates, F0 is the magnitude and ω0 defines the direction. 
Gaussian function can be defined as:</p><formula xml:id="formula_5">𝜔 𝜏 (𝑥, 𝑦) = 𝐾 𝑒𝑥𝑝(−𝜋(𝑎 2 (𝑥 − 𝑥 0 ) 𝜏 2 + 𝑏 2 (𝑦 − 𝑦 0 ) 𝜏 2 )),<label>(6)</label></formula><p>where (x0, y0) is the peak of the function, a and b are the Gaussian scaling parameters, and the subscript τ denotes the rotation operation as follows:</p><p>(𝑥 − 𝑥 0 ) 𝜏 = (𝑥 − 𝑥 0 ) cos 𝜃 + (𝑦 − 𝑦 0 ) sin 𝜃,</p><p>(𝑦 − 𝑦 0 ) 𝜏 = −(𝑥 − 𝑥 0 ) sin 𝜃 + (𝑦 − 𝑦 0 ) cos 𝜃.</p><p>A family of two-dimensional Gabor wavelets that satisfies wavelet theory and neurophysiological constraints for simple cells can be obtained using the following formulas:</p><formula xml:id="formula_8">𝜓(𝑥, 𝑦, 𝜔 0 , 𝜃) = 𝜔 0 √2𝜋𝜅 𝑒 − 𝜔 0 2 8𝜅 2 (4(𝑥 cos 𝜃+𝑦 sin 𝜃) 2 +(−𝑥 sin 𝜃+𝑦 cos 𝜃) 2 ) • [𝑒 𝑖(𝜔 0 𝑥 cos 𝜃+𝜔 0 𝑦 sin 𝜃) − 𝑒 − 𝜅 2 2 ],<label>(9)</label></formula><p>where ω0 is the radial frequency in radians per unit length and θ is the orientation of the wavelet in radians. The Gabor wavelet is centered at the position (x = 0, y = 0), and the normalization coefficient is such that &lt;ψ, ψ&gt; = 1, i.e. the wavelet is normalized in the L2 norm. κ is a constant, with κ ≈ π for a one-octave frequency range and κ ≈ 2.5 for a 1.5-octave frequency range. The technique of employing local binary patterns in a 1-dimensional space <ref type="bibr" target="#b8">[9]</ref> is used to derive a feature vector from the comprehensive face image obtained after the Gabor wavelet transformation. This approach has demonstrated robust and efficient performance even when dealing with variations in rotation angles and lighting conditions. It involves generating a 1-dimensional row projection for each image matrix level, which serves as a descriptor for capturing and examining the texture within the facial image. 
The computation of the local binary patterns descriptor can be achieved using the following formula:</p><formula xml:id="formula_9">1𝐷𝐿𝐵𝑃 = ∑ 𝑛=0 𝑁−1 𝑆(𝑔 𝑛 − 𝑔 0 ) • 2 𝑛 ,<label>(10)</label></formula><p>where g0 is the value of the central element, gn is the value of the n-th 1-dimensional neighboring element, and S(x) is defined as follows:</p><formula xml:id="formula_11">𝑆(𝑥) = { 1 𝑖𝑓 𝑥 ≥ 0; 0 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒.<label>(11)</label></formula><p>Based on preceding research, it was deduced that employing a single method for feature vector extraction is less effective than combining them. Consequently, the decision was made to augment the feature vector obtained from 1-dimensional local binary patterns with another vector derived by constructing a histogram of oriented gradients <ref type="bibr" target="#b9">[10]</ref>. The essence of this technique lies in preserving information about image shape characteristics within histograms that pertain to object boundaries found within sub-ranges of images after the wavelet transformation. Each interval of the histogram represents the count of object boundary orientations falling within a specific range. In the context of a grayscale image, derivatives are computed along both the x and y axes for every pixel. The magnitude of the gradient can be expressed as follows:</p><formula xml:id="formula_12">|𝐺| = √𝐼 𝑥 2 + 𝐼 𝑦 2 .<label>(12)</label></formula><p>The orientation calculation can be described as follows:</p><formula xml:id="formula_14">𝜃 = arctan (𝐼 𝑦 /𝐼 𝑥 ).<label>(13)</label></formula><p>To form the comprehensive feature vector, both the resulting vectors from the 1-dimensional local binary patterns and the histogram of oriented gradients are combined. 
This resultant global feature vector can then be employed for subsequent person classification and identification based on the input image containing their face. The purpose of this research is to explore the conditions of image sample formation to which the algorithm is applied, to investigate the algorithm's performance when it is applied to images captured in constrained and unconstrained conditions, and to identify ways of improving the resulting identification accuracy rates.</p></div>
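The pipeline described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the Haar-feature face detection step is omitted (in practice it could be done with a trained cascade such as OpenCV's CascadeClassifier), only NumPy is used, and all function names, the kernel size, the neighbourhood radius, and the histogram bin counts are our own choices.

```python
import numpy as np

def gabor_kernel(omega0, theta, kappa=np.pi, size=15):
    """2-D Gabor wavelet of Eq. (9): a complex carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates, Eq. (7)
    yr = -x * np.sin(theta) + y * np.cos(theta)     # Eq. (8)
    envelope = np.exp(-(omega0 ** 2) / (8 * kappa ** 2) * (4 * xr ** 2 + yr ** 2))
    carrier = np.exp(1j * omega0 * xr) - np.exp(-kappa ** 2 / 2)
    return omega0 / (np.sqrt(2 * np.pi) * kappa) * envelope * carrier

def lbp_1d_histogram(projection, radius=4):
    """Eqs. (10)-(11): 1-D LBP codes over a row projection, binned into a histogram."""
    codes = []
    for i in range(radius, len(projection) - radius):
        neighbours = np.concatenate([projection[i - radius:i],
                                     projection[i + 1:i + radius + 1]])
        bits = (neighbours - projection[i] >= 0).astype(int)   # S(x), Eq. (11)
        codes.append(int(np.dot(bits, 2 ** np.arange(2 * radius))))
    hist, _ = np.histogram(codes, bins=16, range=(0, 2 ** (2 * radius)))
    return hist.astype(float)

def hog_histogram(image, nbins=9):
    """Eqs. (12)-(13): gradient magnitude |G| weighting an orientation histogram."""
    gy, gx = np.gradient(image)
    magnitude = np.hypot(gx, gy)                    # Eq. (12)
    orientation = np.arctan2(gy, gx)                # Eq. (13)
    hist, _ = np.histogram(orientation, bins=nbins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-9)

def global_feature_vector(face, omega0=1.0, n_orientations=4):
    """Aggregate Gabor responses, then concatenate the 1DLBP and HOG descriptors."""
    responses = []
    for k in range(n_orientations):
        kern = gabor_kernel(omega0, theta=k * np.pi / n_orientations)
        # FFT-based convolution of the (already localized) face region with the kernel
        resp = np.fft.ifft2(np.fft.fft2(face) * np.fft.fft2(kern, s=face.shape))
        responses.append(np.abs(resp))
    aggregated = np.mean(responses, axis=0)         # global facial representation
    projection = aggregated.mean(axis=0)            # 1-D row projection for 1DLBP
    return np.concatenate([lbp_1d_histogram(projection), hog_histogram(aggregated)])
```

Identification could then proceed by, for example, nearest-neighbour comparison of such vectors between the etalon and test samples; the source does not specify the classifier, so that step is left out here.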
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experimental research</head><p>With the aim of exploring and improving the performance of the information technology for face recognition and person identification by face image, based on Haar features, the Gabor wavelet transform, histograms of oriented gradients (HOG) and local binary patterns in one-dimensional space (1DLBP), it was decided to conduct experiments on different databases that contain images captured under different conditions regarding the intensity of lighting, the presence of cosmetics, makeup or occlusive elements, subjects' age variability, head postures, facial expression variability, etc. For the experimental research, several databases were chosen: the Database of Faces (DoF, formerly "The ORL Database of Faces") <ref type="bibr" target="#b10">[11]</ref>, FERET (Face Recognition Technology database) <ref type="bibr" target="#b11">[12]</ref>, SCface <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>, CFP (Celebrities in Frontal-Profile data set) <ref type="bibr" target="#b14">[15]</ref>, Tinyface <ref type="bibr" target="#b15">[16]</ref>, LFW (Labeled Faces in the Wild) <ref type="bibr" target="#b16">[17]</ref> and AgeDB <ref type="bibr" target="#b17">[18]</ref>.</p><p>The initial set of experiments focused on determining the efficiency of the algorithm underlying the explored information technology of face recognition and person identification by face image, after applying it to the original face images contained in the databases. To conduct the experiment, face images of 40 individuals were used to form the etalon and test samples. This number was determined by the smallest number of individuals contained in any of the databases, which is the Database of Faces. To make the experiments more comparable, face images of the same number of individuals were selected from the other databases as well. 
In Table <ref type="table" target="#tab_0">1</ref> the obtained results of these experiments are presented. During the experimental research on original face images, and after analyzing its results, it was established that on some images the algorithm cannot localize the face region. Moreover, the efficiency of the researched algorithm on face images from the Database of Faces, FERET and SCface significantly exceeds its performance on face images from the CFP, Tinyface, LFW and AgeDB databases. For the first three databases, the identification accuracy rate of the algorithm is higher by 70%-77.5% in comparison to the other databases.</p><p>In order to explain such a significant variation in algorithm performance, it is worth exploring in detail the databases whose images were used in the experiments. The Database of Faces contains 92×112-pixel face images of 40 people, captured against a dark uniform background in a vertical frontal position of the subject under conditions of varying lighting, facial expressions and facial details. FERET (Face Recognition Technology database) contains 1564 sets of 256×384-pixel images of 1199 people taken during 2 years in a semi-constrained environment with the same physical settings for each session. SCface contains 4,160 static images of 130 people with sizes of 75×100, 108×144 and 168×224 pixels, captured against the same background, under unconstrained lighting conditions, with a variation of fixed head positions. The CFP dataset contains frontal and profile images of 500 individuals collected from open sources, captured in both constrained and unconstrained environments, with certain changes of pose while other variations of image characteristics are unrestricted. 
The Tinyface dataset consists of face images of 5,139 individuals with an average size of 20×16 pixels, captured under unconstrained conditions regarding background, lighting, face positioning and the presence of occlusion. LFW contains 250×250-pixel images of 5,749 different people captured in an environment that is as close as possible to the natural one, characterized by a wide range of variations in background, lighting, pose, facial expression, race, ethnicity, age, etc. AgeDB contains images of 568 individuals with a huge age variety of the same subject, taken in unconstrained conditions regarding poses, facial expressions, occlusion, and noise.</p><p>Analyzing the specifics of the conditions under which the face images from all selected databases were captured, it can be concluded that the researched algorithm performed more efficiently when applied to images captured in semi-constrained or constrained conditions regarding the background, lighting, subject's head position, camera position relative to the subject, and other physical settings. Also, the images from the Database of Faces, FERET and SCface are uniform within each database, since they were not taken in conditions close to real-world conditions. On the other hand, the CFP, Tinyface, LFW and AgeDB databases contain face images that were captured in unconstrained conditions and are often not uniform within one database, since they were collected from open sources. Therefore, these databases contain images that are highly variable regarding the background, amount and intensity of lighting, head positions, occlusive elements, age and time intervals of capturing, makeup and cosmetics, etc.</p><p>So, considering the aforementioned problems that significantly affect the performance of the algorithm under research, it was decided to perform the next set of experiments with several changes relative to the first set. 
To overcome the inability to localize the face region on some of the images, the etalon and test image samples were re-formed by extracting those images in which the face region is not recognizable due to an extreme angle of head rotation, excessive lighting or other conditions that leave face features not fully visible. As for the images captured under unconstrained conditions, the etalon image sample was expanded with other images with recognizable face features of the same individuals whose face images were already in the sample. The results of the experiments are presented in Table <ref type="table" target="#tab_1">2</ref>. A comparison diagram of the two described sets of experiments is depicted in Figure <ref type="figure">1</ref>. As can be seen, the extraction of images in which the face region was not recognizable increased the identification accuracy rate of the algorithm on images from the Database of Faces by 7.5%, FERET by 20%, CFP by 27.5%, Tinyface by 7.5%, LFW by 45% and AgeDB by 25%. Also, the expansion of the etalon image samples for databases that contain images captured in unconstrained conditions improved the efficiency of the algorithm by 15% for the CFP and AgeDB databases, while the result for the Tinyface database remained the same and for the LFW database decreased by 10%.</p><p>The obtained experimental results vary over a wide range of identification accuracy rates. Such variability may be caused by the fact that the selected databases contain images with different file formats and resolutions. Therefore, it was decided to conduct experiments to determine whether it is possible to improve the algorithm's performance by eliminating image property variability and thereby, possibly, reduce the spread of the algorithm's performance results. 
Also, as was described in our previous research <ref type="bibr" target="#b18">[19]</ref>, variety in the quality of the images, which is likewise determined by format and resolution, may cause the identification process to fail if the images in the etalon and test samples are not unified. On the assumption that such image properties as format and resolution may affect the researched algorithm's performance, it was decided to convert the original images from the selected databases to the most common formats and resolutions, and to those on which the algorithm's efficiency was the highest among all sets of experiments, and to conduct the experiments by applying the algorithm to the converted images.</p></div>
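The format-unification step described above can be sketched as follows. This is a minimal example assuming the Pillow library is available; the function name `convert_format` is ours, not from the paper, and the paper does not state which tool was actually used for conversion.

```python
from io import BytesIO

from PIL import Image  # Pillow, assumed available


def convert_format(image_bytes: bytes, fmt: str = "JPEG") -> bytes:
    """Re-encode an image as JPEG, PNG or BMP so that all images in the
    etalon and test samples share a uniform file format."""
    img = Image.open(BytesIO(image_bytes)).convert("RGB")
    out = BytesIO()
    img.save(out, format=fmt)
    return out.getvalue()
```

Running every image of a sample through such a converter before feature extraction removes format variability as a confounding factor, which is exactly the condition the format-conversion experiments are designed to isolate.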
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: Experimental results obtained on initial samples compared to experimental results obtained on samples with face region extractable images and after expansion of etalon samples</head><p>To perform the experiments on format-converted images, the following formats were selected: JPG, PNG, and BMP. Experimental results obtained after applying the researched algorithm to the samples of format-converted images are presented in Table <ref type="table" target="#tab_2">3</ref>. A comparative diagram of the above-described set of experiments is demonstrated in Figure <ref type="figure" target="#fig_0">2</ref>. Analyzing the results, it can be concluded that the format conversion was effective only when the algorithm under research was applied to the image samples from the Database of Faces: the identification accuracy rate increased by 5% after converting the images to the JPG format. A variation of results is also observed in the case of LFW: the efficiency of the algorithm decreased by 5% for the expanded etalon image sample, while the identification accuracy rate remained the same for the initial number of images in the sample. The performance of the algorithm on the image samples from the other databases was not influenced by format conversion: identification accuracy rates for those databases remained stable across all experiments. Nevertheless, even such minor changes in identification accuracy rates indicate that in individual cases format conversion can be crucial for the success of the identification process. The next set of experiments was conducted with resolution conversion. The values of resolution were chosen with regard to those images on which the previous experiments showed the highest rates of identification accuracy. 
Since all of the selected databases contain images with different resolutions, and a simultaneous change of height and width may alter the face features depicted in the image, which are essential for successful performance of the algorithm, it was decided to automatically derive the width value of the image resolution from the height value, which was set as constant.</p><p>Accordingly, the following resolutions were chosen to form the image samples: width × 91, width × 100, width × 128, width × 144. The results obtained on resolution-converted images are presented in Table <ref type="table" target="#tab_3">4</ref>.</p><p>Analyzing the results presented in the diagram in Figure <ref type="figure" target="#fig_1">3</ref>, obtained after applying the researched algorithm to the samples of resolution-converted images, compared to the results of the previously described experiments, the identification accuracy rates of the algorithm changed in the following way for the face image samples from the databases: Database of Faces: increased by 5%-10%; FERET: decreased by 5%-35%; SCface: decreased by 10%-30%; CFP: increased by 5%-15% on the initial and 5%-10% on the expanded sample; Tinyface: increased by 10%-15% on the initial and 25%-35% on the expanded sample; LFW: decreased by 10%-20% on the initial sample and increased by 25% on the expanded sample; AgeDB: increased by 10%-20% on the initial sample and decreased by 5%-10% on the expanded sample. Also, after resolution conversion preserving the aspect ratio, face features could not be extracted from some of the images.</p></div>
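The width-from-fixed-height rule described above can be expressed directly as a small helper (the function name is ours). Given a fixed target height of 91, 100, 128 or 144 pixels, the width is derived so that the aspect ratio of the original face image is preserved:

```python
def target_size(orig_width: int, orig_height: int, new_height: int) -> tuple:
    """Derive the width from a fixed target height so that the aspect ratio
    of the face image (and hence the proportions of the depicted face
    features) is preserved during resolution conversion."""
    new_width = max(1, round(orig_width * new_height / orig_height))
    return new_width, new_height
```

For example, a 92×112 Database of Faces image resized to a constant height of 100 keeps a width of 82, and a 256×384 FERET image at height 144 keeps a width of 96; the resulting size would then be passed to whatever resampling routine performs the actual conversion.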
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>This work is devoted to the research of an information technology of face recognition and person identification based on such methods as Haar features, the Gabor wavelet transform, histograms of oriented gradients (HOG) and local binary patterns in one-dimensional space (1DLBP). The purpose of the research is to improve the performance of the algorithm underlying the information technology by exploring the conditions of image sample formation and the effect of image properties on the algorithm's efficiency.</p><p>Several databases of face images were selected for the experiments. During the research it was found that the performance of the algorithm is affected by the presence, in the etalon and test samples, of images in which it is difficult to localize the face region due to such capture conditions as an extreme angle of rotation of a subject's head, excessive lighting, etc. Algorithm performance can also be reduced when it is applied to samples of images captured under unconstrained conditions regarding the background, lighting, subject's head position, camera position relative to the subject, and other physical settings. After extracting images with unrecognizable face regions from the etalon and test samples, and expanding the etalon samples of images from the databases that were formed under unconstrained conditions, the identification accuracy rate of the algorithm improved for images from the Database of Faces by 7.5%, FERET by 20%, CFP by 27.5%, Tinyface by 7.5%, LFW by 45%, and AgeDB by 25%.</p><p>The next sets of experiments were performed with conversion of such image properties as format and resolution, with the aim of exploring the possibility of reducing the variability of the algorithm's performance results on image samples from different databases. 
As a result of the experiments, it was established that the format of the images to which the explored algorithm was applied affects the efficiency in individual cases: the identification accuracy rate increased by 5% after converting the Database of Faces images to the JPG format and decreased by 5% in the case of the expanded etalon sample of images from the LFW database. The conversion of resolution also affected the algorithm's performance. The results increased by 5%-10% for the samples of images from the Database of Faces, decreased by 5%-35% for the samples from FERET, decreased by 10%-30% for the samples from SCface, increased by 5%-15% on the initial and 5%-10% on the expanded samples from CFP, increased by 10%-15% on the initial and 25%-35% on the expanded samples from Tinyface, decreased by 10%-20% on the initial sample and increased by 25% on the expanded sample from LFW, and increased by 10%-20% on the initial sample and decreased by 5%-10% on the expanded sample from AgeDB.</p><p>The highest identification accuracy rate, 95%, was obtained on the sample of initial images from the SCface database and on the image sample obtained after reforming the etalon image sample by extracting images with an unrecognizable face region.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Experimental results obtained on initial samples with face region extractable images compared to experimental results obtained on samples with format-converted images and after expansion of etalon samples</figDesc><graphic coords="7,109.25,72.00,376.27,210.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Experimental results obtained on initial samples with face region extractable images compared to experimental results obtained on samples with resolution-converted images and after expansion of etalon samples</figDesc><graphic coords="7,96.50,503.57,401.97,214.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="6,122.00,166.54,350.38,193.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Results of experiments on initial image samples</figDesc><table><row><cell>Database</cell><cell>Total number of individuals / images</cell><cell>Accuracy / error identification rate</cell><cell>Number of correctly / incorrectly identified images</cell></row><row><cell>DoF</cell><cell>40 / 120</cell><cell>72.5% / 27.5%</cell><cell>29 / 11</cell></row><row><cell>FERET</cell><cell>40 / 99</cell><cell>75% / 25%</cell><cell>30 / 10</cell></row><row><cell>SCface</cell><cell>40 / 160</cell><cell>95% / 5%</cell><cell>38 / 2</cell></row><row><cell>CFP</cell><cell>40 / 120</cell><cell>17.5% / 82.5%</cell><cell>7 / 33</cell></row><row><cell>Tinyface</cell><cell>40 / 120</cell><cell>2.5% / 97.5%</cell><cell>1 / 39</cell></row><row><cell>LFW</cell><cell>40 / 120</cell><cell>10% / 90%</cell><cell>4 / 36</cell></row><row><cell>AgeDB</cell><cell>40 / 120</cell><cell>5% / 95%</cell><cell>2 / 38</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Results of the experiments performed after reforming of etalon and test image samples</figDesc><table><row><cell>Database</cell><cell>Total number of individuals / images</cell><cell>Accuracy / error identification rate</cell><cell>Number of correctly / incorrectly identified images</cell></row><row><cell>DoF</cell><cell>40 / 120</cell><cell>80% / 20%</cell><cell>32 / 8</cell></row><row><cell>FERET</cell><cell>40 / 90</cell><cell>95% / 5%</cell><cell>38 / 2</cell></row><row><cell>SCface</cell><cell>40 / 136</cell><cell>95% / 5%</cell><cell>38 / 2</cell></row><row><cell>CFP</cell><cell>40 / 120 40 / 364</cell><cell>45% / 55% 60% / 40%</cell><cell>18 / 22 24 / 16</cell></row><row><cell>Tinyface</cell><cell>40 / 80 40 / 138</cell><cell>10% / 90% 10% / 90%</cell><cell>4 / 36 4 / 36</cell></row><row><cell>LFW</cell><cell>40 / 120 40 / 210</cell><cell>55% / 45% 45% / 55%</cell><cell>22 / 18 18 / 22</cell></row><row><cell>AgeDB</cell><cell>40 / 120 40 / 308</cell><cell>30% / 70% 45% / 55%</cell><cell>12 / 28 18 / 22</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Results of the experiments performed on format converted image samples</figDesc><table><row><cell>Database</cell><cell>Format</cell><cell>Total number of individuals / images</cell><cell>Accuracy / error identification rate</cell></row><row><cell>DoF</cell><cell>JPG</cell><cell>40 / 120</cell><cell>85% / 15%</cell></row><row><cell>FERET</cell><cell>PNG, BMP</cell><cell>40 / 120</cell><cell>80% / 20%</cell></row><row><cell>SCface</cell><cell>JPG, PNG, BMP</cell><cell>40 / 90</cell><cell>95% / 5%</cell></row><row><cell>CFP</cell><cell>JPG, PNG, BMP JPG, PNG, BMP</cell><cell>40 / 136 40 / 120</cell><cell>95% / 5% 45% / 55%</cell></row><row><cell>Tinyface</cell><cell>JPG, PNG, BMP JPG, PNG, BMP</cell><cell>40 / 364 40 / 80</cell><cell>60% / 40% 10% / 90%</cell></row><row><cell>LFW</cell><cell>JPG, PNG, BMP JPG, PNG, BMP</cell><cell>40 / 138 40 / 120</cell><cell>10% / 90% 55% / 45%</cell></row><row><cell>AgeDB</cell><cell>JPG PNG, BMP</cell><cell>40 / 210 40 / 210</cell><cell>40% / 60% 45% / 55%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>Results of the experiments performed on resolution converted image samples</figDesc><table><row><cell>Database</cell><cell>Total number of individuals / images</cell><cell>Accuracy / error identification rate</cell><cell>Number of correctly / incorrectly identified images</cell></row><row><cell></cell><cell></cell><cell>width x 91</cell><cell></cell></row><row><cell>DoF</cell><cell>40 / 120</cell><cell>85% / 15%</cell><cell>34 / 6</cell></row><row><cell>FERET</cell><cell>40 / 90</cell><cell>65% / 35%</cell><cell>26 / 14</cell></row><row><cell>SCface</cell><cell>40 / 136</cell><cell>65% / 35%</cell><cell>26 / 14</cell></row><row><cell>CFP</cell><cell>40 / 120 40 / 364</cell><cell>50% / 50% 65% / 35%</cell><cell>20 / 20 26 / 14</cell></row><row><cell>Tinyface</cell><cell>40 / 80 40 / 138</cell><cell>25% / 75% 45% / 55%</cell><cell>10 / 30 18 / 22</cell></row><row><cell>LFW</cell><cell>40 / 120 40 / 210</cell><cell>40% / 60% 40% / 60%</cell><cell>16 / 24 16 / 24</cell></row><row><cell>AgeDB</cell><cell>40 / 120 40 / 308</cell><cell>40% / 60% 45% / 55%</cell><cell>16 / 24 18 / 22</cell></row><row><cell></cell><cell></cell><cell>width x 100</cell><cell></cell></row><row><cell>DoF</cell><cell>40 / 120</cell><cell>90% / 10%</cell><cell>36 / 4</cell></row><row><cell>FERET</cell><cell>40 / 90</cell><cell>80% / 20%</cell><cell>32 / 8</cell></row><row><cell>SCface</cell><cell>40 / 136</cell><cell>70% / 30%</cell><cell>28 / 12</cell></row><row><cell>CFP</cell><cell>40 / 120 40 / 364</cell><cell>60% / 40% 70% / 30%</cell><cell>24 / 16 28 / 12</cell></row><row><cell>Tinyface</cell><cell>40 / 80 40 / 138</cell><cell>25% / 75% 45% / 55%</cell><cell>10 / 30 18 / 22</cell></row><row><cell>LFW</cell><cell>40 / 120 40 / 210</cell><cell>35% / 65% 30% / 70%</cell><cell>14 / 26 12 / 28</cell></row><row><cell>AgeDB</cell><cell>40 / 120 40 / 308</cell><cell>40% / 60% 40% / 60%</cell><cell>16 / 24 16 / 24</cell></row><row><cell></cell><cell></cell><cell>width x 128</cell><cell></cell></row><row><cell>DoF</cell><cell>40 / 120</cell><cell>85% / 15%</cell><cell>34 / 6</cell></row><row><cell>FERET</cell><cell>40 / 90</cell><cell>85% / 15%</cell><cell>34 / 6</cell></row><row><cell>SCface</cell><cell>40 / 136</cell><cell>85% / 15%</cell><cell>34 / 6</cell></row><row><cell>CFP</cell><cell>40 / 120 40 / 364</cell><cell>50% / 50% 60% / 40%</cell><cell>20 / 20 24 / 16</cell></row><row><cell>Tinyface</cell><cell>40 / 80 40 / 138</cell><cell>20% / 80% 35% / 65%</cell><cell>8 / 32 14 / 26</cell></row><row><cell>LFW</cell><cell>40 / 120 40 / 210</cell><cell>35% / 65% 70% / 30%</cell><cell>14 / 26 28 / 12</cell></row><row><cell>AgeDB</cell><cell>40 / 120 40 / 308</cell><cell>50% / 50% 45% / 55%</cell><cell>20 / 20 18 / 22</cell></row><row><cell></cell><cell></cell><cell>width x 144</cell><cell></cell></row><row><cell>DoF</cell><cell>40 / 120</cell><cell>85% / 15%</cell><cell>34 / 6</cell></row><row><cell>FERET</cell><cell>40 / 90</cell><cell>90% / 10%</cell><cell>36 / 4</cell></row><row><cell>SCface</cell><cell>40 / 136</cell><cell>95% / 5%</cell><cell>38 / 2</cell></row><row><cell>CFP</cell><cell>40 / 120 40 / 364</cell><cell>40% / 60% 65% / 35%</cell><cell>16 / 24 26 / 14</cell></row><row><cell>Tinyface</cell><cell>40 / 80 40 / 138</cell><cell>25% / 75% 35% / 65%</cell><cell>10 / 30 14 / 26</cell></row><row><cell>LFW</cell><cell>40 / 120 40 / 210</cell><cell>45% / 55% 45% / 55%</cell><cell>18 / 22 18 / 22</cell></row><row><cell>AgeDB</cell><cell>40 / 120 40 / 308</cell><cell>40% / 60% 35% / 65%</cell><cell>16 / 24 14 / 26</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Past, Present, and Future of Face Recognition: A Review</title>
		<author>
			<persName><forename type="first">I</forename><surname>Adjabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ouahabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Benzaoui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Taleb-Ahmed</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics9081188</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page">1188</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">50 years of biometric research: Accomplishments, challenges, and opportunities</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Jain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nandakumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ross</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.patrec.2015.12.013</idno>
		<ptr target="https://doi.org/10.1016/j.patrec.2015.12.013" />
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<idno type="ISSN">0167-8655</idno>
		<imprint>
			<biblScope unit="volume">79</biblScope>
			<biblScope unit="page" from="80" to="105" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Biometrics for Industry 4.0: a survey of recent applications</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lucia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhiwei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Michele</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12652-023-04632-7</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Ambient Intelligence and Humanized Computing</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="11239" to="11261" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<ptr target="https://www.sbir.gov/node/2217727" />
		<title level="m">Implement Face Recognition on Autonomous sUAS for Identification and Intelligence-Gathering</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Development of information technology for person identification in video stream</title>
		<author>
			<persName><forename type="first">O</forename><surname>Bychkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Merkulova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhabska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shatyrko</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3018/Paper_7.pdf" />
	</analytic>
	<monogr>
		<title level="m">II International Scientific Symposium &quot;Intelligent Solutions&quot;</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">3018</biblScope>
			<biblScope unit="page" from="70" to="80" />
		</imprint>
	</monogr>
	<note>(IntSol-2021)</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Multi-Faces Recognition Process Using Haar Cascades and Eigenface Methods</title>
		<author>
			<persName><forename type="first">T</forename><surname>Mantoro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Ayu</surname></persName>
		</author>
		<author>
			<persName><surname>Suhendi</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICMCS.2018.8525935</idno>
	</analytic>
	<monogr>
		<title level="m">6th International Conference on Multimedia Computing and Systems (ICMCS)</title>
				<meeting><address><addrLine>Rabat, Morocco</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Face Recognition System Using Feature Extraction Method of 2-D Gabor Wavelet Filter Bank and Distance-Based Similarity Measures</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Isnanto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Zahra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Kurniawan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">P</forename><surname>Windasari</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICIC56845.2022.10007016</idno>
	</analytic>
	<monogr>
		<title level="m">Seventh International Conference on Informatics and Computing (ICIC)</title>
				<meeting><address><addrLine>Denpasar, Bali, Indonesia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="4" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Tutorial on Gabor</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Movellan</surname></persName>
		</author>
		<ptr target="https://inc.ucsd.edu/mplab/tutorials/gabor.pdf" />
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Face Analysis, Description and Recognition using Improved Local Binary Patterns in One Dimensional Space</title>
		<author>
			<persName><forename type="first">A</forename><surname>Benzaoui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Boukrouche</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Control Engineering and Applied Informatics</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="52" to="60" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction</title>
		<author>
			<persName><forename type="first">B</forename><surname>Attallah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Serir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chahir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Boudjelal</surname></persName>
		</author>
		<idno type="DOI">10.1117/1.JEI.26.6.063006</idno>
	</analytic>
	<monogr>
		<title level="j">J. Electron. Imag</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page">063006</biblScope>
			<date type="published" when="2017-11-08">November 8, 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">The Database of Faces</title>
		<ptr target="https://cam-orl.co.uk/facedatabase.html" />
		<imprint/>
		<respStmt>
			<orgName>AT&amp;T Laboratories Cambridge</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="https://www.nist.gov/programs-projects/face-recognition-technology-feret" />
		<title level="m">Face Recognition Technology (FERET)</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">SCface – surveillance cameras face database</title>
		<author>
			<persName><forename type="first">M</forename><surname>Grgic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Delac</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Grgic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications Journal</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="863" to="879" />
			<date type="published" when="2011-02">February 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<ptr target="https://www.scface.org" />
		<title level="m">SCface – Surveillance Cameras Face Database</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<ptr target="http://www.cfpw.io" />
		<title level="m">Celebrities in Frontal-Profile in the Wild</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">TinyFace: Face Recognition in Native Low-resolution Imagery</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gong</surname></persName>
		</author>
		<ptr target="https://qmul-tinyface.github.io/index.html" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="http://vis-www.cs.umass.edu/lfw/index.html" />
		<title level="m">Labeled Faces in the Wild</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">AgeDB: The First Manually Collected, In-the-Wild Age Database</title>
		<author>
			<persName><forename type="first">S</forename><surname>Moschoglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Papaioannou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sagonas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kotsia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zafeiriou</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPRW.2017.250</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</title>
				<meeting><address><addrLine>Honolulu, HI, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1997" to="2005" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Exploring Image Unified Space for Improving Information Technology for Person Identification</title>
		<author>
			<persName><forename type="first">V</forename><surname>Martsenyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Bychkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Merkulova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhabska</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2023.3297488</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="76347" to="76358" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
