<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Research and Comparative Analysis of Person Identification Information Technology</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksii Bychkov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yelyzaveta Zhabska</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kateryna Merkulova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykyta Merkulov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrska str. 64/13, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>54</fpage>
      <lpage>64</lpage>
      <abstract>
        <p>Face recognition and person identification technologies are increasingly being used in sensitive areas where a false identification can lead to irreparable consequences. Therefore, research into such technologies with the aim of improving their efficiency is relevant. Today, most recognition and identification technologies are based on algorithms containing neural networks. However, such approaches require a large amount of data, high computing power, and considerable training time, which does not allow them to be adapted to rapidly changing real-world conditions. Methods based on local-texture descriptors, in contrast to neural network based methods, do not require any of the previously mentioned conditions to be fulfilled. Furthermore, the efficiency of local-texture methods is close to the efficiency of methods based on neural networks under constrained conditions and even exceeds it in some cases of unconstrained conditions. This paper presents research on methods based on local-texture descriptors in comparison with methods based on neural networks. An approach to person identification is proposed that is based on local-texture descriptors of face images and eliminates the shortcomings of algorithms based on neural networks. As a result of the experimental study, it was found that the identification accuracy of the proposed algorithm exceeds that of algorithms based on neural networks by 13.75-16.25% under conditions of different positions of the subject's head and by 10.5-27.5% under incomplete visibility of facial features in the image.</p>
      </abstract>
      <kwd-group>
        <kwd>Information technology</kwd>
        <kwd>face recognition</kwd>
        <kwd>biometric identification</kwd>
        <kwd>local-texture descriptors</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>At present, face recognition and identification technologies are among the most important
technologies used to ensure security in a variety of industries, such as border services, police, and
military affairs.</p>
      <p>
        The most common areas of application of face identification technologies, according to the research
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], are the following: access control – confirmation of a person’s identity by a facial image;
identification of wanted persons – real-time person identification using surveillance cameras, which
allows suspects to be quickly neutralized and increases the level of security in public places; criminal
investigations – confirmation of a suspect’s identity at the scene of a crime based on images
from surveillance cameras.
      </p>
      <p>
        Constant improvement of face identification technologies allows them to be used on an even larger
scale and in more complex conditions. In the future, such technologies may be implemented in
unmanned aerial vehicles of special operations forces for perimeter protection, intelligence gathering,
and rescue missions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Government services of Ukraine and independent organizations use person identification
technologies during the Russian-Ukrainian war to increase security at checkpoints, identify and detain
Russian criminals, and expose military psychological information operations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The use of such technologies in sensitive areas, where incorrect identification can lead to
irreparable consequences such as reputational damage, wrongful conviction, or even human death,
creates the need to improve identification technologies in order to reduce the probability of
identification errors.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works and Research Objective</title>
      <p>
        Currently, most works devoted to the research of face recognition and identification technologies
use approaches based on neural networks. The analysis of several studies comparing different
approaches to face recognition shows that deep convolutional neural networks provide high face
recognition accuracy by learning more discriminative features on large datasets, and that they
outperform holistic, geometric, and local-texture approaches. For example, in the paper [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] thirty-seven studies published since 2014 were analyzed, describing algorithms based on such
neural network architectures as CNN, VGGNet, GoogleNet, LeNet, and ResNet. The overall
recognition accuracy for all studied algorithms ranges from 97.35% to 99.86%. However, effective
training of neural networks requires large amounts of high-quality training data as well as powerful
hardware, such as GPUs.
      </p>
      <p>
        However, approaches based on neural network methods are not sufficiently flexible and cannot
quickly adapt to real-world conditions, which can change rapidly. For example, after the beginning of
the coronavirus pandemic, the National Institute of Standards and Technology conducted a study in
July 2020 on the recognition accuracy of the most common algorithms of that time on images
containing medical masks [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. According to the results of the study, when identification had to be performed on images of faces
partially covered by a mask, the most accurate algorithms were unable to identify a person in 20% to
50% of cases, which was caused by the inability of the identification algorithms to distinguish facial
features in the image. In November 2020 the study was repeated, and it was established that some
widely used algorithms fail to identify a person in 10-40% of cases [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Since algorithms based on neural networks require a large amount of high-quality data for
training and are also expensive to maintain, most developers are unable to quickly adapt such
algorithms to the fast-moving conditions of the real world.
      </p>
      <p>
        An alternative to neural network based approaches to face recognition and identification is the
local-texture approach, which offers high efficiency in analysis time and recognition speed.
Local-texture methods are easy to integrate, allowing real-time image processing
in complex environments. In addition, these methods are invariant to scale and displacement [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
There are several studies that compare the performance of algorithms based on these descriptors with
algorithms that use neural network methods.
      </p>
      <p>
        Paper [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] describes a comparison of an algorithm based on local binary patterns (LBP) and the
histogram of oriented gradients (HOG) with an algorithm based on a CNN,
according to data described in other works. According to the results of this study, the
CNN-based algorithm achieves an average recognition accuracy of 99%, in
contrast to the LBPH algorithm with an average of 92%. The authors found that, according to the
literature reviewed in the paper, the accuracy of the algorithms is affected by the
position of the subject’s head recorded in the image. The LBPH algorithm obtained an
accuracy of 86% when the head is in a straight position and 80% when the head is tilted. At the
same time, for the CNN algorithm the test results showed an accuracy of 81.25% when the head is
straight, 75% when the head is tilted, and 43.75% when the subject is looking down.
Thus, the LBPH-based algorithm is more resistant to rotation of the recognition subject’s
head, while the efficiency of the neural network based algorithm may decrease by about 37.5%
under the same condition. The authors also state that the main difficulty in using neural network
algorithms is that they require many data sets for training, so an efficient way of
collecting data sets is needed.
      </p>
      <p>
        Besides, in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] the above-mentioned methods were compared in terms of the computation time
required for face recognition from the trained data set. The experiments yielded the following
computation times: LBP – 1.065 ms, HOG – 2.330 ms, CNN – 13.743 ms. That is,
local-texture descriptors demonstrate better detection and recognition speed compared to the neural
network based method. At the same time, according to the authors of the study, as the complexity of
the images increased, all methods showed mostly the same recognition accuracy.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] experiments were conducted applying methods based on ResNet and FaceNet
neural networks to images of faces covered by medical masks, as a result of which the recognition
accuracy of these methods decreased by 42.5% and 26.25%, respectively, compared to the rates
obtained by applying the methods to images where faces are fully visible.
      </p>
      <p>
        The authors of the paper [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] also noted that methods such as the Gabor wavelet transform and LBP
have an advantage in extracting detailed features of face images, while methods based on neural
networks offer good reliability under unfavorable recognition conditions. Although neural
networks can be used to identify detailed facial features, the cost of face identification is very
high, because a deeper network model and more training samples are required.
      </p>
      <p>Thus, in contrast to neural networks, methods based on local-texture descriptors do not require a
large amount of data, high computing power, or long training time. Moreover, on
images captured under controlled conditions, the efficiency of such methods is close to that of
methods based on neural networks, and under some unconstrained conditions it even exceeds it.
Therefore, methods based on local-texture descriptors should be investigated and improved, in
particular in works devoted to solving the tasks of recognition and identification of a person based on
a face image.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] a person identification information technology based on an algorithm containing local-texture
descriptors was first proposed. The purpose of this paper is to study the proposed information
technology and to compare the results of its underlying algorithm with the known results of
algorithms based on neural networks presented in the reviewed literature.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Approach</title>
      <p>The algorithm on which the proposed information technology of person identification
is built contains such methods of the local-texture approach as local binary patterns in
one-dimensional space (1DLBP) and the histogram of oriented gradients (HOG).</p>
      <p>
        As noted in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], feature extraction strategies focused on texture information play a significant
role in pattern recognition and computer vision. Local-texture descriptors have attracted much
attention and have been implemented in many applications designed for texture classification, face
recognition, or image indexing. The texture extraction algorithms proposed in the literature are divided
into statistical and structural methods. They are distinctive, resistant to monotonic changes in
gray level, poor lighting, and brightness dispersion, and do not require segmentation. The purpose of
a local descriptor is to transform pixel-level information into a form that captures the
most compelling content while remaining insensitive to various effects caused by variations in the
environment. In contrast to global descriptors, which compute features directly from the entire image,
local descriptors, which are more effective in unconstrained situations, model features in small local
fragments of the image.
      </p>
      <p>The methods of the local-texture approach are characterized by such advantages as high efficiency
in analysis time and recognition speed. They are easy to integrate, enabling real-time image processing in
complex environments. In addition, these methods are invariant to changes in scale and displacement.</p>
      <p>
        During the analysis of existing algorithms based on local-texture descriptors, it was found that
combining several descriptors significantly increases the efficiency of face recognition algorithms
[
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ]. Therefore, to implement the algorithm described in this paper, which is the basis of the person
identification information technology, it was decided to use a combination of two descriptors. The
first of these methods is a modification of the local binary pattern (LBP) descriptor that produces a
binary code of a two-dimensional image in one-dimensional space (1DLBP). This descriptor captures
fine details and relative relationships between all pixels, and also combines local and global
features of the human face image. Several studies have proven the efficiency of combining
LBP-based descriptors with histograms of oriented gradients (HOG) – the combination of these
descriptors increases the recognition accuracy rate. Therefore, to increase
the performance of the 1DLBP descriptor, it was decided to combine it with the HOG descriptor.
      </p>
      <p>
        The proposed algorithm consists of the following stages.
      </p>
      <p>
        1. Localization of a person’s face in the image. For this, a method of detecting objects in images
is used – a classifier based on Haar features [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. This approach is based on machine learning, where
a cascade function is trained on sets of images in which a human face is captured and sets of images
in which a human face is absent. As a result of learning the features f<sub>j</sub>, the threshold value
θ<sub>j</sub> and the parity value p<sub>j</sub> are obtained. A simple classifier can be described as follows:
        <disp-formula id="eq1">
          <tex-math><![CDATA[ h_j(x) = \begin{cases} 1, & p_j f_j(x) < p_j \theta_j, \\ 0, & \text{otherwise}. \end{cases} \qquad (1) ]]></tex-math>
        </disp-formula>
      </p>
      <p>To improve the efficiency of a simple classifier, the AdaBoost learning algorithm is used. It
chooses the classifier h<sub>t</sub> (for t = 1, ..., T) with the lowest error ε<sub>t</sub>, where ε<sub>i</sub> = 0 if example x<sub>i</sub> is
classified correctly, ε<sub>i</sub> = 1 otherwise, and β<sub>t</sub> = ε<sub>t</sub> / (1 - ε<sub>t</sub>). After applying this algorithm, the final strong
classifier can be defined as follows, with α<sub>t</sub> = log(1 / β<sub>t</sub>):
        <disp-formula id="eq2">
          <tex-math><![CDATA[ h(x) = \begin{cases} 1, & \sum_{t=1}^{T} \alpha_t h_t(x) \ge \dfrac{1}{2} \sum_{t=1}^{T} \alpha_t, \\ 0, & \text{otherwise}. \end{cases} \qquad (2) ]]></tex-math>
        </disp-formula>
      </p>
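      <p>As an illustration of this stage, the following minimal Python sketch detects and crops a face using a pre-trained Haar cascade shipped with OpenCV; the cascade file, the image path, and the detection parameters are illustrative assumptions, not the exact configuration used in this work.</p>
      <preformat><![CDATA[
# Illustrative sketch of stage 1: face localization with a Haar cascade.
# The bundled cascade and the detection parameters are example choices,
# not the exact configuration used in this work.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("subject.jpg")          # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) bounding boxes of detected faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]          # crop for the next stages
]]></preformat>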
      <p>
        2. Processing of an image containing only a face by the Gabor wavelet transform [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Gabor wavelets
have a shape similar to the receptive fields of simple cells of the primary visual cortex, so the
representation of images using Gabor wavelets is based on the principles of image representation in
the human mind [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Due to its biological significance and technical characteristics, this method is
effective for image processing to highlight object edges. The complex Gabor function in the spatial
domain can be written as follows:
        <disp-formula id="eq3">
          <tex-math><![CDATA[ g(x, y) = s(x, y)\, \omega_{\tau}(x, y), \qquad (3) ]]></tex-math>
        </disp-formula>
where s(x, y) is a complex sine wave, or carrier, and ω<sub>τ</sub>(x, y) is a 2D Gaussian function, or envelope.
The carrier is defined as:
        <disp-formula id="eq4">
          <tex-math><![CDATA[ s(x, y) = \exp\bigl(j(2\pi(u_0 x + v_0 y) + P)\bigr), \qquad (4) ]]></tex-math>
        </disp-formula>
where (u<sub>0</sub>, v<sub>0</sub>) and P define the spatial frequency and the phase of the sine wave, respectively.
Concerning the 2D Gaussian function, it can be written as follows:
        <disp-formula id="eq5">
          <tex-math><![CDATA[ \omega_{\tau}(x, y) = K \exp\bigl(-\pi(a^2 (x - x_0)_{\tau}^2 + b^2 (y - y_0)_{\tau}^2)\bigr), \qquad (5) ]]></tex-math>
        </disp-formula>
where (x<sub>0</sub>, y<sub>0</sub>) is the peak of the function, a and b are the Gaussian function scaling parameters, and
the index τ denotes the rotation operation.
      </p>
      <p>By changing the parameters of the wavelet, it is possible to obtain several wavelet-transformed
images as a result, examples of which are presented in Figure 2.</p>
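      <p>As a sketch of this stage, a bank of Gabor filters with several orientations can be built with OpenCV as shown below; cv2.getGaborKernel produces a real-valued Gabor kernel, and the kernel size, sigma, wavelength, and gamma values here are illustrative assumptions rather than the parameters used in the experiments.</p>
      <preformat><![CDATA[
# Illustrative sketch of stage 2: filtering the face crop with a bank of
# Gabor wavelets at several orientations (the rotation index tau in
# equation (5)). All numeric parameters are example values.
import cv2
import numpy as np

def gabor_bank(face, orientations=8):
    responses = []
    for k in range(orientations):
        theta = k * np.pi / orientations          # envelope rotation
        kernel = cv2.getGaborKernel(
            ksize=(31, 31), sigma=4.0, theta=theta,
            lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(face, cv2.CV_32F, kernel))
    return responses   # one wavelet-transformed image per orientation
]]></preformat>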
      <p>3. Extracting the vector of image features. The methods of the local-texture approach are
successively applied to the images formed as a result of Gabor wavelet transform processing.</p>
      <p>The original LBP operator was used for texture discrimination, showing powerful and efficient
performance under conditions of changing rotation angles and illumination. For a pixel (x, y)<sub>c</sub> in a
gray-scale image, its LBP texture is calculated by comparing this pixel with P neighboring
pixels at a distance R from the given pixel. The value of LBP((x, y)<sub>c</sub>) is obtained as:
        <disp-formula id="eq6">
          <tex-math><![CDATA[ LBP_{P,R}(x, y) = \sum_{p=1}^{P} S\bigl((x, y)_p - (x, y)_c\bigr)\, 2^{p-1}, \qquad (6) ]]></tex-math>
        </disp-formula>
where S(x) can be described as:
        <disp-formula id="eq7">
          <tex-math><![CDATA[ S(x) = \begin{cases} 1, & x \ge 0, \\ 0, & \text{otherwise}. \end{cases} \qquad (7) ]]></tex-math>
        </disp-formula>
      </p>
      <p>Figure 3 contains the output of the 1DLBP
method for each Gabor wavelet transformed image presented in Figure 2, respectively. The x-axis
represents the feature vector value, the y-axis represents the value index in the feature vector
sequence. Each of these vectors contains 512 values, which are further summed up and normalized to
form the first part of the global feature vector of an input image.</p>
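      <p>The thresholding-and-weighting scheme of equations (6)-(7) can be sketched in Python as follows; this minimal version computes the classic LBP code for a single pixel with P = 8 neighbors at radius R = 1, and illustrates the basic operator rather than the 1DLBP modification itself.</p>
      <preformat><![CDATA[
# Illustrative sketch of the basic LBP operator of equations (6)-(7)
# with P = 8 neighbors at radius R = 1: each neighbor is thresholded
# against the central pixel by S(x) and weighted by a power of two.
# This is the classic 2D LBP, not the 1DLBP modification.
import numpy as np

def lbp_code(image, x, y):
    center = image[y, x]
    # Eight neighbors, enumerated clockwise from the top-left corner.
    neighbors = [image[y - 1, x - 1], image[y - 1, x], image[y - 1, x + 1],
                 image[y, x + 1], image[y + 1, x + 1], image[y + 1, x],
                 image[y + 1, x - 1], image[y, x - 1]]
    # S((x, y)_p - (x, y)_c) = 1 when the neighbor is not darker.
    return sum(int(g >= center) * 2 ** p for p, g in enumerate(neighbors))
]]></preformat>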
      <p>
        The HOG method is applied to the wavelet-transformed images to extract image shape features. Each
interval within the histogram represents the number of edges of image objects whose orientation falls
within a certain range. Combining the computed histograms over all subranges of the images
allows a HOG descriptor containing texture and shape information to be formed. To create a histogram of
local gradients, orientation gradients are first calculated for each region of the normalized image. The
gradient is calculated by convolution filtering with one-dimensional horizontal D<sub>x</sub> and vertical D<sub>y</sub>
discrete derivative masks. The resulting value is the sum of adjacent pixels, taking into account the
weight of the mask [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]:
        <disp-formula id="eq9">
          <tex-math><![CDATA[ \nabla I(x, y) = (I * D_x,\; I * D_y), \qquad (9) ]]></tex-math>
        </disp-formula>
        <disp-formula id="eq10">
          <tex-math><![CDATA[ G_x(I) = I * D_x, \qquad G_y(I) = I * D_y. \qquad (10) ]]></tex-math>
        </disp-formula>
      </p>
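      <p>Equations (9)-(10) and the subsequent orientation binning can be sketched as follows; the single nine-bin histogram over the whole patch is a simplification of the cell-and-block layout of the full HOG descriptor, and the [-1, 0, 1] masks are the standard one-dimensional discrete derivative masks assumed here.</p>
      <preformat><![CDATA[
# Illustrative sketch of equations (9)-(10): convolution with the 1D
# horizontal (Dx) and vertical (Dy) derivative masks, then a weighted
# orientation histogram. A full HOG descriptor bins gradients per cell
# and normalizes per block; one histogram is used here for brevity.
import numpy as np
from scipy.ndimage import convolve1d

def gradient_histogram(patch, bins=9):
    patch = patch.astype(float)
    gx = convolve1d(patch, [-1, 0, 1], axis=1)        # I * Dx
    gy = convolve1d(patch, [-1, 0, 1], axis=0)        # I * Dy
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180      # unsigned orientation
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180),
                           weights=magnitude)
    return hist / (np.linalg.norm(hist) + 1e-12)      # L2-normalized
]]></preformat>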
      <p>In Figure 4, examples of HOG feature vector values are presented. It contains the output of the
HOG method for each Gabor wavelet transformed image depicted in Figure 2, respectively, i.e., the
HOG feature vectors of the images. The x-axis represents the feature vector value, the y-axis represents the
value index in the feature vector sequence. Like the 1DLBP vectors, the HOG vectors contain 512
values, which are further summed up and normalized to form the second part of the global feature vector of an
input image. Finally, the vectors formed as a result of applying the 1DLBP and HOG methods to the
image are concatenated, forming the global feature vector of the face image, as sketched below.</p>
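      <p>The assembly of the global feature vector described above can be sketched as follows; lbp_vector and hog_vector are hypothetical helpers standing in for the 512-value descriptor computations described in the text.</p>
      <preformat><![CDATA[
# Illustrative sketch of the final step of stage 3: the 512-value 1DLBP
# and HOG vectors are summed over all wavelet-transformed images,
# normalized, and concatenated into one global feature vector.
# lbp_vector() and hog_vector() are hypothetical helper functions.
import numpy as np

def global_feature_vector(wavelet_images):
    lbp_sum = np.zeros(512)
    hog_sum = np.zeros(512)
    for img in wavelet_images:
        lbp_sum += lbp_vector(img)    # hypothetical 512-value 1DLBP output
        hog_sum += hog_vector(img)    # hypothetical 512-value HOG output
    lbp_part = lbp_sum / (np.linalg.norm(lbp_sum) + 1e-12)
    hog_part = hog_sum / (np.linalg.norm(hog_sum) + 1e-12)
    return np.concatenate([lbp_part, hog_part])   # 1024-value global vector
]]></preformat>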
      <p>4. Classification of the vector of image features. The result of the algorithm is an identifier
that can be used to identify the person captured in the image submitted to the algorithm’s input.</p>
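      <p>The classification stage is not detailed further in the text; as a minimal sketch under an assumed nearest-neighbor scheme, the identifier could be obtained by matching the global feature vector against a gallery of enrolled vectors, where the gallery structure and the Euclidean metric are illustrative assumptions.</p>
      <preformat><![CDATA[
# Minimal sketch of stage 4 under an assumed nearest-neighbor scheme.
# The actual classifier is not specified in the text; the gallery dict
# and the Euclidean distance are illustrative assumptions only.
import numpy as np

def identify(feature_vector, gallery):
    # gallery: dict mapping a person identifier to an enrolled vector.
    # Returns the identifier whose enrolled vector is closest.
    return min(gallery, key=lambda person_id:
               np.linalg.norm(feature_vector - gallery[person_id]))
]]></preformat>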
      <p>The general process of the proposed algorithm described above is presented in Figure 5.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Research</title>
      <p>Experimental research was conducted in order to establish the efficiency of the proposed algorithm
on images with various parameters and to compare it with the most common algorithms based on
neural networks.</p>
      <p>During the experiments, a dataset was used of face images captured from different distances
and camera angles, with various head positions (the subject looks
directly into the camera, the camera is placed above the subject’s head, or the subject looks at different
non-fixed points), with changes in lighting and facial expression (unsmiling or smiling, closed or open
eyes), and in the presence of some facial details (such as glasses). The dataset was formed from 136 images
of 40 individuals.</p>
      <p>
        First, the result obtained during the experimental study of the proposed algorithm is compared
with the results of the most common algorithms based on neural networks, indicated in the literature
review in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The results of the experiments are presented in Table 1. For comparison, the highest
identification accuracy rates among all sets of experiments were used.
      </p>
      <p>As can be seen in Figure 6, the difference between the efficiency of the proposed algorithm and
algorithms based on neural networks is 2.35-4.86%.</p>
      <p>
        At the second stage of the research, experiments were conducted on images with different
positions of the subject’s head. The dataset consisted of images of faces ranging from left to right
profile in equal steps of 22.5 degrees. Thus, for one subject, the set contained several images with a
viewing angle from -67.5 to +67.5 degrees. The experimental results shown in Table 2 are compared
with the results of the algorithm based on the CNN neural network given in the paper [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Figure 7 shows a comparative diagram of the results of experiments on images with variability in
the subject’s head position. In this set of experiments, the proposed algorithm based on local-texture
descriptors exceeds the results of the algorithm based on the neural network by 13.75% on images
where the subject’s face is positioned straight (the subject is looking at the camera) and by 16.25% on
images where the subject’s face is not completely visible (the subject is looking down).</p>
      <p>
        The next set of experiments to study the proposed algorithm was performed on images of faces
whose lower part is hidden from the observer, thus simulating the identification of a
person wearing a medical mask or balaclava. The identification accuracy rates of the algorithm are
compared with the results of algorithms based on the ResNet and FaceNet neural networks, which were
obtained in the course of a previous study [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The results of the experiments are presented in Table 3.
      </p>
      <p>A comparative diagram of the results of experiments on partially hidden face images is shown in
Figure 8. Even though algorithms based on neural networks demonstrate higher results on images
where the subject’s face is fully visible, when applied to images of a partially hidden face their
efficiency is significantly reduced: by 42.5% for the algorithm based on the ResNet neural network and
by 26.25% for the algorithm based on the FaceNet neural network. In turn, the identification accuracy
rate of the proposed algorithm based on local-texture descriptors is the highest among all obtained and
is 82.5%, compared to 55% and 72% obtained after applying the neural network algorithms. Thus, under the
condition of partial visibility of facial features, the proposed algorithm is more efficient by 10.5-27.5%
compared to the FaceNet and ResNet based approaches, respectively. The reduction in
efficiency compared to the result obtained on images of fully visible faces is 12.5%, which is the
smallest rate among all experiments.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper is devoted to the research of an information technology for person identification by face
image, based on an algorithm containing methods of the local-texture approach, with
the purpose of establishing its efficiency in comparison with algorithms based on neural networks.</p>
      <p>Based on the analysis of the results of existing studies, it was established that algorithms based on
neural networks, although their identification accuracy is quite high, require a large amount of
high-quality data, high computing power, and considerable training time. Because of this,
quickly adapting such algorithms to the rapidly changing conditions of the real world is a rather difficult
task that most developers are unable to perform. Therefore, there is a need to explore alternative
approaches to person recognition and identification whose efficiency is close to the
efficiency of algorithms based on neural networks and which, at the same time, provide better
resistance to changes in environmental conditions.</p>
      <p>The proposed algorithm consists of a classifier based on Haar features for face localization on the
image; Gabor wavelet transform method for face image processing; local-texture descriptors, such as
local binary patterns in one-dimensional space (1DLBP) and histogram of oriented gradients (HOG),
to extract the face image feature vector.</p>
      <p>Based on the results of experimental research performed on 136 images of 40 individuals with
variations in the position of the subject’s head relative to the camera, it was established that the
proposed algorithm is more resistant to such recognition and identification conditions. On images
where the subject looks directly into the camera, the identification accuracy of the proposed algorithm
is 13.75% higher than that of the neural network based algorithm, and on images where the subject is
looking down – 16.25% higher.</p>
      <p>During the analysis of the results obtained after conducting the experiments on images of a
partially hidden face, it was established that the algorithm based on local-texture methods is more
resistant to recognition and identification conditions in which the features of the human face are
not fully visible. The identification accuracy of the proposed algorithm is 10.5-27.5% higher than the
accuracy of the neural network algorithms. At the same time, the reduction in identification
accuracy for the proposed algorithm is 12.5%, while for the algorithms based on neural networks it
ranges from 26.25% to 42.5%.</p>
      <p>Thus, the main scientific contribution of this paper is the results of the comparative analysis, based on
which it can be concluded that the efficiency of the researched algorithm, which contains local-texture
descriptors and underlies the person identification information technology, exceeds the
efficiency of algorithms based on neural networks under the conditions of different positions of the
subject’s head and partial visibility of facial features in the image.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zennayi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bourzeix</surname>
          </string-name>
          and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guennoun</surname>
          </string-name>
          , “
          <article-title>Analyzing the Scientific Evolution of Face Recognition Research</article-title>
          and Its Prominent Subfields,” in IEEE Access, vol.
          <volume>10</volume>
          , pp.
          <fpage>68175</fpage>
          -
          <lpage>68201</lpage>
          ,
          <year>2022</year>
          , doi: 10.1109/ACCESS.
          <year>2022</year>
          .
          <volume>3185137</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Brodsky</surname>
          </string-name>
          , “U.S.
          <article-title>Air Force's Drones Can Now Recognize Faces: How It Works”</article-title>
          ,
          <source>Popular Mechanics, February</source>
          <volume>24</volume>
          ,
          <year>2023</year>
          . URL: https://www.popularmechanics.com/military/a43064899/
          <article-title>ai r-force-drones-facial-recognition/</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] “War in Ukraine”,
          <source>Clearview AI</source>
          . URL: https://www.clearview.ai/ukraine
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Adjabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ouahabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Benzaoui</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Taleb-Ahmed</surname>
          </string-name>
          , “Past, Present, and
          <article-title>Future of Face Recognition: A Review,”</article-title>
          <source>Electronics</source>
          <year>2020</year>
          ,
          <volume>9</volume>
          , 1188. doi:
          <volume>10</volume>
          .3390/electronics9081188.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ngan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Grother</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Hanaoka</surname>
          </string-name>
          , “
          <article-title>Ongoing Face Recognition Vendor Test (FRVT) Part 6A: Face recognition accuracy with masks using pre- COVID-19 algorithms,”</article-title>
          <source>NIST Interagency/Internal Report (NISTIR)</source>
          ,
          <source>National Institute of Standards and Technology</source>
          , Gaithersburg,
          <string-name>
            <surname>MD</surname>
          </string-name>
          ,
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .6028/NIST.IR.
          <volume>8311</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ngan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Grother</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Hanaoka</surname>
          </string-name>
          , “
          <article-title>Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,”</article-title>
          <source>NIST Interagency/Internal Report (NISTIR)</source>
          ,
          <source>National Institute of Standards and Technology</source>
          , Gaithersburg,
          <string-name>
            <surname>MD</surname>
          </string-name>
          ,
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .6028/NIST.IR.
          <volume>8331</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Budiman</surname>
          </string-name>
          , Fabian,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Yaputera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Achmad</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kurniawan</surname>
          </string-name>
          , “
          <article-title>Student attendance with face recognition (LBPH or CNN): Systematic literature review</article-title>
          ,
          <source>” Procedia Computer Science 216</source>
          , pp.
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          ,
          <year>2023</year>
          . doi:
          <volume>10</volume>
          .1016/j.procs.
          <year>2022</year>
          .
          <volume>12</volume>
          .108.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Rajan</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Mathew</surname>
          </string-name>
          , “
          <article-title>Evaluation and Applying Feature Extraction Techniques for Face Detection and Recognition,” Indonesian Journal of Electrical Engineering and Informatics (IJEEI) 7 (4</article-title>
          ), pp.
          <fpage>742</fpage>
          -
          <lpage>749</lpage>
          ,
          <year>2019</year>
          . doi:
          <volume>10</volume>
          .52549/ijeei.v7i4.
          <fpage>935</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bychkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Ivanchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Merkulova</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhabska</surname>
          </string-name>
          , “
          <article-title>Mathematical Methods for Information Technology of Biometric Identification in Conditions of Incomplete Data,”</article-title>
          <source>Proceedings of the 7th International Conference “Information Technology and Interactions” (IT&amp;I-</source>
          <year>2020</year>
          ),
          <source>CEUR Workshop Proceedings</source>
          , pp.
          <fpage>336</fpage>
          -
          <lpage>349</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Qin</surname>
          </string-name>
          , “
          <article-title>Learning features from covariance matrix of gabor wavelet for face recognition under adverse conditions”</article-title>
          .
          <source>Pattern Recognition</source>
          , Vol.
          <volume>119</volume>
          ,
          <year>2021</year>
          . doi:
          <volume>10</volume>
          .1016/j.patcog.
          <year>2021</year>
          .
          <volume>108085</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bychkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Merkulova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhabska</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Shatyrko</surname>
          </string-name>
          , “
          <article-title>Development of information technology for person identification in video stream,” Proceedings of the II International Scientific Symposium “Intelligent Solutions” (IntSol-</article-title>
          <year>2021</year>
          ),
          <source>CEUR Workshop Proceedings</source>
          ,
          <volume>3018</volume>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>80</lpage>
          ,
          <year>2021</year>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3018</volume>
          /Paper_7.pdf
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghorbani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Targhi and M. M. Dehshibi</surname>
          </string-name>
          , “
          <article-title>HOG and LBP: Towards a robust face recognition system</article-title>
          ,
          <source>” 2015 Tenth International Conference on Digital Information Management (ICDIM)</source>
          , Jeju, Korea (South),
          <year>2015</year>
          , pp.
          <fpage>138</fpage>
          -
          <lpage>141</lpage>
          , doi: 10.1109/ICDIM.
          <year>2015</year>
          .
          <volume>7381860</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>I.</given-names>
            <surname>Chhabra</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Singh</surname>
          </string-name>
          , “
          <article-title>Effective and Fast Face Recognition System Using Complementar OC-LBP and HOG Feature Descriptors With SVM Classifier”</article-title>
          ,
          <source>J. Inf. Technol. Res</source>
          .
          <volume>11</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>91</fpage>
          -
          <lpage>110</lpage>
          ,
          <year>2018</year>
          . doi:
          <volume>10</volume>
          .4018/JITR.2018010106.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mantoro</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          <article-title>A. Ayu and Suhendi, “Multi-Faces Recognition Process Using Haar Cascades</article-title>
          and Eigenface Methods,
          <source>” 2018 6th International Conference on Multimedia Computing and Systems (ICMCS)</source>
          , Rabat, Morocco,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/ICMCS.
          <year>2018</year>
          .
          <volume>8525935</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Gong</surname>
          </string-name>
          , “
          <article-title>Expression Recognition Method of Fusion Gabor Filter and</article-title>
          2DPCA Algorithm,” 2020 International Conference on Computer Information and
          <article-title>Big Data Applications (CIBDA), Guiyang</article-title>
          , China,
          <year>2020</year>
          , pp.
          <fpage>515</fpage>
          -
          <lpage>518</lpage>
          , doi: 10.1109/CIBDA50819.
          <year>2020</year>
          .
          <volume>00121</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bychkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Merkulova</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhabska</surname>
          </string-name>
          , “
          <article-title>Improvement of Information Technology for Person Identification for Usage in Energy Smart Systems</article-title>
          ,”
          <source>2022 IEEE 8th International Conference on Energy Smart Systems (ESS)</source>
          , Kyiv, Ukraine,
          <year>2022</year>
          , pp.
          <fpage>199</fpage>
          -
          <lpage>203</lpage>
          , doi: 10.1109/ESS57819.
          <year>2022</year>
          .
          <volume>9969307</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Benzaoui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Boukrouche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Doghmane</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Bourouba</surname>
          </string-name>
          , “
          <article-title>Face recognition using 1DLBP, DWT</article-title>
          and SVM,”
          <year>2015</year>
          3rd International Conference on Control,
          <source>Engineering &amp; Information Technology (CEIT)</source>
          , Tlemcen, Algeria,
          <year>2015</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/CEIT.
          <year>2015</year>
          .
          <volume>7233002</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>B.</given-names>
            <surname>Attallah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Serir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chahir</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boudjelal</surname>
          </string-name>
          , “
          <article-title>Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction,”</article-title>
          <string-name>
            <given-names>J.</given-names>
            <surname>Electron</surname>
          </string-name>
          . Imag.
          <volume>26</volume>
          (
          <issue>6</issue>
          ),
          <year>2017</year>
          . doi:
          <volume>10</volume>
          .1117/1.JEI.
          <volume>26</volume>
          .6.063006.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>