<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Development of Information Technology for Person Identification in Video Stream</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksii Bychkov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kateryna Merkulova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yelyzaveta Zhabska</string-name>
          <email>y.zhabska@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andriy Shatyrko</string-name>
          <email>shatyrko.a@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrs'ka str. 64/13, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>70</fpage>
      <lpage>80</lpage>
      <abstract>
<p>This paper presents research on methods for the development of an information technology for person identification in a video stream. The following methods were selected for the research: anisotropic diffusion as an image preprocessing method, Gabor wavelet transform as an image processing method, histogram of oriented gradients (HOG) and local binary patterns in 1-dimensional space (1DLBP) as methods of feature vector extraction from the images, and the squared Euclidean distance metric for vector classification. The purpose of the work is to analyze and test these methods in order to develop an algorithm that will form the basis of the information technology for person identification in a video stream. Experimental research of the methods was performed using well-known databases: The Database of Faces, the FERET database and the SCface database. The obtained experimental results indicate that the proposed information technology provides the highest identification accuracy rate of 97.5% on images of low quality and resolution. This means that the developed information technology can be applied for person identification in real-world conditions when it is necessary to identify a person from a video stream.</p>
      </abstract>
      <kwd-group>
<kwd>Biometric identification</kwd>
        <kwd>face recognition</kwd>
        <kwd>wavelet transform</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Face recognition technology has long been used in law enforcement, at state borders and
on smartphones. Nowadays it is becoming a part of public and private areas of life. Hundreds of
municipalities all over the world have installed cameras equipped with face recognition technology,
sometimes promising to send data to central command centers as part of programs that improve
crime investigation. The COVID-19 pandemic catalyzed the fast spread of such solutions. In
China more than 100 cities were equipped with surveillance systems based on face recognition
technology last year. Now a municipal network of street cameras monitors the
movement of people in residential areas of the city, when entering offices and shops, as well as
when using vehicles. Presumably, information from the cameras can be used by the city police, and the data
will also be transferred to a special operational center, where computer algorithms will verify the faces
of citizens and check their status in the database of the Ministry of Health. In
January 2020, during the pandemic, Moscow started using a citywide video surveillance system with software
developed and supplied by NtechLab. During the first few weeks of the COVID-19 lockdown the company
reported that the system had found 200 quarantine violators [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Face recognition is a non-invasive biometric technique, so it is an area of interest for small
surveillance systems as well as for national security purposes. Face recognition is one of the most
important law enforcement techniques when video or crime scene images are available. Automatic face
recognition technologies have improved the efficiency of the judiciary and simplified the comparison
process.</p>
      <p>Modern face recognition techniques have achieved impressive results on face images of medium
and high quality, but their performance is not satisfactory on low-quality images. The main difficulty
in recognizing low-resolution face images is the lack of facial details that distinguish the face from the
background. Another problem is that modern methods of face recognition are based on convolutional
neural networks and use heavily downsampled, large-stride convolutional feature maps to
represent a face. This leads to information loss and an inaccurate description of low-quality images.</p>
      <p>
        In March 2021 the analytics company Mordor Intelligence presented a report [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], in which the global
face recognition market was valued at USD 3.72 billion in 2020 and projected to reach USD 11.62 billion
by 2026.
      </p>
      <p>
        Another forecasting company, Grand View Research, published a report on the face recognition market
in May 2021. According to this report, the surveillance and security segments are expected to show a significant
compound annual growth rate (CAGR) throughout the analyzed period. The rate is expected to grow as
face recognition technology becomes steadily used in high-security areas. For example, security and
surveillance systems are used by law enforcement authorities for the purpose of identifying criminals
or searching for missing persons [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The purpose of this work is to develop an information technology for person identification in a video
stream based on an algorithm that provides high identification results when applied to images of
low quality and resolution.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Task solution methods</title>
      <p>It is well known that an information technology for face recognition and identification can be claimed
efficient and reliable only when it is thoroughly tested and validated, preferably on real data sets. Thus, it
is necessary to research the methods which will be used to develop the algorithm underlying the
information technology under development, in order to determine their efficiency on datasets of
low-quality images. In the context of this paper, low-quality images are defined as small-sized, blurry, indistinct,
pixelated, noisy images, while high-quality images are, vice versa, defined as more distinct, low-compressed
and noise-reduced.</p>
      <p>During the research it was decided to use anisotropic diffusion as an image preprocessing method,
Gabor wavelet transform for image processing, and histogram of oriented gradients (HOG) and local binary
patterns in 1-dimensional space (1DLBP) for feature extraction.</p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Anisotropic diffusion</title>
      <p>
        The idea of using anisotropic diffusion as a preprocessing technique before wavelet transform was
proposed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The authors of that paper performed experiments on high-quality images. To develop
an information technology for person identification in video stream, it was decided to apply this method
to images of low quality and resolution.
      </p>
      <p>
        In image processing, anisotropic diffusion is a process of successive diffusion based
on progressively blurring images in a scale space. It was first presented by Perona and Malik in
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. As a preprocessing step it makes thinning and linking of the edges unnecessary, because the
resulting images preserve lines, edges and other important properties while also being smoothed.
      </p>
      <p>
        The method is based on the approximation of a parameterized version of the process defined above,
obtained by applying the anisotropic diffusion equation, which results in space-variant filters that are anisotropic
close to lines and edges. The attributes of anisotropic diffusion are anisotropic smoothing and
iterative diffusion for the processing of each image pixel [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        The anisotropic diffusion equation can be expressed as follows:
∂I/∂t = div(c(x, y, t)∇I) = c(x, y, t)ΔI + ∇c ∙ ∇I, (1)
where div is the divergence operator, ∇ is the gradient operator, Δ is the Laplacian operator, I0(x, y) is
the input image, t is the variance of the Gaussian kernel G(x, y, t), and I(x, y, t) is a family of derived images
obtained by convolving the original image I0(x, y) with the Gaussian kernel G(x, y, t). Anisotropic
diffusion successfully removes noise and preserves image edges and small structures if the diffusion
coefficient, or edge-stopping function, c(∇I) is estimated correctly. If c(x, y, t) is a constant, the equation
reduces to the isotropic heat diffusion equation ∂I/∂t = cΔI [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
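      <p>To make the process of equation (1) concrete, it can be sketched as an explicit finite-difference iteration. The following is a minimal illustrative NumPy sketch, not the implementation used in the experiments; the exponential edge-stopping function and the parameter values (kappa, gamma, the iteration count) are assumptions taken from the classic Perona-Malik formulation.</p>

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=10, kappa=30.0, gamma=0.2):
    """Perona-Malik anisotropic diffusion for a 2-D grayscale image.

    kappa controls edge sensitivity; gamma is the step size
    (at most 0.25 for numerical stability of the explicit scheme).
    """
    img = image.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four nearest neighbors.
        # np.roll wraps around, i.e. periodic borders (a simplification).
        north = np.roll(img, -1, axis=0) - img
        south = np.roll(img, 1, axis=0) - img
        east = np.roll(img, -1, axis=1) - img
        west = np.roll(img, 1, axis=1) - img
        # Edge-stopping function c(|grad I|) = exp(-(|grad I| / kappa)^2):
        # small near edges (large gradients), close to 1 in flat regions.
        cN = np.exp(-(north / kappa) ** 2)
        cS = np.exp(-(south / kappa) ** 2)
        cE = np.exp(-(east / kappa) ** 2)
        cW = np.exp(-(west / kappa) ** 2)
        # Explicit update: I += gamma * div(c * grad I).
        img += gamma * (cN * north + cS * south + cE * east + cW * west)
    return img
```

Because c is close to 1 in flat regions and small across strong edges, the iteration smooths noise while largely leaving edges in place.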
    </sec>
    <sec id="sec-4">
      <title>2.2. Gabor wavelet transform</title>
      <p>
        The usage of the combination of anisotropic diffusion and Gabor wavelet transform was described
in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Although the authors concluded that the combination of these methods provides high recognition rates,
it was not tested on low-quality images. During this research, it was decided to explore whether these
methods are applicable to the development of an information technology for person identification in video
stream.
      </p>
      <p>Gabor wavelet transform is widely used in the pattern recognition field because of its biological
significance and technical properties. The complex Gabor function in the spatial domain is defined by
the following:
g(x, y) = s(x, y) ∙ wr(x, y), (2)
where s(x, y) is a complex sinewave (carrier), defined as:
s(x, y) = exp(j(2π(u0x + v0y) + P)), (3)
where (u0, v0) and P determine the spatial frequency and sinewave phase respectively, and wr(x, y) is a
Gaussian 2D-function (envelope function). The Gaussian envelope is represented as the following:
wr(x, y) = K ∙ exp(−π(a²(x − x0)r² + b²(y − y0)r²)), (4)
where (x0, y0) is the peak of the function, a and b are the Gaussian scaling parameters, and the index
r denotes the rotation operation, which can be described this way:
(x − x0)r = (x − x0)cos θ + (y − y0)sin θ, (5)
(y − y0)r = −(x − x0)sin θ + (y − y0)cos θ. (6)</p>
      <p>
        Therefore, the complex Gabor function in the spatial domain can be written as follows [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]:
g(x, y) = K ∙ exp(−π(a²(x − x0)r² + b²(y − y0)r²)) ∙ exp(j(2π(u0x + v0y) + P)). (7)
The function is determined by the following parameters: K scales the magnitude of the Gaussian
envelope; (a, b) scale the two axes of the Gaussian envelope; (x0, y0) are the peak coordinates of the
Gaussian envelope; (u0, v0) are the spatial frequencies of the sinusoidal carrier in Cartesian coordinates; P
is the phase of the sinusoidal carrier [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
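      <p>As an illustration, equations (2)-(7) can be sketched as the construction of a complex Gabor kernel on a discrete grid. This is a minimal sketch; the parameter defaults (envelope scales, carrier frequency, kernel size) are illustrative assumptions, not the 16 parameter combinations used in the experiments.</p>

```python
import numpy as np

def gabor_kernel(size, K=1.0, a=0.05, b=0.05, u0=0.1, v0=0.0,
                 x0=0.0, y0=0.0, theta=0.0, P=0.0):
    """Complex Gabor function g(x, y) = envelope * carrier (cf. Eq. (7)).

    (a, b) scale the Gaussian axes, (u0, v0) set the carrier frequency,
    theta rotates the envelope, P is the carrier phase.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    # Rotated coordinates (Eqs. (5)-(6)).
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    # Gaussian envelope (Eq. (4)).
    envelope = K * np.exp(-np.pi * (a ** 2 * xr ** 2 + b ** 2 * yr ** 2))
    # Complex sinewave carrier (Eq. (3)).
    carrier = np.exp(1j * (2 * np.pi * (u0 * x + v0 * y) + P))
    return envelope * carrier
```

Filtering an image with a bank of such kernels (varying theta and the carrier frequency) yields the wavelet-transformed variations of the face image described in Section 4.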
    </sec>
    <sec id="sec-5">
      <title>2.3. Histogram of Oriented Gradients (HOG)</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] an algorithm was presented that uses Haar wavelet decomposition as a processing technique
together with BSIF (binarized statistical image features) and HOG (histogram of oriented gradients) as
methods of feature extraction. The method proposed in that work solves the problem of palmprint feature
extraction. Based on this research it was decided to apply a similar algorithm with other methods to the
task of face recognition.
      </p>
      <p>The Histogram of Oriented Gradients (HOG) method can be applied to images that were processed
by wavelet transform to extract important features of image shape. The histogram of oriented gradients
is calculated by executing the following steps. Firstly, before building the orientation histogram
for each cell, the value of the gradient must be obtained. After that, the obtained histograms,
grouped by individual cells, are normalized. This process can be described mathematically with the
derivative masks
Dx = [−1 0 1], (8)
Dy = [−1 0 1]T, (9)
where Dx is a 1-dimensional horizontal discrete derivative mask and Dy is a 1-dimensional vertical discrete
derivative mask.</p>
      <p>The gradient histogram is retrieved with the equations:
Ix(u, v) = I ∙ Dx, (10)
Iy(u, v) = I ∙ Dy. (11)
The gradient magnitude is calculated as:
|G| = √(Ix²(u, v) + Iy²(u, v)). (12)
The orientation is calculated with the following equation:
θ = tan⁻¹(Iy(u, v) / Ix(u, v)). (13)</p>
      <p>Then the orientation is distributed equally in a range of 0 to 180 deg (for an unsigned gradient) or 0 to 360
deg (for a signed gradient). Each pixel casts a weighted vote for an orientation channel based on the gradient values.</p>
      <p>After that, cells are grouped into spatially connected blocks. This allows obtaining the vector of
normalized histogram elements.</p>
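      <p>The steps above (gradients via the [−1 0 1] masks, magnitude, unsigned orientation, per-cell weighted voting) can be sketched as follows. This is an illustrative sketch only; the cell size and bin count are assumptions, and the block normalization step is omitted for brevity.</p>

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Unsigned-gradient HOG: per-cell orientation histograms (Eqs. (10)-(13)).

    Gradients use the centered-difference masks [-1, 0, 1]; each pixel casts
    a vote weighted by its gradient magnitude into one of n_bins bins
    covering 0-180 degrees.
    """
    img = image.astype(np.float64)
    # Horizontal and vertical derivatives (borders left as zero).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    magnitude = np.sqrt(gx ** 2 + gy ** 2)                # Eq. (12)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # Eq. (13), unsigned
    h, w = img.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            sl = (slice(cy * cell_size, (cy + 1) * cell_size),
                  slice(cx * cell_size, (cx + 1) * cell_size))
            bins = np.minimum((orientation[sl] // bin_width).astype(int),
                              n_bins - 1)
            # Magnitude-weighted voting into the orientation bins.
            np.add.at(hist[cy, cx], bins.ravel(), magnitude[sl].ravel())
    # Block grouping and normalization are omitted here for brevity.
    return hist
```

Flattening these per-cell histograms (after block normalization) yields the HOG feature vector used later in the identification algorithm.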
    </sec>
    <sec id="sec-6">
      <title>2.4. Local Binary Patterns (LBP) in 1-Dimensional Space</title>
      <p>
        For the palmprint recognition task, the authors of [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] used the BSIF method (binarized statistical image
features) as a feature extraction technique. BSIF is an LBP-based algorithm. For the task of face
recognition there are more efficient LBP-based algorithms, such as 1DLBP and I1DLBP.
      </p>
      <p>
        Local Binary Patterns in 1-Dimensional Space (1DLBP) was first presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and tested on
high-quality images. In this work, 1DLBP is explored as applied to low-quality images.
      </p>
      <p>The purpose of the 1DLBP method is to describe the local agitation of a 1-dimensional signal segment
in binary code. That description can be obtained by comparing the neighbor pixel values with the
central pixel value. All neighbor elements get the value of 1 if they are greater than or equal to the current
element and the value of 0 if they are less than the current element. Then, each element of the obtained
vector is multiplied by a weight according to its position. Finally, the current element is replaced by the
sum of the values of the obtained vector. The described process can be expressed as follows:
1DLBP = ∑ S(gn − g0) ∙ 2ⁿ, n = 0, 1, …, N − 1,
where the S(x) function is defined as S(x) = {1 if x ⩾ 0; 0 otherwise}; g0 and gn are the values of the central
element and its 1-dimensional neighbors, respectively. The index n changes its value increasingly from the left
to the right in the 1-dimensional string. The histogram of the 1-dimensional pattern defines the 1DLBP
descriptor.</p>
      <p>The idea of the I1DLBP is similar to the idea of the 1DLBP: the local patterns are extracted by
thresholding the linear neighbors of each pixel, from the projected image, with the mean value of the
summed neighbors:
I1DLBP = ∑ S(gn − g′) ∙ 2ⁿ, n = 0, 1, …, N − 1,
where g′ and gn are the mean value of the linear neighborhood and the values of its 1-dimensional
neighbors, respectively.</p>
      <p>
        The 1DLBP algorithm is applied to all blocks of the split image with different resolutions. The extracted
histograms are concatenated into one global 1-dimensional vector that represents one face image [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>Summarizing the foregoing description of the 1DLBP method, the following stages of image processing
can be distinguished to obtain the 1DLBP feature vector:</p>
      <p>1. The input image is preprocessed.</p>
      <p>2. The image is split into multiple blocks.</p>
      <p>3. Each decomposed block is projected into 1-dimensional space.</p>
      <p>4. The 1DLBP algorithm processes each projected block.</p>
      <p>
        5. The vectors obtained after processing each block are concatenated into one global feature
vector [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
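      <p>The thresholding-and-weighting scheme of the 1DLBP sum can be sketched on a 1-dimensional projected signal as follows. This is a minimal sketch; the neighborhood radius is an illustrative assumption, and the handling of the signal borders (simply skipped here) is a simplification.</p>

```python
import numpy as np

def one_d_lbp(signal, radius=2):
    """1DLBP: binary-encode each element's linear neighborhood.

    Neighbors greater than or equal to the central element contribute 1,
    others 0; the bits are weighted by powers of two and summed, i.e.
    code = sum_n S(g_n - g_0) * 2^n.
    """
    sig = np.asarray(signal, dtype=np.float64)
    codes = []
    for i in range(radius, len(sig) - radius):
        # Linear neighborhood: radius elements on each side of sig[i].
        neighbors = np.concatenate([sig[i - radius:i],
                                    sig[i + 1:i + 1 + radius]])
        bits = (neighbors >= sig[i]).astype(int)  # S(g_n - g_0)
        weights = 2 ** np.arange(bits.size)       # 2^n, n left to right
        codes.append(int(np.dot(bits, weights)))
    # The histogram of the codes is the 1DLBP descriptor of the segment.
    n_codes = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_codes, range=(0, n_codes))
    return np.asarray(codes), hist
```

Applying this per projected block and concatenating the histograms yields the global 1DLBP vector described above; replacing the central value g0 with the neighborhood mean g′ gives the I1DLBP variant.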
    </sec>
    <sec id="sec-7">
      <title>3. Experimental research and analysis</title>
      <p>Experimental research of the selected methods was performed using software written in Python.
The research was conducted on three different face databases: The Database of Faces, the Facial
Recognition Technology (FERET) database and the Surveillance Cameras Face Database (SCface). The
purpose of the experimental research is to establish the most efficient combination of methods by
comparing the results of their work with the use of one feature extraction method (HOG or 1DLBP) and
with the use of a combination of feature extraction methods (HOG and 1DLBP). For the classification of the
resulting feature vector, the squared Euclidean distance metric was used in all sets of experiments.</p>
      <p>The examples of images obtained as a result of applying described methods are depicted on Figure 1.</p>
    </sec>
    <sec id="sec-7a">
      <title>3.1. The Database of Faces</title>
      <p>
        The first set of experiments was performed using The Database of Faces, which was created by AT&amp;T
Laboratories Cambridge during its work on a face recognition project. The database consists of a set of
face images of 40 different individuals. It is organized in 40 directories with 10 PGM format images in
each directory. The size of each image is 92x112 pixels, with 256 grey levels per pixel. Images were
taken with the person in frontal position, with varying lighting, facial expression and facial details. The
database was retrieved openly from the official site of AT&amp;T Laboratories Cambridge [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Results of the
experiments performed on The Database of Faces are presented in Table 1. The highest identification
accuracy rate was obtained with the combination of feature vector extraction methods, which confirms the
reasonability of using two feature extraction methods within one iteration of the algorithm. The false
recognition rate varies from 30 to 57.5%.
      </p>
    </sec>
    <sec id="sec-8">
      <title>3.2. Facial Recognition Technology (FERET) database</title>
      <p>
        The FERET database is a part of the Face Recognition Technology program [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] that researches
automatic face recognition capabilities in order to develop new algorithms for the automatic recognition of
human faces. The National Institute of Standards and Technology (NIST) serves as the Technical Agent
for distribution of the database. The FERET database contains 14126 high-resolution images of
1199 individuals with a resolution of 256x384. To obtain the database it was necessary to request an
account for downloading it; the corresponding request was sent according to the
instructions given by the owner [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Since the first set of experiments was conducted on a database
that contains images of 40 people in total, it was decided to use images of 40 people as well to conduct
the experiments on the FERET database.
      </p>
      <p>Experimental results for The FERET Database are presented in Table 2.</p>
      <p>As can be seen from Fig. 3, on high-resolution images the usage of the 1DLBP method is not necessary,
because the identification accuracy rate is the same when the experiment is performed with the HOG
feature vector extraction method only as with the combination of the HOG and 1DLBP methods, namely
72.5%. For the FERET database the usage of two feature extraction methods is not justified. In
conclusion, the false recognition rate obtained during the experiments with the FERET database varies
from 27.5 to 35.5%.</p>
    </sec>
    <sec id="sec-8a">
      <title>3.3. Surveillance Cameras Face Database (SCface)</title>
      <p>
        The SCface database was created by researchers from the University of Zagreb [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] for testing face
recognition algorithms in real-world conditions. Images of the database were taken by surveillance
cameras of varying quality and resolution; therefore, the images of this database vary in resolution. Also,
the database contains frontal mug shot images for the scenario when a person needs to be recognized by
comparing a mug shot image to a low-quality video surveillance image. The database contains 4160 static
images of 130 individuals. More technical details can be found in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. For the purpose of this research,
low-quality video surveillance images were used to test the selected methods and explore the small
sample size problem. Since the first set of experiments was conducted on a database that contains
images of 40 people in total, it was decided to use images of 40 people as well to conduct the experiments
on the SCface database. The obtained results of the experiments performed on the SCface database are
presented in Table 3.
      </p>
      <p>From Fig. 4 it can be concluded that using the 1DLBP method for feature extraction by itself
provides only 77.5% of correctly identified face images. Usage of the HOG method improves the results
by about 15 percentage points and provides an identification accuracy rate of 92.5%. But fusing
the HOG and 1DLBP feature vectors raises the accuracy of the algorithm to 97.5%. The results of this set
of experiments confirm that the usage of a combination of two feature vector extraction methods is
justified on images of low quality and resolution, and the false recognition rate in this case is the lowest
among all the experiments: 2.5%.</p>
    </sec>
    <sec id="sec-9">
      <title>3.4. Analysis of obtained results</title>
      <p>The comparative diagram in Fig. 5 presents the results of identification with the use of the selected methods
on different databases. Taking into consideration the fact that the facial images from the SCface database that
were used in the experiments are low-quality video surveillance images, it can be
concluded that the combination of all researched methods provides a high identification accuracy rate on
low-resolution facial images.</p>
      <p>After the analysis of all experimental results it can be concluded that the combination of such
methods as anisotropic diffusion, Gabor wavelet transform, histogram of oriented gradients and local
binary patterns in 1-dimensional space demonstrates the highest identification results when applied to
all researched databases.</p>
    </sec>
    <sec id="sec-10">
      <title>4. Development of image processing algorithm</title>
      <p>Based on the analysis of the obtained research results, the combination of methods with the
highest identification accuracy rate can be used to develop the algorithm underlying the information
technology for person identification in video stream.</p>
      <p>
        In general, the person identification process consists of two stages. The first is the determination of
the face location in the image taken from the video stream. The original image is scanned with a smaller
window, and every time the degree of similarity between the image in the
window and a human face is determined. Formally, the face image can be defined structurally, statistically or by
a list of sample face images [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. After the window that most likely contains a person's face has
been identified, the second stage starts: identification. The purpose of person identification is to define
a unique identifier for each input biometric parameter or to mark it as unknown if such a sample is
not in the database [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The algorithm for the information technology of person identification must be
applicable to facial images with the aim of extracting their feature vectors for further classification in order to
identify a person. The data flow diagram of the proposed algorithm is presented in Fig. 6.
      </p>
      <p>The algorithm uses a portrait image of a person as input data. Step-by-step algorithm execution can
be described as follows:</p>
      <p>1. Face detection and localization on the input image. Only the face image of a person is used for further
processing because it is the area of interest for the person identification process.</p>
      <p>2. Image pre-processing using anisotropic diffusion, which allows preserving and enhancing edge
information and removing noise.</p>
      <p>3. The image of the face, after applying anisotropic diffusion to it, is processed by Gabor wavelets
with various changes in the parameters of the wavelet function so that 16 wavelet-transformed
variations of the face image can be obtained. After that, the wavelet-transformed images are summed
to form a global image, which is processed by the following methods.</p>
      <p>4. The global image formed as a result of the Gabor wavelet transform is simultaneously fed to the
input of two independent methods, HOG and 1DLBP. As a result of the operation of each of these
methods, two separate 512-value feature vectors are formed.</p>
      <p>5. HOG feature vector and 1DLBP feature vector normalization.</p>
      <p>
        Because of deviations in vector distributions and ranges, the feature vectors extracted separately by the
1DLBP and HOG methods are incompatible. There are methods of vector normalization that
can improve compatibility, for example min–max normalization, which transforms the feature vectors into
the range [0, 1]. If X = [x1; x2; x3; ...; xn] is the feature vector, the normalized feature vector can be
represented using min–max normalization [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]:
x′ = (x − min(X)) / (max(X) − min(X)). (14)
      </p>
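      <p>The min–max normalization step can be sketched as follows. This is a minimal illustrative sketch; the handling of a constant vector (zero range) is an assumption added to avoid division by zero and is not specified in the description above.</p>

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization: map a feature vector into the range [0, 1]."""
    x = np.asarray(x, dtype=np.float64)
    span = x.max() - x.min()
    if span == 0.0:
        # Constant vector: no spread to normalize (assumed convention).
        return np.zeros_like(x)
    return (x - x.min()) / span
```

Applying this to the HOG and 1DLBP vectors separately brings both into a common [0, 1] range before they are fused.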
      <p>6. The HOG feature vector and the 1DLBP feature vector are concatenated to form a 1024-value global
feature vector of a face image. The global feature vector of the image is obtained by concatenating
the normalized 1DLBP and HOG feature vectors into a single feature vector.</p>
      <p>
        Let the normalized feature vectors be D = [d1; d2; d3; ...; dn] for 1DLBP and H = [h1; h2; h3; ...; hn]
for HOG extraction. The global vector can be represented as [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]:
V = [d1, d2, d3, …, dn, h1, h2, h3, …, hn]. (15)
      </p>
      <p>The obtained global feature vector is used for further classification.</p>
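      <p>The fusion and classification steps can be sketched together: the global vector is the concatenation of equation (15), and classification assigns the probe to the gallery identity with the smallest squared Euclidean distance. The function names and the nearest-neighbor decision rule are illustrative assumptions consistent with the description above, not the exact code used in the experiments.</p>

```python
import numpy as np

def global_feature_vector(d_1dlbp, h_hog):
    """Concatenate normalized 1DLBP and HOG vectors into one global vector."""
    return np.concatenate([np.asarray(d_1dlbp, dtype=np.float64),
                           np.asarray(h_hog, dtype=np.float64)])

def identify(probe, gallery):
    """Nearest neighbor under the squared Euclidean distance metric.

    Returns the index of the closest gallery vector and the distance to it.
    """
    gallery = np.asarray(gallery, dtype=np.float64)
    dists = np.sum((gallery - probe) ** 2, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```

In practice a distance threshold would additionally mark a probe as unknown when no gallery sample is close enough, matching the identification purpose stated above.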
      <p>The algorithm for an information technology of person identification in video stream that includes such
methods as anisotropic diffusion, Gabor wavelet transform, histogram of oriented gradients and local
binary patterns in 1-dimensional space is proposed in this paper for the first time.</p>
    </sec>
    <sec id="sec-11">
      <title>5. Conclusion</title>
      <p>This paper describes the research of methods for developing the algorithm underlying an information
technology of person identification in video stream. During the research, experiments were performed
to test methods applicable to face recognition in order to establish the most efficient
combination of methods by comparing the results of their work with the use of one feature extraction
method (HOG or 1DLBP) and with the use of a combination of feature extraction methods (HOG and 1DLBP).</p>
      <p>The obtained results indicated that the highest identification accuracy rate of 97.5% was achieved
with both feature extraction methods (HOG and 1DLBP) on the images from the SCface database, which
are low-quality video surveillance images. Experimental results on The Database of Faces and the FERET
database provide from 70 to 72.5% of correctly identified images.</p>
      <p>The proposed algorithm is based on anisotropic diffusion as an image preprocessing method, Gabor wavelet
transform as an image processing method, histogram of oriented gradients (HOG) and local binary patterns in
1-dimensional space (1DLBP) as the methods of feature vector extraction from the images, and the squared
Euclidean distance metric for vector classification.</p>
      <p>Summarizing the foregoing conclusions, the algorithm proposed in this work for an information technology
of person identification based on anisotropic diffusion, Gabor wavelet transform, histogram of oriented
gradients (HOG) and local binary patterns in 1-dimensional space (1DLBP) can be applied for face
recognition and person identification on face images of low quality and resolution, which
means it partially solves the small sample size problem. For future research it is aimed to improve the
proposed algorithm for the scenario when low-quality video surveillance images are compared with frontal
mug shot images from law enforcement or national security databases.</p>
    </sec>
    <sec id="sec-12">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] B. Thompson and R. Van Noorden. The troubling rise of facial recognition technology. Nature, November 18, 2020. URL: https://www.nature.com/articles/d41586-020-03271-8</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Facial recognition market - growth, trends, Covid-19 impact, and forecasts (2021-2026). Mordor Intelligence, March 2021. URL: https://www.mordorintelligence.com/industry-reports/facialrecognition-market</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Facial Recognition Market Size, Share &amp; Trends Analysis Report, 2021-2028. Grand View Research, May 2021. 92 p. Report ID: 978-1-68038-311-9. URL: https://www.grandviewresearch.com/industry-analysis/facial-recognition-market/segmentation</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.M.</given-names>
            <surname>Abhishree</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Latha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Manikantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ramachandran</surname>
          </string-name>
          .
          <article-title>Face Recognition Using Gabor Filter Based Feature Extraction with Anisotropic Diffusion as a Pre-processing Technique</article-title>
          .
          <source>Procedia Computer Science</source>
          , Volume
          <volume>45</volume>
          ,
          <year>2015</year>
          , pp.
          <fpage>312</fpage>
          -
          <lpage>321</lpage>
          , ISSN 1877-0509, doi: 10.1016/j.procs.2015.03.149.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Attallah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Serir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chahir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Boudjelal</surname>
          </string-name>
          .
          <article-title>Histogram of gradient and binarized statistical image features of wavelet subband-based palmprint features extraction</article-title>
          .
          <source>J. Electron. Imag</source>
          .
          <volume>26</volume>
          (
          <issue>6</issue>
          )
          <fpage>063006</fpage>
          , November 8,
          <year>2017</year>
          , doi: 10.1117/1.JEI.26.6.063006.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Perona</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Malik</surname>
          </string-name>
          .
          <article-title>Scale-Space and Edge Detection Using Anisotropic Diffusion</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          , vol.
          <volume>12</volume>
          , No.
          <issue>7</issue>
          , pp.
          <fpage>629</fpage>
          -
          <lpage>639</lpage>
          , July
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Yu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jia</surname>
          </string-name>
          .
          <article-title>Anisotropic Diffusion-based Kernel Matrix Model for Face Liveness Detection</article-title>
          .
          <source>Image Vis. Comput.</source>
          ,
          <volume>89</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>88</fpage>
          -
          <lpage>94</lpage>
          , doi: 10.1016/J.IMAVIS.2019.06.009.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.R.</given-names>
            <surname>Movellan</surname>
          </string-name>
          .
          <article-title>Tutorial on Gabor</article-title>
          ,
          <year>2002</year>
          . URL: https://inc.ucsd.edu/mplab/tutorials/gabor.pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Dimitrov</surname>
          </string-name>
          et al.
          <article-title>Creation of Biometric System of Identification by Facial Image</article-title>
          .
          <source>2020 3rd International Colloquium on Intelligent Grid Metrology (SMAGRIMET)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>29</fpage>
          -
          <lpage>34</lpage>
          , doi: 10.23919/SMAGRIMET48809.2020.9263995.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Benzaoui</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boukrouche</surname>
          </string-name>
          .
          <article-title>Face Recognition Using 1DLBP Texture Analysis</article-title>
          .
          <source>Future Computing 2013: The Fifth International Conference on Future Computational Technologies and Applications</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>14</fpage>
          -
          <lpage>19</lpage>
          , ISBN: 978-1-61208-272-1.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Benzaoui</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boukrouche</surname>
          </string-name>
          .
          <article-title>1DLBP and PCA for face recognition</article-title>
          .
          <source>2013 11th International Symposium on Programming and Systems (ISPS)</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>11</lpage>
          , doi: 10.1109/ISPS.2013.6581486.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Benzaoui</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boukrouche</surname>
          </string-name>
          .
          <article-title>Face Analysis, Description and Recognition using Improved Local Binary Patterns in One Dimensional Space</article-title>
          .
          <source>Journal of Control Engineering and Applied Informatics</source>
          , Vol.
          <volume>16</volume>
          , No.
          <issue>4</issue>
          , pp.
          <fpage>52</fpage>
          -
          <lpage>60</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <article-title>The Database of Faces</article-title>
          . URL: https://cam-orl.co.uk/facedatabase.html
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <article-title>Face Recognition Technology (FERET)</article-title>
          . URL: https://www.nist.gov/programs-projects/facerecognition-technology-feret
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <article-title>Color FERET Database</article-title>
          . URL: https://www.nist.gov/itl/products-and-services/color-feret-database
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <article-title>SCface - Surveillance Cameras Face Database</article-title>
          . URL: https://www.scface.org
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Grgic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Delac</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Grgic</surname>
          </string-name>
          .
          <article-title>SCface - surveillance cameras face database</article-title>
          .
          <source>Multimedia Tools and Applications Journal</source>
          , Vol.
          <volume>51</volume>
          , No.
          <issue>3</issue>
          , February
          <year>2011</year>
          , pp.
          <fpage>863</fpage>
          -
          <lpage>879</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bychkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Merkulova</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhabska</surname>
          </string-name>
          .
          <article-title>Information Technology of Person's Identification by Photo Portrait</article-title>
          .
          <source>2020 IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>786</fpage>
          -
          <lpage>790</lpage>
          , doi: 10.1109/TCSET49122.2020.235542.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bychkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Ivanchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Merkulova</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhabska</surname>
          </string-name>
          .
          <article-title>Mathematical Methods for Information Technology of Biometric Identification in Conditions of Incomplete Data</article-title>
          .
          <source>Proceedings of the 7th International Conference "Information Technology and Interactions" (IT&amp;I-2020)</source>
          , Kyiv, Ukraine, December 02-03,
          <year>2020</year>
          , pp.
          <fpage>336</fpage>
          -
          <lpage>349</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>