Preprocessing Methods Study to Improve Information
Technology for Person Identification by Occluded Image
Oleksii Bychkov, Kateryna Merkulova and Yelyzaveta Zhabska
Taras Shevchenko National University of Kyiv, 60, Volodymyrska Street, Kyiv, 01601, Ukraine


                Abstract
                The coronavirus pandemic has become a challenging issue for face recognition and identification technologies. Most algorithms failed because of the presence of medical masks on faces, which made it difficult for decision-making systems to provide correct results during face recognition and person identification. Although many of these problems have been overcome over the past three years, new adversarial attacks have emerged that make it possible to evade identification systems. Therefore, the development of information technologies for person identification that are robust to the presence of occlusion on faces remains relevant.
                This paper describes a study of preprocessing methods aimed at improving the performance of an information technology for person identification by occluded face image. The information technology is based on an algorithm that consists of Gabor wavelet transformation as an image processing method for forming a global face image, local binary patterns in one-dimensional space and a histogram of oriented gradients for forming a vector of image features, and the squared Euclidean distance metric for vector classification.
                For the purpose of improving the information technology, experimental research was conducted with a variety of preprocessing methods: anisotropic diffusion, image histogram equalization, and both of these methods applied together. The research used The Database of Faces, the FERET database, and the SCface database. Images from these databases were processed so that they could be considered occluded and were converted to uncompressed and compressed formats to conduct the experiments more objectively.
                The results of the experiments have shown that preprocessing by anisotropic diffusion and image histogram equalization, along with conversion to an uncompressed format, can increase the accuracy of the algorithm by 5-7.5% in some cases. Also, the use of image histogram equalization by itself on images converted to a compressed format can increase the identification accuracy rate of the algorithm by 2.5%.

                Keywords
                Information technology, biometric identification, face recognition, occlusion

1. Introduction
   Face recognition and identification technologies have long become a part of private and public life.
Today, such technologies are used in law enforcement agencies, at the state border, in access control
systems, and even on smartphones, which are owned by almost every person on earth.
   The coronavirus pandemic has called into question the feasibility of using facial recognition
technologies in decision-making systems. In the three years since the disease appeared, many measures
have been proposed to prevent its spread, but the basic preventive measure is still the routine wearing
of medical masks in crowded places and closed spaces [1]. Since masks


completely cover the lower part of a person's face, this creates problems for existing facial recognition
and identification systems.
    Experimental studies conducted during the pandemic have shown that most algorithms are unstable
when it comes to recognizing a face partially covered by a medical mask. For example, in July 2020
the National Institute of Standards and Technology (NIST) conducted a study on the performance of
pre-COVID facial recognition and identification algorithms, which stated that even the most accurate
algorithms fail to identify the person in masked images 20% to 50% of the time [2].
    In November 2020, NIST published a study of post-COVID facial recognition and identification
algorithms. Despite the fact that in the post-pandemic period many developers claimed that their
algorithms effectively cope with identification under face mask conditions, some widely used algorithms
still fail to identify a person 10-40% of the time [3].
    Of course, during this period many studies were conducted, such as [4, 5], and a large number of
algorithms were developed that overcome the problem of recognition and identification of faces that
are partially occluded from the observer. However, another problem arises: attackers can bypass
occlusion-resistant algorithms and compromise their reliability by carrying out targeted adversarial attacks.
    A recent paper [6] proposed a universal adversarial attack that can be used to physically evade
facial recognition systems. The attack is based on a fabric face mask on which a pattern is printed,
designed so that all individuals wearing the mask are misidentified by facial recognition systems.
The authors of the article claim that 96.66% of experimental research participants who wore the
proposed mask were incorrectly identified or not identified at all.
    Given the presence of such attacks, there is a continuing need to develop algorithms that are resistant
to the presence of any occlusion on the human face. This problem is extremely relevant for today's
Ukraine, where martial law is currently in effect due to the full-scale invasion of Russian troops on its
territory [7]. After all, the faces of military personnel are usually almost completely covered by
protective equipment, which makes it difficult to identify them; algorithms that are not robust to
occlusion will likewise fail to recognize military personnel who usually wear helmets and balaclavas.
    The purpose of this work is to improve the robustness of the algorithm that underlies the
information technology of person identification to the presence of occlusion by means of preprocessing
methods.

2. Problem Statement
   In the previous work [8], an information technology for identification of a person by an occluded
image of the person's face was proposed. This technology is based on an algorithm consisting of
anisotropic diffusion as an image preprocessing method, Gabor wavelet transformation as an image
processing method for forming a global face image, local binary patterns in one-dimensional space and
a histogram of oriented gradients for forming a vector of image features, and the squared Euclidean
distance metric for vector classification.

2.1.    Proposed algorithm
    Let us consider the proposed algorithm in more detail.
    In the continuous case, a flat image is represented by a two-dimensional function (x, y) → f(x, y).
The value of f at spatial coordinates (x, y) is positive and determined by the image source. In computer
vision, images are represented as matrices of image elements, and these matrices are given to the input
of the algorithm. If the input to the algorithm is a color image, it is first converted to a grayscale image.

2.1.1. Face detection and preprocessing
   At the next step of the algorithm, the person's face is localized in the image. Haar features are used
to detect the face.
   Next, the image containing only the human face is processed using the anisotropic diffusion method
to emphasize the most significant features. This method produces an image with more visible lines
and boundaries of the face while preserving other important properties.
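   To make this step concrete, the following sketch shows face localization with a Haar cascade detector as implemented in OpenCV; the cascade file, the detection parameters, and the choice of keeping the largest detection are assumptions of this illustration rather than settings reported for the proposed algorithm.

    import cv2

    def detect_face(image_path):
        """Locate the largest face in an image using a Haar cascade (illustrative sketch)."""
        image = cv2.imread(image_path)
        # The algorithm works on grayscale images, so convert a color input first.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        # OpenCV ships a pre-trained frontal-face Haar cascade.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None

        # Keep the largest detection and crop the face region.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return gray[y:y + h, x:x + w]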

2.1.2. Global face image formation
   After applying anisotropic diffusion, the image is processed with Gabor wavelets. Image
representation using Gabor wavelets is based on the principles of image representation in the human
visual system. In the Gabor representation [9], an arbitrary function f(x) is expanded in terms of
symmetric and antisymmetric elementary signals:
$$S_s(x) = \exp\left[-\frac{(x - x_m)^2}{4\sigma^2}\right]\cos\left[2\pi f_n (x - x_m)\right], \qquad (1)$$
$$S_a(x) = \exp\left[-\frac{(x - x_m)^2}{4\sigma^2}\right]\sin\left[2\pi f_n (x - x_m)\right]. \qquad (2)$$
   The signals presented above are centered at the position x = xm and at the spatial frequency f = fn
with a Gaussian envelope described by the standard deviation 𝜎.
   A family of two-dimensional Gabor wavelets that satisfies wavelet theory and neurophysiological
constraints for simple cells can be obtained using the following formulas:
$$\psi(x, y, \omega_0, \theta) = \frac{\omega_0}{\sqrt{2\pi}\,\kappa}\, e^{-\frac{\omega_0^2}{8\kappa^2}\left(4(x\cos\theta + y\sin\theta)^2 + (-x\sin\theta + y\cos\theta)^2\right)} \cdot \left[e^{i(\omega_0 x\cos\theta + \omega_0 y\sin\theta)} - e^{-\frac{\kappa^2}{2}}\right], \qquad (3)$$
where ω0 is the radial frequency in radians per unit length and θ is the orientation of the wavelet in
radians. The Gabor wavelet is centered at the position (x = 0, y = 0), and the normalization coefficient
is such that ⟨ψ, ψ⟩ = 1, i.e. it is L2-normalized. κ is a constant, with κ ≈ π for a one-octave frequency
range and κ ≈ 2.5 for a 1.5-octave frequency range.
    By changing the parameters of the wavelet function, several matrices of elements of the
transformed images can be obtained; these are added together to create a global image of the
face. To extract the features of this image, the methods of local binary patterns in one-dimensional
space and histograms of oriented gradients are then used.
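   As an illustration, the sketch below implements the wavelet of formula (3) directly and sums the magnitudes of the filter responses into a single global face image; the kernel size, the set of radial frequencies, and the number of orientations are assumptions of this sketch, not the parameters of the proposed algorithm.

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_wavelet(size, omega0, theta, kappa=np.pi):
        """Two-dimensional Gabor wavelet of formula (3), sampled on a (size x size) grid."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(omega0 ** 2) / (8 * kappa ** 2) * (4 * xr ** 2 + yr ** 2))
        carrier = np.exp(1j * omega0 * xr) - np.exp(-(kappa ** 2) / 2)
        return (omega0 / (np.sqrt(2 * np.pi) * kappa)) * envelope * carrier

    def global_face_image(face, frequencies=(np.pi / 2, np.pi / 4), n_orientations=8):
        """Add up the magnitudes of the Gabor responses over a small filter bank."""
        result = np.zeros(face.shape, dtype=float)
        for omega0 in frequencies:
            for k in range(n_orientations):
                kernel = gabor_wavelet(31, omega0, k * np.pi / n_orientations)
                result += np.abs(fftconvolve(face.astype(float), kernel, mode="same"))
        return result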

2.1.3. Feature vector extraction and classification
   The concept of the method of local binary patterns in one-dimensional space consists in forming
a binary code that describes the local excitation of a segment of a one-dimensional signal [10]. The
binary code is calculated by comparing the value of the central element with the values of its
neighbors. This can be described as follows:
$$1DLBP = \sum_{n=0}^{N-1} S\left(f(x,y)_n - f(x,y)_0\right) \cdot 2^n, \qquad (4)$$
where f(x, y)0 and f(x, y)n are the values of the central element and its one-dimensional neighbors,
S is the unit step function (equal to 1 for non-negative arguments and 0 otherwise), and the index n
increases from left to right in the one-dimensional array. The 1DLBP descriptor is defined by a
histogram of the one-dimensional patterns.
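   A minimal sketch of such a descriptor is given below; the neighborhood size and the ordering of the neighbors are assumptions of this illustration.

    import numpy as np

    def one_d_lbp_histogram(signal, radius=4):
        """1DLBP descriptor of a one-dimensional signal (illustrative sketch).

        Each sample is compared with its `radius` neighbours on both sides and the
        resulting binary code is accumulated into a pattern histogram.
        """
        signal = np.asarray(signal, dtype=float)
        n_bits = 2 * radius
        histogram = np.zeros(2 ** n_bits, dtype=int)
        for i in range(radius, len(signal) - radius):
            center = signal[i]
            # Neighbours taken left to right, skipping the central element itself.
            neighbours = np.concatenate(
                (signal[i - radius:i], signal[i + 1:i + radius + 1]))
            bits = (neighbours >= center).astype(int)             # S(f_n - f_0)
            code = int(np.sum(bits * (2 ** np.arange(n_bits))))   # sum of S(...) * 2^n
            histogram[code] += 1
        return histogram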
    The histogram of oriented gradients descriptor provides information about the texture and shape
of the image [11]. Each interval of the histogram represents the number of object boundaries whose
orientation falls within a certain range. The descriptor is obtained by combining the histograms
calculated over all sub-regions of the image. To create a histogram of local gradients, the orientation
gradients must first be calculated for each part of the image. The gradients are obtained by convolving
the image with a one-dimensional horizontal discrete derivative mask Dx and a one-dimensional
vertical discrete derivative mask Dy; the resulting value is the sum of the adjacent pixels weighted by
the mask:
                                           𝑓(𝑥) = 𝑓(𝑥, 𝑦) ∙ 𝐷𝑥 ,                                                    (5)

                                           𝑓(𝑦) = 𝑓(𝑥, 𝑦) ∙ 𝐷𝑦 .                                                    (6)
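   For illustration, the gradients of formulas (5) and (6) can be computed as follows; the concrete derivative masks [-1, 0, 1] are the common choice for histograms of oriented gradients and are an assumption of this sketch rather than a value taken from the paper.

    import numpy as np
    from scipy.ndimage import convolve1d

    def gradient_orientation_and_magnitude(image):
        """Per-pixel gradients via 1-D derivative masks, as in formulas (5)-(6)."""
        image = image.astype(float)
        dx = convolve1d(image, [-1, 0, 1], axis=1)   # horizontal mask Dx (assumed)
        dy = convolve1d(image, [-1, 0, 1], axis=0)   # vertical mask Dy (assumed)
        magnitude = np.hypot(dx, dy)
        orientation = np.degrees(np.arctan2(dy, dx)) % 180   # unsigned orientations
        return orientation, magnitude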
   The obtained descriptors of local binary patterns in one-dimensional space and histograms of
oriented gradients are normalized and concatenated, forming a common global vector. This vector is
then classified using the squared Euclidean distance metric.
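   A sketch of this fusion and classification step is given below, assuming L2 normalization of each descriptor and a simple nearest-neighbor search over a gallery of enrolled feature vectors; the gallery structure is an assumption of this illustration.

    import numpy as np

    def fuse_features(lbp_hist, hog_vector):
        """Normalise each descriptor (L2 assumed) and concatenate them into one global vector."""
        lbp = lbp_hist / (np.linalg.norm(lbp_hist) + 1e-12)
        hog = hog_vector / (np.linalg.norm(hog_vector) + 1e-12)
        return np.concatenate((lbp, hog))

    def squared_euclidean(a, b):
        """Squared Euclidean distance between two feature vectors."""
        diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        return float(np.dot(diff, diff))

    def identify(probe_vector, gallery):
        """Nearest-neighbour identification over a gallery of (person_id, vector) pairs."""
        best_id, best_dist = None, float("inf")
        for person_id, template in gallery:
            dist = squared_euclidean(probe_vector, template)
            if dist < best_dist:
                best_id, best_dist = person_id, dist
        return best_id, best_dist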

2.2.    Task definition
    Preliminary results of the algorithm on occluded images showed that the identification accuracy
rate of the proposed algorithm ranges from 72.5% to 85%, depending on the database used in the
experiment. This study proposes ways of increasing the identification accuracy rate of the algorithm
in order to improve its robustness against the presence of occlusion on the face of the person to be
identified.

3. Methods of problem solving
   With the aim of improving the previously obtained results, it was decided to conduct experiments
with different image preprocessing methods. As is well known, the purpose of preprocessing is to
modify the representation of the initial image in order to facilitate the subsequent stages of the
algorithm and increase the speed of recognition and identification.

3.1.    Method of anisotropic diffusion
    Originally, the proposed algorithm uses the anisotropic diffusion method, which is based on partial
differential equations. According to this method, the processed image is the result of solving a
diffusion-type equation for which the initial condition is the original image. Linear models are known
to suffer from such disadvantages as blurring and displacement of boundaries; therefore, the class of
equations has to be extended. In this way, an approach was developed using the anisotropic diffusion
model, whose idea is to replace the scalar diffusion coefficient with a diffusion tensor. On the one hand,
such regularized models have strong smoothing properties. On the other hand, they allow the image to
remain sufficiently contrasted for a long time, up to the moment of complete "blurring" (transformation
into an image with a constant gray intensity).
    An example of the application of the anisotropic diffusion method is presented in Figure 1.




Figure 1: Example of original image and image processed with anisotropic diffusion

   Most people have an intuitive idea of diffusion as a physical process [12]. This physical observation,
that differences in concentration are balanced out without creating or destroying mass, can easily be
described by a mathematical formula. The equilibrium property is expressed by Fick's law:
                                        𝑗 = −𝐷 ∙ ∇𝑢.                                               (7)
    This equation states that the concentration gradient ∇u causes a flux j that aims to compensate for
this gradient. The relationship between ∇u and j is described by the diffusion tensor D, a positive
definite symmetric matrix.
    To obtain the diffusion equation, Fick's law is substituted into the continuity equation:
                                       𝜕𝑡 𝑢 = 𝑑𝑖𝑣(𝐷 ∙ ∇𝑢).                                            (8)
    This equation is found in many physical transport processes. When processing images, the
concentration can be identified with the gray value at a certain position.
    Adaptive smoothing methods are based on the idea of applying a process that itself depends on the
local properties of the image. A nonlinear diffusion method was introduced to avoid the blurring and
localization problems of linear diffusion filtering. It uses a non-homogeneous process that reduces
diffusion in places that are more likely to be boundaries. This probability is measured as |∇u|². The
filter is based on the equation:
                                    𝜕𝑡 𝑢 = 𝑑𝑖𝑣(𝑔(|∇𝑢|2 )∇𝑢).                                          (9)
    This notation uses diffusivity, which can be defined as:
$$g(s^2) = \frac{1}{1 + s^2/\lambda^2}, \quad (\lambda > 0). \qquad (10)$$
    Although this method was formulated long ago, it is quite rarely used in face recognition and
identification algorithms.
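    For illustration, a minimal explicit scheme of this kind, using the diffusivity of formula (10), can be written as follows; the number of iterations, the parameter λ, and the time step are assumed values, not parameters reported in the paper.

    import numpy as np

    def anisotropic_diffusion(image, iterations=15, lam=15.0, step=0.2):
        """Nonlinear diffusion with g(s^2) = 1 / (1 + s^2 / lambda^2) (illustrative sketch)."""
        u = image.astype(float).copy()
        for _ in range(iterations):
            # Finite differences towards the four nearest neighbours.
            north = np.roll(u, -1, axis=0) - u
            south = np.roll(u, 1, axis=0) - u
            east = np.roll(u, -1, axis=1) - u
            west = np.roll(u, 1, axis=1) - u
            # Edge-stopping diffusivity of formula (10), evaluated per direction.
            g = lambda d: 1.0 / (1.0 + (d / lam) ** 2)
            u += step * (g(north) * north + g(south) * south +
                         g(east) * east + g(west) * west)
        return u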


3.2.    Method of image histogram equalization
   To fulfill the purpose of this study, it was decided to use histogram equalization both separately
from anisotropic diffusion and together with it. Image histogram equalization is a very simple yet
effective technique for improving image quality. Since the previously proposed algorithm uses
grayscale images, it is advisable to apply equalization to images that contain information only about
the brightness of the pixels, not about their color, i.e. to grayscale images.
   Figure 2 presents an example of image histogram equalization.




Figure 2: Example of original image and its histogram and equalized image with its histogram

   Histograms of very dark images are characterized by the fact that the non-zero values of the
histogram are concentrated near the lowest brightness levels, whereas for very light images all non-zero
values are concentrated on the right side of the histogram. Intuitively, one can assume that the image
most convenient for perception by the human eye is one whose histogram is close to a uniform
distribution. Thus, to improve the visual quality of the image, a transformation should be applied so
that the histogram of the result contains all possible brightness values in approximately equal amounts.
   As a result of histogram equalization, in most cases the dynamic range of the image is significantly
expanded, which makes it possible to reveal previously unnoticed details.
   The equalization process can be described as follows. If the brightness of a pixel of the initial image
is bk, and l is the brightness level on the histogram (l = 0 … N - 1), then the brightness of the equalized
image can be described as:
$$r_k = \sum_{p=0}^{l} H(b_p) = \sum_{p=0}^{l} \frac{N_p}{N}. \qquad (11)$$
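   A minimal sketch of this transformation for an 8-bit grayscale image is given below; it follows formula (11) by building the normalized histogram, accumulating it, and mapping each pixel through the resulting lookup table.

    import numpy as np

    def equalize_histogram(image, levels=256):
        """Histogram equalization of a grayscale image following formula (11)."""
        image = image.astype(np.uint8)
        # H(b) approximated by the normalised histogram N_p / N.
        hist = np.bincount(image.ravel(), minlength=levels) / image.size
        # r_k is the cumulative sum of the normalised histogram up to level k.
        cdf = np.cumsum(hist)
        lookup = np.round(cdf * (levels - 1)).astype(np.uint8)
        return lookup[image]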


3.3.    Combination of both methods
   Since both anisotropic diffusion and image histogram equalization are often used to increase the
visibility of details in an image, it is appropriate to check whether applying these two methods together
can improve the efficiency of the algorithm.
   An example of the simultaneous application of these methods to an image is depicted in Figure 3.




Figure 3: Example of images processed with anisotropic diffusion and the same images with equalized histograms

4. Experimental research
    Experimental research was conducted in three stages: on occluded images preprocessed by
anisotropic diffusion, on occluded images with equalized histograms, and on occluded images both
preprocessed and equalized.
    To conduct the experiments, the same three databases were used as in the previous research: The
Database of Faces, the FERET database, and the SCface database.
    The Database of Faces [13] contains 40 directories, each with 10 PGM images of size 92x112 pixels
with 256 gray levels per pixel. It is an open database that can be downloaded from the official site
of AT&T Laboratories Cambridge.
    The Facial Recognition Technology database (FERET) [14] contains 14126 high-resolution images of
1199 individuals with a resolution of 256x384. The National Institute of Standards and Technology
(NIST) is the technical agent for distribution of the database.
    The Surveillance Cameras Face Database (SCface) [15] contains 4160 static images of 130 individuals.
It was developed by a group of researchers from the University of Zagreb.
    Since The Database of Faces contains images of only 40 individuals in total, images of 40 individuals
from the other databases were used as well to conduct the experiments.
    Since the selected databases do not contain images with such occlusive attributes as medical masks
or balaclavas, the images were cropped and converted to a single resolution so that they contain only
the areas not covered by the occlusive attributes under research.
    Also, the formats of the images in the used databases vary. Since images can be stored in
uncompressed or compressed formats, and, according to the study [16], this fact can have an impact on
the results of the experiments, it was decided to convert the initial images to common formats, namely
BMP and PNG (uncompressed) and JPG (compressed), to perform the experiments more objectively.
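   The preparation of the images can be illustrated by the following sketch; the fraction of the face that is kept as the unoccluded area and the target resolution are assumptions of this illustration, not the exact values used in the experiments.

    import cv2

    def prepare_image(src_path, dst_stem, upper_fraction=0.5, size=(92, 112)):
        """Simulate occlusion by keeping only the upper (unmasked) part of the face
        and save the result in uncompressed and compressed formats."""
        image = cv2.imread(src_path, cv2.IMREAD_GRAYSCALE)
        h = image.shape[0]
        visible = image[: int(h * upper_fraction), :]   # area not covered by a mask (assumed)
        visible = cv2.resize(visible, size)             # single common resolution (assumed)
        cv2.imwrite(dst_stem + ".bmp", visible)         # uncompressed
        cv2.imwrite(dst_stem + ".png", visible)         # uncompressed (lossless)
        cv2.imwrite(dst_stem + ".jpg", visible)         # compressed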
   Table 1 contains the results of the experiments performed with the use of the anisotropic diffusion
method.

Table 1
The results of experiments performed with anisotropic diffusion on images that contain occlusion
                         Original image       JPG                  BMP                  PNG
                         Accuracy   Error     Accuracy   Error     Accuracy   Error     Accuracy   Error
 The Database of Faces (total images / individuals: 120 / 40)
  Number of images       26         14        29         11        26         14        26         14
  Identification rate    65%        35%       72.5%      27.5%     65%        35%       65%        35%
 FERET (total images / individuals: 99 / 40)
  Number of images       26         14        31         9         26         14        26         14
  Identification rate    75%        25%       77.5%      22.5%     75%        25%       75%        25%
 SCface (total images / individuals: 160 / 40)
  Number of images       33         7         33         7         33         7         33         7
  Identification rate    82.5%      17.5%     82.5%      17.5%     82.5%      17.5%     82.5%      17.5%

   The results of experiments performed with image histogram equalization are provided in Table 2.

Table 2
The results of experiments performed on equalized images that contain occlusion
                         Original image       JPG                  BMP                  PNG
                         Accuracy   Error     Accuracy   Error     Accuracy   Error     Accuracy   Error
 The Database of Faces (total images / individuals: 120 / 40)
  Number of images       30         10        28         12        30         10        30         10
  Identification rate    75%        25%       70%        30%       75%        25%       75%        25%
 FERET (total images / individuals: 99 / 40)
  Number of images       28         12        31         9         28         12        28         12
  Identification rate    70%        30%       77.5%      22.5%     70%        30%       70%        30%
 SCface (total images / individuals: 160 / 40)
  Number of images       28         12        28         12        25         15        25         15
  Identification rate    70%        30%       70%        30%       62.5%      37.5%     62.5%      37.5%

    Table 3 presents the results of experiments performed with the use of both the anisotropic
diffusion and image histogram equalization methods.
Table 3
The results of experiments performed with anisotropic diffusion on equalized images that contain
occlusion
                         Original image       JPG                  BMP                  PNG
                         Accuracy   Error     Accuracy   Error     Accuracy   Error     Accuracy   Error
 The Database of Faces (total images / individuals: 120 / 40)
  Number of images       29         11        27         13        29         11        29         11
  Identification rate    72.5%      27.5%     67.5%      32.5%     72.5%      27.5%     72.5%      27.5%
 FERET (total images / individuals: 99 / 40)
  Number of images       29         11        29         11        29         11        29         11
  Identification rate    72.5%      27.5%     72.5%      27.5%     72.5%      27.5%     72.5%      27.5%
 SCface (total images / individuals: 160 / 40)
  Number of images       29         11        29         11        30         10        29         11
  Identification rate    72.5%      27.5%     72.5%      27.5%     75%        25%       72.5%      27.5%

   Figure 4 presents the results for the initial and occluded images from The Database of Faces that were
preprocessed with anisotropic diffusion, equalization, and the combination of these methods. After the
images were occluded, the identification accuracy rate for the JPG-converted images decreased by 2.5-7.5%.
On the other hand, the results for the BMP- and PNG-converted images improved by 5% and 7.5% after
application of the equalization method and the combination of the two methods, respectively.




Figure 4: Comparative diagram of the results of experiments on preprocessing methods performed on
initial and occluded images from The Database of Faces

    Figure 5 depicts the results for the initial and occluded images from the FERET database that were
preprocessed with anisotropic diffusion, equalization, and the combination of these methods. Comparing
these results leads to the following conclusion: the initial algorithm with anisotropic diffusion alone
provided the highest identification accuracy on the original images in all sets of the experiments, but
the application of the equalization method improved the result for the JPG-converted images up to 77.5%.
Figure 5: Comparative diagram of the results of experiments on preprocessing methods performed on
initial and occluded images from the FERET database

   Figure 6 demonstrates the results for the original and occluded images from the SCface database that
were preprocessed with anisotropic diffusion, equalization, and the combination of these methods.
Analysis of these results leads to the following conclusions. The highest identification accuracy rate
of 92.5% was obtained with the use of anisotropic diffusion only, in all sets of the experiments on
the non-occluded images. The same holds for the occluded images: neither the equalization method nor
the combination of the two methods improved the result, which remains stable at 82.5%.




Figure 6: Comparative diagram of the results of experiments on preprocessing methods performed on
initial and occluded images from the SCface database

5. Conclusion
   This paper is devoted to the study of preprocessing methods with the aim of improving the efficiency
and robustness of the algorithm that underlies the information technology of person identification by
occluded face image.
   The researched algorithm consists of the following methods: anisotropic diffusion as a method of
image preprocessing, Gabor wavelet transformation as an image processing method for forming a
global face image, local binary patterns in one-dimensional space and a histogram of oriented gradients
for forming a vector of image features, Euclidean squared distance metric for vector classification.
   After analyzing the previously obtained results of the experiments conducted with this algorithm, it
was decided to explore preprocessing methods that could increase the identification accuracy rate and
enhance the algorithm performance on occluded images.
   The research included three stages depending on the preprocessing methods used: with anisotropic
diffusion, with image histogram equalization, and with both of these methods. Three databases were
used, whose images were previously processed so that they could be considered occluded: The Database
of Faces, the FERET database, and the SCface database.
   The results of the conducted experiments allow the following conclusions. When the algorithm was
applied to the occluded images from The Database of Faces that were converted to uncompressed
formats, such as BMP and PNG, the identification accuracy rate increased by 5-7.5% with the use of
equalization alone and with both of the researched methods.
   On occluded images from the FERET database the efficiency of the algorithm did not improve overall.
However, after the images were converted to the compressed format, that is JPG, and their histograms
were equalized, the identification accuracy of the algorithm increased by 2.5%.
   Application of image histogram equalization, either alone or in combination with anisotropic
diffusion, did not improve the identification accuracy rate of the algorithm on the occluded images
from the SCface database.
   Therefore, future studies should consider tuning the parameters of the selected preprocessing
methods and also consider other methods.

6. References
[1] World Health Organization. Coronavirus disease (COVID-19): Masks. World Health
     Organization, Q&A. 5 January 2022. URL: https://www.who.int/emergencies/diseases/novel-
     coronavirus-2019/question-and-answers-hub/q-a-detail/coronavirus-disease-covid-19-masks
[2] M. Ngan, P. Grother and K. Hanaoka. Ongoing Face Recognition Vendor Test (FRVT) Part 6A:
     Face recognition accuracy with masks using pre-COVID-19 algorithms. NIST Interagency/Internal
     Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD, 2020.
     10.6028/NIST.IR.8311.
[3] M. Ngan, P. Grother and K. Hanaoka. Ongoing Face Recognition Vendor Test (FRVT) Part 6B:
     Face recognition accuracy with face masks using post-COVID-19 algorithms. NIST
     Interagency/Internal Report (NISTIR), National Institute of Standards and Technology,
     Gaithersburg, MD, 2020. 10.6028/NIST.IR.8331.
[4] V. Prakash, L. Garg, E. Fomiceva, S. V. Pineda, A. N. Santos and S. Bawa. A Framework for
     Masked-Image Recognition System in COVID-19 Era. In: Santosh, K., Hegadi, R., Pal, U. (eds)
     Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2021. Communications in
     Computer and Information Science, vol 1576, 2022. Springer, Cham.
[5] H. Ding, M. Latif, Z. Zia, M. A. Habib, M. Qayum and Q. Jiang. Facial Mask Detection Using
     Image Processing with Deep Learning. Mathematical Problems in Engineering. 2022. 1-10.
     10.1155/2022/8220677.
[6] A. Zolfi, S. Avidan, Y. Elovici and A. Shabtai. Adversarial Mask: Real-World Adversarial Attack
     Against Face Recognition Models, 2021. ArXiv, abs/2111.10759.
[7] P. Dave and J. Dastin, "Exclusive: Ukraine has started using Clearview AI’s facial recognition
     during war", Reuters, 14 March 2022. URL: https://www.reuters.com/technology/exclusive-
     ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/
[8] O. Bychkov, K. Merkulova and Y. Zhabska. Information Technology for Person Identification by
     Occluded Face Image. 2022 IEEE 16th International Conference on Advanced Trends in
     Radioelectronics, Telecommunications and Computer Engineering (TCSET), 2022, pp. 147-151.
     10.1109/TCSET55632.2022.9766867.
[9] O. Bychkov, K. Merkulova and Y. Zhabska. Software Application for Biometrical Person’s
     Identification by Portrait Photograph Based on Wavelet Transform. 2019 IEEE International
     Conference on Advanced Trends in Information Theory (ATIT), 2019, pp. 253-256.
     10.1109/ATIT49449.2019.9030462.
[10] S. P and H. S. Mohana. An Improved Local Binary Pattern Algorithm for Face Recognition
     Applications. 2021 IEEE Mysore Sub Section International Conference (MysuruCon), 2021, pp.
     394-398. 10.1109/MysuruCon52639.2021.9641612.
[11] B. Attallah, A. Serir, Y. Chahir and A. Boudjelal. “Histogram of gradient and binarized statistical
     image features of wavelet subband-based palmprint features extraction”, J. Electron. Imag. 26(6)
     063006, November 8, 2017, doi: 10.1117/1.JEI.26.6.063006.
[12] C. Yu, and Y. Jia. Anisotropic Diffusion-based Kernel Matrix Model for Face Liveness Detection.
     Image Vis. Comput., 89, 2019, pp. 88-94. 10.1016/J.IMAVIS.2019.06.009.
[13] The Database of Faces. AT&T Laboratories Cambridge. URL: https://cam-
     orl.co.uk/facedatabase.html
[14] Face Recognition Technology (FERET). URL: https://www.nist.gov/programs-projects/face-
     recognition-technology-feret
[15] SCface — Surveillance Cameras Face Database. URL: https://www.scface.org
[16] Z. Akhtar and E. Ekram. "Revealing the Traces of Histogram Equalization in Digital Images", IET
     Image Processing, 12, 2018, doi: 10.1049/iet-ipr.2017.0992.