<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Fake Face Image Detection Using Deep Learning-Based Local and Global Matching</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Margarita Favorskaya</string-name>
          <email>favorskaya@sibsau.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anton Yakimchuk</string-name>
          <email>yakimchuk_aa@sibsau.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Reshetnev Siberian State University of Science and Technology</institution>
          ,
          <addr-line>31 Krasnoyarsky Rabochy ave., Krasnoyarsk, 660037</addr-line>
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The widespread adoption of face recognition systems in practice has provoked multiple attempts to deceive these systems in order to impersonate another person. The range of such fake attacks is wide, and methods that compensate for one type of attack are not adapted against others. In this study, we propose a method for detecting fake face images based on local and global matching provided by deep neural networks. We also retain background analysis as a pre-processing stage. The idea is to assess the depth of the face in a still image as one of the main features of liveliness, which is not an easy task. The proposed method is directed against presentation attacks and attacks of adversarial perturbations. The experiments were conducted with and without deep neural networks. The use of deep learning increased the true accept rate and significantly reduced the error values.</p>
      </abstract>
      <kwd-group>
<kwd>Fake face detection</kwd>
        <kwd>presentation attacks</kwd>
        <kwd>attacks of adversarial perturbations</kwd>
        <kwd>deep learning</kwd>
        <kwd>local matching</kwd>
<kwd>global matching</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Face recognition is one of the best-known biometric methods of identity authentication, widely used for the security of organizations and enterprises and for safety in public places such as airport terminals, train stations, stadiums and outdoor surveillance. Research in this area began in the 1990s with traditional machine learning methods (principal component analysis, Bayesian classification and metric models), methods for detecting local features (Gabor filters and Local Binary Patterns (LBPs)) and methods for detecting generalized features, and has since advanced to deep learning techniques. Currently, the accuracy of deep learning-based face recognition has reached 99.80%, while human vision is believed to achieve an accuracy of 97.53% [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Since it is quite easy to replace a face image or present a short video impersonating another person, face recognition systems must include a fake face detection module. This module is usually introduced after the face detection and alignment module, but before the visual processing and recognition modules. It is worth noting that fake face detection and face recognition have different target functions. Detecting forgery relies on searching for artifacts of the "liveliness" of the face, so lighting, shadows, glare, scene depth, etc. are of great importance. Face recognition, in contrast, involves minimizing the artifacts listed above and extracting features that are invariant to lighting, posture, emotions, overlapping objects, etc. The aim of our study is to develop a method for detecting fake faces using a single photograph. Our objective is an approach that takes into account the background analysis of an image and the extraction of pseudo-depth parameters from a single photograph using local and global matching provided by deep neural networks. Accurate depth parameters could, of course, be estimated with additional expensive devices requiring a fusion of visual, thermal and/or depth information; our method instead applies algorithmic solutions to complex cases such as fake face detection.</p>
      <p>The structure of the paper is the following. A short literature review is given in Section 2. Section
3 describes the proposed method for detecting fake faces in the images based on local and global
matching. The results of the conducted experiments are discussed in Section 4. Section 5 concludes
the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        Currently, there are two types of widespread attacks in face recognition systems, referred to as
presentation attacks or spoofing attacks and attacks of adversarial perturbations [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Presentation
attacks include presenting fake printed images, smartphone images or short video sequences to a
facial recognition camera or disguising a person using cosmetics, makeup or a 3D mask. Masking is
the most complicated case for recognizing presentation attacks. Attacks with the 3D mask are nearly
impossible to identify without additional modalities. Since the 2010s, most countermeasures for
presentation attacks have relied on deep neural networks (earlier, features were manually extracted).
Thus, Yang et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] trained a convolutional neural network (CNN) ImageNet to distinguish fake
faces from genuine ones using both one frame and five scaled frames. This algorithm required
preliminary image alignment using biomarkers. Binary classification (spoof/genuine) was performed
on the CNN output using a support vector machine (SVM). In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], a two-stream CNN was proposed,
where one stream analyzed local fragments of the face, assigning spoofing estimates, and another
stream was trained to estimate the depth of the scene using 3D samples. Li et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed CNN
with a more complex architecture called deep part features from CNN. The features partially extracted
by the first VGG (Visual Geometry Group) CNN were applied to the second fine-tuned VGG CNN
for classification. An original way to decompose an image into a genuine face and spoofing noise
using CNN was proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In this work, the classification of genuine images was implemented
using noise.
      </p>
      <p>
        The analysis of video sequences provides better detection of fake face images since in this case,
artifacts of the “liveliness” of the face are available, for example, blinking [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], simple movements of
the head, and so on. Note that CNN with the LSTM layers are traditionally utilized for the analysis of
spatio-temporal structures. Such an architecture is applied to recognize genuine video sequences in
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Some research is aimed at detecting 3D masks [
        <xref ref-type="bibr" rid="ref10 ref9">9-10</xref>
        ].
      </p>
      <p>
        Adversarial perturbation attacks are based on deep learning models, and, therefore, have appeared
relatively recently. Adversarial perturbation is reduced to a slight distortion of the input image, such
as brightness, in such a way that this perturbation is not identified by human vision, but leads to the
fact that the deep network gives an incorrect classification. Goswami et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] suggested detecting
such masked attacks by analyzing the responses of filters in hidden layers and eliminating the most
problematic filters. The SmartBox software tool for testing the performance of algorithms for
detecting and mitigating adversarial attacks in face recognition systems is presented in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The
SmartBox software tool supports several algorithms, for example, DeepFool, Elastic-Net and utilities
against gradient attacks and L2 attacks. Despite some success in confronting this type of attacks,
adversarial perturbation attacks are constantly becoming more complex and they require further
improvement of the algorithms. Other, more specific types of attacks can be noted, namely, stealing
deep templates of faces for the purpose of manipulation by third persons. The deconvolutional neural
network NbNet was proposed to confront such attacks [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The matter is that digital manipulation
attacks using generative adversarial networks can generate fully or partially modified photorealistic
facial images by altering an emotional expression, manipulating attributes or completely synthesizing
a face. Thus, adversarial perturbation attacks are directed against deep neural networks which have
proved to be good in the face recognition problem. The necessity to protect deep neural networks and
deep patterns remains a major challenge in face recognition systems.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The proposed method</title>
      <p>The proposed method is based on several verifications, because different attacks lead to different consequences. The method comprises two stages applied to the face image before it enters the recognition system. Note that verifying the genuineness of a single face image is more difficult than verifying a short video.</p>
      <p>The background analysis and local and global matching are described in Sections 3.1 and 3.2, respectively.</p>
    </sec>
    <sec id="sec-4">
      <title>3.1. Background analysis</title>
      <p>Background analysis is required to assess whether the global brightness and color parameters of a face image correspond to the entire scene or diverge from it. It is difficult to cut out a face image without the background in a photograph, and a background that does not match the scene is a good reason to conduct a more detailed analysis for genuineness. For this, a sufficiently large fragment of the scene is segmented, in which the face image occupies no more than 25-30%. The assumption is that while it is quite simple to change the parameters of the face image, it is difficult to change the parameters of the scene background, taking into account the geometric binding of the camera, which is unknown to the attacker. Figure 1 depicts examples of capturing faces in the background of the scene. In Figure 1b, the background near the face does not match the background of the scene.</p>
      <p>
        CCTV cameras are usually installed stationary. Therefore, for constructing the scene background
model, we can use the Gaussian mixture model (GMM) with its adaptation to changes in lighting and
shadows, as well as to temporal/seasonal/meteorological characteristics [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>In the GMM model, the pixel intensity is determined by a mixture of K Gaussian distributions, where K is a small number. Each Gaussian distribution is associated with its own weight. The GMM parameters are updated recursively with every incoming sample. The pixel probability P(X_t) is estimated by Eq. 1, where X_t is the pixel value at time t, K is the number of Gaussian distributions taken into account, w_{j,t} is the weight, μ_{j,t} is the mean, Σ_{j,t} is the covariance matrix of the jth Gaussian at time t, and η is the Gaussian probability density function (PDF).</p>
      <p>P(X_t) = ∑_{j=1}^{K} w_{j,t} · η(X_t, μ_{j,t}, Σ_{j,t}) (1)</p>
      <p>The probability density function η is defined by Eq. 2, where n is the dimensionality of X_t.</p>
      <p>η(X_t, μ_{j,t}, Σ_{j,t}) = (2π)^{-n/2} |Σ_{j,t}|^{-1/2} exp(-(1/2) (X_t - μ_{j,t})^T Σ_{j,t}^{-1} (X_t - μ_{j,t})) (2)</p>
      <p>For simplicity, the covariance matrix Σ_{j,t} is defined as σ²_{j,t} I for the jth component, where I is the identity matrix, under the assumption that the X_t components (red, green and blue) are independent and have the same deviations.</p>
      <p>The background distributions have higher probabilities and lower standard deviations, because the background colors remain the same for longer than the foreground objects do. This observation drives the GMM update: each incoming pixel is checked against the existing GMM components. If the pixel value is within 2.5 standard deviations of some weighted Gaussian distribution, then that distribution is updated. Otherwise, the distribution with the minimum weight is replaced by a new distribution with a high initial variance and a low prior weight.</p>
    </sec>
    <sec id="sec-5">
      <title>3.2. Detecting local and global matching</title>
      <p>
        The analysis of local areas near the face is close to the approach used in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], but in contrast to it, we use a grid representation of the face image with a size of 3×3 elements. This gives 9 patches, which can be analyzed by 9 sub-streams in the form of simple CNNs. At the output of these CNNs, the values of the entropy and loss functions are estimated for each of the 9 patches, forming a general assessment of the genuineness of the face image. Such local matching is a countermeasure to gradient attacks, which are usually local in nature, and partly to attacks of adversarial perturbations. The global matching performs a global assessment of the entire face image. Its purpose is to identify 3D features. To do this, one can use different hardware and software solutions. Hardware solutions include the use of a 3D scanner (for example, Microsoft Kinect) or a stereo camera, which is not always possible in practical applications. Therefore, it is better to focus on software solutions, in particular on a CNN trained to classify the depth of the scene.
      </p>
      <p>
        Local and global matching is performed only if the image has passed the first stage (which filters out the roughest fakes). Moreover, this stage can be combined into a single network with two global streams. Presentation attacks usually distort image details; therefore, special attention should be paid to the areas around the eyes, because these areas contain the most detailed information. Our approach to local matching is close to [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and is based on the fully convolutional network (FCN), which was
proposed by Long et al. in 2014 [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. FCN is widely used in semantic image segmentation and differs from traditional CNNs by using convolutional layers instead of fully connected layers. Such an architecture turns the network output into a heat map. The loss function has the form of Eq. 3, where p_{i,j}(k) ∈ {0, 1} is the prior probability, q_{i,j}(k) is the prediction probability, and k is the true class (0 or 1, genuine or fake image).
      </p>
      <p>L_{i,j} = − ∑_{k=0}^{1} p_{i,j}(k) log q_{i,j}(k) (3)</p>
      <p>The general loss function is defined as the sum of the local loss functions on the grid. The CNN builds a 2×n×n probability map, and after summing the values of each n×n map, a 1×2 vector is formed to predict the class. In this case, the decision is made taking into account the predictions of each local region rather than on the basis of any dominant region.</p>
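<p>A brief sketch of the per-cell loss of Eq. 3 and this aggregation step, assuming the map stores softmax probabilities for the two classes (the array shapes are illustrative):</p>

```python
import numpy as np

def patch_loss(p, q, eps=1e-12):
    """Cross-entropy of Eq. 3 for one grid cell.
    p: one-hot prior over {genuine, fake}; q: predicted probabilities."""
    return -np.sum(p * np.log(q + eps))

def aggregate(prob_map):
    """Sum a 2 x n x n probability map over the spatial grid and
    normalize, yielding a 1x2 vector of class scores."""
    scores = prob_map.reshape(2, -1).sum(axis=1)
    return scores / scores.sum()

# The predicted class is the argmax of the 1x2 vector, so every
# local region contributes to the decision.
```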
      <p>The global matching is an assessment of the entire face image, which partly serves to validate the previous decision. Various representations of the input image are allowed, for example, representation in the YCbCr color space, LBP features, high-frequency components, training on 3D models, etc. The experiments have shown good results for models based on the transition to the YCbCr color space and analysis of the high-frequency components of genuine and fake images. For the global matching, an FCN with 6 convolutional layers and 2 pooling layers is also used, and an SVM serves as the classifier. Then the results of the two streams are combined, and the final decision on the genuineness of the face image is made.</p>
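<p>The two input representations mentioned here can be sketched as follows; the exact preprocessing used with the FCN is not specified in the paper, so the BT.601 conversion and the Laplacian high-pass below are illustrative choices.</p>

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 float RGB image (0..255) to YCbCr (BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def high_frequency(gray):
    """4-neighbour Laplacian as a simple high-frequency component."""
    p = np.pad(gray, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * gray)
```

<p>Either representation (or both, stacked as channels) can then be fed to the global-matching stream.</p>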
    </sec>
    <sec id="sec-6">
      <title>4. Experimental results</title>
      <p>
        For the experiments, the OULU-NPU dataset [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] and our own dataset were used. The OULU-NPU dataset contains 4950 videos recorded with 6 smartphones. Our own dataset includes around 420 short videos with real faces, printed face images and videos played back from a tablet. The presentation attacks are of two types: print attacks and replay attacks. For the experiments, print attacks were simulated. The dataset was divided into a training set and a test set in the ratio of 70% to 30%. The proposed method showed robustness to the presentation attacks and even to attacks based on adversarial examples. According to ISO/IEC 30107-3:2017 [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], we calculated the following metrics: the true accept rate (TAR), the attack presentation classification error rate (APCER), corresponding to the false accept rate (FAR), and the bona-fide presentation classification error rate (BPCER), corresponding to the false reject rate (FRR) in face recognition terms, given by Eqs. 4-5, where TP is the number of true positives, FP of false positives, TN of true negatives, and FN of false negatives.
      </p>
      <p>APCER = FP / (FP + TN) (4)</p>
      <p>BPCER = FN / (FN + TP) (5)</p>
      <p>The experiments show that the accuracy of detecting fake face images reached 82.4-89.1% and
69.5-75.2% for the presentation attacks (print attacks) and attacks of adversarial perturbations,
respectively.</p>
      <p>The augmentation or generation of new data based on the existing dataset makes it quite easy to
expand the training set. We applied data augmentation “on-the-fly”, when new distorted samples were
created directly during the training process between learning epochs without increasing the amount of
initial data. The augmentation was carefully implemented using slight distortions of shooting
conditions, affine deformation of objects, blur and reflection. This procedure improved the quality of
the model and its robustness to noise in the input data. Using augmentation without changing the
network architecture, it was possible to increase the accuracy of the fake face detection by 3.4% for
print attacks.</p>
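<p>A minimal sketch of such "on-the-fly" augmentation: each epoch, distorted copies are generated from the stored images without enlarging the dataset. The particular distortions (reflection, brightness jitter, a small shift) and their parameters are illustrative assumptions, not the exact ones used in our experiments.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return a randomly distorted copy of an H x W grayscale image."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # horizontal reflection
    out = out * rng.uniform(0.9, 1.1)      # slight brightness change
    out = np.roll(out, int(rng.integers(-2, 3)), axis=1)  # small shift
    return np.clip(out, 0.0, 255.0)

def epoch_batches(images, batch_size):
    """Yield augmented batches each epoch; `images` itself is unchanged."""
    order = rng.permutation(len(images))
    for start in range(0, len(images), batch_size):
        batch = [augment(images[i]) for i in order[start:start + batch_size]]
        yield np.stack(batch)
```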
    </sec>
    <sec id="sec-7">
      <title>5. Conclusion</title>
      <p>At present, fake face image detection is a necessary procedure for the normal functioning of face
recognition systems. In this study, it is shown that there are different approaches to solving this
problem. However, for the protection against various types of attacks, it is reasonable to use several
methods. We offer a two-stage method for verifying the genuineness of a face image before its
entering the face recognition system. The first stage is the background analysis, while the second
stage is local and global matching. For the background estimation, a Gaussian mixture model is built,
and a two-stream deep neural network is created to assess local and global features. The experiments
conducted on the OULU-NPU dataset and our own dataset show accuracies of 82.4-89.1% for the presentation attacks and 69.5-75.2% for the attacks of adversarial perturbations. Using data augmentation, it was possible to increase the accuracy of detecting the presentation attacks to 85.7-92.5%. However, the temporal performance of the recognition process does not yet meet real-time requirements and requires further refinement of the algorithms.</p>
    </sec>
    <sec id="sec-8">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Taigman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ranzato</surname>
          </string-name>
          , L. Wolf,
          <article-title>Deepface: Closing the gap to human-level performance in face verification</article-title>
          ,
          <source>in: the IEEE Conference on Computer Vision and Pattern Recognition</source>
          , IEEE, Columbus, OH, USA,
          <year>2014</year>
          , pp.
          <fpage>1701</fpage>
          -
          <lpage>1708</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>Deep face recognition: A survey</article-title>
          .
          <source>Neurocomputing</source>
          <volume>429</volume>
          (
          <year>2021</year>
          )
          <fpage>215</fpage>
          -
          <lpage>244</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Learn convolutional neural network for face antispoofing</article-title>
          .
          <source>Cornell ArXiv Print, arXiv preprint arXiv:1408.5601</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Atoum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jourabloo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Face anti-spoofing using patch and depth-based CNNs</article-title>
          , in: 2017
          <source>IEEE International Joint Conference on Biometrics (IJCB)</source>
          , IEEE, Denver, CO, USA,
          <year>2017</year>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>328</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Boulkenafet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hadid</surname>
          </string-name>
          ,
          <article-title>An original face antispoofing approach using partial convolutional neural network</article-title>
          ,
          <source>in: the 6th International Conference on Image Processing Theory</source>
          ,
          <article-title>Tools and Applications (IPTA), IEEE</article-title>
          , Oulu, Finland,
          <year>2016</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jourabloo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          , Face de-spoofing:
          <article-title>Anti-spoofing via noise modeling</article-title>
          , in: Ferrari,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Hebert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Sminchisescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Weiss</surname>
          </string-name>
          , Y. (eds) Computer Vision - ECCV
          <year>2018</year>
          . LNCS, volume
          <volume>11217</volume>
          ,
          <year>2018</year>
          , pp
          <fpage>297</fpage>
          -
          <lpage>315</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Patel</surname>
          </string-name>
          , H. Han,
          <string-name>
            <given-names>A.K.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <article-title>Cross-database face antispoofing with robust feature representation</article-title>
          , in: You,
          <string-name>
            <given-names>Z.</given-names>
            ,
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            ,
            <surname>Shan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            ,
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <surname>Q.</surname>
          </string-name>
          <article-title>(eds) Biometric Recognition (CCBR 2016), LNCS</article-title>
          , volume
          <volume>9967</volume>
          ,
          <year>2016</year>
          , pp.
          <fpage>611</fpage>
          -
          <lpage>619</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>Learning temporal features using LSTM-CNN architecture for face antispoofing</article-title>
          ,
          <source>in: 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR)</source>
          , IEEE, Kuala Lumpur, Malaysia,
          <year>2015</year>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>145</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Yuen</surname>
          </string-name>
          ,
          <article-title>Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing</article-title>
          ,
          <source>in: 2017 IEEE International Joint Conference on Biometrics (IJCB)</source>
          , IEEE, Denver, CO, USA,
          <year>2017</year>
          , pp.
          <fpage>748</fpage>
          -
          <lpage>755</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Yuen</surname>
          </string-name>
          ,
          <article-title>Joint discriminative learning of deep dynamic textures for 3D mask face anti-spoofing</article-title>
          .
          <source>Transactions on Information Forensics and Security</source>
          <volume>14</volume>
          (
          <issue>4</issue>
          ) (
          <year>2019</year>
          )
          <fpage>923</fpage>
          -
          <lpage>938</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Goswami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ratha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vatsa</surname>
          </string-name>
          ,
          <article-title>Unravelling robustness of deep learning based face recognition against adversarial attacks</article-title>
          ,
          <source>in: The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)</source>
          , New Orleans, Louisiana, USA, vol.
          <volume>32</volume>
          ,
          <year>2018</year>
          , pp.
          <fpage>6829</fpage>
          -
          <lpage>6836</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vatsa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Smartbox: Benchmarking adversarial detection and mitigation algorithms for face recognition</article-title>
          ,
          <source>in: 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS)</source>
          , IEEE, Redondo Beach, CA, USA,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Mai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Yuen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <article-title>On the reconstruction of face images from deep face templates</article-title>
          ,
          <source>Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>41</volume>
          (
          <issue>5</issue>
          ) (
          <year>2018</year>
          )
          <fpage>1188</fpage>
          -
          <lpage>1202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Favorskaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Buryachenko</surname>
          </string-name>
          ,
          <article-title>Background extraction method for analysis of natural images captured by camera traps</article-title>
          ,
          <source>Information and Control Systems</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>35</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A novel face presentation attack detection scheme based on multiregional convolutional neural networks</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>131</volume>
          (
          <year>2020</year>
          )
          <fpage>261</fpage>
          -
          <lpage>267</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Shelhamer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Darrell</surname>
          </string-name>
          ,
          <article-title>Fully convolutional networks for semantic segmentation</article-title>
          ,
          <source>Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>39</volume>
          (
          <issue>4</issue>
          ) (
          <year>2017</year>
          )
          <fpage>640</fpage>
          -
          <lpage>651</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <article-title>OULU-NPU - a mobile face presentation attack database with real-world variations</article-title>
          , URL: https://sites.google.com/site/oulunpudatabase.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] ISO/IEC 30107-3:2017 Information technology
          <article-title>- Biometric presentation attack detection - Part 3: Testing and reporting</article-title>
          , URL: https://www.iso.org/standard/67381.html.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>