<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Anti-spoofing Detection Based on Hierarchical Spatio-Temporal Representation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Souad Khellat-Kihel</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ahmed Tibermacine</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>LESIA Laboratory, Department of Computer Science, Biskra University</institution>
          ,
          <addr-line>BP 145 RP, 07000, Biskra</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Science and Technology Mohamed-Boudiaf</institution>
          ,
          <addr-line>Oran</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <fpage>59</fpage>
      <lpage>66</lpage>
      <abstract>
<p>A complete biometric system ensures that only authorized individuals can access mobile devices. Face biometrics is a natural, user-friendly, and non-intrusive authentication method, making it a powerful biometric trait. However, recent studies have revealed its vulnerability to spoofing attacks, including printed photos, video replays, and 3D face masks. This paper proposes using the HMAX model to detect spoofing based on facial texture analysis. HMAX, inspired by the biological visual processing chain from the retinal stage to the inferotemporal cortex, encodes facial features sensitive to expressions and gaze direction. We further enhance this method by integrating HMAX with Long Short-Term Memory (LSTM) networks to build a spatio-temporal representation of facial dynamics for improved spoof detection. Extensive experiments on real face images from standard datasets, compared with state-of-the-art algorithms, demonstrate the feasibility and effectiveness of the proposed approach in real applications.</p>
      </abstract>
      <kwd-group>
        <kwd>Security</kwd>
        <kwd>anti-spoofing</kwd>
        <kwd>LSTM</kwd>
        <kwd>Spatio-temporal representation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Biometric systems have become an essential component of security and authentication processes, with face recognition being one of the most widely adopted modalities. However, the increasing reliance on face recognition technology has also made it a target for spoofing attacks, where malicious actors attempt to deceive the system using fake representations of a legitimate user. These attacks, known as Presentation Attacks (PAs), pose a significant threat to security-sensitive applications such as access control, banking transactions, and airport security. Presentation Attack Detection (PAD), commonly referred to as anti-spoofing detection, has emerged as a critical area of research to counteract these threats[1]. The goal of PAD is to distinguish between genuine and fake faces using various techniques, including motion analysis, texture-based feature extraction, and deep learning approaches. Spoofing attacks can take multiple forms, including printed photos, digital screen replays, 3D masks, and even highly sophisticated deepfake-generated faces[2]. A well-documented real-world case occurred in 2011, when a passenger successfully boarded a flight from Hong Kong to Canada by disguising himself as an elderly man using a high-quality mask, exposing vulnerabilities in biometric security measures, as illustrated in Figure 1.</p>
      <p>Figure 1: An example of spoofing in a real-world case.</p>
      <p>To address these security concerns, researchers have explored various methodologies for PAD, categorized broadly into traditional and deep learning-based approaches. One of the earliest approaches to PAD involves analyzing the texture and quality of facial images. Artur Costa-Pazo et al.[1] introduced two algorithms that leverage image-quality measures and texture analysis with Gabor-Jets filters for spoofing detection. Their study found that using an SVM-RBF classifier[3, 4, 5] resulted in an Equal Error Rate (EER) of 2.68%, demonstrating the effectiveness of such feature-based methods.</p>
      <p>SYSTEM 2025: 11th Sapienza Yearly Symposium of Technology, Engineering and Mathematics. Rome, June 4-6, 2025. 0000-0002-9586-6522 (S. Khellat-Kihel); 0009-0004-4729-7128 (A. Tibermacine). © 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>Similarly, Boulkenafet et al.[6] proposed a color texture analysis technique utilizing the Color Local Binary Pattern (LBP) descriptor, which captures fine luminance variations between genuine and spoofed images. Their method achieved an EER of 0.4% on the Replay-Attack database and 6.2% on the CASIA database, highlighting the potential of handcrafted feature extraction techniques for PAD.</p>
      <p>Another prominent approach to PAD involves motion-based methods that focus on liveness detection, as genuine faces exhibit dynamic patterns that are difficult to replicate in spoofing attacks. Chengyan Lin et al.[7] introduced a method based on the rank of sample matrices, observing that spoofed images have low-rank structures due to minimal frame variations, whereas live samples display higher rank values resulting from natural facial movements such as blinking and lip motion[<xref ref-type="bibr" rid="ref57">8</xref>]. Another notable contribution, by Samarth et al.[<xref ref-type="bibr" rid="ref28 ref64">9</xref>], involved the use of Eulerian motion magnification, which enhances subtle facial expressions before extracting features using multiscale LBP. Their approach achieved a Half Total Error Rate (HTER) of 0% and 1.25% on benchmark datasets, demonstrating the robustness of motion-based PAD techniques[<xref ref-type="bibr" rid="ref49 ref60">10, 11, 12</xref>].</p>
      <p>With the rise of deep learning, convolutional neural networks (CNNs) have significantly improved PAD performance. Deep learning has also been widely applied in various domains such as computer vision[13, 14], robotic control[15], brain-computer interfaces (BCI)[16, 17, 18], EEG analysis[19, 20, 21], and sentiment analysis[22, 23, 24, 25, 26, 27], demonstrating its ability to extract meaningful representations from complex data structures. Leveraging these advancements, Jianwei Yang et al.[28] proposed a deep CNN architecture that learns discriminative features for classifying real and fake faces. Their model achieved an HTER of less than 5% on both the CASIA and Replay-Attack databases. More recent works, such as that of Liu et al.[29], introduced a hybrid CNN-RNN model to capture both spatial and temporal dependencies in facial videos, yielding state-of-the-art results across multiple datasets[30, 31, 32]. Similarly, Shao et al.[33] explored a multi-modal framework that combines RGB, depth, and infrared (IR) imaging to enhance the robustness of PAD systems against varying attack scenarios[34, 35].</p>
      <p>Inspired by the human visual system, researchers have also investigated biological models for feature extraction in PAD. One such model is the Hierarchical Model and X (HMAX), which simulates the ventral stream of the visual cortex [36] and is particularly adept at capturing texture information[37, 38]. The HMAX model is based on a hierarchical structure that processes visual inputs through simple and complex cells, mimicking the way the brain interprets object textures. In this work, we propose to apply the HMAX model to cropped face images for texture-based spoofing detection, leveraging its ability to extract highly discriminative features.</p>
      <p>Recent advances in PAD have been supported by the availability of large-scale spoofing datasets, enabling researchers to develop robust models that generalize across diverse attack types. Notable datasets include CASIA-SURF [39], Spoofing in the Wild (SiW)[40], and OULU-NPU[41], which contain extensive variations of spoofing attempts such as print attacks, replay attacks, and 3D mask attacks. However, a persistent challenge in PAD research is domain adaptation, as models trained on one dataset may not generalize well to unseen attack types. To tackle this issue, Yu et al.[42] introduced a domain adaptation framework designed to improve cross-dataset generalization, addressing the problem of dataset bias in PAD systems.</p>
      <p>In this work, we aim to contribute to the field of anti-spoofing detection by utilizing a biologically inspired approach based on the HMAX model. Unlike conventional texture-based or CNN-based methods, our approach exploits the hierarchical structure of the human visual cortex to extract highly relevant features for distinguishing between real and spoofed faces. By integrating insights from biological vision systems and leveraging large-scale spoofing datasets, we strive to enhance the generalization capability of PAD systems while maintaining high accuracy against various attack types.</p>
      <p>The remainder of this paper is structured as follows. Section 2 presents our proposed methodology, including the implementation details of the HMAX model and its application to spoofing detection. Section 3 describes the dataset and protocols used. Section 4 discusses the results. Finally, Section 5 concludes the paper with key findings and future research directions.</p>
    </sec>
    <sec id="sec-1-1">
      <title>2. Proposed Approach</title>
      <p>In this section we highlight the different proposed stages corresponding to the anti-spoofing detection algorithm. The HMAX network is a biologically inspired network conceived to mimic the basic neural architecture of the ventral stream of the visual cortex, and the texture extracted with HMAX has high discrimination performance. The general proposed architecture is depicted in Figure 2.</p>
      <sec id="sec-1-1-1">
        <title>2.1. Feature extraction based on a hierarchical network</title>
        <p>The HMAX model is a hierarchical model for object representation and recognition inspired by the neural architecture of the early stages of the visual cortex in primates. The general architecture of the HMAX model is represented in Figure 3. Proceeding to the higher levels of the model, the number and typicality of the extracted features change, and the number of (X, Y) pixel positions in a layer is reduced. Each layer is projected to the next layer by applying template matching or max pooling filters. The input to the model is the gray-level image. The S1 and C1 layers apply a bank of Gabor filters of the form
F(x, y) = exp(−(X² + γ²Y²)/(2σ²)) · cos(2πX/λ)   (1)
X = x cos θ − y sin θ,  Y = x sin θ + y cos θ   (2)
where x and y vary between −5 and 5, and θ varies between 0 and π [<xref ref-type="bibr" rid="ref49">10</xref>]. In the intermediate feature layer (S2 level), the response for each C1 grid position is computed; each feature is tuned to a preferred pattern as stimulus. Starting from an image of size 256x256 pixels, the final S2 layer is a vector of dimension 44 x 44 x 4000. The response to a stored prototype Pᵢ is obtained using
R(X, Pᵢ) = exp(−||X − Pᵢ||² / σ²)   (3)</p>
      </sec>
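<p>As an illustration, the S1-C1-S2-C2 pipeline of Equations (1)-(3) can be sketched in a few lines of numpy. This is a minimal sketch, not the tool of [43]: the filter size, σ, γ, λ, the pooling size, and the random image and prototypes are illustrative assumptions.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def gabor_filter(size=11, theta=0.0, sigma=4.5, lam=5.6, gamma=0.3):
    # Eq. (1)-(2): F(x, y) = exp(-(X^2 + gamma^2 Y^2) / (2 sigma^2)) * cos(2 pi X / lam),
    # with X = x cos(theta) - y sin(theta), Y = x sin(theta) + y cos(theta).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    X = x * np.cos(theta) - y * np.sin(theta)
    Y = x * np.sin(theta) + y * np.cos(theta)
    f = np.exp(-(X**2 + gamma**2 * Y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * X / lam)
    f -= f.mean()                       # zero-mean, unit-norm filter
    return f / (np.linalg.norm(f) + 1e-9)

def s1(image, filt):
    # S1: magnitude of the valid-mode correlation with one Gabor filter
    k = filt.shape[0]
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = abs(np.sum(image[i:i + k, j:j + k] * filt))
    return out

def c1(resp, pool=4):
    # C1: local max pooling over pool x pool blocks
    ph, pw = resp.shape[0] // pool, resp.shape[1] // pool
    return resp[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))

def hmax_c2(image, prototypes, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), sigma=1.0):
    maps = np.stack([c1(s1(image, gabor_filter(theta=t))) for t in thetas])
    n, h, w = maps.shape
    p = prototypes.shape[-1]
    c2 = np.full(len(prototypes), -np.inf)
    for i in range(h - p + 1):
        for j in range(w - p + 1):
            patch = maps[:, i:i + p, j:j + p]
            # S2, Eq. (3): R(X, P_i) = exp(-||X - P_i||^2 / sigma^2)
            r = np.exp(-((prototypes - patch) ** 2).sum(axis=(1, 2, 3)) / sigma**2)
            c2 = np.maximum(c2, r)      # C2: global max over all positions
    return c2

image = rng.random((40, 40))            # stand-in for a cropped gray-level face
prototypes = rng.random((8, 4, 4, 4))   # 8 prototype patches (4 orientations, 4x4)
features = hmax_c2(image, prototypes)
print(features.shape)                   # (8,)
```

<p>The resulting C2 vector plays the role of the per-frame characteristic vector passed to the classifier.</p>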
      <p>The last layer of the architecture is the Global Invariance layer (C2), where the maximum response to each intermediate feature over all (X, Y) positions and all scales is calculated. The result is a characteristic vector that is used for classification. For the implementation of the HMAX model we use the tool proposed in [43].</p>
      <sec id="sec-1-1-2">
        <title>2.2. Classification</title>
        <p>Firstly, we extract features using HMAX; then we add a Long Short-Term Memory (LSTM) network, with the extracted features as inputs, to capture the temporal dynamic information for differentiating genuine and fake faces. Each LSTM unit has a memory cell (c) and three gates [44]: the input gate (i), the output gate (o) and the forget gate (f). The memory cell can store and output information, allowing it to better discover long-range temporal relationships. The LSTM mechanism is presented in Figure 4.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Database and protocols</title>
      <p>Different databases have been proposed for presentation attack detection (PAD), or anti-spoofing face detection. However, most existing ones are not dedicated to realistic conditions. The publicly available OULU-NPU face presentation attack database [45] consists of 5940 videos corresponding to 55 subjects recorded in three different environments (sessions) using the high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Figure 5 shows examples corresponding to real accesses and attacks captured with a Samsung Galaxy S6 edge phone.</p>
      <p>The gates serve to modulate the interactions between the memory cell and its environment. The input gate can allow an incoming signal to alter the state of the memory cell, or block it. The output gate can allow the state of the memory cell to have an effect on other neurons, or prevent it. Finally, the forget gate can modulate the memory cell's self-recurrent connection, allowing the cell to remember or forget its previous state as needed. We use x_t and h_t as the input and output vectors for timestep t; the T are input weight matrices, the R are recurrent weight matrices, and the b are bias vectors. The logistic sigmoid σ(x) = 1/(1 + e^(−x)) and the hyperbolic tangent g(x) = (e^x − e^(−x))/(e^x + e^(−x)) are element-wise non-linear activation functions, mapping real values to (0, 1) and (−1, 1) respectively. The element-wise product and sum of two vectors are denoted ⊙ and ⊕ respectively. Given the inputs x_t, h_(t−1) and c_(t−1), the LSTM unit updates for timestep t are:
z_t = g(T_z x_t + R_z h_(t−1) + b_z)   (cell input)
i_t = σ(T_i x_t + R_i h_(t−1) + b_i)   (input gate)
f_t = σ(T_f x_t + R_f h_(t−1) + b_f)   (forget gate)
c_t = z_t ⊙ i_t + c_(t−1) ⊙ f_t   (cell state)
o_t = σ(T_o x_t + R_o h_(t−1) + b_o)   (output gate)
h_t = g(c_t) ⊙ o_t   (cell output)</p>
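<p>The update equations above translate directly into a minimal numpy sketch. The dimensions and the random initialization of T, R and b are purely illustrative, and the per-frame input vectors stand in for HMAX C2 features.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """One LSTM unit following the update equations of Section 2.2:
    T_* are input weights, R_* recurrent weights, b_* biases."""
    def __init__(self, n_in, n_hid):
        init = lambda *s: rng.normal(0, 0.1, s)
        self.T = {k: init(n_hid, n_in) for k in "zifo"}
        self.R = {k: init(n_hid, n_hid) for k in "zifo"}
        self.b = {k: np.zeros(n_hid) for k in "zifo"}

    def step(self, x_t, h_prev, c_prev):
        pre = {k: self.T[k] @ x_t + self.R[k] @ h_prev + self.b[k] for k in "zifo"}
        z_t = np.tanh(pre["z"])          # cell input
        i_t = sigmoid(pre["i"])          # input gate
        f_t = sigmoid(pre["f"])          # forget gate
        c_t = z_t * i_t + c_prev * f_t   # cell state
        o_t = sigmoid(pre["o"])          # output gate
        h_t = np.tanh(c_t) * o_t         # cell output
        return h_t, c_t

# Run a sequence of (hypothetical) per-frame feature vectors through the cell
cell = LSTMCell(n_in=8, n_hid=16)
h = np.zeros(16)
c = np.zeros(16)
for x in rng.normal(size=(5, 8)):        # 5 frames, 8-dim features
    h, c = cell.step(x, h, c)
print(h.shape)                           # (16,)
```

<p>The final hidden state h summarizes the temporal dynamics of the sequence and can be fed to a classifier that separates genuine from fake faces.</p>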
      <p>Four protocols have been defined for this database [45], corresponding to the illumination variation effect, different displays and printers, the effect of camera device variation, and, in the last protocol, all of the previous ones. A challenge has been proposed to evaluate different PAD algorithms under such real-world variations on the OULU-NPU dataset, using its standard evaluation protocols and metrics. We tested the proposed anti-spoofing system on the different protocols proposed for the OULU-NPU dataset [45]. The protocols are defined as follows:
Protocol I: The first protocol is designed to evaluate the PAD methods under different environmental conditions, namely illumination and background scene. The database is recorded in three sessions.</p>
      <p>Protocol II: The main goal of this protocol is to test attacks obtained from different sources (printers or displays). The effect of attack variation is evaluated by using unseen print and video-replay attacks during the test phase.</p>
      <p>Protocol III: The third protocol is dedicated to one of the critical issues in face PAD, which is sensor interoperability. In each iteration, real and attack videos obtained from five of the smartphones are used to run the algorithms, while the models are constructed using the videos recorded with the remaining one.</p>
      <p>Protocol IV: The last protocol combines all the cases seen in the previous three protocols. Generalization of the face anti-spoofing algorithms is evaluated with previously unseen environmental conditions, attacks and input sensors.</p>
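<p>For orientation, the factor that each protocol leaves unseen at test time can be summarized as a small configuration table. This is a paraphrase of the protocol descriptions above, not an official specification of [45].</p>

```python
# Summary of which factor each OULU-NPU protocol leaves unseen at test time [45]
OULU_NPU_PROTOCOLS = {
    "I": "environmental conditions (illumination and background scene)",
    "II": "attack sources (unseen printers and display devices)",
    "III": "acquisition sensor (leave-one-smartphone-out over six phones)",
    "IV": "all of the above: environments, attacks and input sensors",
}

for name, unseen in OULU_NPU_PROTOCOLS.items():
    print(f"Protocol {name}: unseen at test time -> {unseen}")
```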
      <sec id="sec-2-8">
        <title>The NUAA database and evaluation metrics</title>
        <p>The NUAA database [39] is a publicly available photograph-imposter database. It was collected in three different sessions, with an interval of 15 days between sessions, and the place and illumination conditions of each session differ. The database is composed of 15 subjects. Some example images are depicted in Figure 6. The distribution of the training set and the test set is represented in Table 1.</p>
        <p>For performance evaluation we used the recent standardized ISO/IEC 30107-3 metrics proposed in [46] to compare the proposed framework with the various presented results. These metrics consist of the Attack Presentation Classification Error Rate (APCER) and the Bona Fide Presentation Classification Error Rate (BPCER); Equations (4) and (5) represent the APCER and BPCER metrics respectively:
APCER = (1/N_PAIS) Σ_{i=1..N_PAIS} (1 − Res_i)   (4)
BPCER = (Σ_{i=1..N_BF} Res_i) / N_BF   (5)
where N_PAIS is the number of attack presentations for the given PAI, N_BF is the total number of bona fide presentations, and Res_i takes the value 1 if the ith presentation is classified as an attack presentation and 0 if it is classified as bona fide. These two metrics correspond to the False Acceptance Rate (FAR) and False Rejection Rate (FRR) commonly used in biometric evaluation systems. Also, the Average Classification Error Rate (ACER), the average of the APCER and the BPCER, is proposed in the challenge conducted in [41].</p>
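<p>Equations (4) and (5) translate directly into code. The decision vectors below are made-up examples, not results from the paper.</p>

```python
def pad_metrics(attack_decisions, bona_fide_decisions):
    """ISO/IEC 30107-3 style metrics from Eqs. (4)-(5).
    Decisions: 1 = classified as attack, 0 = classified as bona fide."""
    n_pais = len(attack_decisions)
    n_bf = len(bona_fide_decisions)
    # Eq. (4): APCER -- fraction of attack presentations NOT classified as attacks
    apcer = sum(1 - res for res in attack_decisions) / n_pais
    # Eq. (5): BPCER -- fraction of bona fide presentations classified as attacks
    bpcer = sum(bona_fide_decisions) / n_bf
    acer = (apcer + bpcer) / 2  # average, as used in the OULU-NPU challenge [41]
    return apcer, bpcer, acer

# 10 attacks of which 8 are caught; 20 bona fide of which 1 is falsely rejected
apcer, bpcer, acer = pad_metrics([1]*8 + [0]*2, [0]*19 + [1])
print(apcer, bpcer, acer)  # 0.2 0.05 0.125
```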
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Experimental results</title>
      <p>In this section we summarize the different works conducted in the challenge, in order to assess the effectiveness of the proposed system. In [41], a detailed description of the algorithms applied to the OULU-NPU database was carried out. The described algorithms were proposed within a challenge. Some groups used Local Phase Quantization (MBLPQ and PML); others relied on convolutional and deep CNN (Convolutional Neural Network) models (VSS, NWPU, SZUCVI, Record, CPqD and mixedFASNet). The Baseline [2] and HKBU groups proposed systems based on the LBP (Local Binary Pattern) algorithm. Furthermore, Binarized Statistical Image Features were applied by the MFT-FAS group, while the GRADIANT and Idiap groups relied on a fusion between motion and texture information (these framework abbreviations and metrics are those used in [41]). In Table 2, Table 3, Table 4 and Table 5, we propose a comparison between the best results obtained from each category and the results obtained with our proposed approach. The category corresponds to the feature extraction method, such as LBP, CNN and LPQ. For classification, an LSTM network was used.</p>
      <p>As shown in Table 2, Table 3, Table 4 and Table 5, the proposed architecture surpasses the previously proposed methods, mainly because the LSTM operates on video sequences, i.e., on the features obtained over time, so that the motion in natural cases is analysed.</p>
      <sec id="sec-3-4">
        <title>Discussion</title>
        <p>From the obtained results it is obvious that the LSTM can achieve a low EER. Moreover, the approach is based on a biological aspect, not only by simulating visual perception with HMAX but also by treating the frames over time using the LSTM. The LSTM showed good performance compared to approaches based on LBP, CNN and LPQ. For the NUAA database, the Fourier spectra analysis method introduced in [47] gives a classification rate of 76.7%, while the DoG features proposed in [39] obtain about 10% more than the first approach. In our case, the proposed hierarchical spatio-temporal representation achieves a very high performance, around 90%, as the rate of classification between genuine and fake faces.</p>
        <table-wrap id="tab-a">
          <caption><p>Best results per feature-extraction category versus the proposed approach (%). Row order as given in the source: LBP, CNN, LPQ, HMAX-LSTM.</p></caption>
          <table>
            <thead>
              <tr><th>Method</th><th>Display APCER</th><th>Print APCER</th><th>Overall APCER</th><th>Overall BPCER</th><th>Overall ACER</th></tr>
            </thead>
            <tbody>
              <tr><td>LBP</td><td>9.3</td><td>11.8</td><td>14.2</td><td>8.6</td><td>11.4</td></tr>
              <tr><td>CNN</td><td>1.7</td><td>5.3</td><td>5.3</td><td>7.8</td><td>6.5</td></tr>
              <tr><td>LPQ</td><td>8.2</td><td>15.3</td><td>15.7</td><td>15.8</td><td>15.8</td></tr>
              <tr><td>HMAX-LSTM (proposed)</td><td>2.05</td><td>4.60</td><td>6.00</td><td>3.33</td><td>1.66</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <table-wrap id="tab-b">
          <caption><p>Best results per feature-extraction category versus the proposed approach (%), second recovered table. Row order as given in the source: LBP, CNN, LPQ, HMAX-LSTM.</p></caption>
          <table>
            <thead>
              <tr><th>Method</th><th>Display APCER</th><th>Print APCER</th><th>Overall APCER</th><th>Overall BPCER</th><th>Overall ACER</th></tr>
            </thead>
            <tbody>
              <tr><td>LBP</td><td>19.20</td><td>22.5</td><td>29.2</td><td>23.3</td><td>26.3</td></tr>
              <tr><td>CNN</td><td>10</td><td>4.2</td><td>10</td><td>35.8</td><td>22.9</td></tr>
              <tr><td>LPQ</td><td>59.2</td><td>38.3</td><td>61.7</td><td>13.3</td><td>37.5</td></tr>
              <tr><td>HMAX-LSTM (proposed)</td><td>3.00</td><td>14.80</td><td>15.70</td><td>7.40</td><td>5.00</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>The study of the weaknesses of biometric systems against spoofing attacks has been a very active field of research in recent years. This focus has led to investigations into anti-spoofing applications based on faces. However, even as the research progresses, the development of such applications with high accuracy remains a challenging task, mainly due to the difficulty of distinguishing between fake and genuine images, even for the human eye, as well as the high quality of 3D masks. In this paper, a framework based on a biologically inspired spatio-temporal representation has been developed to study face anti-spoofing. The approach is based on the combination of HMAX and LSTM. The experimental evaluation carried out shows great efficiency compared to the methods proposed in the literature.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and Grammarly in order to: grammar and spelling check, paraphrase and reword. After using these tools/services, the authors reviewed and edited the content as needed and take full responsibility for the publication's content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>Processing</source>
          <volume>11</volume>
          (
          <year>2021</year>
          ). [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Costa-Pazo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bhattacharjee</surname>
          </string-name>
          , E. Vazquez- [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Guettala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          , Ef-
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          ference of the Biometrics Special Interest Group matica
          <volume>72</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>(BIOSIG)</source>
          ,
          <year>2016</year>
          . [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Selmi</surname>
          </string-name>
          ,
          <string-name>
            <surname>An</surname>
            end-to-end train[2]
            <given-names>Z.</given-names>
          </string-name>
          <string-name>
            <surname>Boulkenafet</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Komulainen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Hadid</surname>
          </string-name>
          ,
          <article-title>Face able capsule network for image-based character</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <source>(ICIP)</source>
          ,
          <year>2015</year>
          . Processing 11 (
          <year>2021</year>
          ). [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , En- [15]
          <string-name>
            <given-names>W.</given-names>
            <surname>Guettala</surname>
          </string-name>
          , et al.,
          <article-title>Real-time human detection by</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <article-title>hancing eeg signal reconstruction in cross-domain unmanned aerial vehicles</article-title>
          , in: 2022 International
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <article-title>adaptation using cyclegan</article-title>
          , in: 2024
          <source>International Symposium on iNnovative Informatics of Biskra</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <article-title>Conference on Telecommunications and Intelligent (ISNIB)</article-title>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Systems</surname>
          </string-name>
          (ICTIS), IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . [16]
          <string-name>
            <given-names>R.</given-names>
            <surname>Brociek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. D.</given-names>
            <surname>Magistris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cardia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Coppa</surname>
          </string-name>
          , [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Iacobelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ponzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          , Eye- S. Russo, Contagion prevention of covid-19 by
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>ment and evaluation, Information</source>
          <volume>14</volume>
          (
          <year>2023</year>
          )
          <fpage>644</fpage>
          . Workshop Proceedings, volume
          <volume>3092</volume>
          ,
          <year>2021</year>
          , p.
          <fpage>89</fpage>
          -
          <lpage>[</lpage>
          5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bouchelaghem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Tibermacine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Balsi</surname>
          </string-name>
          , M. Mo-
          <volume>94</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>roni</surname>
          </string-name>
          , C. Napoli,
          <article-title>Cross-domain machine learning …tics litter detection</article-title>
          , in:
          <source>2024 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[6] K. Patel, A. K. Jain, <article-title>Secure smartphone unlock: Robust face spoof detection on mobile</article-title>, <year>2015</year>.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[7] L. Chengyan, Y. Lu, J. Wu, Y. Xu, <article-title>Low rank analysis of eye image sequence - a novel basis for face liveness detection</article-title>, in: <source>Biometric Recognition - 10th Chinese Conference</source>, China, <year>2015</year>.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[8] S. eddine Boukredine, E. Mehallel, A. Boualleg, O. Baitiche, A. Rabehi, M. Guermoui, A. Douara, I. E. Tibermacine, <article-title>… antenna arrays through concave modifications and cut-corner techniques</article-title>, <source>ITEGAM-JETIA</source> <volume>11</volume> (<year>2025</year>) <fpage>65</fpage>-<lpage>71</lpage>.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[9] S. Bharadwaj, T. I. Dhamecha, M. Vatsa, R. Singh, <article-title>Computationally efficient face spoofing detection with motion magnification</article-title>, in: <source>IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '13)</source>, Washington, DC, USA, <year>2013</year>, pp. <fpage>105</fpage>-<lpage>110</lpage>.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[10] A. Tibermacine, N. Djedi, <article-title>Neat neural networks to control and simulate virtual creature's locomotion</article-title>, in: <source>2014 International Conference on Multimedia Computing and Systems (ICMCS)</source>, IEEE, <year>2014</year>, pp. <fpage>9</fpage>-<lpage>14</lpage>.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[11] B. Nail, B. Djaidir, I. E. Tibermacine, C. Napoli, N. Haidour, R. Abdelaziz, <article-title>Gas turbine vibration monitoring based on real data and neuro-fuzzy system</article-title>, <source>Diagnostyka</source> <volume>25</volume> (<year>2024</year>).</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[12] A. Tibermacine, S. M. Amine, <article-title>An end-to-end train… recognition and its application to video subtitle …</article-title></mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[17] N. Boutarfaia, S. Russo, A. Tibermacine, I. E. Tibermacine, <article-title>… classification: Towards enhanced human-machine …</article-title>, <source>Proceedings</source> <volume>3695</volume> (<year>2023</year>) <fpage>68</fpage>-<lpage>74</lpage>.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[18] I. Naidji, A. Tibermacine, W. Guettala, I. E. Tibermacine, et al., <article-title>Semi-mind controlled robots based on reinforcement learning for indoor application</article-title>, in: <source>ICYRIME</source>, <year>2023</year>, pp. <fpage>51</fpage>-<lpage>59</lpage>.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[19] A. Tibermacine, D. Akrour, R. Khamar, I. E. Tibermacine, A. Rabehi, <article-title>Comparative analysis of svm and cnn classifiers for eeg signal classification in response to different auditory stimuli</article-title>, in: <source>2024 International Conference on Telecommunications and Intelligent Systems (ICTIS)</source>, IEEE, <year>2024</year>, pp. <fpage>1</fpage>-<lpage>8</lpage>.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[20] N. Brandizzi, V. Bianco, G. Castro, S. Russo, A. Wajda, <article-title>Automatic rgb inference based on facial emotion recognition</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>3092</volume>, <year>2021</year>, pp. <fpage>66</fpage>-<lpage>74</lpage>.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[21] A. Tibermacine, I. E. Tibermacine, M. Zouai, <article-title>…ing and riemannian tangent space representations</article-title>, in: <source>2024 International Conference on Telecommunications and Intelligent Systems (ICTIS)</source>, IEEE, <year>2024</year>, pp. <fpage>1</fpage>-<lpage>7</lpage>.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[22] N. Brandizzi, S. Russo, G. Galati, C. Napoli, <article-title>Addressing vehicle sharing through behavioral analysis: A solution to user clustering using recency-frequency-monetary and vehicle relocation based on neighborhood splits</article-title>, <source>Information (Switzerland)</source> <volume>13</volume> (<year>2022</year>). doi:10.3390/info13110511.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[23] N. Brandizzi, S. Russo, R. Brociek, A. Wajda, <article-title>First studies to apply the theory of mind theory to green …ing</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>3118</volume>, <year>2021</year>, pp. <fpage>71</fpage>-<lpage>76</lpage>.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[24] S. Russo, I. E. Tibermacine, A. Tibermacine, D. Chebana, A. Nahili, J. Starczewski, C. Napoli, <article-title>Analyzing eeg patterns in young adults exposed to different acrophobia levels: a vr study</article-title>, <source>Frontiers in Human Neuroscience</source> <volume>18</volume> (<year>2024</year>) <fpage>1348154</fpage>.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[25] A. Alfarano, G. De Magistris, L. Mongelli, S. Russo, J. Starczewski, C. Napoli, <article-title>A novel convmixer transformer based architecture for violent behavior detection</article-title>, in: volume <volume>14126</volume> LNAI, <year>2023</year>, pp. <fpage>3</fpage>-<lpage>16</lpage>. doi:10.1007/978-3-031-42508-0_1.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[26] I. E. Tibermacine, A. Tibermacine, W. Guettala, <article-title>… comparative study</article-title>, in: <source>Proceedings of the 2023 11th international conference on information technology: IoT and smart city</source>, <year>2023</year>, pp. <fpage>238</fpage>-<lpage>246</lpage>.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[27] V. Marcotrigiano, G. D. Stingi, S. Fregnan, P. Magarelli, P. Pasquale, S. Russo, G. B. Orsi, M. T. Montagna, C. Napoli, <article-title>An integrated control plan in primary schools: Results of a field investi… apulia region (southern italy)</article-title>, <source>Nutrients</source> <volume>13</volume> (<year>2021</year>). doi:10.3390/nu13093006.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[28] J. Yang, Z. Lei, S. Z. Li, <article-title>Learn convolutional neural network for face anti-spoofing</article-title>, <source>CoRR</source> (<year>2014</year>). ArXiv preprint.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[29] Y. Liu, et al., <article-title>Hybrid cnn-rnn for face anti-spoofing</article-title>, <source>IEEE Transactions on Biometrics, Behavior, and Identity Science</source> (<year>2021</year>).</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[30] B. Nail, M. A. Atoussi, S. Saadi, I. E. Tibermacine, C. Napoli, <article-title>Real-time synchronisation of multiple fractional-order chaotic systems: an application study in secure communication</article-title>, <source>Fractal and Fractional</source> <volume>8</volume> (<year>2024</year>) <fpage>104</fpage>.</mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>[31] G. Capizzi, C. Napoli, S. Russo, M. Woźniak, <article-title>Lessen… of ai-driven drones for aromatherapy</article-title>, in: <source>CEUR Workshop Proceedings</source>, volume <volume>2594</volume>, <year>2020</year>, pp. <fpage>7</fpage>-<lpage>12</lpage>.</mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>[32] A. Tibermacine, N. Djedi, <article-title>Neat neural networks to control and simulate virtual creature's locomotion</article-title>, in: <source>2014 International Conference on Multimedia Computing and Systems (ICMCS)</source>, IEEE, <year>2014</year>, pp. <fpage>9</fpage>-<lpage>14</lpage>.</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>[33] S. Shao, et al., <article-title>Multi-modal face anti-spoofing using …</article-title>, <source>Pattern Analysis and Machine Intelligence</source> (<year>2022</year>).</mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>[34] A. Tibermacine, N. Djedi, <article-title>Gene regulatory network …</article-title>, <source>… &amp; Applications</source> (<year>2015</year>).</mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>[35] W. Guettala, A. Sayah, L. Kahloul, A. Tibermacine, <article-title>Real time human detection by unmanned aerial vehicles</article-title>, in: <source>2022 International Symposium on iNnovative Informatics of Biskra (ISNIB)</source>, IEEE, <year>2022</year>, pp. <fpage>1</fpage>-<lpage>6</lpage>.</mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>[36] M. Riesenhuber, T. Poggio, <article-title>Hierarchical models of object recognition in cortex</article-title>, <source>Nature Neuroscience</source> <volume>2</volume> (<year>1999</year>).</mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>[37] C. Napoli, V. Ponzi, A. Puglisi, S. Russo, I. Tibermacine, …, in: <source>CEUR Workshop Proceedings</source>, volume <volume>3686</volume>, CEUR-WS, <year>2024</year>, pp. <fpage>1</fpage>-<lpage>10</lpage>.</mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>[38] B. Ladjal, I. E. Tibermacine, M. Bechouat, M. Sedraoui, C. Napoli, A. Rabehi, D. Lalmi, <article-title>Hybrid mod…</article-title>, <volume>120</volume> (<year>2024</year>) <fpage>14703</fpage>-<lpage>14725</lpage>.</mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>[39] X. Tan, Y. Li, J. Liu, L. Jiang, <article-title>Face liveness detection from a single image with sparse low rank bilinear discriminative model</article-title>, in: <source>11th European Conference on Computer Vision (ECCV'10)</source>, Crete, Greece, <year>2010</year>.</mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>[40] SiW Dataset, <article-title>Spoofing in the wild</article-title>, in: <source>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>, <year>2019</year>.</mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>[41] <article-title>Oulu-npu database</article-title>, https://sites.google.com/site/oulunpudatabase/welcome.</mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>[42] X. Yu, et al., <article-title>Domain adaptation for face anti-spoofing</article-title>, <source>IEEE Transactions on Neural Networks and Learning Systems</source> (<year>2023</year>).</mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>[43] <article-title>Hmax toolbox</article-title>, http://maxlab.neuro.georgetown.edu/hmax.html.</mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>[44] X. Tu, et al., <article-title>Enhance the motion cues for face anti-spoofing using cnn-lstm architecture</article-title>, <source>CoRR</source> (<year>2019</year>). ArXiv:1901.05635.</mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>[45] Z. Boulkenafet, J. Komulainen, L. Li, X. Feng, A. Hadid, <article-title>Oulu-npu: A mobile face presentation attack database with real-world variations</article-title>, in: <source>IEEE International Conference on Automatic Face and Gesture Recognition</source>, <year>2017</year>.</mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>[46] Z. Boulkenafet, et al., <article-title>A competition on generalized software-based face presentation attack detection in mobile scenarios</article-title>, in: <source>IEEE International Joint Conference on Biometrics (IJCB)</source>, <year>2017</year>, pp. <fpage>688</fpage>-<lpage>696</lpage>.</mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>[47] J. Li, Y. Wang, T. Tan, A. K. Jain, <article-title>Live face detection based on the analysis of fourier spectra</article-title>, in: <source>SPIE Conference</source>, <year>2004</year>, pp. <fpage>296</fpage>-<lpage>303</lpage>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>