<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Machine Learning Algorithm for Biometric Identification of a Face and Its Application: Survey</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Azamat Berikuly</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marat Nurtas</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dinara Kozhamzharova</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Ionosphere</institution>
          ,
          <addr-line>Gardening community IONOSPHERE 117, Almaty, 050020</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>International Information Technology University</institution>
          ,
          <addr-line>Manas St. 34/1, Almaty, 050040</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Modern security demands advanced biometric verification, with facial recognition leading due to its non-intrusive nature. This study delves into machine learning-powered algorithms for facial biometric identification. Our research presents a novel algorithm that enhances accuracy, efficiency, and security in face-based authentication. Extensive tests validate our system's prowess in addressing real-world security challenges. By the end of this analysis, readers will understand contemporary breakthroughs in face recognition, foundational machine learning strategies, and potential biometric applications. Our aim is to bolster the evolution and widespread adoption of biometric security systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Convolutional neural network</kwd>
        <kwd>facial recognition</kwd>
        <kwd>deep learning</kwd>
        <kwd>feature extraction</kwd>
        <kwd>image processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The need for enhanced biometric verification in the present security environment is greater than
ever. Facial recognition is at the forefront of these security efforts due to its non-intrusive nature.
Driven by the need to improve face-based authentication security, accuracy, and efficiency, this
research delves deeply into the nexus of machine learning and facial biometric identification.</p>
      <p>This study addresses current problems in biometric authentication. The demands of
practical security scenarios call for a more thorough examination of the effectiveness of current
technologies. To tackle these issues, we present a new algorithm that has been thoroughly
tested and shown to handle the complexities posed by a variety of security problems.</p>
      <p>The primary purpose of this study is to address the current challenges prevalent in biometric
verification methodologies. Acknowledging the pivotal role of facial recognition and the
limitations inherent in existing systems, our research endeavors to pave the way for a
transformative solution. Essentially, this work seeks to contribute a novel algorithm, grounded in
machine learning principles, designed to significantly enhance the accuracy, efficiency, and
security of face-based authentication.</p>
      <p>Main Objectives: i) Identify contemporary challenges in biometric verification: Conduct an
exhaustive examination of the existing hurdles and limitations within biometric verification,
particularly emphasizing the nuances of facial recognition; ii) Propose and develop a novel
machine learning algorithm: Introduce an innovative algorithm underpinned by machine
learning, with the explicit goal of overcoming identified challenges and elevating the efficacy of
face-based authentication; iii) Conduct a comprehensive literature review: Undertake an in-depth
exploration of the latest resources in the dynamic field of biometrics.</p>
      <p>By the culmination of this study, readers will have a comprehensive grasp of the most recent
developments in face recognition, the fundamental machine learning techniques that underpin
these developments, and the countless possible uses within the larger biometric field. In addition
to addressing immediate issues, the main goal is to make a substantial contribution to the
development and broad implementation of biometric security systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature review</title>
      <p>In recent years, facial recognition systems have improved thanks to deep learning methods. The
purpose of this evaluation of the literature is to compare and assess the efficacy of four distinct
facial recognition techniques.</p>
      <p>
        Deep Learning-Oriented Methods: Guo and Zhang (2019) offer an extensive overview of facial
recognition techniques based on deep learning. The authors discuss deep learning models such
as convolutional neural networks (CNNs), deep belief networks (DBNs), and recurrent neural
networks (RNNs). The authors describe how pose variation, lighting, and occlusion are only a
few of the obstacles that deep learning can overcome in face recognition tasks. They also go over
the benefits of deep learning-based techniques over conventional ones [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Hybrid Approach: Benradi et al. (2022) suggest a hybrid face recognition method that
combines feature extraction methods with a CNN. Using a CNN model that has already been
trained, the authors extract characteristics from the last layer. Together with manually created
features like Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG), they
integrate these retrieved characteristics. The authors demonstrate that the suggested hybrid
strategy performs better than the manually constructed feature-based technique and the
standalone CNN-based method. Additionally, the authors show that the suggested approach is resistant
to changes in posture, emotion, and lighting. The findings imply that the hybrid strategy that has
been suggested can work well for face recognition systems in the actual world [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
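      <p>The handcrafted Local Binary Pattern descriptor combined in this hybrid approach can be sketched in a few lines of NumPy. This is a minimal illustration of the basic 3x3 LBP operator only, not the authors' implementation; the helper names and the toy image are our own.</p>

```python
import numpy as np

def lbp_code(patch):
    """8-bit Local Binary Pattern code for a 3x3 patch: each neighbor that is
    >= the center pixel contributes one bit to the code."""
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left pixel.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= center)

def lbp_image(img):
    """Apply the LBP operator to every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = lbp_code(img[y - 1:y + 2, x - 1:x + 2])
    return out

# Toy 4x4 "image": the resulting LBP map has shape (2, 2).
img = np.array([[10, 20, 30, 40],
                [50, 60, 70, 80],
                [90, 100, 110, 120],
                [130, 140, 150, 160]], dtype=np.uint8)
codes = lbp_image(img)
print(codes.shape)  # (2, 2)
```

      <p>A histogram over these codes, computed per image region, is the kind of texture descriptor that can be concatenated with CNN features in such a hybrid scheme.</p>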
      <p>
        Nonsubsampled Shearlet Transform: Yallamandaiah and Purnachand (2021) provide a novel
face recognition method that combines the histogram of local feature descriptors (HLFD) and
nonsubsampled shearlet transform (NSST). High-frequency features are extracted using the
NSST, while texture information is captured using the HLFD. The suggested strategy beats out
other cutting-edge techniques on many benchmark datasets, as demonstrated by the authors.
Additionally, they show that the suggested approach is resistant to changes in posture, emotion,
and lighting. For face recognition tasks, the suggested approach shows promise, and more study
may examine its possibilities [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Multi-Feature Fusion: Zhu and Jiang (2020) provide a deep learning-based multi-feature
fusion method for face recognition. The authors employ a weighted averaging strategy to merge
characteristics that they extract from various levels of a pre-trained CNN model. On the ATR Jaffe
database, they demonstrate how the suggested strategy performs better than other cutting-edge
techniques. Additionally, the authors show that the suggested approach is resistant to changes in
posture, emotion, and lighting. The suggested technique is a useful strategy for face recognition
tasks that can enhance the functionality of face recognition systems in the real world [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Vijaya and Mahammad (2023) present a novel hybrid optimized region-based convolutional
neural network method for real-time face detection. One of the most important discoveries is the
quick feature selection method that makes face detection effective and real-time. By fusing
optimal convolutional neural networks with region-based techniques, the study advances the
field of face identification technology. This hybrid method makes a significant addition to the field
of multimedia tools and applications by improving accuracy while preserving real-time
processing capabilities [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>The approaches under evaluation shed light on the most advanced face recognition
algorithms available today. Deep learning-based techniques have demonstrated a notable
improvement over conventional face recognition techniques. Of the four strategies assessed, the
hybrid strategy put forth by Benradi et al. and the multi-feature fusion technique put forward by
Zhu and Jiang performed the best. Yallamandaiah and Purnachand's strategy, which combines
NSST and HLFD, is also a viable candidate for further study. A key consideration for real-world
face recognition systems is robustness to variations in pose, expression, and lighting, as
demonstrated by the authors of all examined publications. All things considered, the
methodologies under consideration open up new avenues for face recognition research.</p>
      <p>
        Jattain and Jailia (2023) highlight the significance of facial traits in their deep learning-based
method for autonomous human face identification and recognition. Using convolutional neural
networks (CNNs) for face recognition, tackling issues like lighting conditions and position
variations, and advancing the usefulness of deep learning in real-world situations are some of the
key breakthroughs. By emphasizing robust feature extraction for increased accuracy and
efficiency in face recognition systems, the study contributes to the growing body of research on
deep learning applications in biometric identification [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Conventional facial recognition algorithms, such as Fisherfaces and Eigenfaces, depended on
statistical methodologies and manually crafted features. These techniques, however, frequently
struggled with changes in pose, occlusions, and illumination [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Machine learning methods have
been widely adopted in facial recognition to overcome these limitations. By automatically
extracting discriminative features from facial images, these methods aim to improve
identification reliability and accuracy [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        CNNs are deep learning models that are particularly good at extracting hierarchical
representations and intricate patterns from unprocessed picture data. They are made up of
several layers, such as fully connected, pooling, and convolutional layers. In a variety of computer
vision applications, such as object identification, picture categorization, and face detection, CNNs
have demonstrated impressive performance [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        In the context of facial recognition, CNNs can automatically learn discriminative features from
raw face images, removing the need for manual feature engineering. The hierarchical structure
of CNNs enables them to capture both low-level features, such as edges and textures, and
high-level semantic characteristics, such as facial landmarks and expressions. Because of this,
CNNs are well suited to handling pose, occlusions, and changes in lighting, all of which are
frequent problems in facial recognition [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>The suggested machine learning method will be trained and assessed using the "Labeled Faces in
the Wild" (LFW) dataset. The LFW dataset offers a realistic depiction of real-world circumstances
and variations through its varied collection of face photos taken in unrestricted settings.
Preprocessing will be done on the dataset to guarantee quality and consistency.</p>
      <p>
        The preprocessing procedures are face detection, alignment, and normalization. To locate and
extract facial regions from the images, face detection methods such as the Viola-Jones algorithm
and the Histogram of Oriented Gradients (HOG) approach will be used. Then, to normalize the
position and alignment of the faces, facial alignment methods such as geometric transformations
or landmark detection will be used [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Lastly, to improve the comparability of the facial images
and reduce variance in illumination, normalization techniques such as mean normalization or
histogram equalization will be applied [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
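      <p>As a concrete illustration of the last of these steps, histogram equalization for an 8-bit grayscale image can be sketched in NumPy as follows. This is a minimal sketch: the low-contrast toy image is our own illustration, and the function assumes a non-constant input image.</p>

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image: intensities are
    remapped through the normalized cumulative histogram so the output
    spreads over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]           # first nonzero CDF value
    # Standard equalization lookup table; result is again uint8.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast toy "face crop": values clustered in 50..80.
rng = np.random.default_rng(0)
img = rng.integers(50, 81, size=(32, 32), dtype=np.uint8)
eq = equalize_histogram(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```

      <p>After equalization the narrow intensity band is stretched across the full dynamic range, which is what makes differently lit face crops more comparable.</p>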
      <p>
        The suggested method will use Convolutional Neural Networks (CNNs) to perform facial
recognition. CNNs have shown remarkable performance in a variety of computer vision
applications, such as object detection and image classification. The CNN architecture is designed
to balance the trade-off between computational efficiency and model complexity while extracting
discriminative features from face images [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        The CNN architecture will consist of multiple convolutional layers, pooling layers, and fully
connected layers [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Using convolutional filters, the convolutional layers will extract local
features from the input face images. To retain the most prominent responses while reducing
spatial dimensions, the pooling layers will downsample the feature maps [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The
extracted features will be integrated by the fully connected layers, which will then use the learned
representations to perform classification [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
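      <p>To make the roles of these layers concrete, a single convolution-plus-pooling forward pass can be sketched in NumPy. This is an illustrative toy, not the proposed architecture: the input and the fixed vertical-edge kernel are our own, whereas a trained CNN learns its filters from data.</p>

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (no padding, stride 1) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keeps the strongest response per window."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size      # trim to a multiple of the window
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# An 8x8 input with a vertical edge, and a kernel that responds to it.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # right half bright: a vertical edge
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])
fmap = np.maximum(conv2d(img, edge_kernel), 0)   # ReLU activation
pooled = max_pool(fmap)
print(fmap.shape, "->", pooled.shape)      # (6, 6) -> (3, 3)
```

      <p>The feature map responds only where the edge lies under the filter window, and pooling halves each spatial dimension while keeping that strongest response, which is exactly the downsampling role described above.</p>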
      <p>Training and validation sets will be created from the preprocessed LFW dataset. The CNN
model will be trained on face photos using the training set, and its performance will be monitored
and overfitting prevented with the validation set.</p>
      <p>
        During training, the CNN model will learn to identify distinguishing characteristics in face
images, using backpropagation and gradient descent to optimize its parameters. The loss
function employed during training will be either softmax loss or triplet loss, depending on the
exact formulation of the face recognition task. The choice will be guided by the capacity of the
loss function to promote intra-class compactness and inter-class separability in the learned
feature space [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
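      <p>As an illustration of the triplet loss mentioned above, the loss for a single (anchor, positive, negative) triple of embeddings can be computed directly. This is a minimal sketch: the 3-D embeddings and the margin value of 0.2 are illustrative choices, not values from this study.</p>

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embedding vectors: pull the anchor toward the positive
    (same identity) and push it at least `margin` farther, in squared
    Euclidean distance, from the negative (different identity)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# Toy 3-D "embeddings": anchor close to the positive, far from the negative.
anchor   = np.array([1.0, 0.0, 0.0])
positive = np.array([0.9, 0.1, 0.0])
negative = np.array([0.0, 1.0, 0.0])
print(triplet_loss(anchor, positive, negative))  # 0.0: separated by > margin
```

      <p>A zero loss means this triple already satisfies the margin, which is how the loss encourages intra-class compactness and inter-class separability at the same time.</p>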
      <p>Throughout the training process, the training set will be iterated over several times, with each
epoch involving forward propagation, backward propagation, and loss computation to update the
model's parameters. To maximize the training process and avoid overfitting, the learning rate,
batch size, and regularization strategies, such as dropout or weight decay will be carefully
adjusted.</p>
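      <p>The epoch structure described above (forward propagation, loss computation, backward propagation, parameter update) can be sketched on a toy softmax classifier in NumPy. The data, learning rate, and epoch count below are illustrative stand-ins, not the actual CNN training setup.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the real pipeline: a 2-class softmax classifier on
# random 8-D "feature vectors"; the label depends on the first feature.
X = rng.normal(size=(20, 8))
y = (X[:, 0] > 0).astype(int)
W = np.zeros((8, 2))                   # model parameters
lr = 0.5                               # learning rate

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)     # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

losses = []
for epoch in range(50):
    probs = softmax(X @ W)                                # forward propagation
    loss = -np.mean(np.log(probs[np.arange(len(y)), y]))  # cross-entropy loss
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1                       # backward: dL/dlogits
    W -= lr * (X.T @ grad) / len(y)                       # gradient-descent update
    losses.append(loss)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

      <p>The loss falls across epochs exactly as the training procedure above requires; in the real system the same loop runs over a deep CNN with mini-batches, dropout, and weight decay.</p>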
      <p>To determine the proposed algorithm's efficacy in biometric identification, a number of
measures will be used to evaluate its performance. Accuracy, precision, recall, F1 score, and
receiver operating characteristic (ROC) curve analysis are the assessment measures that will be
used. Accuracy is the percentage of correctly identified faces, whereas precision is the percentage
of correctly recognized positive cases out of all positively identified cases. Recall, also referred to
as sensitivity, expresses the percentage of true positive cases that were accurately recognized.
The F1 score is a balanced indicator of the algorithm's performance, calculated as the harmonic
mean of precision and recall. Plotting the true positive rate against the false positive rate allows
the ROC curve analysis to evaluate the algorithm's performance over a range of operating
points.</p>
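      <p>The four threshold-based metrics can be computed from binary verification decisions with a few lines of plain Python. The decision vectors below are hypothetical, chosen only to exercise the formulas; they are not results from this study.</p>

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary match decisions
    (1 = claimed identity accepted, 0 = rejected)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical decisions for 10 verification attempts.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
m = classification_metrics(y_true, y_pred)
print({k: round(v, 3) for k, v in m.items()})
```

      <p>Sweeping the decision threshold and recording the true and false positive rates at each setting yields the points of the ROC curve described above.</p>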
    </sec>
    <sec id="sec-4">
      <title>4. Results and discussion</title>
      <p>A variety of performance assessment measures were used to assess the machine learning
algorithm that was created for the Convolutional Neural Network (CNN) method to biometric face
identification (see Figure 1).</p>
      <p>These measures consist of F1 score, recall, accuracy, and precision. The created method
proved successful in biometric face identification, as evidenced by its 82.43% accuracy on the
assessment dataset (see Figure 2). After computation, the precision, recall, and F1 score came out
to be 83.28%, 82.43%, and 82.19%, respectively.</p>
      <p>A comparison with existing algorithms showed that the created method beat several of the
benchmarks in terms of accuracy and other assessment criteria. This suggests that the proposed
CNN-based technique can handle changes in pose, illumination, facial expressions, and
occlusions, leading to better biometric recognition performance.</p>
      <p>Although the results are encouraging, the created method has several limitations. One
drawback is the use of the LFW dataset, which, while varied, may not accurately reflect all
potential real-world circumstances. Variations in image quality and resolution may also affect
the algorithm's performance.</p>
      <p>Future research may concentrate on mitigating these constraints and improving the
algorithm's efficiency. This might entail gathering and adding datasets that encompass a greater
variety of situations and demographics. Additionally, investigating more advanced methods such
as ensemble learning or attention mechanisms may improve the algorithm's resilience and
accuracy. It is also crucial to take into account potential biases and ethical ramifications related
to facial recognition algorithms. Future studies should focus on resolving these issues and
guaranteeing privacy and fairness in the use of these technologies.</p>
      <p>In conclusion, the machine learning algorithm that was created for the CNN method of
biometric face identification showed encouraging performance and accuracy results. The
algorithm's competitive performance was demonstrated by comparing its efficacy with current
state-of-the-art methods. To overcome these obstacles and expand the algorithm's potential,
more study and advancements are required. The created algorithm might have a number of uses
in surveillance, access control, and security systems, advancing the field of face recognition
technology.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Acknowledgements</title>
      <p>
        We acknowledge the significant contribution of the founders and maintainers of the 'Labeled
Faces in the Wild' (LFW) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] dataset to our study. The availability of this varied and carefully curated
dataset was essential to the success of our research and adds to the quality and reliability of our
findings.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>This study developed and assessed the Convolutional Neural Network (CNN) technique for
biometric recognition. The program showed encouraging performance and accuracy results,
highlighting its potential for a range of uses in forensics, identity verification, access control,
security systems, and human-computer interaction.</p>
      <p>The algorithm's ability to use CNNs to manage variations in illumination, pose, facial
expressions, and occlusions made robust identification possible even in difficult conditions. Its
competitiveness against current state-of-the-art methods was assessed through an evaluation of
its performance on the popular LFW dataset.</p>
      <p>It is critical, however, to recognize the algorithm's limitations. Though it produced strong
results, issues including differences in image quality, resolution, and other biases need to be
addressed through further study. To improve the algorithm's accuracy and resilience, future
work might concentrate on gathering and integrating additional datasets that represent a larger
range of conditions and demographics. It could also investigate more sophisticated methods
such as ensemble learning and attention mechanisms.</p>
      <p>In conclusion, there is a lot of promise for strengthening security protocols, expediting
authentication procedures, and boosting user experiences using the machine learning algorithm
created for biometric face identification with the CNN technique. Still, further research and
development is required to solve the shortcomings, guarantee equity, and safeguard privacy
while using face recognition technology.</p>
    </sec>
    <sec id="sec-7">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>A survey on deep learning based face recognition</article-title>
          .
          <source>Computer Vision</source>
          and Image Understanding,
          <volume>189</volume>
          ,
          <fpage>102805</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Benradi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chater</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Lasfar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>A hybrid approach for face recognition using a convolutional neural network combined with feature extraction techniques</article-title>
          .
          <source>IAES International Journal of Artificial Intelligence (IJ-AI)</source>
          ,
          <volume>12</volume>
          (
          <issue>2</issue>
          ),
          <fpage>627</fpage>
          -
          <lpage>640</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Yallamandaiah</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Purnachand</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Convolutional neural network-based face recognition using nonsubsampled shearlet transform and histogram of local feature descriptors</article-title>
          .
          <source>IAES International Journal of Artificial Intelligence (IJ-AI)</source>
          ,
          <volume>10</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1079</fpage>
          -
          <lpage>1090</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Optimization of face recognition algorithm based on deep learning multi feature fusion driven by big data</article-title>
          .
          <source>Image and Vision Computing</source>
          ,
          <volume>104</volume>
          ,
          <fpage>104023</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Vijaya</surname>
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mahammad</surname>
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>A fast feature selection technique for real-time face detection using hybrid optimized region based convolutional neural network</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          ,
          <volume>82</volume>
          (
          <issue>9</issue>
          ),
          <fpage>13719</fpage>
          -
          <lpage>13732</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Jattain</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jailia</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Automatic Human Face Detection and Recognition Based On Facial Features Using Deep Learning Approach</article-title>
          .
          <source>International Journal on Recent and Innovation Trends in Computing and Communication</source>
          ,
          <volume>11</volume>
          (
          <issue>2s</issue>
          ),
          <fpage>268</fpage>
          -
          <lpage>277</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Learning deep architectures for AI. Foundations and Trends in Machine Learning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Saadaldeen</surname>
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Emrullah</surname>
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Deepfake detection using rationale-augmented convolutional neural network</article-title>
          .
          <source>Applied Nanoscience</source>
          ,
          <volume>44</volume>
          (
          <issue>3</issue>
          ),
          <fpage>1485</fpage>
          -
          <lpage>1493</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Wen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Qiao</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A discriminative feature learning approach for deep face recognition</article-title>
          .
          <source>European Conference on Computer Vision</source>
          (ECCV).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Parkhi</surname>
            ,
            <given-names>O. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vedaldi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zisserman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Deep face recognition</article-title>
          .
          <source>In British Machine Vision Conference (BMVC)</source>
          ,
          <volume>41</volume>
          .
          <fpage>1</fpage>
          -
          <lpage>41</lpage>
          .
          <fpage>12</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Deng</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zafeiriou</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Marginal loss for deep face recognition</article-title>
          .
          <source>In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <fpage>5110</fpage>
          -
          <lpage>5119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Hassner</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Viewing real-world faces in 3D</article-title>
          .
          <source>In International Conference on Computer Vision</source>
          (ICCV).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Der Maaten</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Weinberger</surname>
            ,
            <given-names>K. Q.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Densely connected convolutional networks</article-title>
          .
          <source>In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <fpage>4700</fpage>
          -
          <lpage>4708</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lei</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>S. Z.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Convolutional channel features</article-title>
          .
          <source>In IEEE International Conference on Computer Vision (ICCV)</source>
          ,
          <fpage>82</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Schroff</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalenichenko</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Philbin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Facenet: A unified embedding for face recognition and clustering</article-title>
          .
          <source>In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <fpage>815</fpage>
          -
          <lpage>823</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Theoretical understanding of convolutional neural network: Concepts, architectures, applications, future directions</article-title>
          .
          <source>Computation</source>
          ,
          <volume>11</volume>
          (
          <issue>3</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Qiao</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Joint face detection and alignment using multitask cascaded convolutional networks</article-title>
          .
          <source>IEEE Signal Processing Letters.</source>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>G. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramesh</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Learned-Miller</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Labeled faces in the wild: A database for studying face recognition in unconstrained environments</article-title>
          .
          <source>In ECCV Workshop on Faces in Real-life Images.</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>