<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hybrid face recognition solution for security</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Y Donon</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>Moskovskoe Shosse 34, Samara, Russia, 443086</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>417</fpage>
      <lpage>423</lpage>
      <abstract>
<p>This article introduces a design that aims, through the combination of open-source and closed-source technologies, to provide a simple-to-implement, low-cost and high-performing face recognition solution. The solution provides identification, emotion and facial-feature recognition, as well as dangerous-object spotting. This article exposes the concept of the solution, explains its importance on the market and provides details of a proof-of-concept prototype.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The market of face and image recognition technologies is booming and forecast to have a brilliant
future. Although it appears more and more in specialised magazines and is promoted by the giants of
information technology, many smaller actors are left behind, as they perceive the technology as
inaccessible or too expensive.</p>
      <p>
        Much research on those systems has been carried out in recent times, and over those years of
research, computer science has evolved beyond measure. But what has really changed in the last few
years are the cameras. What makes this field of research more prolific than ever today is that we all
have phones in our pockets whose sensors average 14 megapixels, and that we can buy full
HD webcams for less than a hundred dollars. Fifteen years ago, a digital camera's resolution would be a
fifth of what a webcam has now, at ten times the price. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
      </p>
      <p>
        Although face recognition attempts have been around for more than 50 years now, it still appears
as a new technology to most people. While we did have technologies able to perform those tasks back
in the sixties, pictures had to be taken according to very precise specifications. Attempts
multiplied; it became a trend in the nineties, and some artefacts from that time, such as the ORL Database
of Faces from Cambridge, are still in use today. In the early two-thousands, an
international contest was even held on the subject of face recognition. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] Yet with all of that, it
is only now and in the upcoming years that we will really perceive ground-breaking advances
in those technologies. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
      </p>
      <p>
        Nowadays, we have the tools and the sensors necessary for efficient recognition, and new
actors are emerging on this market every day. Those solutions are of course a trend on the security market:
they allow recognising not only people but also specific objects, and tracking them if necessary.
Industry has also started to use emotion recognition systems to better understand its customers. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] In
this paper, I will introduce a solution to exploit this new market and make it accessible to everyone
through a low-cost, high-performing face recognition solution for security: a design that is easy to
deploy without high computing capabilities. The goal of this paper is for everyone to understand the
stakes of this market, how accessible it now is and how it can be used in our everyday life.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Market and projections</title>
      <sec id="sec-2-1">
        <title>2.1. Hybrid</title>
        <p>
          As the market is still emerging even though the technology has been around for a long time, both
open-source and closed-source solutions exist. Open-source solutions are efficient at spotting faces and can
differentiate them, making authentication possible. Those solutions, however, fall short when it comes to
analysing a picture's details, such as emotions, facial features or objects. Closed-source image recognition
providers, on the other hand, are usually specialised and therefore extremely good at identifying
those details. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]
        </p>
        <p>The design presented in this paper takes advantage of this reality. Combining open-source
technologies with closed-source ones, and taking from both worlds what they are good at, makes it possible to run a
first analysis on a local computer, even one with low computing capabilities, and then, over the internet,
to use solutions provided by the majors of image recognition to analyse pictures in depth, beyond the
capabilities of open-source solutions.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Projection</title>
        <p>
          As mentioned, the face recognition market is still emerging. It is expected to be worth between 7.5 and
10 billion dollars by 2022, two to three times more than it was worth in 2016. The year before that, the main
client of those systems was US Homeland Security. By now, the use of such solutions for security has
already spread to several countries and is employed by actors such as the British police. Since its beginnings
this technology has been viewed as a major asset in security systems. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]
        </p>
        <p>
          Open-source solutions are forecast to improve their algorithms in 2D and thermal face
recognition, while it is believed that online services will keep the specialised market (complex
emotions, detailed facial features, 3D modelling, etc.), although open-source alternatives exist and
will also improve, but not with the same precision. [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] The main uses between 2017 and 2022 are
forecast to be emotion recognition, tracking and monitoring, access control and law enforcement.
[
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] The design suggested in this paper therefore fits the market's need for an affordable
solution that uses the full capacities offered by the different actors of face recognition. It is all the more
appropriate as this trend is forecast to remain stable over at least the upcoming four years.
        </p>
        <p>By making a multi-billion-dollar digital economy market profitable for SMEs (small and medium-sized
enterprises), which make up 98% of the economic environment, the presented design is a breakthrough for
face recognition, as it turns it into an accessible tool.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-1">
        <title>3.1. Functioning</title>
        <p>
          In this design, if the picture is of sufficient quality for an optimal analysis [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], the system first queries a
micro-database of a handful of the most recent faces, loaded into the computer's RAM (1). This reduces
the load on the disk's database and speeds up the program, as between two frames it is usually the
same faces that show up.
        </p>
        <p>If the face hasn't been recognised in the first database, a query is sent to the second one, which can
store up to a thousand faces, depending on the capacities of the computer (2). This database is
typically designed to store the faces of all the employees of a company and manage access control.</p>
        <p>If no match is found in the second database (the confidence of the comparison between the shown
face and the existing ones is too low), the system queries online services, which can analyse the picture,
confirm that the individual is unknown via an online database (3), and recognise their emotions and facial
features, as well as analyse their environment, detecting immediate threats such as weapons.</p>
        <p>Finally, the result of this detection is added to the RAM-loaded database to avoid detecting
and analysing the same face again (4), since each query to an online service of course has a cost.</p>
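<p>The four steps above can be sketched as follows. This Python sketch is illustrative only: the actual prototype is written in C# with OpenCvSharp, and the similarity function, threshold value, cache size and cloud client used here are hypothetical stand-ins.</p>

```python
from collections import OrderedDict

CONFIDENCE_THRESHOLD = 0.8   # assumed cut-off, set severely to favour false negatives
RAM_CACHE_SIZE = 8           # "a handful" of the most recent faces

ram_cache = OrderedDict()    # (1) micro-database kept in RAM
local_db = {}                # (2) disk database, up to ~1000 employee faces

def similarity(a, b):
    """Stand-in for a real face comparison (e.g. a Fisherfaces distance
    mapped to [0, 1]); here faces are just feature tuples."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def best_match(face, database):
    """Return (label, confidence) of the closest face in the database."""
    label, conf = None, 0.0
    for name, known in database.items():
        s = similarity(face, known)
        if s > conf:
            label, conf = name, s
    return label, conf

def cloud_analyse(face):
    """Placeholder for the paid online service (step 3)."""
    return "unknown"

def remember(label, face):
    """Step (4): cache the verdict in RAM so the same face is not billed twice."""
    ram_cache[label] = face
    ram_cache.move_to_end(label)
    while len(ram_cache) > RAM_CACHE_SIZE:
        ram_cache.popitem(last=False)   # evict the oldest face

def identify(face):
    for db in (ram_cache, local_db):          # steps (1) then (2)
        label, conf = best_match(face, db)
        if conf >= CONFIDENCE_THRESHOLD:
            remember(label, face)
            return label
    label = cloud_analyse(face)               # step (3): escalate online
    remember(label, face)                     # step (4): cache the result
    return label
```

<p>With a severe threshold, any borderline local match falls through to step (3), matching the design's preference for false negatives over false positives.</p>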
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Results</title>
        <p>
          The performance reached by the test program met all of our expectations: even if the
description sometimes suffers small imprecisions, it offers real-time identification on video at 5 frames per
second, spotting several objects simultaneously [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], more than enough for a security camera, even giving
an impression of relative fluidity in the capture. With facial features such as hair colour
or emotions identified, a very precise recognition differentiating identical twins without any hesitation, and
the ability to detect specific objects such as weapons, we can say that, from a technical point of
view, the performance test of the design is a complete success.
        </p>
        <p>Although some obvious progress is still to be made on hair colour detection, the calculated
features are generally close to reality and, most importantly, allow a human to identify a
person, even without the corresponding picture.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Technical specifications</title>
        <p>Software has been developed as a proof of concept of the design. It was written in C#, using an
OpenCV wrapper for this platform, OpenCvSharp, the Fisherfaces recognition algorithm and Microsoft's
Face and Vision APIs.</p>
        <p>
          The use of Fisherfaces over other methods was motivated by its search for discriminating
criteria, which is more reliable for excluding possible face matches, enhancing the security offered by the
solution. We widely favour a false negative, which leads to a check on the server that the person truly
isn't identified in our database, over a false positive, which would allow an intruder to get through the
system. [
          <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8-11</xref>
          ]
        </p>
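<p>This preference for false negatives amounts to a severe acceptance threshold on the match distance. The sketch below uses an invented distance scale and invented sample values, purely to illustrate the trade-off: tightening the threshold removes false positives at the cost of extra escalations to the server.</p>

```python
# Hypothetical labelled match distances (lower = more similar) for
# genuine employees and impostors; the scale is invented for illustration.
genuine = [12.0, 18.0, 25.0, 41.0, 47.0]
impostor = [44.0, 52.0, 60.0, 75.0]

def error_counts(threshold):
    """A face is accepted locally when its distance does not exceed the threshold."""
    false_negatives = sum(1 for d in genuine if d > threshold)        # escalated to the server
    false_positives = sum(1 for d in impostor if not d > threshold)   # intruder let through
    return false_negatives, false_positives

lenient = error_counts(55.0)   # (0, 2): two intruders would get in
strict = error_counts(40.0)    # (2, 0): two extra server checks, but no intruders
```

<p>A false negative only costs one extra (billed) online check, whereas a false positive defeats the access control entirely, which is why the strict setting is preferred.</p>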
        <p>The use of Microsoft's cognitive services was decided on because it fitted the technical needs of the
environment, offered good transparency, and because they send back details from the analysis of the image,
such as face coordinates, allowing further extrapolation. The other considered providers whose billing
systems were suited to this design were Google Cloud Platform and IBM Watson.</p>
        <p>
          The goal being to make the market as accessible as possible, it was important to reduce every
source of costs. The system has been tested on several Microsoft Windows platforms (Windows 7 and
later versions); it functions and manages real-time recognition on computers with 4 GB of RAM, a
dual-core processor and a 64 GB SSD. Lower configurations haven't been tested.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Costs</title>
        <p>
          The design described here is of course flexible, meaning any online service could be used as an alternative
to Microsoft's. The costs of such an access control system were calculated considering an
arbitrary company size of a hundred workers (a big company in the SME environment). Considering that
each of the employees enters the company's building twice a day every working day, that is 4000
controls a month. If those faces are all stored locally, they should be recognised and therefore not
generate any cost. If fifty unknown persons come into the building every day, that makes about a
thousand controls that are not handled by the local recognition system. Those numbers all fall under
the free call pool of a Microsoft Azure subscription, even considering that some queries of the
analysis must be done in several steps, generating as many calls. However, this represents a laboratory
reality, which always differs from the field. For the same number of people in a production
environment, the price of the online analysis has been calculated at about 10 to 15 dollars a month,
taking into account all the frequent errors of the software. [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]
        </p>
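<p>The call-volume arithmetic above can be reproduced as follows; the 20 working days per month and the calls-per-control factor are assumptions made for illustration, not Azure's actual billing terms.</p>

```python
EMPLOYEES = 100         # "big company" in the SME environment
ENTRIES_PER_DAY = 2     # each employee enters the building twice a day
WORKING_DAYS = 20       # assumed working days per month
UNKNOWN_PER_DAY = 50    # visitors absent from the local database

# Controls resolved locally (free) vs. escalated to the online service
local_controls = EMPLOYEES * ENTRIES_PER_DAY * WORKING_DAYS   # 4000 a month
online_controls = UNKNOWN_PER_DAY * WORKING_DAYS              # ~1000 a month

# Some analyses take several API calls (assumed factor)
CALLS_PER_CONTROL = 2
monthly_calls = online_controls * CALLS_PER_CONTROL           # 2000 billable calls
```

<p>Even with the multi-step factor, this laboratory volume stays within a free tier; the 10 to 15 dollars a month quoted for production accounts for the extra calls caused by software errors in the field.</p>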
        <p>
          As a one-time cost, it is necessary to get a small computer and a webcam to run the software.
Multiple devices have been assessed for that purpose, all in a price range of 250 to 350 dollars for the
computer and, for the camera, between 50 and 80 dollars, for a total cost of 300-430 dollars per door.
Counting the cost of the electricity to power the system, the total cost of the installation is estimated at
1500 dollars for a period of 5 years (total cost of ownership).
        </p>
      </sec>
      <sec id="sec-3-5">
        <title>4. Reliability</title>
        <p>
          The test program realised to prove this design is able to distinguish similar faces, such as twins, easily.
The confidence criterion has been configured severely, to make sure the local recognition system
wouldn't give any false positive. This confidence has been set according to previous research. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]
Tests have been repeated several times on thousands of frames without any mistake from the software.
        </p>
        <p>
          To assess the efficiency of the software, some further tests and comparisons have been conducted.
The computer was presented with pictures from five pairs of twins identified in the database and two
pairs of different pictures of the same person, and had to differentiate them. Humans, on the
other hand, were presented with a similar set of pictures and were simply asked, with two seconds
per picture, to tell which subjects were twins and which were not. [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]
        </p>
        <p>The precision of the software couldn't be assessed with accuracy because, so far, the program hasn't
been fooled successfully. Whenever the confidence of the local face analyser is too low, faces are
sent online for analysis. Since the program reached its final stage of development, the success rate has
been one hundred percent. The upcoming paragraph, assessing the reliability of such
systems, is therefore based on external information and other systems.</p>
        <p>
          Unveiling its latest iPhone, Apple claimed its face recognition system has a reliability of one in a
million, meaning that once in a million times two faces would be confused and recognised as being the
same; this is the closest possible comparison to the online services used. [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] To put this
number in perspective, we can take the PIN code of a credit card in Russia, 4 digits or 10,000 possibilities; fingerprints,
reputed to be unreliable once in 50,000 samples; or an average home key (6 tumblers, 7 heights), which
gives about 120,000 possibilities. Whether the reliability of the system is comparable to Apple's
claim about its own is debatable but, nevertheless, the laboratory tests are in favour of assessing
a very high index of reliability for comparable face recognition systems.
        </p>
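<p>These orders of magnitude are easy to verify; the figures below simply restate the comparison made in the text, including the key's 7 heights raised to 6 tumblers.</p>

```python
# "One in N" odds quoted in the text, restated as key-space sizes.
pin_code = 10 ** 4          # 4-digit PIN: 10,000 combinations
fingerprint = 50_000        # reputed to fail once in 50,000 samples
home_key = 7 ** 6           # 6 tumblers, 7 heights each: 117,649 (~120,000)
face_id_claim = 1_000_000   # Apple's stated confusion rate for Face ID

# Face ID's claimed odds are roughly 8.5x the key-space of an average home key
ratio = face_id_claim / home_key
```
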
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>As underlined in this presentation, face recognition is currently a fast-developing market; much has
already been done but much is left to build, and this design has a place in the development of the market.
Every major actor involved in security should now consider getting access to this kind
of technology, especially now that it is more accessible than ever and the market trend makes it very
profitable.</p>
      <p>
        In the future, the detection will be improved by assessing the liveness of faces, checking that we
are not being shown a picture of a face but a genuine face in front of the camera. This can be
done by different methods, but the most suitable for a system of these dimensions is the analysis of the
micro-behaviour of the eyes. [
        <xref ref-type="bibr" rid="ref16 ref17 ref18 ref19">16-19</xref>
        ] The identification system in RAM will also be compared to a
YOLO (You Only Look Once) system in order to assess their respective efficiency and
choose the most appropriate technology to keep a target acquired and analyse it only once.
      </p>
      <p>This kind of system could also be used on security cameras to obtain frames with a higher resolution
and filter them through an artificial intelligence able to understand which frames are relevant by
analysing the pictures. Selecting only relevant frames for storage makes it possible to
significantly increase the quality of the cameras' sensors without being confronted with the problem of
storage space saturation. Emotion recognition, and this design specifically, can be adapted to
numerous other uses, such as home automation, alarms, the search for wanted persons and many
others that haven't been mentioned in this article. It is up to everyone, on this new market, to develop
their own ideas.</p>
      <p>Of course, this paper wasn't about a purely technical breakthrough; however, I hope that the reader
now understands the face recognition market better, how to use it efficiently and make it profitable, in
particular with the design offered. This kind of design will make the difference between an emerging
market and a fully grown, accessible one, bringing a new technology to the consumer. In other
words, I want everyone to understand how face recognition systems are now within their
reach.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Digital Photography Review (Access mode: https://www.dpreview.com/articles/5778663183/ten-unique-cameras-from-the-dawn-of-consumer-digital-photography) (20.8.2013)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Philips</surname>
            <given-names>P J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Flynn</surname>
            <given-names>P J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scruggs</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowyer</surname>
            <given-names>K W</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffman</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marques</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Min</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Worek</surname>
            <given-names>W</given-names>
          </string-name>
          <year>2005</year>
          <article-title>Overview of the face recognition grand challenge</article-title>
          ,
          <source>Computer Vision and Pattern Recognition, IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source>
          DOI: 10.1109/CVPR.2005.268
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Zhao</surname>
            <given-names>W</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chellappa</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Philips</surname>
            <given-names>P J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenfeld</surname>
            <given-names>A</given-names>
          </string-name>
          2003
          <article-title>Face recognition: A literature survey</article-title>
          ACM
          <source>Computing Surveys</source>
          <volume>35</volume>
          <fpage>399</fpage>
          -
          <lpage>458</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Gates</surname>
            <given-names>K A</given-names>
          </string-name>
          <year>2011</year>
          <article-title>Our biometric future: facial recognition technology and the culture of surveillance</article-title>
          (New York University press) p
          <fpage>263</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Rybintsev</surname>
            <given-names>A V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konushin</surname>
            <given-names>V S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Konushin</surname>
            <given-names>A S</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Consecutive gender and age classification from facial images based on ranked local binary patterns</article-title>
          <source>Computer Optics</source>
          <volume>39</volume>
          (
          <issue>5</issue>
          )
          <fpage>762</fpage>
          -
          <lpage>769</lpage>
          DOI: 10.18287/0134-2452-2015-39-5-762-769
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Nikitin</surname>
            <given-names>M Yu</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Konushin</surname>
            <given-names>V S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Konushin</surname>
            <given-names>A S</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Neural network model for video-based face recognition with frames quality assessment</article-title>
          <source>Computer Optics</source>
          <volume>41</volume>
          (
          <issue>5</issue>
          )
          <fpage>732</fpage>
          -
          <lpage>742</lpage>
          DOI: 10.18287/2412-6179-2017-41-5-732-742
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Protsenko</surname>
            <given-names>V I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kazanskiy</surname>
            <given-names>N L</given-names>
          </string-name>
          and
          <string-name>
            <surname>Serafimovich</surname>
            <given-names>P G</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Real-time analysis of parameters of multiple object detection systems</article-title>
          <source>Computer Optics</source>
          <volume>39</volume>
          (
          <issue>4</issue>
          )
          <fpage>582</fpage>
          -
          <lpage>591</lpage>
          DOI: 10.18287/0134-2452-2015-39-4-582-591
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Jaiswal</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhadauria</surname>
            <given-names>S S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Jadon</surname>
            <given-names>R S</given-names>
          </string-name>
          <year>2011</year>
          <article-title>Comparison between face recognition algorithms Eigenfaces, Fisherfaces and Elastic Bunch Graph Matching</article-title>
          <source>Journal of Global Research in Computer Science</source>
          <volume>2</volume>
          (
          <issue>7</issue>
          )
          <fpage>187</fpage>
          -
          <lpage>193</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Yang</surname>
            <given-names>M-H</given-names>
          </string-name>
          <year>2002</year>
          <article-title>Kernel Eigenfaces vs. Kernel Fisherfaces: face recognition using kernel methods</article-title>
          <source>Proceedings of Fifth IEEE International Conference on Automatic Face and Gesture Recognition</source>
          <fpage>215</fpage>
          -
          <lpage>220</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Turk</surname>
            <given-names>M A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Pentland</surname>
            <given-names>A O</given-names>
          </string-name>
          <year>2002</year>
          <article-title>Face recognition using Eigenface (The Media Laboratory</article-title>
          MIT)
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Kalinovskii</surname>
            <given-names>I A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Spitsyn</surname>
            <given-names>V G</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Review and testing of frontal face detectors</article-title>
          <source>Computer Optics</source>
          <volume>40</volume>
          (
          <issue>1</issue>
          )
          <fpage>99</fpage>
          -
          <lpage>111</lpage>
          DOI: 10.18287/2412-6179-2016-40-1-99-111
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] Computer Vision API Version 2.0 documentation, Microsoft (Access mode: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home) (22.8.2018)
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Vizilter</surname>
            <given-names>Yu V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gorbatsevich</surname>
            <given-names>V S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vorotnikov</surname>
            <given-names>A V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kostromov</surname>
            <given-names>N A</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Real-time face identification via CNN and boosted hashing forest</article-title>
          <source>Computer Optics</source>
          <volume>41</volume>
          (
          <issue>2</issue>
          )
          <fpage>254</fpage>
          -
          <lpage>265</lpage>
          DOI: 10.18287/2412-6179-2017-41-2-254-265
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] How secure is Face ID? (Access mode: https://www.macworld.co.uk/feature/iphone/how-secure-is-face-id-3663992/) (01.11.2018)
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Caplova</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Obertova</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gibelli</surname>
            <given-names>D M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mazzarelli</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fracasso</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vanezis</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sforza</surname>
            <given-names>C</given-names>
          </string-name>
          and
          <string-name>
            <surname>Cattaneo</surname>
            <given-names>C</given-names>
          </string-name>
          <year>2017</year>
          <article-title>The reliability of facial recognition of deceased persons on photographs</article-title>
          <source>Journal of Forensic Sciences</source>
          <volume>62</volume>
          <fpage>1286</fpage>
          -
          <lpage>1291</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Pan</surname>
            <given-names>G</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            <given-names>Z</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sun</surname>
            <given-names>L</given-names>
          </string-name>
          <year>2008</year>
          <article-title>Liveness detection for face recognition, recent advances in face recognition IntechOpen 9</article-title>
          DOI: 10.5772/6397
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Blanz</surname>
            <given-names>V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Vetter</surname>
            <given-names>T</given-names>
          </string-name>
          <year>2003</year>
          <article-title>Face recognition based on fitting a 3D morphable model</article-title>
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>25</volume>
          (
          <issue>9</issue>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Kosinski</surname>
            <given-names>M</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wang</surname>
            <given-names>Y</given-names>
          </string-name>
          <year>2018</year>
          <article-title>Deep neural networks are more accurate than humans at detecting sexual orientation from facial images</article-title>
          <source>Journal of Personality and Social Psychology</source>
          <volume>114</volume>
          (
          <issue>2</issue>
          )
          <fpage>246</fpage>
          -
          <lpage>257</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Pan</surname>
            <given-names>G</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            <given-names>L</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wu</surname>
            <given-names>Z</given-names>
          </string-name>
          <year>2007</year>
          <article-title>Eyeblink-based anti-spoofing in face recognition from a generic webcamera</article-title>
          <source>IEEE 11th International Conference on Computer Vision</source>
          DOI: 10.1109/ICCV.2007.4409068
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>