<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Hybrid face recognition solution for security</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Y</forename><surname>Donon</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Samara National Research University</orgName>
								<address>
									<addrLine>Moskovskoe Shosse 34</addrLine>
									<postCode>443086</postCode>
									<settlement>Samara</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Hybrid face recognition solution for security</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A7D69BE7A22E46D202B17270D1EB816E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This article introduces a design that aims, through the combination of open-source and closed-source technologies, to provide a simple-to-implement, low-cost and high-performing face recognition solution. The solution provides identification, emotion and facial feature recognition, as well as dangerous object spotting. This article exposes the concept of the solution, explains its importance on the market and provides details of a proof-of-concept prototype.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The market of face and image recognition technologies is booming and is forecast to have a brilliant future. Although the technology appears more and more in specialised magazines and is promoted by the giants of information technology, many smaller actors are left behind because they perceive it as inaccessible or too expensive.</p><p>Much research on those systems has been conducted in recent times, and over those years of research computer science has evolved beyond measure. But what has really changed in the last few years are the cameras. What makes this field of research more prolific than ever today is that we all have phones in our pockets whose sensors average 14 megapixels, and that we can buy full HD webcams for less than a hundred dollars. Fifteen years ago, a digital camera's resolution would be a fifth of what a webcam has now, at ten times the price. <ref type="bibr" target="#b0">[1]</ref> Although face recognition attempts have been around for more than 50 years now, it still appears as a new technology to most people. While we did have technologies able to perform those tasks back in the sixties, pictures had to be taken according to very precise specifications. Attempts multiplied; it became a trend in the nineties, and some artefacts from that time, such as the ORL Database of Faces from Cambridge, are still in use today. In the early two-thousands, an international contest was even held on the subject of face recognition. <ref type="bibr" target="#b1">[2]</ref> Yet with all of that, it is only now and in the upcoming years that we really can and will perceive ground-breaking advances in those technologies. 
<ref type="bibr" target="#b2">[3]</ref> Nowadays, we have the tools and the sensors necessary for efficient recognition, and new actors on this market are emerging every day. Those solutions represent a trend on the security market, of course; they allow recognising not only people, but also specific objects, and tracking them if necessary. Industry is also starting to use emotion recognition systems to better understand its customers. <ref type="bibr" target="#b3">[4]</ref> In this paper, I will introduce a solution to exploit this new market and make it accessible to everyone through a low-cost, high-performing face recognition solution for security: a design that is easy to deploy without high computing capabilities. The goal of this paper is for everyone to understand the stakes of this market, how accessible it is now and how it can be used in our everyday life.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Market and projections</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Hybrid</head><p>As the market is still emerging but has been around for a long time, both open-source and closed-source solutions exist. Open-source solutions are efficient at spotting faces and can differentiate them, making authentication possible. Those solutions, however, fall short when it comes to analysing a picture's details, such as emotions, facial features or objects. Closed-source image recognition providers, on the other hand, are usually specialised and therefore extremely good at identifying those details. <ref type="bibr" target="#b3">[4]</ref> The design presented in this paper tries to take advantage of this reality. Combining open-source and closed-source technologies, taking from both worlds what they are good at, allows a first analysis on a local computer, even one with low computation capabilities, and then, over the internet, using the solutions provided by the majors of image recognition to analyse pictures in depth, beyond the capacities of open-source solutions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Projection</head><p>As mentioned, the face recognition market is still emerging. It is expected to be worth between 7.5 and 10 billion dollars by 2022, two to three times more than it was worth in 2016. The year before that, the main client of those systems was US Homeland Security. By now the use of such solutions for security has already spread to several countries, with actors such as the British police among the users. Since its beginning this technology has been viewed as a major asset in security systems. <ref type="bibr" target="#b3">[4]</ref> Open-source solutions are forecast to improve their algorithms in 2D and thermal face recognition, while it is believed that online services will keep the specialised market (complex emotions, facial feature details, 3D modelling, etc.), although open-source alternatives exist and will also improve, but not with the same precision rate. <ref type="bibr" target="#b4">[5]</ref> The main uses between 2017 and 2022 are forecast to be emotion recognition, tracking and monitoring, access control and law enforcement. <ref type="bibr" target="#b3">[4]</ref> The design suggested in this paper therefore fits the market's need for an affordable solution, using the full capacities offered by the different actors in face recognition. It is also timely, as this trend is forecast to be stable over at least the upcoming four years.</p><p>By making a multi-billion-dollar digital economy market profitable for SMEs (small and medium-sized enterprises), which make up 98% of the economic environment, the design presented here is a breakthrough for face recognition, as it turns it into an accessible tool.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Functioning</head><p>In this design, if the picture is of sufficient quality for an optimal analysis <ref type="bibr" target="#b5">[6]</ref>, the system first queries a micro database of a handful of the most recent faces, loaded in the computer's RAM <ref type="bibr" target="#b0">(1)</ref>. This reduces the load on the disk's database and accelerates the program, as between two frames it is usually the same faces that show up.</p><p>If the face hasn't been recognised in the first database, a query is sent to the second one, which can store up to a thousand faces, depending on the capacities of the computer (2). This database is typically designed to store the faces of all the employees of a company and manage access control.</p><p>If no match is found in the second database (the confidence of the comparison between the shown face and the existing ones is too low), the system queries online services, which can analyse the picture, confirm that the individual is unknown via an online database (3), and differentiate their emotions and facial features, as well as analyse their environment, detecting immediate threats such as weapons.</p><p>Finally, the result of this detection is added to the RAM-loaded database to avoid detecting and analysing the same face again <ref type="bibr" target="#b3">(4)</ref>, each query to an online service having, of course, a cost.</p></div>
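The tiered lookup described above can be sketched as follows. This is a minimal Python illustration of the escalation logic, not the prototype's actual C# implementation; the similarity function, data shapes and threshold value are all hypothetical stand-ins for a real face-embedding comparison:

```python
from dataclasses import dataclass

@dataclass
class Match:
    name: str
    confidence: float

def sim(a, b):
    """Toy similarity stand-in for a real face-embedding distance."""
    return 1.0 - min(1.0, abs(a - b))

def best_match(face, db):
    """Return the highest-confidence match for `face` in `db`, or None."""
    scores = [Match(name, sim(face, vec)) for name, vec in db.items()]
    return max(scores, key=lambda m: m.confidence, default=None)

def query_online_service(face):
    """Placeholder for the paid cloud analysis call (step 3)."""
    return Match("unknown", 0.0)

def identify(face, ram_cache, disk_db, threshold=0.8):
    # (1) Check the small RAM-resident cache of recently seen faces first.
    m = best_match(face, ram_cache)
    if m and m.confidence >= threshold:
        return m
    # (2) Fall back to the local on-disk database (e.g. all employees).
    m = best_match(face, disk_db)
    if m and m.confidence >= threshold:
        ram_cache[m.name] = disk_db[m.name]  # promote for fast re-detection
        return m
    # (3) Confidence too low locally: escalate to the online service.
    # In the real design the verdict would also be cached (step 4),
    # since each online query has a cost.
    return query_online_service(face)
```

The point of the RAM tier is that consecutive frames usually contain the same faces, so most lookups never touch the disk database, let alone the billed online service.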
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Results</head><p>The performance reached by the test program met all of our expectations. Although the description sometimes suffers small imprecisions, it offers real-time identification on video at 5 frames per second, spotting several objects simultaneously <ref type="bibr" target="#b6">[7]</ref>, more than enough for a security camera, and even gives an impression of relative fluidity in the capture. With identified facial features such as hair colour or emotions, a very precise recognition differentiating identical twins without any hesitation, and the ability to detect specific objects such as weapons, we can say that, from a technical point of view, the performance test of the design is a complete success. Although some obvious progress remains to be made on hair colour detection, the features calculated are generally close to reality and, most importantly, allow a human identification of a person, even without the corresponding picture.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Technical specifications</head><p>A piece of software was developed as a proof of concept of the design. It was developed in C#, using an OpenCV wrapper for that platform, OpenCvSharp, the Fisherfaces recognition algorithm and Microsoft's Face and Vision APIs.</p><p>The use of Fisherfaces was motivated over other methods by its search for discrimination criteria, which is more reliable for excluding possible face matches, enhancing the security offered by the solution. We strongly favour a false negative, which leads to a check on the server that the person truly isn't identified in our database, over a false positive, which would allow an intruder to get through the system. <ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref> The use of Microsoft's cognitive services was decided on because it fitted the technical needs of the environment, offered good transparency, and because they send back details from the analysis of the image, such as face coordinates, allowing further extrapolation. The other providers considered whose billing systems were adapted to this design were Google Cloud Platform and IBM Watson.</p><p>The goal being to make the market as accessible as possible, it was important to reduce every source of cost. The system has been tested on several Microsoft Windows platforms (Windows 7 and later); it functions and manages real-time recognition on computers with 4 GB of RAM, a dual-core processor and a 64 GB SSD; lesser configurations haven't been tested.</p></div>
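The preference for false negatives over false positives can be expressed as a strict rejection rule on the recognizer's distance score. The sketch below mimics the discriminative, nearest-class spirit of Fisherfaces in plain Python with toy 2-D features; the threshold value and data are hypothetical, not the prototype's configuration:

```python
def nearest_class(embedding, class_means, reject_above=0.35):
    """Nearest-mean classification with a strict rejection threshold.

    A distance above `reject_above` yields a rejection (a possible
    false negative, escalated to the online check), which is preferred
    to risking a false positive that would admit an intruder.
    """
    best_name, best_dist = None, float("inf")
    for name, mean in class_means.items():
        # Euclidean distance in the (toy) discriminant feature space
        dist = sum((e - m) ** 2 for e, m in zip(embedding, mean)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist > reject_above:
        return None  # reject locally: defer to the online service
    return best_name
```

Tightening `reject_above` only ever converts acceptances into deferrals, so the worst local failure mode is an extra (billed) online check rather than a security breach.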
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Costs</head><p>The design described here is of course flexible, meaning any online service could be used as an alternative to Microsoft's. The costs of such an access control system were calculated for an arbitrary company size of a hundred workers (a big company in the SME environment). Considering that each employee enters the company's building twice a day every working day, that is 4000 controls a month. If those faces are all stored locally, they should be recognised and therefore not generate any cost. If fifty unknown persons come into the building every day, that makes about a thousand controls a month that are not handled by the local recognition system. Those numbers all fall within the free-call pool of a Microsoft Azure subscription, even considering that some analysis queries must be done in several steps, generating as many calls. However, this represents a laboratory reality, which always differs from the field. For the same number of people in a production environment, the price of the online analysis has been calculated to be about 10 to 15 dollars a month, taking into account all the frequent errors of the software. <ref type="bibr" target="#b11">[12]</ref> As a one-time cost, it is necessary to get a small computer and a webcam to run the software. Multiple devices have been assessed for that purpose, all in a price range of 250 to 350 dollars for the computer and 50 to 80 dollars for the camera, for a total cost of 300-430 dollars per door. Counting the cost of electricity to power the system, the total cost of the installation is estimated at 1500 dollars over a period of 5 years (total cost of ownership).</p></div>
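The monthly call volumes quoted above follow from simple arithmetic (assuming 20 working days a month, which is what the figures in the text imply):

```python
employees = 100            # arbitrary SME size from the scenario
entries_per_day = 2        # each employee enters the building twice
working_days = 20          # assumed working days per month

# Controls handled by the local databases (no online cost)
local_controls = employees * entries_per_day * working_days    # 4000/month

# Controls escalated to the online service
unknown_visitors_per_day = 50
online_controls = unknown_visitors_per_day * working_days      # 1000/month
```

Only the escalated controls generate billable API calls, which is why the scenario stays within a free-call pool in the laboratory setting.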
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Reliability</head><p>The test program produced to prove this design is able to distinguish similar faces, such as twins, easily. The confidence criterion has been configured severely, to make sure the local recognition system wouldn't give any false positive. This confidence has been set according to previous research. <ref type="bibr" target="#b12">[13]</ref> Tests have been repeated several times on thousands of frames without any mistake from the software.</p><p>To assess the efficiency of the software, some further tests and comparisons were conducted. The computer was presented with pictures from five pairs of twins identified in the database and two pairs of pictures of the same person, and had to differentiate them. Humans, on the other hand, were presented with a similar set of pictures and were simply asked, with two seconds for each picture, to tell which subjects were twins and which were not. <ref type="bibr" target="#b13">[14]</ref> The precision of the software couldn't be assessed accurately as, so far, the program hasn't been fooled successfully. Whenever the confidence of the local face analyser is too low, faces are sent online for analysis. Since the program reached its final stage of development, the success rate has been a hundred percent. Therefore, the upcoming paragraph, assessing the reliability of such systems, is based on external information and other systems. Unveiling its latest iPhone, Apple claimed its face recognition system has a reliability of one in a million, meaning that once in a million times two faces would be confused and recognised as being the same; this is the closest possible comparison to the online services used. 
<ref type="bibr" target="#b13">[14]</ref> To put this number in context, we can take the PIN code of a credit card in Russia, 4 digits or 10,000 possibilities; fingerprints, reputed to be unreliable once in 50,000 samples; or an average home key (6 tumblers, 7 heights), which makes about 120,000 possibilities. Whether the reliability of the system is comparable to Apple's claim about its own is debatable but, nevertheless, the laboratory tests are in favour of assessing a very high index of reliability for comparable face recognition systems.</p></div>
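The key-space sizes used in this comparison can be checked directly; the figures below are the ones quoted in the text:

```python
pin_codes = 10 ** 4            # 4-digit card PIN: 10,000 combinations
fingerprint_error = 50_000     # reputed error rate: 1 in 50,000 samples
home_key = 7 ** 6              # 6 tumblers x 7 heights: 117,649 (~120,000)
face_id_claim = 10 ** 6        # Apple's claimed 1-in-a-million confusion rate
```

On these numbers, the claimed face recognition confusion rate compares favourably with every one of the everyday security mechanisms listed.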
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>As underlined in this presentation, face recognition is a fast-developing market at the moment; much is already done but much is left to be built, and this design has a place in the development of the market. Every major actor involved in security should now consider getting access to this kind of technology, especially now that it is more accessible than ever and as the market trend makes it very profitable.</p><p>In the future, the detection will be improved by assessing the liveness of faces: checking that we are not being shown a picture of a face but that there is a genuine face in front of the camera. This can be done by different methods, but the one most adapted to a system of these dimensions is the analysis of the micro-behaviour of the eyes. <ref type="bibr" target="#b15">[16]</ref><ref type="bibr" target="#b16">[17]</ref><ref type="bibr" target="#b17">[18]</ref><ref type="bibr" target="#b18">[19]</ref> The identification system in RAM will also be compared in efficiency to a YOLO (You Only Look Once) system, in order to assess their respective efficiency and choose the most appropriate technology for keeping a target acquired and analysing it only once. This kind of system could also be used on security cameras to get frames with a higher resolution and filter them through an artificial intelligence able to understand, by analysing the pictures, which frames are relevant. Selecting only relevant frames for storage makes it possible to significantly increase the quality of the cameras' sensors without being confronted with the problem of storage space saturation. Emotion recognition, and this design specifically, can be adapted to numerous other uses such as home automation, alarms, the search for wanted persons and many others that haven't been mentioned in this article. It is up to everyone, on this new market, to develop their own ideas.</p><p>Of course, this paper wasn't about a purely technical breakthrough; however, I hope that the reader now understands the face recognition market better, how to use it efficiently and how to make it profitable, in particular with the design offered. This kind of design will make the difference between an emerging market and a fully grown, accessible one, bringing a new technology to the consumer. In other words, I want everyone to understand that face recognition systems are now within their reach.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Design's business process.</figDesc><graphic coords="3,84.40,323.00,200.30,150.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. This image illustrates the analysis of a person caught on camera. Some of the information, such as the approximation of the age, is not correct; however, the capture allows a clear identification (the program selects the frame having the "best quality of face").</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Attests that, although the quality of an image may be poor, the program is able to extrapolate correct information.</figDesc><graphic coords="3,317.70,323.55,200.35,150.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. Twins differentiation.</figDesc><graphic coords="5,191.55,139.15,190.40,107.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 .</head><label>5</label><figDesc>Figure 5. The common test sample between the computer and humans for twins' assessment.</figDesc><graphic coords="5,197.30,265.90,190.85,198.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Features analysis for figure2 and 3.</figDesc><table><row><cell>Feature</cell><cell>Values figure 2</cell><cell>Precision</cell><cell>Values figure 3</cell><cell>Precision</cell></row><row><cell></cell><cell></cell><cell>(appreciation if data</cell><cell></cell><cell>(appreciation if data</cell></row><row><cell></cell><cell></cell><cell>unavailable) figure</cell><cell></cell><cell>unavailable) figure</cell></row><row><cell></cell><cell></cell><cell>2</cell><cell></cell><cell>3</cell></row><row><cell>Facial</cell><cell>13.8</cell><cell>72%</cell><cell>20</cell><cell>97%</cell></row><row><cell>Smile</cell><cell>0.0%</cell><cell>Correct</cell><cell>0.0%</cell><cell>Correct</cell></row><row><cell>Emotion</cell><cell>Neutral 99.7%</cell><cell>Correct</cell><cell>Neutral 98.7%</cell><cell>Correct</cell></row><row><cell>Glasses</cell><cell>No glasses</cell><cell>Correct</cell><cell>Reading glasses</cell><cell>Correct</cell></row><row><cell>Hair</cell><cell>Bald 33%</cell><cell>15%</cell><cell>Bald 33%</cell><cell>15%</cell></row><row><cell>Hair</cell><cell>Black 99%</cell><cell>85%</cell><cell>Black 100%</cell><cell>Error</cell></row><row><cell>Hair</cell><cell>Blond 84%</cell><cell>10%</cell><cell>Blond 53%</cell><cell>Correct</cell></row><row><cell>Hair</cell><cell>-</cell><cell>-</cell><cell>Brown 42%</cell><cell>Correct</cell></row><row><cell>Hair</cell><cell>Other 70%</cell><cell>-</cell><cell>Other 38%</cell><cell>-</cell></row><row><cell>Description</cell><cell>A woman standing</cell><cell>Correct</cell><cell>A woman in a blue</cell><cell>Correct</cell></row><row><cell></cell><cell>in a room</cell><cell></cell><cell>shirt</cell><cell></cell></row><row><cell>Object</cell><cell>Knife</cell><cell>Correct</cell><cell>gun</cell><cell>Correct</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Software control.</figDesc><table><row><cell>Feature</cell><cell>Value</cell></row><row><cell>Couple tested</cell><cell>7</cell></row><row><cell>Total frames</cell><cell>2500+</cell></row><row><cell>Accuracy</cell><cell>100%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 .</head><label>3</label><figDesc>Human control.</figDesc><table><row><cell>Feature</cell><cell>Value</cell></row><row><cell>Human subjects</cell><cell>18</cell></row><row><cell>Couple tested</cell><cell>6</cell></row><row><cell>Total frames</cell><cell>72</cell></row><row><cell>Accuracy</cell><cell>61%</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">IV International Conference on "Information Technology and Nanotechnology" (ITNT-2018)</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="https://www.dpreview.com/articles/5778663183/ten-unique-cameras-from-the-dawn-of-consumer-digital-photography" />
		<title level="m">Digital Photography review</title>
				<imprint>
			<date type="published" when="2013-08-20">20.8.2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Overview of the face recognition grand challenge</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Philips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P J</forename><surname>Flynn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Scruggs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">W</forename><surname>Bowyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hoffman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Marques</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Min</forename><forename type="middle">J</forename><surname>Worek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename></persName>
		</author>
		<idno type="DOI">10.1109/CVPR.2005.268</idno>
	</analytic>
	<monogr>
		<title level="m">Computer Vision and Pattern Recognition IEEE Computer Society Conference on Computer Vision and Pattern Recognition</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Face recognition: A literature survey</title>
		<author>
			<persName><forename type="first">W</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chellappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Philips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rosenfeld</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="399" to="458" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">K A</forename><surname>Gates</surname></persName>
		</author>
		<title level="m">Our biometric future: facial recognition technology and the culture of surveillance</title>
				<imprint>
			<publisher>New York University press</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page">263</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Consecutive gender and age classification from facial images based on ranked local binary patterns</title>
		<author>
			<persName><forename type="first">V</forename><surname>Rybintsev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Konushin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A S</forename><surname>Konushin</surname></persName>
		</author>
		<idno type="DOI">10.18287/0134-2452-2015-39-5-762-769</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="762" to="769" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Neural network model for video-based face recognition with frames quality assessment</title>
		<author>
			<persName><surname>Nikitin M Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Konushin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A S</forename><surname>Konushin</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2017-41-5-732-742</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="732" to="742" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Real-time analysis of parameters of multiple object detection systems</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Protsenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kazanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P G</forename><surname>Serafimovich</surname></persName>
		</author>
		<idno type="DOI">10.18287/0134-2452-2015-39-4-582-591</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="582" to="591" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Comparison between face recognition algorithm Eigenfaces, Fisherfaces and Elastic Bunch Graph Matching</title>
		<author>
			<persName><forename type="first">S</forename><surname>Jaiswal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhadauria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R S</forename><surname>Jadon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Global Research in Computer Science</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="187" to="193" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Kernel Eigenfaces vs. Kernel Fisherfaces: Face Recognition Using Kernel Methods</title>
		<author>
			<persName><forename type="first">M-H</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Fifth IEEE International Conference on Automatic Face Gesture Recognition</title>
				<meeting>Fifth IEEE International Conference on Automatic Face Gesture Recognition</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="215" to="220" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Face recognition using Eigenface</title>
		<author>
			<persName><forename type="first">M A</forename><surname>Turk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A O</forename><surname>Pentland</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
		<respStmt>
			<orgName>The Media Laboratory MIT</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Review and testing of frontal face detectors</title>
		<author>
			<persName><forename type="first">I A</forename><surname>Kalinovskii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V G</forename><surname>Spitsyn</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2016-40-1-99-111</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="99" to="111" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home" />
		<title level="m">Microsoft&apos;s Computer Vision API Version 2.0 documentation</title>
		<imprint>
			<publisher>Microsoft</publisher>
			<date type="published" when="2018-08-22">22.8.2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Real-time face identification via CNN and boosted hashing forest</title>
		<author>
			<persName><forename type="first">Yu</forename><surname>Vizilter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Gorbatsevich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Vorotnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Kostromov</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2017-41-2-254-265</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="254" to="265" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<ptr target="https://www.macworld.co.uk/feature/iphone/how-secure-is-face-id-3663992/" />
		<title level="m">How secure is Face ID?</title>
		<imprint>
			<date type="published" when="2018-11-01">01.11.2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">The Reliability of Facial Recognition of Deceased Persons on Photographs</title>
		<author>
			<persName><surname>Caplova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Obertová</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Gibelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mazzarelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Fracasso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vanezis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sforza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cattaneo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Forensic Sciences</title>
		<imprint>
			<biblScope unit="volume">62</biblScope>
			<biblScope unit="page" from="1286" to="1291" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Liveness detection for face recognition</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.5772/6397</idno>
	</analytic>
	<monogr>
		<title level="m">Recent Advances in Face Recognition</title>
		<imprint>
			<publisher>IntechOpen</publisher>
			<biblScope unit="chapter">9</biblScope>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Face recognition based on fitting a 3D morphable model</title>
		<author>
			<persName><forename type="first">V</forename><surname>Blanz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Vetter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">9</biblScope>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deep neural networks are more accurate than humans at detecting sexual orientation from facial images</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kosinski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Personality and Social Psychology</title>
		<imprint>
			<biblScope unit="volume">114</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="246" to="257" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Eyeblink-based Anti-Spoofing in Face Recognition from a Generic Webcamera</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICCV.2007.4409068</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 11th International Conference on Computer Vision</title>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
