<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anna V. Pyataeva</string-name>
          <email>anna4u@list.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria V. Verkhoturova</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Reshetnev Siberian State University of Science and Technology</institution>
          ,
          <addr-line>Krasnoyarsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Siberian Federal University</institution>
          ,
          <addr-line>Krasnoyarsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this report, a method for face detection and recognition based on visual data is proposed. For face detection, the Viola-Jones algorithm with Haar-like feature estimation based on a cascade architecture was studied. The local binary pattern descriptor was used for the face recognition stage. Face recognition based on visual processing is significant for many applications, for example personal information protection, human-machine interaction, proctoring in distance learning platforms, access control for high-security facilities, etc. Facial recognition approaches vary considerably. At the initial stage of development of approaches to face recognition, geometric features were used to highlight characteristic facial features [1, 2]. Nowadays, deep learning technologies [3, 4], evolutionary algorithms [5], particle swarm optimization [6], and other approaches are used to solve this problem. Face recognition efficiency may be influenced by various factors, such as varying expressions and poor illumination [7, 8], variations in the subject's pose [9], own-age, -gender, and -ethnicity biases [10-12], etc.</p>
      </abstract>
      <kwd-group>
        <kwd>face recognition</kwd>
        <kwd>face detection</kwd>
        <kwd>local binary pattern</kwd>
        <kwd>Viola-Jones algorithm</kwd>
        <kwd>Haar-like features</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <sec id="sec-1-1">
        <title>Face detection</title>
        <p>
          In the first step of the face identification algorithm, one-against-all classification is used. This classification divides image objects into two classes, “face” and “non-face”. The Viola-Jones algorithm is one of the classic approaches to the face detection problem [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. The main area of application of the Viola-Jones method is face detection [
          <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
          ]. The method is based on the computation of Haar-like features and the use of a cascade classification model. A distinctive feature of the Viola-Jones method is that it works with an integral representation of the image. The integral image representation is a matrix with the same size as the original image; each of its elements contains the sum of the intensities of the pixels located to the left of and above the current element. The elements of the integral image representation are calculated for each original image pixel by Eq. 1:

L(x, y) = Σ (i = 0..x) Σ (j = 0..y) I(i, j), (1)

where I(i, j) is the intensity of the original image pixel with coordinates (i, j). Thus, each element of the matrix L is the sum of the intensity values of the pixels in the rectangle from pixel (0, 0) to the pixel with coordinates (x, y).
        </p>
        <p>To calculate the Haar-like features, a scanning window consisting of adjacent rectangles, the Haar primitives, is moved across the examined video image. Selecting characteristic features of pixel-intensity change makes it possible to separate a face from other image objects. By moving the scanning window over the entire image, the Haar-like features are calculated; each feature gives the intensity-difference value over a region of interest. In the present work, both basic and additional Haar masks were used (Fig. 1). The additional Haar masks allow detecting faces at different angles of rotation to the camera, even faces rotated relative to the camera by more than 30 degrees [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. At the next stage, the Haar-like features in the Viola-Jones algorithm are organized into a cascade classifier. The result of the Viola-Jones classification algorithm is a set of attributes for each area, consisting of 200 intensity-difference values, which allows separating images containing a face from images without one.</p>
      </sec>
      <sec id="sec-1-2">
        <title>Face recognition</title>
        <p>
          The second step is face recognition using local binary pattern (LBP) texture features. The LBP descriptors are computed for person identification. Classification based on the local binary pattern operator is widely used in many applications [
          <xref ref-type="bibr" rid="ref17 ref18 ref19">17-19</xref>
          ]. For personal identification, a face image is divided into non-intersecting blocks. The LBP was introduced by Ojala et al. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] as a binary operator that is robust to lighting variations, has low computational cost, and simply encodes the neighboring pixels around a central pixel as a binary string or decimal value. The use of local binary patterns for solving the face recognition task is shown in Fig. 2.
        </p>
        <p>The operator LBPR(P) is calculated in a neighborhood around a central pixel with intensity Ic by Eq. 2, where P is the number of pixels in the neighborhood, R is the radius, and Ic and In are Y-component values from the YUV color space:

LBPR(P) = Σ (n = 0..P−1) s(In − Ic) · 2^n, (2)

where s(In − Ic) = 1 if (In − Ic) ≥ 0, and s(In − Ic) = 0 otherwise. The binary LBP code is computed as follows: the current LBP bit is assigned the value “1” if the Y intensity of the current neighboring pixel is not less than the central pixel intensity, and “0” otherwise. In this manner a P-bit binary LBP code describing the pixel neighborhood is calculated. In this paper, we take into account 8 intensity values of neighboring pixels, that is, the radius R = 1, to construct the LBP binary code. The pixels are traversed clockwise, so the bit width of the LBP binary code is 8.</p>
        <p>The binary code is then converted to a decimal value. To compute the histogram, the number of equal values is counted, which defines the position and height of each histogram column. The histograms constructed for the different parts of the face are concatenated into one histogram. The chi-square distance, histogram intersection distance, Kullback-Leibler divergence, and G-statistic are commonly used at the classification stage. In this research, the Euclidean distance (Eq. 3) was chosen for histogram comparison as the most commonly recommended metric:

D = √( Σ (i = 1..n) (hist1i − hist2i)^2 ), (3)

where hist1i is column i of the histogram of the studied face image, hist2i is column i of the histogram of a face image from the available facial dataset, and n is the number of histogram columns. The block diagram of the algorithm using the local binary pattern operator for the face recognition task is shown in Fig. 3.</p>
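A sketch of the per-pixel LBP operator (Eq. 2 with P = 8, R = 1), block histogram computation, and the Euclidean distance of Eq. 3. The clockwise neighbor ordering and the starting neighbor position are illustrative assumptions, since the text does not fix them:

```python
import numpy as np

def lbp_code(img, y, x):
    """8-bit LBP code (Eq. 2, P=8, R=1): bit n is 1 where neighbor >= center."""
    c = img[y, x]
    # clockwise neighbor offsets, starting from the top-left (assumed order)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for n, (dy, dx) in enumerate(offsets):
        if img[y + dy, x + dx] >= c:   # s(In - Ic) = 1 when In - Ic >= 0
            code |= 1 << n
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of one block."""
    h = np.zeros(256, dtype=int)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            h[lbp_code(img, y, x)] += 1
    return h

def euclidean_distance(h1, h2):
    """Eq. 3: Euclidean distance between two (concatenated) histograms."""
    return float(np.sqrt(((h1 - h2) ** 2).sum()))
```

For a full face image, one such histogram would be computed per non-intersecting block and the block histograms concatenated before the distance comparison.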
        <p>Fig. 3 (block diagram of the LBP-based face recognition algorithm): image selection; division of the image into blocks; block selection; reading the central pixel intensity Ic; computing s(x) for each neighboring pixel and adding it to the LBP code; converting the binary LBP code to a decimal number; histogram computation and concatenation; calculation of the Euclidean distance; threshold comparison; user identification or “no match found”.</p>
        <p>Thus, the combined histogram of the facial fragments is compared against each of the reference histograms using a threshold; based on this comparison, user identification is performed.</p>
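The threshold-based matching step described above can be sketched as follows; the dictionary of reference histograms and the return convention for "no match found" are illustrative assumptions:

```python
import numpy as np

def identify(query_hist, reference_hists, threshold):
    """Compare the concatenated query histogram with each reference histogram
    (Euclidean distance, Eq. 3); return the best-matching identity if its
    distance is within the threshold, otherwise None ("no match found")."""
    best_id, best_dist = None, float("inf")
    for person_id, ref_hist in reference_hists.items():
        d = float(np.sqrt(((query_hist - ref_hist) ** 2).sum()))
        if d < best_dist:
            best_id, best_dist = person_id, d
    return best_id if best_dist <= threshold else None
```

The threshold trades off false acceptances against false rejections: a smaller value rejects more impostors but may also reject genuine users whose histograms drift due to pose or lighting.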
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Experiments and results</title>
      <p>
        Experimental studies of the face detection and face recognition stages on video data were carried out separately. For the face detection stage, sample videos including 4916 examples with faces and 8500 examples without faces, taken from the Labeled Faces in the Wild Home dataset [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and the Aberdeen dataset [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], were used. To verify the quality of the face recognition algorithm, the YouTubeFaces (YTF) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], McGillFaces Database [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], and Db Fases Dataset [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] datasets were used. The YouTubeFaces dataset contains 3425 videos of 1595 different people, the McGillFaces Database contains 60 videos of 40 different people, and the Db Fases Dataset contains 22 videos of 38 different people. The videos have different illumination levels and contain different numbers of people of both sexes; the angle of rotation of a person's head to the camera also differs between videos. Image sizes range from 160 × 120 pixels to 1280 × 720 pixels. The images contain natural and human-made objects as well as people, and shooting was performed both indoors and outdoors. The people were completely free in their movements, which led to arbitrary scale, facial expressions, and head positions. The videos were in mp4 or avi format. Examples of frames from the videos used are shown in Table 1.
      </p>
      <sec id="sec-2-1">
        <title>Test videos (Table 1)</title>
        <p>YouTubeFaces\P1E_S2_М6.mp4: 1025 frames, 640×480, 3 female faces.
YouTubeFaces\P1E_S2_М1.mp4: 875 frames, 640×480, 1 female face and 1 male face.
YouTubeFaces\P1E_S2_М3.mp4: 750 frames, 1280×720, 2 male faces.
McGillFaces Database\mmdm2\video\sx372.avi: 400 frames, 640×480, 1 female face and 1 male face.
YouTubeFaces\P1E_S2_D5.mp4: 900 frames, 1280×720, 2 male faces.
YouTubeFaces\P1E_S1_С3.mp4: 1250 frames, 1280×720, 1 male face.
McGillFaces Database\mmdm2\video\sx102.avi: 2 male faces.</p>
      </sec>
      <sec id="sec-2-19">
        <title>Face recognition</title>
        <p>Results per test video (columns aligned by video index):
FAR, %: 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 5.80, 0.00, 0.00, 4.70, 0.00.
TR, %: 99.5, 100, 99.1, 97.5, 96.2, 100, 87.9, 100, 100, 88.9, 98.2.
FRR, %: 0.50, 0.00, 0.01, 2.50, 4.00, 0.00, 12.1, 0.00, 0.00, 11.1, 1.80.</p>
        <p>As the results of the experimental studies show, the gender and age of people do not affect the quality of the face detection and recognition algorithm. The quality of the algorithm is influenced by factors such as scene illumination level, video resolution, the speed at which people move in the scene, the face rotation angle, and the degree to which the face is uncovered. An additional error in face detection and recognition is introduced by accessories worn on the face, such as glasses, scarves, and hats. Covering part of the face with hair, a beard, or a mustache also has a negative impact. Emotional facial expression in most cases does not affect the results of the algorithm, but it can cause difficulties in recognition, for example with a wide smile or closed eyes. In addition, when part of the face is in shadow, the quality of the algorithm may decrease.</p>
        <p>Thus, solving the face recognition problem is relevant today for the implementation of various practical tasks. In the present work, the Viola-Jones algorithm was used for the face detection stage, and local binary patterns were used for face recognition. Experimental studies conducted on heterogeneous video data confirm the effectiveness of the proposed methods.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Turk</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pentland</surname>
            <given-names>A</given-names>
          </string-name>
          . Eigenfaces for recognition // J. Cognit. Neurosci.
          <year>1991</year>
          . No.
          <volume>3</volume>
          (
          <issue>1</issue>
          ). P.
          <volume>71</volume>
          -
          <fpage>86</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Belhumeur</surname>
            <given-names>P.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hespanha</surname>
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kriegman</surname>
            <given-names>D.J.</given-names>
          </string-name>
          <article-title>Eigenfaces vs. fisherfaces: recognition using class specific linear projection // IEEE Trans</article-title>
          .
          <source>Pattern Anal. Mach. Intell</source>
          .
          <year>1997</year>
          . No.
          <volume>19</volume>
          (
          <issue>7</issue>
          ). P.
          <volume>711</volume>
          -
          <fpage>720</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Taigman</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ranzato</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolf</surname>
            <given-names>L</given-names>
          </string-name>
          .
          <article-title>Deepface: closing the gap to human-level performance in face verification //</article-title>
          <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          .
          <year>2014</year>
          . P.
          <volume>1701</volume>
          -
          <fpage>1708</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Parkhi</surname>
            <given-names>O.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vedaldi</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisserman</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <source>Deep face recognition // Proceedings of the British Machine Vision Conference (BMVC)</source>
          .
          <year>2015</year>
          . Vol.
          <volume>1</volume>
          . P.
          <volume>41</volume>
          .
          <fpage>1</fpage>
          -
          <lpage>41</lpage>
          .
          <fpage>12</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Zhi</surname>
            <given-names>H.</given-names>
          </string-name>
          , Liu S.
          <source>Face recognition based on genetic algorithm // Journal of Visual Communication and Image Representation</source>
          .
          <year>2019</year>
          . Vol.
          <volume>58</volume>
          . P.
          <volume>495</volume>
          -
          <fpage>502</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Khan</surname>
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ishtiaq</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nazir</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shaheen</surname>
            <given-names>M.</given-names>
          </string-name>
          <article-title>Face recognition under varying expressions and illumination using particle swarm optimization //</article-title>
          <source>Journal of Computational Science</source>
          .
          <year>2018</year>
          . Vol.
          <volume>28</volume>
          . P.
          <volume>94</volume>
          -
          <fpage>100</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Nikan</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ahmadi</surname>
            <given-names>M.</given-names>
          </string-name>
          <article-title>A modified technique for face recognition under degraded conditions //</article-title>
          <source>Journal of Visual Communication and Image Representation</source>
          .
          <year>2018</year>
          . Vol.
          <volume>55</volume>
          . P.
          <volume>742</volume>
          -
          <fpage>755</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Ding</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tao</surname>
            <given-names>D</given-names>
          </string-name>
          .
          <article-title>Pose-invariant face recognition with homography-based normalization // Pattern Recognition</article-title>
          .
          <year>2017</year>
          . Vol.
          <volume>66</volume>
          . P.
          <volume>144</volume>
          -
          <fpage>152</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Liang</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zeng</surname>
            <given-names>X.X.</given-names>
          </string-name>
          <article-title>Pose-invariant 3D face recognition using half face //</article-title>
          <source>Signal Processing: Image Communication</source>
          .
          <year>2017</year>
          . Vol.
          <volume>57</volume>
          . P.
          <volume>84</volume>
          -
          <fpage>90</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Wu</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>D</given-names>
          </string-name>
          .
          <article-title>Effect of subject's age and gender on face recognition results //</article-title>
          <source>Journal of Visual Communication and Image Representation</source>
          .
          <year>2019</year>
          . Vol.
          <volume>60</volume>
          . P.
          <volume>116</volume>
          -
          <fpage>122</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Muikudi</surname>
            <given-names>P.B.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hills</surname>
            <given-names>P.J.</given-names>
          </string-name>
          <article-title>The combined influence of the own-age, -gender</article-title>
          , and
          <article-title>-ethnicity bases on face recognition // Acta Psychologia</article-title>
          .
          <year>2019</year>
          . Vol.
          <volume>194</volume>
          . P. 1-
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Segal</surname>
            <given-names>S.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reyes</surname>
            <given-names>B.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gobin</surname>
            <given-names>K.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moulson</surname>
            <given-names>M.C.</given-names>
          </string-name>
          <article-title>Children's recognition of emotion expressed by own-race versus other-race faces //</article-title>
          <source>Journal of Experimental Child Psychology</source>
          .
          <year>2019</year>
          . Vol.
          <volume>182</volume>
          . P.
          <volume>102</volume>
          -
          <fpage>113</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Viola</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            <given-names>M.J.</given-names>
          </string-name>
          <article-title>Rapid Object Detection using a Boosted Cascade of Simple Features //</article-title>
          <source>Proceedings IEEE Conf. on Computer Vision and Pattern Recognition</source>
          .
          <year>2001</year>
          . Vol.
          <volume>1</volume>
          . P.
          <volume>511</volume>
          -
          <fpage>518</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Irgens</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bader</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lé</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saxena</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ababei</surname>
            <given-names>C.</given-names>
          </string-name>
          <article-title>An efficient and cost effective FPGA based implementation of the Viola-Jones face detection algorithm //</article-title>
          <source>HardwareX</source>
          .
          <year>2017</year>
          . No.1. P.
          <volume>68</volume>
          -
          <fpage>75</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Nguyen</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hefenbrock</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oberg</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kastner</surname>
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baden</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>A software-based dynamic-warp scheduling approach for load-balancing the Viola-Jones face detection algorithm on gpus // J. Parallel Distrib</article-title>
          .
          <source>Comput</source>
          .
          <year>2013</year>
          . No.
          <volume>73</volume>
          (
          <issue>5</issue>
          ). P.
          <volume>677</volume>
          -
          <fpage>685</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Pyataeva</surname>
            <given-names>A.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verkhoturova</surname>
            <given-names>M.V.</given-names>
          </string-name>
          <article-title>Face detection using the Viola - Jones algorithm</article-title>
          . // Proceedings of the International Scientific Conference «Regional Problems of Earth Remote Sensing»
          <article-title>RPERS 2018, Krasnoyarsk</article-title>
          , Russia,
          <year>2018</year>
          , P.
          <fpage>188</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Yuan</surname>
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shi</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xia</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Encoding pairwise Hamming distances of Local Binary Patterns for visual smoke recognition // Computer Vision</article-title>
          and Image Understanding.
          <year>2019</year>
          . Vol.
          <volume>178</volume>
          . P.
          <volume>43</volume>
          -
          <fpage>53</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Xu</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiang</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liao</surname>
            <given-names>Q.</given-names>
          </string-name>
          <article-title>Local polynomial contrast binary patterns for face recognition</article-title>
          // Neurocomputing.
          <year>2019</year>
          . Vol.
          <volume>355</volume>
          . P. 1-
          <fpage>12</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Hassaballah</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alshazly</surname>
            <given-names>H.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ali</surname>
            <given-names>A.A.</given-names>
          </string-name>
          <article-title>Ear recognition using local binary patterns: A comparative experimental study // Expert Systems with Applications</article-title>
          .
          <year>2019</year>
          . Vol.
          <volume>118</volume>
          . P.
          <volume>182</volume>
          -
          <fpage>200</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Ojala</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pietikäinen</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harwood</surname>
            <given-names>D.</given-names>
          </string-name>
          <article-title>A comparative study of texture measures with classification based on feature distributions</article-title>
          .
          <source>Pattern Recognition</source>
          <year>1996</year>
          . No. 29. P.
          <volume>51</volume>
          -
          <fpage>59</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <article-title>Labeled Faces in the Wild Home database</article-title>
          . Available at: http://vis-www.cs.umass.edu/lfw/.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <article-title>Aberdeen dataset</article-title>
          . Available at: http://pics.psych.stir.ac.uk/2D_face_sets.htm.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <article-title>YouTubeFaces dataset</article-title>
          . Available at: http://www.cs.tau.ac.il/~wolf/ytfaces/index.html#download.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <article-title>McGillFaces dataset</article-title>
          . Available at: https://sites.google.com/site/meltemdemirkus/mcgill-unconstrained-facevideo-database.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <article-title>Db Fases dataset</article-title>
          . Available at: http://www.videorecognition.com/db/video/faces/cvglab/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>