<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Conceptual model of the face recognition process based on the image of the face and iris of personnel of critical infrastructure facilities</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergey Bushuyev</string-name>
          <email>sbushuyev@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ihor Tereikovskyi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Korchenko</string-name>
          <email>agkorchenko@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Dychka</string-name>
          <email>dychka@pzks.fpm.kpi.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Liudmyla Tereikovska</string-name>
          <email>tereikovskal@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleh Tereikovskyi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kyiv National University of Construction and Architecture</institution>
          ,
          <addr-line>Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"</institution>
          ,
          <addr-line>Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Today's challenges determine the need to improve the biometric authentication of personnel of critical infrastructure facilities. Common means of biometric authentication, which are usually based on the use of neural network technologies for facial image analysis, need to be adapted to the conditions of recognition during the performance by personnel of their functional duties, which are characterized by the influence of interference during video recording and an increased probability of attacks using dummies. Another area of improvement is determined by the availability of video recording tools that provide the ability to recognize a person by the iris of the eye and the ability to recognize emotions. It is shown that the first stage of improvement of neural network means of biometric authentication is the development of a formalized description of the recognition process that takes into account promising areas of improvement. A conceptual model containing a formalized description and criteria for evaluating the effectiveness of the recognition process is proposed. For the first time, an approach to determining the parameters of obstacles is proposed, which involves comparing the parameters of obstacles with the location and number of key and control points that they overlap. Recognition of attacks is proposed to be implemented based on the analysis of the dynamics of basic emotions, the dynamics of eye movement parameters, and the environment. The results of this study are important in the context of the development of effective biometric authentication tools, as they provide a formalized description of the requirements for the functional capabilities of the main components of the process of recognizing the identity and emotions of personnel of critical infrastructure facilities.</p>
      </abstract>
      <kwd-group>
        <kwd>model</kwd>
        <kwd>critical infrastructure</kwd>
        <kwd>face image</kwd>
        <kwd>iris</kwd>
        <kwd>neural network</kwd>
        <kwd>information security</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In modern conditions, even minor security breaches of critical infrastructure facilities (CIF)
can have a significant negative impact on the social and economic spheres of the state, as
well as its defence capability and national security. One of the main requirements for a
comprehensive information protection system of CIF is to ensure reliable identification and
authentication of CIF personnel. For this purpose, a wide range of tools is used, including
biometric authentication systems (BA) based on facial images (FI) and iris of the eye (IE)
recognition [1, 26]. This is primarily explained by the widespread availability of
high-quality video cameras capable of accurately capturing relevant biometric parameters,
enabling sufficiently precise identification and authentication of personnel. However, the
results [13, 15, 16] and practical experience indicate insufficient accuracy of such
authentication systems in terms of face recognition under adverse conditions. Additionally,
in modern conditions, for effective performance of duties, CIF personnel must be in a
satisfactory psycho-emotional state, which can be monitored both during personnel access
to the object and while performing official duties through the analysis of FI and IE. This
explains the relevance of research aimed at enhancing CIF security through the
development and implementation of BA systems that provide both facial recognition and
assessment of the emotional state of personnel.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review and Problem Postulation</title>
      <p>In the process of analysis of scientific and practical works devoted to the development of
means of recognizing a person and the emotional state of a person, attention was focused
on the identification of promising solutions that can be used to ensure the effective
functioning of the BA systems of the CIF staff.</p>
      <p>In [30], the use of MobileSSD technology is described for the selection of human faces in
a video stream in real-time. The experiments were carried out under conditions of various
obstacles, including the presence of glasses on the face, and the presence of foreign objects
on the face that interfere with the fixation of characteristic points of the face. Also, the
effectiveness of the technology was determined in different conditions of the lighting level,
and the distance from the face to the camera. It was determined that turning the face most
often leads to the impossibility of its selection.</p>
      <p>In [16], the use of computer vision technologies to solve the problem of real-time recognition of FI,
eyes, and the psycho-emotional state is considered. It is proposed to use the Blob Analysis approach to
identify FI, to use a threshold function for segmentation, and to use the Circle Hough
approach to highlight the IE. It is declared that the use of these solutions allows achieving
high recognition results on low-quality images.</p>
      <p>In [29], the use of deep learning methods for recognizing human emotions based on FI is
considered. Recognition of seven emotions is performed. A previously trained Haar cascade classifier
was used to distinguish the face, eyes, and mouth. A convolutional neural network (CNN)
was used to recognize emotions, having previously been trained on the FER-2013 database.
The claimed recognition accuracy on the test sample is 0.62.</p>
      <p>In [18], experimental studies were conducted to reveal how the presence of a mask on an
actor's face affects human recognition of the actor's emotion from facial expression. It was
found that the presence of a mask worsens the recognition of facial
emotions by about 20%.</p>
      <p>In [15], in addition to the effect of the presence of a mask, the effect of the presence of
sunglasses on the recognition of an actor's emotion and his identification by face is also
analyzed. It was found that, unlike the presence of a mask, the presence of sunglasses did
not reduce the accuracy of emotion identification and recognition.</p>
      <p>A study of the capabilities of such FI and emotional state analysis systems as FaceReader from the
Noldus Information Technology company, Сaptemo from the Logic Pursuits company and
BioObserver from the Herta company was also conducted.</p>
      <p>Although the results of the analysis of scientific and practical works and known software
and hardware solutions indicate the expediency of their application to CIF, these same
results testify to the complication of the identification and authentication procedure by FI
and IE under the influence of various interferences. It can also be concluded that the most
promising approach is the analysis of FI and IE with the help of neural network means (NN means). Another
important direction of improvement of BA tools based on FI and IE is the need to increase
the effectiveness of protection against attacks with the help of dummies [31, 9, 17].</p>
      <p>Common means of recognizing such attacks are based on the analysis
of the quality of the controlled image [4], the analysis of its spatial characteristics [23, 27],
the verification of the dynamics of the parameters characteristic of the FI and IE of a living
person [3, 5, 26], and the execution by the controlled person of certain commands that cause
changes in video registration parameters [10, 12]. At the same time, the same works note
the difficulties of effective analysis of the quality of FI and IE under variable conditions of
video recording and the difficulty of determining spatial characteristics during video
recording with one camera.</p>
      <p>Also, the results of the conducted analysis indicate the absence of a formalized holistic
description of the BA process of CIF personnel by FI and IE that takes into
account the presence of typical problems, the possibility of attacks on the BA system with
the help of dummies, the need to identify the identity and emotional state of the staff, as
well as the mechanism for determining the effectiveness of the BA process.</p>
      <p>The main purpose of this study is to develop a conceptual model that provides a
formalized holistic description of the process of recognizing a person based on the image of
the face and the iris of the eye during biometric authentication of personnel of critical
infrastructure facilities using neural network tools, taking into account the need to identify
emotions and detect attacks using dummies.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Conceptual model development</title>
      <p>Designing a conceptual model for face recognition incorporating both facial and iris
recognition for critical infrastructure facilities involves several key components. Here's a
proposed breakdown:
1. Image Acquisition. The process begins with capturing images of personnel using
high-resolution cameras placed strategically at entry points or checkpoints within
the facility. Pre-processing. Raw images undergo pre-processing to enhance quality
and remove noise. This step may involve techniques like normalization, resizing, and
filtering to standardize the images.
2. Facial Recognition. Employ algorithms like Haar cascades or deep learning-based
methods to locate faces within the images. Feature Extraction. Utilize techniques like
Principal Component Analysis (PCA), Local Binary Patterns (LBP), or Convolutional
Neural Networks (CNNs) to extract discriminative features from the detected faces.
Matching. Compare the extracted features against a database of known personnel
using methods such as Euclidean distance, cosine similarity, or deep metric learning.
3. Iris Recognition. Locate and isolate the iris region within the captured face images
using techniques like Hough transforms or template matching. Feature Extraction.
Extract unique features from the iris pattern using methods like Gabor filters or
wavelet transforms. Matching. Compare the extracted iris features against a
database of enrolled personnel using algorithms like Hamming distance or phase
correlation.
4. Integration. Combine the results from facial and iris recognition modules to increase
the overall accuracy and reliability of the identification process. Employ fusion
techniques such as score-level fusion or decision-level fusion to integrate the
outputs from individual recognition modules.
5. Decision Making. Based on the fused recognition scores, decide the identity of the
personnel. Apply thresholding techniques to determine acceptance or rejection
based on the similarity scores obtained from the recognition process.
6. Access Control. Grant or deny access to the personnel based on the decision made
during the recognition process. Interface with the facility's access control system to
activate/deactivate entry mechanisms like doors, turnstiles, or gates.
7. Feedback and Iteration. Provide feedback to the system based on the outcomes of
recognition decisions to improve performance over time. Employ techniques such
as adaptive learning or retraining of the recognition models using new data to
enhance accuracy and robustness.</p>
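      <p>The matching, fusion, and decision steps above (items 2-5) can be sketched as follows. This is an illustrative sketch, not the article's implementation: the weights, threshold, feature vectors, and iris codes are hypothetical.</p>

```python
# Illustrative sketch: score-level fusion of a face matcher and an iris matcher
# followed by a threshold-based accept/reject decision. All values are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Face-matcher score: cosine similarity between extracted feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def iris_score_from_hamming(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Iris-matcher score: 1 minus the normalized Hamming distance of binary iris codes."""
    return 1.0 - float(np.mean(code_a != code_b))

def fuse_and_decide(face_score: float, iris_score: float,
                    w_face: float = 0.5, w_iris: float = 0.5,
                    threshold: float = 0.8) -> bool:
    """Score-level fusion as a weighted sum, then thresholding for the access decision."""
    fused = w_face * face_score + w_iris * iris_score
    return fused >= threshold

# Hypothetical enrolled vs. probe templates.
face_enrolled = np.array([0.2, 0.9, 0.4])
face_probe = np.array([0.21, 0.88, 0.41])
iris_enrolled = np.array([1, 0, 1, 1, 0, 0, 1, 0])
iris_probe = np.array([1, 0, 1, 1, 0, 0, 1, 1])  # one bit differs

f = cosine_similarity(face_enrolled, face_probe)
i = iris_score_from_hamming(iris_enrolled, iris_probe)
print(fuse_and_decide(f, i))  # near-identical face vectors and 7/8 matching iris bits -> True
```

      <p>Decision-level fusion would instead threshold each matcher separately and combine the boolean outcomes (e.g., require both to accept).</p>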
      <p>Also, when developing the conceptual model, the results of [22, 24, 27] were taken into
account, which led to the use of the following terms:
• Neural network model - a model that describes the architecture of an artificial neural
network and characterizes the neurons that are part of it.
• IE is a colored ring around the pupil in the front part of the eye, consisting of muscle and connective
tissue and pigment cells, which changes the size of the pupil of the eye.
• FI - the image of the front part of the human head, which is bounded from above by
the forehead, below by the lower edge of the chin, and from the sides by the base of
the auricles.
• BA - authentication based on the results of the analysis of a person's biometric data.
• Attack using fakes (spoofing) – an attack based on presenting fake biometric data
to the sensor for reading.
• Emotion is a mental reflection in the form of a direct, biased experience of the vital
meaning of phenomena and situations, determined by the relationship of their
objective properties to the needs of the subject.
• Basic emotions - anger, disgust, sadness, fear, surprise, contempt, joy.
• Key points – points on the face that are used to recognize emotions.
• Control points – points on a person's head that are used to recognize a person.</p>
      <p>This conceptual model forms the basis for implementing a comprehensive face
recognition system incorporating both facial and iris recognition for securing critical
infrastructure facilities.</p>
      <p>
        Taking into account the specifics of the problem of developing NN means for BA of CIF
personnel, in the base case the proposed model is intended to describe the processes of
neural network processing of the registered video stream to recognize the identity of CIF
personnel and the presence of an attack using dummies on the BA system by FI and IE:
〈U, K〉 →F 〈P, E, A〉, (1)
where U is a set of parameters characterizing video recording conditions; K is a set of
parameters characterizing the content of each frame of the video stream; P is a set of
parameters describing the result of person recognition; E is a set of parameters describing
the result of emotion recognition; A is the result of recognition of an attack using dummies;
|P|, |E|, |A| are the powers of the sets P, E, A; F is the neural network recognition operator.
      </p>
      <p>Taking [2, 28] into account, it is accepted that the elements of the set U, which
characterize the conditions of video registration, include: u1 – the minimum permissible
level of illumination without the use of infrared illumination; u2 – the maximum permissible
level of illumination; u3, u4 – viewing angles of the video camera horizontally and vertically;
u5 – frame rate; u6 – color gamma format; u7 – video stream resolution; u8 – range of action
of infrared illumination; u9, u10 – the maximum possible angles of changing the direction of
video recording horizontally and vertically when the video camera functions in the object
tracking mode; u11 – illumination range in the visible range of light (white illumination); u12
– distance to the face; u13, u14, u15 – angles between the direction of video recording and the
projection of the face onto the planes Oxy, Oxz, Oyz.</p>
      <p>The general conditions for using a video surveillance system within one location are as
follows: u16 – illumination; u17 – number of video cameras; u18 – the maximum possible
number of monitoring objects; u19 – the presence of obstacles.</p>
      <p>Based on the analysis of standard solutions in the field of video surveillance, it was
determined that the parameter u5 can take values from 7 to 60 frames per second. The
parameter u6 can be RGB, RGBA, BGR, monochrome or CMYK. The u7 parameter can take the
following values: VGA (640x480 - 0.3 MP), HD (1280x720 - 1 MP, 1280x960 - 1.3 MP),
FullHD (1920x1080 - 2 MP), UHD (4K - 3840x2160 - 8 MP, 8K - 7680x4320 - 33 MP).</p>
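      <p>The verification of video registration parameters against such permissible values can be sketched as follows; the parameter names and the subset of parameters checked are assumptions made for illustration.</p>

```python
# Minimal sketch (assumed parameter names): checking that each video
# registration parameter falls in its permissible set or range.
ALLOWED = {
    "frame_rate": range(7, 61),                                    # 7..60 frames per second
    "color_format": {"RGB", "RGBA", "BGR", "monochrome", "CMYK"},  # color gamma format
    "resolution": {"VGA", "HD", "FullHD", "UHD"},                  # video stream resolution
}

def check_registration(params: dict) -> list:
    """Return the names of parameters whose values are outside permissible sets."""
    violations = []
    for name, allowed in ALLOWED.items():
        if params.get(name) not in allowed:
            violations.append(name)
    return violations

print(check_registration({"frame_rate": 30, "color_format": "RGB", "resolution": "FullHD"}))  # []
print(check_registration({"frame_rate": 5, "color_format": "RGB", "resolution": "FullHD"}))   # ['frame_rate']
```
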
      <p>In addition, video surveillance systems provide opportunities for changing the spatial
orientation of the video camera, changing the lighting, and issuing commands to clarify the
position of the face in space and eliminate obstacles. The corresponding parameters are
marked as u20 - u23. The video data packet transmission frequency (u24) and the video data
packet reception frequency (u25) are also taken into account.</p>
      <p>Consider the set of parameters characterizing the content of each of the frames of the
video stream, which are displayed in the elements of the set K. Since each
of the frames of the video stream is essentially a static monochrome or color image, a
separate element of the set K can be represented as:
Kn = | c1,1 … c1,X; … ; cY,1 … cY,X |, n = 1 … N, (2)
where Kn is the nth frame; N is the number of frames of the video stream; X is the horizontal
frame size; Y is the vertical frame size; cx,y is the color of the pixel with coordinates (x, y).</p>
      <p>When defining the set P, it is taken into account that each of its elements pj is interpreted
as the confidence that the j-th representative of the CIF staff is recognized in the video
stream. At the same time, 0 ≤ pj ≤ 1. Taking into account the need to determine that an
illegitimate person may be recognized in the video stream, and that there may be no human
object at all, the number of elements J is calculated as follows:
J = S + 2, (3)
where S is the number of legitimate representatives of the CIF staff.</p>
      <p>In the case where j lies in the range from 1 to S, pj represents the confidence that the j-th
CIF personnel representative is recognized in the registered video stream. For j = S + 1, pj
is the confidence that an illegitimate person is recognized in the registered video stream,
and for j = S + 2, pj is the confidence that there are no people in the registered video
stream.</p>
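      <p>The interpretation of these person-recognition confidences can be sketched as follows; the number of staff members and the confidence values are hypothetical.</p>

```python
# Sketch: interpreting a person-recognition output of S + 2 confidences, where
# index S encodes "illegitimate person" and index S + 1 encodes "no person in frame".
S = 3  # number of legitimate staff members (assumed)

def interpret(p: list) -> str:
    """Map the highest-confidence class index to a human-readable decision."""
    assert len(p) == S + 2 and all(0.0 <= v <= 1.0 for v in p)
    j = max(range(len(p)), key=lambda k: p[k])
    if j < S:
        return f"staff member {j + 1}"
    return "illegitimate person" if j == S else "no person"

print(interpret([0.05, 0.9, 0.02, 0.02, 0.01]))  # staff member 2
print(interpret([0.1, 0.1, 0.1, 0.65, 0.05]))    # illegitimate person
```
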
      <p>The elements of the set E describe the emotions of the CIF personnel representative,
recognized based on the neural network analysis of the FI. Since most authoritative works
accept that the set of basic emotions includes joy, anger, disgust, fear, sadness,
surprise and neutrality, it is appropriate to describe the spectrum of basic emotions using
seven parameters. Thus, each of the seven elements of E (ej ∈ E, 0 ≤ ej ≤ 1) is correlated
with the manifestation of a basic emotion on the face.</p>
      <p>Thus, expressions (1)-(3) are an analytical representation of the basic variant of the face
recognition model for the BA of CIF personnel by FI and IE. In this case, the model
does not reflect the information processing operations performed to determine P, E and A.</p>
      <p>To detail the basic version of the model, the decomposition of face recognition by FI and
IE at the BA of CIF personnel was carried out, taking into account the need to recognize
emotions and attacks using dummies.</p>
      <p>According to [23, 27], when decomposing the
procedure for recognizing a person, attention is focused on specific operations, the
effectiveness of which in modern BA systems of CIF personnel can be considered
insufficient. These operations should include:
• Selection of FI and IE in the video stream.
• Detection and leveling of recognition obstacles.
• Attack detection using dummies.</p>
      <p>According to [7, 19, 25], the recognition process can be presented as a sequence of the
following operations:
• Pre-processing of the image.
• Selection of the contours of FI and IE.
• Determining the coordinates of control points.
• Determining the coordinates of key points.
• Leveling of obstacles related to control points.
• Leveling of obstacles that concern key points.
• Neural network person recognition.
• Neural network recognition of emotions.
• Neural network recognition of additional parameters used to identify attacks with dummies.
• Neural network recognition of attacks.</p>
      <p>The diagram of the decomposition of the procedure of recognition of the person
according to FI and IE at the BA of the CIF staff, taking into account the need to determine
emotions and detect attacks using dummies, is shown in Fig. 1.</p>
      <p>Fig. 1. Decomposition of the recognition procedure: checking the parameters of the video
registration against permissible values; processing of video stream parameters; highlighting
the contours of the image of the face and iris; identification and leveling of obstacles;
application of neural networks for face, emotion and attack recognition.</p>
      <p>
        Taking into account the generally accepted technology of neural network analysis of a
video stream, the parameters of the model relating to the selection of FI and IE contours
should include the parameters that describe these contours in the pre-processed video
stream. By analogy with (2), the frame of the processed video stream is described as follows:
K̃k = | c̃1,1 … c̃1,X̃; … ; c̃Ỹ,1 … c̃Ỹ,X̃ |, k = 1 … N, (4)
where K̃k is the kth frame of the processed video stream; N is the number of frames of the input video
stream; X̃ is the horizontal frame size; Ỹ is the vertical frame size; c̃x̃,ỹ is the color of the pixel
with coordinates (x̃, ỹ).
      </p>
      <p>The presence of obstacles is intended to be described using the set H. Note that the data
[6, 22] indicate the absence of a generally accepted approach to determining the parameters
of interference. Therefore, to describe obstacles related to the visibility of key and control
points, an approach is proposed that involves comparing the parameters of obstacles with
their location and the number of key and control points that they overlap. It is assumed that
the obstacle may overlap one or more zones shown in Fig. 2.</p>
      <p>Zone A corresponds to the upper part of the human head, zone B to the eyes, zone C to
the nose, D to the mouth, and E to the lower part of the face. Index 1 corresponds to the left
part of the face, and index 2 to the right part. The intensity of a disturbance localized in a
certain area of the face is compared with the number of key and control points that it
overlaps. The intensity of interference on the IE is correlated with the area covered by this
interference. Note that the proposed approach allows the interference intensity to be
adapted for use in models with different numbers of key and control points. Thus:
H = {h1, h2, …, h12}, (5)
hz = (1 / Mz) Σ(m = 1 … Mz) dz,m, 0 ≤ hz ≤ 1, (6)
where h1, …, h10 are parameters that determine the intensity of interference localized in
the areas A1, A2, B1, B2, C1, C2, D1, D2, E1, E2 of the face image, respectively; h11, h12 are
parameters that determine the intensity of interference on the IE of the left and right eye;
z is the number of the corresponding area of the face image; Mz is the number of
control/key points in the zone z; dz,m is the degree of reduction in the visibility of the m-th
control/key point in the z-th area of the face due to the effect of interference.</p>
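      <p>The proposed zone-wise interference intensity (6) can be sketched as follows; the per-point visibility-reduction values are hypothetical.</p>

```python
# Sketch of the interference-intensity measure: the intensity of an obstacle in
# a face zone is the mean degree of visibility reduction over the control/key
# points that fall in that zone. Point data below are hypothetical.
def zone_intensity(visibility_reduction: list) -> float:
    """Mean of the per-point visibility-reduction degrees, each in [0, 1]."""
    assert all(0.0 <= d <= 1.0 for d in visibility_reduction)
    return sum(visibility_reduction) / len(visibility_reduction)

# Hypothetical zone B1 (left-eye area): three points fully occluded, one half-visible.
print(zone_intensity([1.0, 1.0, 1.0, 0.5]))  # 0.875
```

      <p>Because the measure is normalized by the number of points in the zone, it transfers unchanged between models with different numbers of key and control points.</p>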
      <p>Based on the results [8, 20, 24], approaches to detecting attacks on the BA system based on
the use of human face and eye dummies are proposed. The first approach involves the
detection of an attack based on the results of the analysis of parameters characterizing the
image of the environment, and the second involves the detection of an attack based on the
analysis of the dynamics of the FI and IE parameters that describe the basic emotions
of the CIF staff representative and additional parameters describing the dynamics of eye
movements. Therefore, the set Γ, which contains additional parameters used to recognize
attacks using dummies, includes parameters describing:</p>
      <sec id="sec-3-1">
        <title>Change of gaze direction. Change in the size of the pupil of the eye. The presence of pulsations of blood vessels on the image of the eye. Eye blinking.</title>
        <p>When using the second approach, the set  includes the parameters describing:</p>
      </sec>
      <sec id="sec-3-2">
        <title>The limits of the dummy demonstration device; Characteristic changes in the quality of FI plots; Objects that are recorded in typical video recording conditions; Distance from video camera to FI.</title>
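        <p>For the eye-dynamics indicators listed among the attack-detection parameters (blinking in particular), a minimal liveness check can be sketched on assumed per-frame eye-openness values. The data, threshold, and function are hypothetical illustrations, not the authors' method: a static dummy shows no eye-closure events.</p>

```python
# Illustrative liveness sketch (assumed data): count eye-closure events in a
# sequence of per-frame eye-openness values; a dummy yields no blinks.
def count_blinks(openness: list, closed_below: float = 0.2) -> int:
    """Count transitions from an open-eye state to a closed-eye state across frames."""
    blinks, was_closed = 0, False
    for v in openness:
        is_closed = v < closed_below
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

live = [0.9, 0.8, 0.1, 0.05, 0.85, 0.9, 0.1, 0.9]    # two blinks
dummy = [0.9, 0.9, 0.88, 0.91, 0.9, 0.89, 0.9, 0.9]  # static eyes
print(count_blinks(live), count_blinks(dummy))  # 2 0
```
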
        <p>Considering (1)-(6), the individual operations of the recognition process are represented as
follows:
((∀u ∈ U) u ∈ Du) ∧ ((∀k ∈ K) k ⊂ Dk) → 〈U, K〉ch, (7)
〈U, K〉ch → Up, (8)
〈K, Up〉 → K̃, (9)
〈K̃〉 → 〈C, HC〉, (10)
〈K̃〉 → 〈Q, HQ〉, (11)
〈C, HC〉 → C̃, (12)
〈Q, HQ〉 → Q̃, (13)
〈K̃, C̃〉 → P, (14)
〈K̃, Q̃〉 → E, (15)
〈K̃, Q̃〉 → Γ, (16)
〈E, Γ〉 → A, (17)
where Du is the set of permissible values for u ∈ U; Dk is the set of admissible values for
k ∈ K; 〈U, K〉ch is a tuple consisting of the sets U, K that have passed the verification of
compliance of the video registration parameters with the permissible values; Up is the set of
pre-processed video stream parameters; C, Q are the sets of parameters of control and key
points; HC, HQ are the sets of parameters of the interference; C̃, Q̃ are the sets of parameters
of control and key points after leveling of obstacles.</p>
        <p>Also, according to [11, 21, 14], the conceptual model includes a set of criteria for
assessing the accuracy of the recognition process. For the operations related to the
classification of objects, the specified criteria include Accuracy, Recall, Precision and
F1-score. For the operations related to semantic segmentation, the Dice criterion was used:
Dice = 2 Σ(n = 1 … N) yn ỹn / (Σ(n = 1 … N) yn + Σ(n = 1 … N) ỹn), (18)
where N is the number of points describing the selected object; yn is the value
characteristic of the n-th pixel of the selected object; ỹn is the value characteristic of the
n-th pixel of the expected output signal.</p>
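        <p>The named criteria can be sketched on hypothetical binary masks and labels as follows: Dice for segmentation-type operations, and precision/recall/F1 for classification-type operations.</p>

```python
# Sketch of the evaluation criteria on hypothetical binary data.
def dice(pred: list, target: list) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary pixel masks."""
    inter = sum(p and t for p, t in zip(pred, target))
    return 2.0 * inter / (sum(pred) + sum(target))

def precision_recall_f1(pred: list, target: list):
    """Classification criteria from true/false positives and false negatives."""
    tp = sum(p and t for p, t in zip(pred, target))
    fp = sum(p and not t for p, t in zip(pred, target))
    fn = sum(t and not p for p, t in zip(pred, target))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall, 2 * precision * recall / (precision + recall)

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(dice(pred, target))               # 2*2 / (3+3) ≈ 0.667
print(precision_recall_f1(pred, target))
```
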
        <p>Using the proposed conceptual model of recognition (7-18) in the development of NN
means, it is necessary to take into account the level of development of technologies of neural
network analysis of the video stream and the criteria for evaluating the effectiveness of BA
tools.</p>
        <p>Let's look at the benefits of using a conceptual model.</p>
        <p>The conceptual model of facial and iris recognition offers several benefits for critical
infrastructure security:
• Enhanced Security. This approach offers a stronger layer of security compared to
traditional methods like keycards or passwords. Facial and iris recognition are unique
biometric identifiers that are difficult to forge or replicate.
• Multimodal Authentication. By combining facial and iris recognition (multimodal
approach), the system adds an extra layer of verification. Even if someone manages to
spoof a face, a mismatch in the iris recognition would deny access.
• Improved Access Control Efficiency. Facial and iris recognition systems can automate
the access control process, reducing wait times and streamlining entry for authorized
personnel.
• Reduced Reliance on Physical Credentials. Physical access cards can be lost, stolen, or
copied. Facial and iris recognition eliminates the need for physical credentials,
minimizing the risk of unauthorized access.
• Potential for Deterrence. The very presence of a sophisticated facial and iris
recognition system can deter potential intruders, knowing they face a significant
hurdle to gain access.
• Auditability. The system can maintain a record of access attempts, allowing for easier
identification and investigation of suspicious activity.
The conceptual model thus provides a framework for a secure and efficient access control
system that leverages the unique identification capabilities of facial and iris recognition
technology.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>As a result of the analysis of scientific and practical works, it is shown that to build effective
NN means of person recognition based on FI and IE of CIF personnel, it is necessary to
supplement the methodological base by developing a conceptual model that will provide a
formalized description of the recognition process.</p>
      <p>It was determined that the recognition procedure includes the operations of checking
the admissibility of the video registration parameters, refining the parameters of the video
stream, selecting FI and IE contours, detecting and leveling interference, and applying NN
means. For each of the specified operations, a list of efficiency assessment criteria adapted
to the characteristics of modern means of implementation is substantiated.</p>
      <p>For the first time, approaches are proposed to determining the parameters of obstacles for
recognizing faces and emotions and to recognizing attacks using dummies. The approach to
determining the parameters of obstacles involves comparing the parameters of obstacles
with the location and number of key and control points that they overlap. The approaches
to the recognition of attacks with the help of dummies involve the detection of such attacks
based on the analysis of the dynamics of basic emotions, eye movement parameters, and
the environment during video recording.</p>
      <p>Analytical expressions have been developed that provide a formalized description of
each of these operations and, together with the defined accuracy assessment criteria,
form a conceptual model of the process of recognizing a person by FI and IE during the
BA of CIF personnel using NN means, taking into account the need to determine emotions
and detect attacks using dummies.</p>
      <p>Using the developed recognition model, the prospects for improving NN-based BA
systems through the proposed approaches to determining obstacle parameters and
recognizing dummy-based attacks were assessed. The development and implementation of a
face recognition system integrating both facial and iris recognition technologies offer
a robust solution for enhancing security at critical infrastructure facilities. By
following the outlined conceptual model and the subsequent steps, organizations can
achieve improved security, enhanced efficiency, increased reliability, adaptability and
scalability, and continuous improvement.</p>
      <p>In essence, the implementation of a face recognition system incorporating facial and iris
recognition technologies represents a proactive approach to security management,
fostering a safe and secure environment for critical infrastructure facilities and their
personnel.
</p>
      <p>[5] Chandrani, S., Washef, A., Soma, M., &amp; Debasis, M. (2015). Facial Expressions: A Cross-Cultural Study. In Emotion Recognition: A Pattern Analysis Approach (pp. 69-87). Wiley. doi:10.1002/9781118910566.</p>
      <p>[6] Connaughton, R., Bowyer, K. W., &amp; Flynn, P. J. (2013). Fusion of Face and Iris Biometrics. In Handbook of Iris Recognition (pp. 219-237). Springer.</p>
      <p>[7] Dychka, I., Chernyshev, D., Tereikovskyi, I., Tereikovska, L., &amp; Pogorelov, V. (2020). Malware Detection Using Artificial Neural Networks. In Z. Hu, S. Petoukhov, I. Dychka, &amp; M. He (Eds.), Advances in Computer Science for Engineering and Education II. ICCSEEA 2019. Advances in Intelligent Systems and Computing (Vol. 938, pp. 3-12). Springer, Cham. doi:10.1007/978-3-030-16621-2_1.</p>
      <p>[8] Held, G. (2003). Securing Wireless LANs. Macon, Georgia, USA: Publisher.</p>
      <p>[9] Linnartz, J.-P., &amp; Tuyls, P. (2003). New shielding functions to enhance privacy and prevent misuse of biometric templates. In Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication (pp. 393-402).</p>
      <p>[10] Lu, X., &amp; Jain, A. K. (2011). Deformation modeling for robust face matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(8), 1347-1358.</p>
      <p>[11] Ma, X., Fu, M., Zhang, X., Song, X., Becker, B., Wu, R., Xu, X., Gao, Z., Kendrick, K., &amp; Zhao, W. (2022). Own-Race Eye-Gaze Bias for All Emotional Faces but Accuracy Bias Only for Sad Expressions. Frontiers in Neuroscience, 16, 1-11. doi:10.3389/fnins.2022.852484.</p>
      <p>[12] Matsugu, M., Mori, K., Mitari, Y., &amp; Kaneda, Y. (2003). Subject independent facial expression recognition with robust face detection using a convolutional neural network. Neural Networks, 16(5-6), 555-559.</p>
      <p>[13] Mian, A. S., Bennamoun, M., &amp; Owens, R. (2011). Keypoint detection and local feature matching for textured face recognition. International Journal of Computer Vision, 80(1), 1-13.</p>
      <p>[14] Nazarkevich, M., Vozniy, Ya., &amp; Nazarkevich, G. (2021). Development of a machine learning method for biometric protection with new filtering methods. Cyber Security: Education, Science, Technology, 3(11), 16-30. doi:10.28925/2663-4023.2021.11.1630.</p>
      <p>[15] Noyes, E., Davis, J., Petrov, N., Gray, K., &amp; Ritchie, K. (2021). The effect of face masks and sunglasses on identity and expression recognition with super-recognizers and typical observers. Royal Society Open Science, 8(3), 201169. doi:10.1098/rsos.201169.</p>
      <p>[16] Ranjith, G., Pallavi, K., &amp; Mahendra, V. (2023). Human Face, Eye and Iris Detection in Real-Time Using Image Processing. In J. K. Mandal, M. Hinchey, &amp; K. S. Rao (Eds.), Innovations in Signal Processing and Embedded Systems. Algorithms for Intelligent Systems (pp. 101-116). Springer, Singapore. doi:10.1007/978-981-19-1669-4_34.</p>
      <p>[17] Ratha, N., Connell, J., &amp; Bolle, R. (2001). Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal, 40(3), 614-634.</p>
      <p>[18] Rinck, M., Primbs, M. A., Verpaalen, A. M., &amp; Bijlstra, G. (2022). Face masks impair facial emotion recognition and induce specific emotion confusions. Cognitive Research: Principles and Implications, 7(1), 83. doi:10.1186/s41235-022-00430-5.</p>
      <p>[19] Royer, J., Blais, C., Charbonneau, I., Déry, K., &amp; Tardif, J. (2018). Greater reliance on the eye region predicts better face recognition ability. Cognition, 181, 12-20. doi:10.1016/j.cognition.2018.08.004.</p>
      <p>[20] Stallings, W., &amp; Brown, L. (2022). Computer Security: Principles and Practice (4th ed.). Pearson.</p>
      <p>[21] Tariq, U., Lin, K., Li, Z., Zhou, Z., Wang, Z., Le, V., Huang, T. S., Lv, X., &amp; Han, T. X. (2012). Emotion recognition from an ensemble of features. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(4), 1017-1026.</p>
      <p>[22] Tereikovskyi, I., Hu, Z., Chernyshev, D., Tereikovska, L., Korystin, O., &amp; Tereikovskyi, O. (2022). The method of semantic image segmentation using neural networks. International Journal of Image, Graphics and Signal Processing (IJIGSP), 14(6), 1-14. doi:10.5815/ijigsp.2022.06.01.</p>
      <p>[23] Tereykovska, L., Tereykovskiy, I., Aytkhozhaeva, E., Tynymbayev, S., &amp; Imanbayev, A. (2017). Encoding of neural network model exit signal, that is devoted for distinction of graphical images in biometric authenticate systems. News of the National Academy of Sciences of the Republic of Kazakhstan, Series of Geology and Technical Sciences, 6(426), 217-224.</p>
      <p>[24] Toliupa, S., Kulakov, Y., Tereikovskyi, I., Tereikovskyi, O., Tereikovska, L., &amp; Nakonechnyi, V. (2020). Keyboard Dynamic Analysis by Alexnet Type Neural Network. In IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (pp. 416-420). doi:10.1109/TCSET49122.2020.235466.</p>
      <p>[25] Toliupa, S., Tereikovskiy, I., Dychka, I., Tereikovska, L., &amp; Trush, A. (2019). The Method of Using Production Rules in Neural Network Recognition of Emotions by Facial Geometry. In 3rd International Conference on Advanced Information and Communications Technologies (AICT) (pp. 323-327). doi:10.1109/AIACT.2019.8847847.</p>
      <p>[26] Vinette, C., Gosselin, F., &amp; Schyns, P. (2004). Spatio-temporal dynamics of face recognition in a flash: it’s in the eyes. Cognitive Science, 28, 289-301. doi:10.1016/j.cogsci.2004.01.002.</p>
      <p>[27] Viola, P., &amp; Jones, M. (2005). Fast Multi-view Face Detection. Mitsubishi Electric Research Laboratories Technical Report TR2005-097, 67.</p>
      <p>[28] Viola, P., &amp; Jones, M. J. (2004). Robust real-time face detection. International Journal of Computer Vision, 57(2), 137-154.</p>
      <p>[29] Viswanath Reddy, A., et al. (2021). Facial Emotions over Static Facial Images Using Deep Learning Techniques with Hysterical Interpretation. Journal of Physics: Conference Series, 2089, 1-17. doi:10.1088/1742-6596/2089/1/012014.</p>
      <p>[30] Vysotska, O., Davydenko, A., &amp; Khrystevych, V. (2022). Segmentation of a person's face in a video stream to monitor employees' compliance with safety conditions during work and training. Information Security, 24(2), 94-107. doi:10.18372/24107840.24.16934.</p>
      <p>[31] Zhuravlov, D., &amp; Polshakova, O. (2023). Detection of face spoofing attacks on biometric identification systems. Interdepartmental scientific and technical collection "Adaptive automatic control systems", 1(42), 108-114.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Ali</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thakur</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tappert</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>User authentication and identification using neural network</article-title>
          .
          <source>i-manager's Journal on Pattern Recognition, (2)</source>
          ,
          <fpage>28</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bagitova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tereikovskyi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Babayev</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tereikovska</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tereikovskyi</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Model for processing images of online social networks used to recognize political extremism</article-title>
          .
          <source>Journal of Mathematics, Mechanics and Computer Science</source>
          ,
          <volume>119</volume>
          (
          <issue>3</issue>
          ),
          <fpage>91</fpage>
          -
          <lpage>103</lpage>
          . doi:10.26577/JMMCS2023v119i3a8.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Batista</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Albiero</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellon</surname>
            ,
            <given-names>O. R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>AUMPNet: Simultaneous Action Units Detection and Intensity Estimation on Multipose Facial Images Using a Single Convolutional Neural Network</article-title>
          .
          <source>In 12th IEEE International Conference on Automatic Face &amp; Gesture Recognition</source>
          (pp.
          <fpage>866</fpage>
          -
          <lpage>871</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Callet</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Viard-Gaudin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Barba</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>A Convolutional Neural Network Approach for Objective Video Quality Assessment</article-title>
          .
          <source>IEEE Transactions on Neural Networks</source>
          ,
          <volume>17</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1316</fpage>
          -
          <lpage>1327</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>