<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hybrid Intelligent System for Recognizing Biometric Personal Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nickolay Rudnichenko</string-name>
          <email>nickolay.rud@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladimir Vychuzhanin</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana Otradskya</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor Petrov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irina Shpinareva</string-name>
          <email>iryna.shpinareva@onu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University Odessa Maritime Academy</institution>
          ,
          <addr-line>Didrichson street 8, Odessa, 65029</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Odessa I. I. Mechnikov National University</institution>
          ,
          <addr-line>Dvoryanskaya 2, Odessa, 65082</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Odessa National Polytechnic University</institution>
          ,
          <addr-line>Shevchenko Avenue 1, Odessa, 65001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The paper presents the results of developing a hybrid intelligent system that recognizes biometric personal image data in order to identify human faces under various noise conditions. Use-case and class diagrams for the intelligent system's software implementation have been developed. An input sample of image data for 500,000 people of various ages was formed and prepared. A set of functional libraries and technologies of the Python programming language, including the TensorFlow and Keras packages, was used for the system's software implementation. Pattern recognition models based on Local Binary Patterns, Eigenfaces, and a hybrid model using a Deep Neural Network built on the convolutional architecture of a multilayer perceptron have been implemented. The created intelligent system can be used in an experimental mode, which allows configuring and setting the parameters of the implemented recognition models, including superimposing noise coefficients on, or removing noise from, the input images used for model training, as well as in an applied mode of preventive notification about the risks of detecting dangerous persons, which is necessary for industrial testing of the system's functionality in real time when data streams are submitted for analysis. Experimental studies of computational speed and person identification accuracy established that, among the developed and researched models, the proposed hybrid artificial neural network model has the highest accuracy. The authors propose further ways of developing and improving the created intelligent system for a wider range of uses.</p>
      </abstract>
      <kwd-group>
        <kwd>pattern recognition theory</kwd>
        <kwd>artificial neural networks</kwd>
        <kwd>machine learning</kwd>
        <kwd>hybrid intelligence systems</kwd>
        <kwd>deep learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Currently, with the development of computer technology, it has become possible to solve
complex problems of pattern recognition theory (PRT) that were previously considered
technically infeasible owing to their computational complexity. With the constant growth of
demand for the collection, processing and analysis of diverse data, including media content, the
partial or complete automation of object identification in images and video streams has become
a pressing task, enabling autonomous detection of anomalous phenomena or wanted subjects.
Such tasks are considered particularly relevant in the field of security, for example, controlling
individuals' access to closed or restricted facilities, operating access control systems at
enterprises, and monitoring potential offenders in shopping centers and public places [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>In the context of a risk-oriented approach to preventively informing security services about
possible risks of terrorist or other negative actions and events, it is expedient to ensure an effective
process of identifying various biometric data.</p>
      <p>
        Among the features used in practice to recognize individual subjects are indirect ones (gait,
clothing, speed of movement), anthropometric ones (height, arm span, stride length) and
personalized ones (face, palm or fingerprints) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        In the modern field of object identification in images, technical solutions are actively
proliferating in both the hardware and software segments. However, not all existing systems
supporting biometric personal data recognition can fully take into account the entire spectrum of
features, owing to noise effects such as false makeup objects on the face, glasses, or scars [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Therefore, an urgent task is the development and use of methods, algorithms and hybrid
intelligent systems (HIS) for collective image recognition that take into account sets of features
and their mutual correlation, obtained by means of devices recording the surrounding world
(photo and video cameras), with the aim of minimizing human involvement in these
processes [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Among existing PRT methods, classical statistical and feature-based approaches are used in
practice, as well as more innovative principles and models of machine learning (ML)
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In the basic formulation of the biometric data recognition task, the entire set of objects is divided
into fragments, each of which represents a certain pattern defined by the set of its individual
manifestations. The recognition process thus in fact reduces to solving an ML-based
classification problem. The method of assigning an element to a pattern is the decision
rule [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        A metric is a measure of recognition quality used to determine the distance between
elements of objects: the smaller the metric value, the greater the degree of similarity between
objects. The efficiency of a recognition system depends on the choice of image representation
form and metric type. The procedure for recognizing biometric data in images reduces to
assigning sets of initial data to one of the possible classes by identifying the most significant
features that unambiguously describe and identify the data [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
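      <p>As a minimal illustration of the metric and decision rule described above (the feature vectors and the choice of a Euclidean metric are hypothetical assumptions, not the system's actual features):</p>

```python
import numpy as np

# Hypothetical reference feature vectors, one per enrolled person (class).
references = {
    "person_a": np.array([0.10, 0.80, 0.30]),
    "person_b": np.array([0.90, 0.20, 0.50]),
}

def classify(features, refs):
    """Decision rule: assign the input to the class whose reference
    vector has the smallest metric value (Euclidean distance here)."""
    distances = {name: float(np.linalg.norm(features - ref))
                 for name, ref in refs.items()}
    best = min(distances, key=distances.get)
    return best, distances[best]

label, dist = classify(np.array([0.15, 0.75, 0.35]), references)
```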
      <p>In this regard, the study of different PRT models and the comparative analysis of their
effectiveness in the recognition task is an urgent and relevant research direction.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Description of Problem and Related Works</title>
      <p>In the classic formulation of the PRT problem, mathematical models are used for human
biometric data recognition, attempting (in contrast to ML approaches) to replace the computational
experiment with conceptual logical reasoning, factor analysis, and mathematical proofs.</p>
      <p>Recently, in practical research [8] on solving PRT problems in biometrics, monochrome
images have been analyzed, which makes it possible to reduce the data size and to analyze images
as functions on a plane.</p>
      <p>Thus, if we consider some set of points on a given plane T, where the function f(x,y) reflects
and describes any of its characteristics (brightness, clarity, transparency, etc.) at each individual
point of the image, then such a function is a formal form of image recording [9].</p>
      <p>The set of all possible functions f(x,y) on a given plane T is an integral model of the set of
images. In this regard, the introduced concept of similarity between images of objects can be
cast as the task of recognizing biometric features that uniquely identify each individual. The
specific form of such a formulation depends significantly on the subsequent stages of image
recognition.</p>
      <p>In order to solve the problem of biometric data recognition, different methods have historically
been developed, including approaches based on neural networks, the Karhunen-Loève
transform, lines of equal intensity, and deformable template matching [10].</p>
      <p>In creating recognition algorithms, a key aspect is the automatic selection of facial
elements (in particular, the eyes, nose and mouth) in various images, after which the obtained
geometric characteristics are applied directly to recognizing the characteristic features of the face.
Descriptions of these approaches typically lack data-driven comparison procedures [11].</p>
      <p>In practice, when solving applied problems, such a typology of PRT methods is often used:
methods based on the principle of separation; approaches based on "potential functions";
evaluation methods (voting); statistical methods; methods based on the apparatus of logic algebra;
methods and models of artificial neural networks [12].</p>
      <p>In most situations, the effectiveness of their individual use is justified only under limited
conditions; the adaptation and hybridization of the approaches used within the framework of
intelligent decision support systems is more promising.</p>
      <p>
        Among the popular methods used by the authors of various scientific and applied literature [
        <xref ref-type="bibr" rid="ref3 ref7">3,
7, 11, 13</xref>
        ] the following should be highlighted: matching of correspondences between individual
elements and comparison against accumulated, representative series of biometric data.
Currently, a number of typical classifications and methods of PRT are used, in particular
approaches based on comparison with a reference object, clustering, and commonality of properties.
      </p>
      <p>1. Comparison with a reference object, also called the enumeration principle: a method in
which each of the classes allocated in the system is matched with a certain set of images.</p>
      <p>2. The clustering principle, which is based on presenting features in the form of datasets
without clearly defined relationships. In this case, the image is represented as a vector in the
feature space X, and each individual class is matched with a set of vectors in this space. On this
basis, the feature space is divided into separate areas corresponding to classes (clusters)
[14]; these areas may overlap with each other.</p>
      <p>This method is effectively used in image processing research and for objects described by
quantitative data.</p>
      <p>3. Commonality of properties: the basis of the method is the use of connections between
the elements of each image. With this approach, recognition follows an algorithm that selects
individual user-specified image properties, on the basis of which they are compared with the
properties of the selected classes.</p>
      <p>A generalizing characteristic here is the image generation algorithm: when an algorithm for
generating new images is used, all classes can be specified through algorithms that generate
structures of a specific type [15].</p>
      <p>There are also a number of more modern algorithms for face recognition, for example, the
Eigenfaces and Local Binary Pattern methods.</p>
      <p>The key idea of the Eigenfaces approach is to use principal component analysis to find
vectors that optimally describe biometric images. With this method, it is possible to detect
variations across a given training sample of images and to describe these variations in a basis of
several orthogonal vectors, called eigenvectors.</p>
      <p>The calculation of the principal components in fact reduces to calculating the eigenvectors
and eigenvalues of the covariance matrix formed on the basis of the image analysis. A set of
eigenvectors created once from the training sample is then used for sequential coding of other
images, each represented as a weighted combination of these vectors [16]. The task of the
algorithm is to represent the image as a sum of basis components (eigenfaces), i.e.

Φ_i = Σ_j w_ij · u_j ,    (1)

where Φ_i is the centered (after subtracting the mean) i-th image of the original sample, and
w_ij and u_j represent the weights and eigenfaces.</p>
      <p>Using a given number of eigenvectors, a compact approximation of the face image is formed,
which is placed in the database as a set of coefficient vectors used to search for correlating
features. The sum of all principal components, each multiplied by its eigenvector, yields the
image reconstruction.</p>
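      <p>A minimal NumPy sketch of the eigenface decomposition and reconstruction described above, assuming a small synthetic sample in place of real face images (sample sizes and the number of retained components are illustrative):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training sample: 20 "images" flattened to 64-dim vectors.
X = rng.normal(size=(20, 64))

mean_face = X.mean(axis=0)
centered = X - mean_face               # the centered images

# Eigenfaces: principal directions of the centered sample, obtained via
# SVD (equivalent to eigenvectors of the covariance matrix).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenfaces = vt[:k]                    # orthonormal basis vectors u_j

# Weights w_ij: projection of each centered image onto the eigenfaces.
weights = centered @ eigenfaces.T      # shape (20, k)

# Approximate reconstruction: weighted sum of eigenfaces plus the mean.
reconstruction = mean_face + weights @ eigenfaces
```

      <p>Storing only the weight vectors (here 10 numbers per image instead of 64) is what makes the database representation compact.</p>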
      <p>The disadvantage of the method is the need to create idealized conditions for illumination,
facial expression, the absence of extraneous objects and abnormal facial features, because
otherwise, the main components will not adequately reflect interclass variations [17].</p>
      <p>The Local Binary Pattern algorithm describes the neighborhood of an image pixel in
binary form.</p>
      <p>The stages of the algorithm include: selecting the radius value and the number of points;
assigning each point its number; calculating the difference in brightness between each of the
peripheral pixels and the central one; checking the sign of the difference; and transforming the
circle of 0s and 1s around the pixel into a binary string. Mathematically, the logic of this
algorithm is as follows:

LBP(x_c, y_c) = Σ_{p=0..P-1} s(i_p - i_c) · 2^p ,    (2)

where s(x) is a step function that returns 1 when x ≥ 0 and 0 otherwise, i_p are the brightness
values of the numbered neighboring pixels, and i_c is the brightness value of the central
pixel [18].</p>
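      <p>The operator in (2) can be sketched for the classic 3x3 neighborhood (8 points, radius 1); a production implementation would additionally handle image borders and arbitrary radii:</p>

```python
import numpy as np

def lbp_code(patch):
    """LBP code of the central pixel of a 3x3 patch: threshold each of
    the 8 neighbors against the center with the step function
    s(x) = 1 if x >= 0 else 0, then weight the bits by powers of two."""
    center = patch[1, 1]
    # Neighbors taken clockwise starting from the top-left corner.
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n - center >= 0 else 0 for n in neighbors]
    return sum(b * 2 ** p for p, b in enumerate(bits))

patch = np.array([[10, 20, 30],
                  [40, 25, 35],
                  [ 5, 25, 50]])
code = lbp_code(patch)
```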
      <p>The disadvantages are ambiguity of identification and the strong impact of noise on the final
images. The results of the analysis of the considered methods and approaches lead to the
conclusion that it is expedient to combine them with ML models within a hybridization
framework, which will increase the overall efficiency and accuracy of pattern recognition,
enabling more prompt and targeted informing of special service personnel about possible
risks of law enforcement violations.</p>
      <p>Thus, the goal of this work is to develop and study the capabilities of a hybrid intelligent
system for recognizing biometric personal data in images based on the use of ML models.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Technics and System Development</title>
      <p>The Python programming language, the PyCharm IDE, and the Django framework were used
to develop the HIS. The NumPy, Pandas, scikit-learn, Apache Mahout, TensorFlow and Keras
libraries are used to build the pattern recognition and data processing models. For data storage,
a database (DB) was developed using MySQL.</p>
      <p>To solve the recognition task, the Local Binary Patterns (LBP) and Eigenfaces (EF) models,
used to identify contours, objects and features in images, as well as a Deep Neural Network
(DNN) of the convolutional type for identifying hidden patterns in images, were selected and
implemented in software to automate biometric data processing and recognition [19].</p>
      <p>Face images and, separately, images of the eye are used as the considered biometric data and
are evaluated as a whole (a total sample of data for analysis is formed).</p>
      <p>The system is designed to work in two modes (fig. 1): research and direct (industrial) use. In
the research mode, the HIS user has the following key options:
● going to and viewing the main page of the system;
● viewing the list of images preset in the DB, with a count of the number of images for each person;
● adding a new person by specifying a name and uploading an image;
● updating the data of an existing person in the DB (name and image);
● choosing one of three algorithms (Local Binary Patterns, Eigenfaces based on the Haar method,
or an artificial neural network based on the model of a multi-connected convolutional network);
● starting the recognition process;
● initializing the retraining of all models on updated data (if new images were added);
● viewing the recognition results (matching images for each of the selected algorithms,
evaluation of metrics) [20].</p>
      <p>Class diagrams of the main logic of image recognition, project models, and project dependency
classes with the Django framework are shown in Fig. 2.</p>
      <p>As we can see, the key logic for overall management of the recognition process in the project
is concentrated in the class face_recognizer.recognizers.Classifier, used by the object classes,
while the classes face_recognizer.recognizers.Eigenfaces, face_recognizer.recognizers.LocalBinaryPattern
and face_recognizer.recognizers.DNN implement the specific functions
and business logic of each corresponding method.
In the research mode, the user, having entered the main page of the system, goes to the
image recognition page by clicking on the corresponding hyperlink.</p>
      <p>On the new page, using the functionality of the recognition module, a new image is
loaded into the application memory from the local file system (the *.jpg format is
supported), after which one or more image recognition methods are selected and the recognition
process, which is timed and logged, is initialized within the project memory.</p>
      <p>After passing through the recognition processes, the values of the metric estimates according
to the selected recognition methods, as well as the corresponding images closest to the input, are
displayed on the new page by using the results display module. After that, clicking on the hyperlink
will take us to the main page of the software application. In the mode of using the system in an
online format, its sequence of work consists of the following stages [21]:
● input data is submitted via a video stream from a web camera or from a mobile device;
● on the basis of the submitted input data, their sequential processing is performed with
a check for the presence of a live object;
● identification of objects in the image is carried out by passing input data to the created
models;
● the image is compared with a standard face (including eyes) available in the database;
● a message about the verification result is displayed during authorization.</p>
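      <p>The online-mode stages listed above can be sketched as a sequential processing loop; every function name here is a hypothetical stub standing in for the system's actual modules, not the project's real API:</p>

```python
def is_live_object(frame):
    # Hypothetical liveness check; a real system would analyze motion,
    # texture or depth cues. Here any non-empty frame passes.
    return bool(frame)

def identify(frame, db):
    # Hypothetical identification step: look up the frame's feature key
    # in the reference DB (stand-in for the LBP/EF/DNN models).
    return db.get(frame.get("features"))

def process_stream(frames, db):
    """Sequential online-mode pipeline: liveness check, identification
    against the database, verification message for each frame."""
    messages = []
    for frame in frames:
        if not is_live_object(frame):
            messages.append("no live object")
            continue
        person = identify(frame, db)
        messages.append(f"verified: {person}" if person else "not recognized")
    return messages

db = {"f1": "person_a"}
msgs = process_stream([{"features": "f1"}, {"features": "zz"}, {}], db)
```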
      <p>In the mode of use, the system can be deployed on the basis of the following working
environment components (fig.3):
● ServerCompany, a server with the installed software solution for processing data coming from
DevicesStaff;
● DataBaseCompany, a separate database that stores data on the activity of identified users;
● DevicesStaff, devices that are used to capture and process incoming stream data.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments and results analysis</title>
      <p>To study the effectiveness and speed of the implemented models within the developed HIS
framework, 10 sets of images of different people, taken from different angles, were downloaded
from free data sources on the Internet [18] and captured via web cameras installed around the
city and accessible through a web interface, for training the created models. The training sample
is 80% and the test sample 20% of the total amount of data. For the DNN model, 8 dense layers
with different numbers of neurons were created, ReLU was selected as the activation function,
the total number of training epochs was 500, and the training error on the test sample was about
15%. The results of the model studies are shown in Table 1. A comparison of the performance of
the pattern recognition methods based on the average metric values is shown in Fig. 4.
In addition, to assess the risk of incorrect classification from the security point of view, a fuzzy
analysis module for the output confidence level Y (Confidence) was introduced, based
on 4 linguistic variables: matches by facial features L{Low, Middle, High}, eye features
W{Low, Middle, High}, defects N{Low, Middle, High} and integral coincidence
C{BelowThreshold, ThresholdEqual, AboveThreshold}, formed using triangular
membership functions; the fuzzy rule base includes 168 entries for different combinations of estimate values.
A fragment of the rule base for assessing the risk of incorrect classification is shown in fig.5.
The 3D surface of the fuzzy risk assessment model is shown in fig.6.
The analysis of the resulting fuzzy risk assessment model reveals a stronger dependence of the
output variable on the noise values and on the accuracy of eye recognition,
as evidenced by the nature of the transitions and recessions on the resulting three-dimensional surface.</p>
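      <p>A sketch of triangular membership functions of the kind used for the linguistic terms above; the breakpoints chosen here for a [0, 1] confidence score are illustrative assumptions, not the paper's actual parameters:</p>

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative terms for a confidence score on [0, 1].
terms = {
    "Low":    lambda x: tri(x, -0.01, 0.0, 0.5),
    "Middle": lambda x: tri(x, 0.0, 0.5, 1.0),
    "High":   lambda x: tri(x, 0.5, 1.0, 1.01),
}

# Degrees of membership of a score of 0.7 in each term.
degrees = {name: f(0.7) for name, f in terms.items()}
```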
      <p>If the total level of hybrid model accuracy is higher than 0.9, which corresponds to a high
degree of confidence in recognizing an unwanted subject based on his biometric data, the system
can notify the security service and send an information request to the given email address.</p>
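      <p>For reference, the DNN configuration described in this section (eight dense layers with different neuron counts, ReLU activation, 500 training epochs) could be assembled in Keras roughly as follows; the layer widths and the 10-class softmax output are illustrative assumptions, since the paper does not state them:</p>

```python
from tensorflow import keras
from tensorflow.keras import layers

# Eight Dense layers with different neuron counts: ReLU on the hidden
# layers, softmax on the (assumed) 10-class output layer.
model = keras.Sequential(
    [layers.Dense(n, activation="relu")
     for n in (1024, 512, 256, 128, 64, 32, 16)]
    + [layers.Dense(10, activation="softmax")]
)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would then run on the prepared 80/20 split, e.g.:
# model.fit(x_train, y_train, epochs=500, validation_split=0.2)
```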
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>Based on the analysis of the obtained results, it should be noted that the hybrid model using the
DNN is the most accurate, although the runtime of this model is significantly higher than that of
the other methods (executed separately), which is caused by the specifics of implementing and
serializing the model to a file, loading it, and training it. The obtained results confirm
the effectiveness of the HIS and of the biometric recognition models implemented within it. The
developed hybrid system can be flexibly extended with new functionality owing to the
modularity of its structure. Its application areas include biometric identification for access to
confidential information, informing about the risks of detecting unwanted subjects, and
educational and demonstration tasks.</p>
      <p>Further ways of developing the project include adding a larger number of object
identification algorithms, data cleaning and preprocessing, and artifact removal and profiling
methods, in order to research and compare how different object configurations perform in
recognizing one another.</p>
      <p>[8] J. Chaki, N. Dey, S. Fuqian, R. Sherratt, Pattern Mining Approaches Used in Sensor-Based Biometric Recognition: A Review, IEEE Sensors Journal, 19, 2019, pp. 3569-3580. DOI: 10.1109/JSEN.2019.2894972</p>
      <p>[9] E. Turki, R. Alabboodi, M. Mahmood, A Proposed Hybrid Biometric Technique for Patterns Distinguishing, Journal of Information Science and Engineering, 36, 2020, pp. 337-345. DOI: 10.6688/JISE.202003_36(2).0012</p>
      <p>[10] O. Castillo, D. Jana, D. Giri, A. Sk, Recent Advances in Intelligent Information Systems and Applied Mathematics, Studies in Computational Intelligence, 863, 2020. DOI: 10.1007/978-3-030-34152-7</p>
      <p>[11] S. Jha, R. Yadava, K. Hayashi, N. Patel, Recognition and sensing of organic compounds using analytical methods, chemical sensors, and pattern recognition approaches, Chemometrics and Intelligent Laboratory Systems, 185, 2018, pp. 18-31. DOI: 10.1016/j.chemolab.2018.12.008</p>
      <p>[12] G. Chowdary, G. Suganya, P. Mariappan, A. Phamila, K. Krishnasamy, Machine Learning and Deep Learning Methods for Building Intelligent Systems, Medicine and Drug Discovery: A Comprehensive Survey, 1, 2021. DOI: 10.48550/arXiv.2107.14037</p>
      <p>[13] M. Xiao, H. Yi, Intelligent grading system based on deep learning, The International Journal of Electrical Engineering &amp; Education, 1, 2021. DOI: 10.1177/0020720920983994</p>
      <p>[14] A. Koubaa, A. Azar, Deep Learning for Unmanned Systems, Studies in Computational Intelligence, 984, 2021. DOI: 10.1007/978-3-030-77939-9</p>
      <p>[15] Y. Liu, L. Ma, J. Zhao, Secure Deep Learning Engineering: A Road Towards Quality Assurance of Intelligent Systems, in: ICFEM 2019: Formal Methods and Software Engineering, 2019, pp. 3-15. DOI: 10.1007/978-3-030-32409-4_1</p>
      <p>[16] M. Aydın, K. Depboylu, N. Erdem, Biometric Data Harvesting: Proposals on Remote Biometric Data Gathering and Measurements in Human Behaviours Scope, in: 4th International Congress on Human Studies, Ankara, 2021</p>
      <p>[17] R. Ullah, H. Hayat, A. Siddiqui, U. Siddiqui, J. Khan, F. Ullah, S. Hassan, L. Hasan, W. Albattah, M. Muhammad, M. Karami, A Real-Time Framework for Human Face Detection and Recognition in CCTV Images, Mathematical Problems in Engineering, 2, 2022. DOI: 10.1155/2022/3276704</p>
      <p>[18] N. Rudnichenko, V. Vychuzhanin, I. Petrov, D. Shibaev, Decision Support System for the Machine Learning Methods Selection in Big Data Mining, in: Proceedings of the Third International Workshop on Computer Modeling and Intelligent Systems (CMIS-2020), CEUR-WS, 2608, 2020, pp. 872-885</p>
      <p>[19] N. Rudnichenko, V. Vychuzhanin, V. Mateichyk, A. Polyvianchuk, Complex Technical System Condition Diagnostics and Prediction Computerization, in: Proceedings of the Third International Workshop on Computer Modeling and Intelligent Systems (CMIS-2020), CEUR-WS, 2608, 2020, pp. 1-15</p>
      <p>[20] I. Petrov, V. Vychuzhanin, N. Rudnichenko, T. Otradskya, Data Mining Information System for Complex Technical Systems Failure Risk Evaluation, in: CMIS-2022 Computer Modeling and Intelligent Systems, CEUR-WS, 3137, 2022, pp. 250-261</p>
      <p>[21] N. Rudnichenko, S. Antoshchuk, V. Vychuzhanin, A. Ben, I. Petrov, Information System for the Intellectual Assessment Customers Text Reviews Tonality Based on Artificial Neural Networks, in: Proceedings of the 9th International Conference "Information Control Systems &amp; Technologies", Odessa, Ukraine, September 24-26, 2020, pp. 371-385</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Castillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Melin</surname>
          </string-name>
          ,
          <article-title>Hybrid Intelligent Systems in Control, Pattern Recognition and Medicine</article-title>
          ,
          <source>Studies in Computational Intelligence</source>
          ,
          <volume>827</volume>
          ,
          <year>2020</year>
          . DOI: 10.1007/978-3-030-34135-0
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <source>Representation Learning and Pattern Recognition in Cognitive Biometrics: A Survey</source>
          ,
          <source>Sensors</source>
          ,
          <volume>22</volume>
          ,
          <year>2022</year>
          . DOI: 10.3390/s22145111
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Manikandan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Thaventhiran</surname>
          </string-name>
          ,
          <source>Pattern Recognition Concepts</source>
          ,
          <source>Machine Learning and Big Data: Concepts</source>
          ,
          <source>Algorithms, Tools and Applications</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>152</lpage>
          . DOI: 10.1002/9781119654834.ch6
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Mangata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nakashama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Muamba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Christian</surname>
          </string-name>
          ,
          <article-title>Implementation of an access control system based on bimodal biometrics with fusion of global decisions: Application to facial recognition and fingerprints</article-title>
          ,
          <source>Journal of Computing Research and Innovation</source>
          ,
          <volume>7</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>43</fpage>
          -
          <lpage>53</lpage>
          . DOI: 10.24191/jcrinn.v7i2.289
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Jaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Latha</surname>
          </string-name>
          ,
          <source>An Analysis of Pattern Recognition and Machine Learning Approaches on Medical Images, Applications of Artificial Intelligence for Smart Technology</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>54</lpage>
          . DOI: 10.4018/978-1-7998-3335-2.ch003
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sayeed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Darwiche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Little</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Darwish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Foote</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Palacio-Lascano</surname>
          </string-name>
          ,
          <article-title>Computer Vision and Machine Learning-Based Gait Pattern Recognition for Flat Fall Prediction</article-title>
          ,
          <source>Sensors</source>
          ,
          <volume>22</volume>
          ,
          <year>2022</year>
          . DOI: 10.3390/s22207960
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bassit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Peeters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kevenaar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Veldhuis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Peter</surname>
          </string-name>
          ,
          <article-title>Fast and Accurate Likelihood Ratio Based Biometric Verification Secure Against Malicious Adversaries</article-title>
          ,
          <source>IEEE Transactions on Information Forensics and Security</source>
          , 2021, pp. 35-54. DOI: 10.1109/TIFS.2021.3122823
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>