 Determining of Parameters in the Construction
   of Recognition Operators in Conditions of
             Features Correlations

                      Shavkat Kh. Fazilov, Nomaz M. Mirzaev,
                    Sobirjon S. Radjabov, and Olimjon N. Mirzaev

    Scientific and Innovation Center of Information and Communication Technologies,
                             Tashkent, Republic of Uzbekistan
                      sh.fazilov@mail.ru,nomazmirza@rambler.ru,
                        s_radjabov@yahoo.com,mirzaevon@mail.ru
                                 http://www.insoft.uz



         Abstract. The problem of constructing an extremal recognition operator
         within the framework of a model of recognition operators based on
         radial functions is considered under conditions of feature correlations.
         To find the optimal values of the parameters, a heuristic approach based
         on the successive application of local procedures for calculating the
         parameter values at each stage is proposed.
         In order to assess the efficiency of the proposed method, an experimental
         study was carried out on a model problem generated from specified
         distribution parameters and on the problem of recognizing a person from
         photo portraits.
         The analysis of the experimental results shows that the proposed method
         of determining parameters in the construction of recognition operators
         under conditions of feature correlations improves recognition accuracy
         in applied problems. Moreover, the constructed extremal model of
         recognition operators significantly reduces the number of computational
         operations when recognizing an unknown object.

         Keywords: Extremal recognition operators · Feature correlation · Pa-
         rameters of recognition models




1      Introduction

Pattern recognition is one of the most intensively developing areas in the field
of computer science. This is due to the fact that the methods and algorithms
for pattern recognition have become increasingly used in science, technology,

production and everyday life in recent years. Therefore, an increasing number of
specialists pay attention to the problem of pattern recognition and the number
of scientific publications on this subject is constantly growing.
    An analysis of the existing literature on pattern recognition shows that the
development of this field has proceeded in two stages. At the first stage, various
technical devices or algorithms were designed for solving specific applied problems,
and the value of the developed methods was determined, first of all, by the
experimental results achieved. At the second stage, a transition was made from
individual algorithms to the construction of models, i.e. families of algorithms
giving a unified description of methods for solving classification problems. At this
stage, the recognition problem was formulated as an optimization problem, which
made it possible to use known optimization methods and stimulated the appearance
of new ones, in particular [1–3].
    Several well-known models of recognition algorithms have been constructed
and studied to date [1, 4–8]: models based on the principle of separation; statistical
models; models based on the potential principle; models based on mathematical
logic; and models based on the calculation of estimates. However, an analysis of
these models shows that they are mainly oriented to problems in which objects
are described in a space of independent (or weakly dependent) features. In practice,
one often encounters applied problems of recognizing objects defined in a feature
space of large dimension, where the assumption of feature independence is
frequently violated [9]. Consequently, the creation of recognition algorithms
applicable to applied problems of diagnosis, forecasting and classification under
conditions of a high-dimensional feature space and correlated features remains
an insufficiently resolved issue.
    In [10], a model of recognition operators based on the evaluation of feature
correlations was described. The main idea of this model is to find correlations
between the features characterizing objects belonging to the same class. We note
that the problem of constructing an extremal recognition operator is not considered
in [10].
    For completeness of presentation, we briefly describe the above-mentioned
model, which is determined by specifying the following steps:
    1) extracting the subsets of strongly correlated features:

$$ G_B = \{T_1, T_2, \ldots, T_{n'}\}; \quad (1) $$

   2) forming the set of representative features:

$$ R_X = \{x_{i_1}, \ldots, x_{i_q}, \ldots, x_{i_{n'}}\}; \quad (2) $$

   3) determining the models of correlations $F_q$ in each subset $T_q$ $(q = \overline{1, n'})$ for
$K_j$ $(j = \overline{1, l})$:

$$ M = \{F_{i_1}, \ldots, F_{i_q}, \ldots, F_{i_{n'}}\}; \quad (3) $$

      4) extracting the set of preferred correlation models:

$$ R_M = \{F_{j_1}, \ldots, F_{j_q}, \ldots, F_{j_{n''}}\}; \quad (4) $$

    5) determining the difference function between the object $S$ and the objects
of the class $K_j$ $(j = \overline{1, l})$;
    6) determining the proximity function between the object $S$ and the objects
of the class $K_j$ $(j = \overline{1, l})$.
    Thus we have defined a model of recognition operators of the potential-function
type based on the evaluation of feature correlations. An arbitrary operator $B$
from this model is completely determined by specifying the set of parameters
$\tilde\pi$ [11]:

$$ \tilde\pi = \left( n', \{\tilde w\}, \{c\}, \{\tilde\omega\}, \{\lambda_i\}, \{\xi_i\}, \{\gamma_u\} \right). \quad (5) $$

    It is known [4] that an arbitrary recognition algorithm $A$ can be represented
as the sequential execution of a recognition operator $B$ and a decision rule $C$:

$$ A = B \cdot C. \quad (6) $$

    It follows from (6) that the problem of finding the optimal algorithm $A$ can
be considered as a search for the optimal recognition operator $B$ for a fixed
decision rule $C(c_1, c_2)$.
    The aim of this paper is to solve the problems related to calculating the
values of the parameters (5) of the model, i.e. to construct extremal recognition
operators based on the potential principle under conditions of feature correlations.
For this purpose, a heuristic approach based on the successive application of local
optimization procedures at each stage is used.
    The formal statement of the problem of determining the parameters $\tilde\pi$ is as
follows.


2      Statement of the Problem
A model of recognition operators based on the potential principle is given. Any
operator $B$ from this model is completely determined by specifying the set of
parameters $\tilde\pi = \left( n', \{\tilde w\}, \{c\}, \{\tilde\omega\}, \{\lambda_i\}, \{\xi_i\}, \{\gamma_u\} \right)$. We denote the set of all
recognition operators from the proposed model by $B(\tilde\pi, S)$. Then the problem of
constructing extremal recognition operators based on the potential principle can
be formulated as the problem of finding the extremal operator $B(\tilde\pi^*, S)$ among
the recognition operators $B(\tilde\pi, S)$. Here $\tilde\pi$ is the vector of configurable parameters
and $\tilde\pi^*$ is the vector of optimal parameters. The recognition quality criterion is
given in the form

$$ \varphi_A(\tilde\pi) = \frac{1}{m} \sum_{j=1}^{m} K\left( \| \tilde\alpha(S_j) - A(\tilde\pi, S_j) \|_B \right), \qquad K(x) = \begin{cases} 0, & x = 0; \\ 1, & \text{otherwise}, \end{cases} \quad (7) $$

where $m$ is the size of the training set and $\|\cdot\|_B$ is the norm of a Boolean vector.
    Then the problem of constructing extremal recognition operators consists in
finding the optimal values of the components of the parameter vector $\tilde\pi$ for the
given model of recognition operators, ensuring the fulfillment of the following
condition:

$$ \tilde\pi^* = \arg\min_{\tilde\pi} \varphi_A(\tilde\pi). \quad (8) $$

   Thus, the problem of determining parameters in the construction of extremal
recognition operators is reduced to the optimization problem (8).
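
   The criterion (7) can be computed over a labeled sample as in the following
minimal Python sketch; the prediction routine and the data layout here are
assumptions made for illustration, not part of the original formulation:

```python
import numpy as np

def phi(predict, objects, labels):
    """Empirical error (7): the fraction of objects S_j whose predicted
    Boolean answer vector differs from the reference vector alpha(S_j)."""
    errors = 0
    for S, alpha in zip(objects, labels):
        # K(x) = 0 iff the answer vectors coincide, 1 otherwise
        errors += int(not np.array_equal(predict(S), alpha))
    return errors / len(objects)
```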


3     Proposed Method

To solve the formulated problem, it is decomposed into finding the optimal values
of the parameters at each stage. After each iteration, the value of the quality
functional (7) is calculated; if it falls below a specified threshold or the number
of iterations exceeds a specified limit, the search procedure stops. Below we
consider the procedures for determining the parameter values at each stage
separately.
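
The overall search can be outlined as follows; this is a hedged sketch in which
the stage routines and the functional `phi` are hypothetical placeholders standing
for the procedures of Sections 3.1–3.6 and criterion (7):

```python
def fit(stages, phi, eps=0.05, max_iter=20):
    """Stage-wise heuristic search: apply each local procedure in turn,
    then check the quality functional (7) against a threshold."""
    params = {}
    for _ in range(max_iter):
        for stage in stages:          # local optimization at each stage
            params.update(stage(params))
        if phi(params) < eps:         # stop when the quality is acceptable
            break
    return params
```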


3.1     The Procedure for Determining Subsets of Strongly Correlated
        Features

Let $T_q$ $(q = \overline{1, n'})$ be subsets of strongly correlated features. The proximity
measure $L(T_p, T_q)$ between the subsets $T_p$ and $T_q$ can be given in various ways,
for example:

$$ L(T_p, T_q) = \frac{1}{(N_p - 1)(N_q - 1)} \sum_{x_i \in T_p} \sum_{x_j \in T_q} \sum_{k=1}^{m} \upsilon_k d_k(x_i, x_j), \quad (9) $$

$$ N_p = \mathrm{card}(T_p), \quad N_q = \mathrm{card}(T_q), $$

where $d_k(x_i, x_j)$ is a proximity measure between the features $x_i$ and $x_j$ over the
$k$-th object.
    The set $G_B = \{T_1, T_2, \ldots, T_{n'}\}$ is determined as follows.
    Step 1. The first step assumes that each subset contains only one element.
In this case we have the following $n$ subsets:

$$ T_1 = \{x_1\},\ T_2 = \{x_2\},\ \ldots,\ T_n = \{x_n\} \qquad (N_1 = N_2 = \ldots = N_n = 1). \quad (10) $$

   We define the initial link matrix $\|L^{1}_{ij}\|$ as $L^{1}_{ij} = b_{ij}$ [12]. Next, we consider
the execution of an arbitrary $u$-th step $(u > 1)$.
   Step u. Suppose that at step $(u-1)$ the $u'$ subsets $T_1, \ldots, T_{u'}$ have been
defined and the link matrix $\|L^{(u-1)}_{ij}\|_{u' \times u'}$ has been constructed, where
$u' = n - u + 1$.
   Then at the $u$-th step the following operations are performed:
   1) combining $T_p$ and $T_q$ into one subset if

$$ L(T_p, T_q) = \max \|L^{(u-1)}_{ij}\|_{u' \times u'}, \quad i, j \in \{1, 2, \ldots, u'\},\ i \neq j; \quad (11) $$

    2) forming a new link matrix of the $u$-th order, $\|L^{u}_{ij}\|$.
    The process of combining features continues until $n'$ subsets ($n'$ is some given
number) are obtained, i.e. until $n'$ subsets of features $T_1, T_2, \ldots, T_{n'}$, within
each of which the features are strongly correlated, are formed.
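
A minimal sketch of this agglomerative grouping, assuming a precomputed
feature-feature link matrix `b` (for instance, absolute pairwise correlations) and
an averaged inter-subset link in place of the exact normalization of (9):

```python
import numpy as np

def group_features(b, n_groups):
    """Merge features into n_groups subsets by repeatedly combining the
    pair of subsets with the strongest average link, cf. (10)-(11)."""
    subsets = [[i] for i in range(b.shape[0])]   # step 1: singletons (10)
    while len(subsets) > n_groups:
        best, pair = -np.inf, None
        for p in range(len(subsets)):
            for q in range(p + 1, len(subsets)):
                # average pairwise link between subsets T_p and T_q
                link = np.mean([b[i, j] for i in subsets[p] for j in subsets[q]])
                if link > best:
                    best, pair = link, (p, q)
        p, q = pair
        subsets[p] += subsets.pop(q)             # merge the closest pair
    return subsets
```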

3.2    The Procedure for Determining a Representative Feature in
       each Subset of Strongly Correlated Features
At this stage, various methods can be used to select uncorrelated (representative)
features from the subsets of strongly correlated features. The main idea is to
select the most "independent" (or weakly dependent) set of features.
    Let $T_q$ $(q = \overline{1, n'})$ be the subsets of strongly correlated features, and let
$N_q$ be the number of elements (the cardinality) of these subsets:

$$ N_q = \mathrm{card}(T_q), \quad q = \overline{1, n'}. \quad (12) $$
     Then the procedure of this stage can be described as follows. At the beginning
it is assumed that $u = 0$.
     Step 1. Selection, as representative features, of the isolated elements of the
subsets $T_q$ that differ sharply from the other features. At this step, the following
actions are performed:
     - the value of $u$ is increased by one and the condition $u > n'$ is checked; if
this condition is met, the algorithm stops;
     - if $N_q = 1$, then the element belonging to the subset $T_q$ is included among
the representative features, and the previous action is repeated.
     Step 2. Selection of a representative feature when a subset of strongly correlated
features contains more than two elements, i.e. under the condition $N_q > 2$. For
all elements of $T_q$, the following sequence of operations is performed:
     - for each element of $T_q$, its proximity to the other elements of the given
subset of features is calculated:

$$ \mu_i = \sum_{j=1}^{i-1} \rho(x_i, x_j) + \sum_{j=i+1}^{N_q} \rho(x_i, x_j), \quad (13) $$

$$ \rho(x_i, x_j) = \sum_{k=1}^{m} \upsilon_k d_k(x_i, x_j); \quad (14) $$
                                                k=1

      - the element of the subset $T_q$ that is closest to the other elements is determined:

$$ \mu_j = \max_{i \in [1, \ldots, N_q]} \mu_i; \quad (15) $$

    - the feature $x_j$ is selected as a representative feature.
    Step 3. When $N_q = 2$, the following actions are performed:
    - for each element of $T_q$, a proximity estimate with respect to the representative
elements of the other subsets selected at the previous stages is calculated:

$$ \mu_{i_\tau} = \sum_{j=1}^{N_0} \rho(x_i, x_j); \quad i_\tau = 1, 2, \ldots, \kappa; \quad \tau \in [1, 2], \quad (16) $$

where $\kappa$ is the number of subsets consisting of two elements, and $N_0$ is the number
of isolated elements together with the elements selected from subsets of cardinality
greater than two;
    - the element of the subset $T_q$ that differs most from the other selected
representative features is determined:

$$ \mu_j = \min_{\tau \in [1, 2]} \mu_{i_\tau}; \quad (17) $$

    - the feature $x_j$ is selected as a representative feature;
    - go to Step 1.
    As a result of this procedure, an $n'$-dimensional space $X$ of features is formed,
each of which is representative of its subset of strongly correlated features:

$$ X = \left( x_{i_1}, \ldots, x_{i_q}, \ldots, x_{i_{n'}} \right). \quad (18) $$
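
As an illustration of Step 2, the following sketch selects the representative of a
subset with more than two elements, assuming the pairwise proximities (14)
have been precomputed in a matrix `rho`:

```python
import numpy as np

def representative(subset, rho):
    """Pick the feature with the largest total proximity (13) to the
    other members of the subset, i.e. criterion (15)."""
    mu = [sum(rho[i, j] for j in subset if j != i) for i in subset]
    return subset[int(np.argmax(mu))]
```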


3.3     The Procedure for Determining the Correlation Models in each
        Subset of Features for a Class

Let $x_{i_q}$ be the representative feature belonging to the subset $T_q$. The correlation
models between the features $x_{i_q}$ and $x_i$ ($x_{i_q} \in T_q$, $x_i \in T_q \setminus \{x_{i_q}\}$) are defined for
each class $K_j$ in the form

$$ x_i = F_j\left( c, x_{i_q} \right), \quad x_i \in T_q \setminus \{x_{i_q}\}, \quad (19) $$

where $c$ is a vector of unknown parameters and $F_j$ is a correlation model belonging
to some given class $\{F\}$. It is assumed that the parametric form and the number
of parameters are known.
    For the sake of simplicity, it is assumed that the set $\{F\}$ consists only of
linear models, that the feature $x_{i_q}$ ($x_{i_q} \in T_q$) is an independent variable, and
that the feature $x_i$ ($x_i \in T_q \setminus \{x_{i_q}\}$) is a dependent variable. Then the
correlation model takes the form



$$ x_i = c_1 x_{i_q} + c_0, \quad (20) $$

where $c_1$, $c_0$ are the unknown parameters of the correlation model.
   To determine the numerical values of these parameters, we use the least
squares method [13].
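
A minimal sketch of this fitting step; `np.polyfit` is used as one standard
least-squares routine, since the concrete implementation of [13] is not fixed by
the paper:

```python
import numpy as np

def fit_correlation_model(x_rep, x_dep):
    """Fit the linear model (20) x_i = c1 * x_iq + c0 by least squares.
    x_rep: values of the representative feature x_iq over class objects;
    x_dep: values of the dependent feature x_i."""
    c1, c0 = np.polyfit(x_rep, x_dep, deg=1)
    return c1, c0
```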


3.4   The Procedure for Identifying Preferred Correlation Models

Let $J_0$ be the training set, $E_1$ the set of objects belonging to the class $K_j$:
$E_1 = J_0 \cap K_j$, and $E_2$ the set of objects not belonging to the class $K_j$:
$E_2 = J_0 \setminus E_1$. We consider the procedure for identifying a preferred correlation
model on the basis of estimating the dominance of the models under consideration
[10]:

$$ T_i = \frac{D_i}{\Delta_i}. \quad (21) $$

Here $D_i$ is the sample error calculated over the objects not belonging to the
subset $\tilde K_j$:

$$ D_i = \frac{1}{\mathrm{card}(E_2)} \sum_{S \in E_2} |\zeta_i(S)|, \quad (22) $$

and $\Delta_i$ is the sample error calculated over the objects belonging to the subset $\tilde K_j$:

$$ \Delta_i = \frac{1}{\mathrm{card}(E_1)} \sum_{S \in E_1} |\zeta_i(S)|, \quad (23) $$

$$ \zeta_i(S) = y_i(S) - F_j\left( c, x_{i_q}(S) \right), \quad (24) $$

where $x_{i_q}(S)$ is the representative feature of the $q$-th subset ($x_{i_q} \in T_q$) for the
object $S$, and $y_i$ is an arbitrary feature of the same object other than the
representative one ($y_i \in T_q \setminus \{x_{i_q}\}$).
    On the basis of formula (21), the dominance estimate is computed for all the
features belonging to the subset $T_q \setminus \{x_{i_q}\}$. As a result, we obtain $(N_q - 1)$
values $T_i$ $(i = \overline{1, N_q - 1})$. The preferred correlation model is then chosen as
follows:

$$ T_q^* = \max\{T_1, \ldots, T_i, \ldots, T_{N_q - 1}\}. \quad (25) $$

    Repeating this procedure for all $n''$ subsets, we obtain the set of preferred
correlation models for the class $K_j$. It is assumed here that $N_q > 1$; otherwise
the number of preferred models decreases by the number of subsets containing
only one element.
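
The selection can be illustrated by the following sketch, assuming the residuals
(24) have already been evaluated for every candidate model on the objects inside
and outside the class:

```python
import numpy as np

def preferred_model(residuals_in, residuals_out):
    """residuals_in / residuals_out: arrays of shape (n_models, n_objects)
    holding |zeta_i(S)| on objects inside / outside the class K_j."""
    D = np.mean(np.abs(residuals_out), axis=1)      # (22)
    Delta = np.mean(np.abs(residuals_in), axis=1)   # (23)
    T = D / Delta                                   # dominance (21)
    return int(np.argmax(T))                        # criterion (25)
```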

3.5     The Procedure for Determining the Difference Function
        between the Object Su and the Objects of the Class Kj

This procedure defines a difference function that quantitatively characterizes the
remoteness of the object $S$ from the source of the potential. In this case, the
source of the potential is given by the correlation model (19) in each subset $R_q$
$(q = \overline{1, n''})$ for each class $K_j$.
    The difference function $d_q(K_j, S)$ between the objects of the class $K_j$ and
the object $S$ in $R_q$ can be defined as follows:

$$ d_q(K_j, S) = \sum_{x_i \in R_q \setminus \{x_{i_q}\}} \left| a_i - F_j(c, a_q) \right|, \quad (26) $$

where $a_i$ and $a_q$ are the values of the $i$-th and $i_q$-th features of the object $S$;
the feature $x_i$ $(i \neq i_q)$ belongs to the subset of strongly correlated features $R_q$,
and $a_q$ is the value of the basic (representative) feature $x_{i_q}$ ($x_{i_q} \in R_q$).
     We now consider the problem of constructing a function that quantitatively
estimates the difference between the objects $S_u$ and $S_v$ in the subspace of strongly
correlated features.
     Let an admissible object be given in the subspace of strongly correlated
features $J_q$ ($J_q = (\chi_{i_q}, \chi_1, \ldots, \chi_{q'})$, $q' = \mathrm{card}(J_q) - 1$):

$$ S = (a_{i_q}, a_1, \ldots, a_{q'}). \quad (27) $$
      The difference between this object and the class $K_j$ is defined as follows:

$$ d(K_j, S) = \sum_{q=1}^{n''} g_q d_q(K_j, S), \quad (28) $$

where $g_q$ is a parameter of the recognition operator $(q = \overline{1, n''})$. The set of these
parameters forms a vector $\tilde g = (g_1, \ldots, g_q, \ldots, g_{n''})$.
    The problem is to determine the values of the unknown parameters $\{g_q\}$
$(q = \overline{1, n''})$ of the difference function (28) over a given set of objects $\tilde S^m$.
    For this problem, we introduce a functional that characterizes the importance
of $d_q(K_j, S)$:

$$ R(\tilde g) = \frac{\sum_{q=1}^{n''} g_q R_{1q}}{\sum_{q=1}^{n''} g_q R_{2q}}, \quad (29) $$

$$ R_{1q} = \sum_{j=1}^{l} \frac{1}{m_j} \sum_{S \in \tilde K_j} d_q(K_j, S), \qquad R_{2q} = \sum_{j=1}^{l} \frac{1}{m - m_j} \sum_{S \in C\tilde K_j} d_q(K_j, S). \quad (30) $$

       Without loss of generality, we introduce restrictions on the coefficients
$(g_1, \ldots, g_q, \ldots, g_{n''})$ in the form

$$ \sum_{q=1}^{n''} g_q = 1. \quad (31) $$

   Taking into account the restrictions imposed on the components of the vector,
we can formulate the problem of determining $\tilde g$ as follows:

$$ \tilde g = \arg\min_{\tilde g} R(\tilde g). \quad (32) $$

   To find the values of the vector $\tilde g$, we construct a Lagrange function of the
form

$$ L(\tilde g, \lambda) = \frac{\sum_{q=1}^{n''} g_q R_{1q}}{\sum_{q=1}^{n''} g_q R_{2q}} - \lambda \left( \sum_{q=1}^{n''} g_q - 1 \right), \quad (33) $$

where $\lambda$ is a Lagrange multiplier.
   Differentiating (33) with respect to $g_i$ $(i = \overline{1, n''})$ and equating the derivatives
to zero, we obtain the following system of equations:

$$ \frac{\partial L}{\partial g_i} = \frac{R_{1i} - R_{2i}}{\left( \sum_{i=1}^{n''} g_i R_{2i} \right)^2} + \lambda g_i. \quad (34) $$


      Summing (34) over all $g_i$, we find the value of $\lambda$:

$$ \lambda = \frac{\sum_{i=1}^{n''} \left( R_{1i} - R_{2i} \right)}{\left( \sum_{i=1}^{n''} g_i R_{2i} \right)^2}. \quad (35) $$


      Substituting $\lambda$ into (34), we compute the values $g_i$ $(i = \overline{1, n''})$:

$$ g_i = \frac{R_{1i} - R_{2i}}{\sum_{i=1}^{n''} \left( R_{1i} - R_{2i} \right)}. \quad (36) $$

   Thus, by repeating the calculation process for all features, the difference
function is determined.
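
Given $R_{1q}$ and $R_{2q}$ from (30), the weights follow in closed form from (36); a
minimal sketch, assuming a nonzero denominator:

```python
import numpy as np

def difference_weights(R1, R2):
    """Closed-form weights g_i of the difference function (28),
    normalized so that they sum to one as required by (31)."""
    diff = R1 - R2          # numerators R_{1i} - R_{2i} of (36)
    return diff / diff.sum()
```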

3.6     The Procedure for Determining the Proximity Function
        between the Object Su and the Objects of the Class Kj
Let the proximity function between the objects $S_u$ and $S$ be given as a potential
function of the second type [14]:

$$ f(\xi, d(K_j, S)) = \frac{1}{1 + \xi\, d(K_j, S)}. \quad (37) $$
    The main task in constructing the proximity function between objects on
the basis of a potential function is to determine the value of the smoothing
parameter $\xi$. However, the search for the optimal value of this parameter has
not been studied sufficiently, and in each specific case finding $\xi$ involves elements
of search and creativity [15, 16]. The authors of [15] assert that the resolving
power of the potential function depends on $\xi$: as $\xi$ increases, the potential
function decays rapidly with distance from the objects of the given class, which
creates a "relief" sharply peaked above the "top" of the image of this class. A
decrease in $\xi$ leads not only to smoothing of the peaks but also to leveling of the
"heights" of the images of different classes, which makes recognition difficult and
leads to a large number of ambiguous and erroneous decisions.
    To calculate the smoothing parameter of the potential function, we use an
approach whose idea is borrowed from [16].
    For the sake of simplicity, we introduce some restrictions (i.e. we consider the
problem for only two classes) and the following notation:

$$ \tilde S^m = \tilde K_1 \cup \tilde K_2, \quad \tilde K_1 \cap \tilde K_2 = \emptyset, \quad \tilde K_1 \neq \emptyset, \quad \tilde K_2 \neq \emptyset; \quad (38) $$

$$ d(\tilde K_j, S) = \begin{cases} a_u, & \text{if } P_j(S) = 1; \\ b_u, & \text{if } P_j(S) = 0, \end{cases} \quad (39) $$

where $a_u$ is the measure of the difference between the object $S$ and the objects of
the class $\tilde K_j$ when $S$ belongs to the class $\tilde K_j$ ($a_u > 0$), and $b_u$ is the measure of
the difference between the object $S$ and the objects of the class $\tilde K_j$ when $S$ does
not belong to the class $\tilde K_j$ ($b_u > 0$).
    Then the problem of determining the smoothing parameter is formulated as
follows:

$$ \xi = \arg\max_{\xi \in (0,\, \xi_{\max})} R(\xi), \quad (40) $$

$$ R(\xi) = \left( \frac{1}{M_0} \sum_{u=1}^{M_0} \frac{1}{1 + \xi a_u} + \frac{1}{N_0} \sum_{u=1}^{N_0} \frac{1}{1 + \xi b_u} \right)^2, \quad (41) $$

$$ \xi \in (0, \xi_{\max}), \quad \xi_{\max} = \max(a_{\max}, b_{\max}), \quad (42) $$

$$ a_{\max} = \max_{u \in [1, M_0]} \{a_u\}, \quad b_{\max} = \max_{u \in [1, N_0]} \{b_u\}, \quad (43) $$



             M0 = 0.5(m1 (m1 − 1) + m2 (m2 − 1)), N0 = m1 × m2 ,             (44)


                            m1 = card(K̃j ), m2 = m − m1 .                   (45)

    To solve problem (40), taking into account the unimodality of the function
(41) (as confirmed by experimental studies), we use the Fibonacci search method
[17].
    Thus, with the application of the proposed procedures, the values of all the
parameters of the considered model of recognition operators are determined. To
assess the practical usefulness of the examined heuristic approach, experimental
studies were conducted.
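
A generic Fibonacci search over the interval $(0, \xi_{\max})$ can be sketched as
follows; `R` stands for any unimodal criterion such as (41), and the iteration
count is an illustrative choice:

```python
def fibonacci_search(R, lo, hi, n=30):
    """Locate the maximum of a unimodal function R on [lo, hi]
    by Fibonacci interval reduction [17]."""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    for k in range(n, 1, -1):
        x1 = lo + (hi - lo) * fib[k - 1] / fib[k + 1]
        x2 = lo + (hi - lo) * fib[k] / fib[k + 1]
        # keep the subinterval that still contains the maximum
        if R(x1) < R(x2):
            lo = x1
        else:
            hi = x2
    return 0.5 * (lo + hi)
```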


4     Experiments

An experimental study of the efficiency of the proposed approach in the con-
struction of recognition operators was carried out on the example of solving a
number of problems, in particular, the model problem, the problem of person
identification by the geometrical features of a photo portrait and the problem of
diagnosing cotton diseases from leaf images.
    As test models of recognition operators, the following were chosen: the clas-
sical model of recognition operators of the type of potential functions (B1 ), the
model of recognition operators based on the calculation of estimates (B2 ), and
the model of recognition operators based on the evaluation of the features cor-
relations (B3 ) [10].
    A comparative analysis of the above mentioned models of recognition op-
erators in solving problems was carried out according to the following criteria:
accuracy of recognition of test sample objects; time spent on training; time spent
on recognizing objects from the test sample.
    To compute these criteria while excluding the effect of a fortunate (or
unfortunate) partition of the initial sample $B$ into the sets $B_0$ and $B_k$
($B = B_0 \cup B_k$, where $B_0$ is the training sample and $B_k$ is the test sample),
the cross-validation method was used [18]. The experiments were carried out on
a Pentium IV Dual Core 2.2 GHz computer with 1 GB of RAM.


4.1   Model Problem

The initial data of the recognized objects for the model problem are generated
in the space of the correlated features. The number of classes in this experiment
is equal to two. The volume of the initial sample is 1000 implementations (500
implementations for objects of each class). The number of features in the model
example is 200. The number of subsets of strongly correlated features is 5.

4.2     The Problem of Person Identification

The initial data used for the problem of person identification by the geometric
features of a photo portrait consist of 360 portraits. The number of classes in
this experiment is equal to six. Each photo portrait is characterized by the
corresponding parameters (see Fig. 1): the distance between the center of the
retina of the left eye and the center of the tip of the nose; distance between the
center of the retina of the left eye and the center of the oral opening; distance
between the center of the retina of the right eye and the point of the tip of the
nose; distance between the centers of the retina of the eyes, etc.




          Fig. 1. Examples of some distances between anthropometric points.


    Each class contains 60 different portraits of one person, photographed at
different times but under the same shooting conditions. To extract these
parameters, we used the algorithm for detecting characteristic features of the
face described in [19–21].


4.3     The Problem of Diagnosis of Cotton Diseases

The initial data used in the problem of diagnosing cotton diseases from leaf
images consist of 200 images. The number of classes in this experiment is equal
to two:
    - images of cotton leaves affected by wilt (K1 );
    - images of cotton leaves without wilt (K2 ).
    The number of images in each class is the same and is equal to 100. To extract
the features characterizing the phytosanitary state of cotton by the original
image of the leaves, we used the algorithm described in [22].

5     Results


As mentioned earlier, all tasks were solved using the recognition operators B1 ,
B2 and B3 .




Fig. 2. Indicators of the speed of training (a) and recognition (b) and recognition
accuracy (c).



   Fig. 2a shows the training speed of the recognition models on the training
samples of the problems under consideration, and Fig. 2b shows the speed of
recognition of objects. The results of solving these problems with the use of B1 ,
B2 and B3 on the test samples are shown in Fig. 2c.
    A comparison of these results (see Fig. 2) shows that the model of recognition
operators B3 increased the accuracy of recognition of objects described in a space
of correlated features by 6–10% relative to B1 and B2 . This is because the models
B1 and B2 do not take the feature correlations into account. However, model B3
shows some increase in training time, which requires further investigation.

6   Discussion
The developed procedures are oriented to determining the unknown parameters
within a model of recognition operators that differs from traditional recognition
operators of the potential-function type in that it is based on the evaluation of
feature correlations. It is therefore advisable to use these procedures when there
is some correlation between the features. Naturally, this correlation should be
different for the objects of each class, which makes it possible to describe the
objects of each class with an individual model. If the relationship between the
features is weak, a classical model of recognition operators can be used (for
example, the models considered in [1, 4, 5]). Consequently, the models of
recognition operators considered in [10] are not an alternative to models of
recognition operators of the potential-function type, but complement them.
    When a sufficiently strong correlation is found between the features of all
the objects under consideration, the features carrying duplicate information are
excluded in the process of forming the set of representative features (described
in the first and second stages of specifying the model); this ensures the selection
of features that adequately represent all the features not contained in the resulting
set [10].
    The results of the conducted experimental research show that the proposed
optimization procedures for constructing extremal recognition operators make it
possible to solve pattern recognition problems more accurately under conditions
of feature correlations.


7   Conclusions
Procedures for constructing an extremal recognition operator based on potential
functions under conditions of feature correlations are proposed. These procedures
make it possible to expand the scope of application of recognition operators
based on potential functions.
    The results of solving a number of problems have shown that the proposed
procedures for determining parameters in the construction of an extremal
recognition operator improve accuracy and significantly reduce the number of
computational operations when recognizing an unknown object given in a space
of correlated features.

Acknowledgement. The work was carried out with partial financial support
of the grant BV-M-F4-003 of the State Committee for the Coordination of the
Development of Science and Technology of the Republic of Uzbekistan.


References
 1. Zhuravlev, Yu.I., Ryazanov, V.V., Senko, O.V.: Recognition. Mathematical meth-
    ods. Software system. Practical applications. Fazis, Moscow (2006), (in Russian)

 2. Eremeev, A.V.: Methods of discrete optimization in evolutionary algorithms. In:
    5th All-Russian conference on Optimization Problems and Economic Applications.
    pp. 17–21. Omsk (2012) (in Russian)
 3. Kelmanov, A.V.: On some hard-to-solve problems of cluster analysis. In: 5th All-
    Russian conference on Optimization Problems and Economic Applications. pp.
    33–37. Omsk (2012) (in Russian)
 4. Zhuravlev, Yu.I.: Selected Scientific Works. Magister, Moscow (1998) (in Russian)
 5. Merkov, A.: Pattern Recognition: An Introduction to Statistical Learning Methods.
    Editorial URSS, Moscow (2011) (in Russian)
 6. Murty, M.N., Devi, D.V.S.: Introduction to Pattern Recognition and Machine
    Learning. World Scientific, New Jersey (2015)
 7. Cuevas, E., Zaldivar, D., Perez-Cisneros, M.: Applications of Evolutionary Com-
    putation in Image Processing and Pattern Recognition. Springer, New York (2016)
 8. Pattern Recognition Techniques: Technology and Applications. Edited by Yin, P.-
    Y. lTexLi, New York (2016)
 9. Fazilov, Sh.Kh., Mirzaev, O.N., Radjabov, S.S.: State of the art of the problems
    of pattern recognition. J. Problems of computational and applied mathematics 2,
    99–112 (2015), (in Russian)
10. Fazilov, Sh.Kh., Mirzaev, N.M., Mirzaev, O.N.: Building of recognition operators in
    condition of features correlations. J. Radio Electronics, Computer Science, Control
    1, 58–63 (2016) (in Russian)
11. Mirzaev, N.M., Radjabov, S.S., Jumayev, T.S.: On the parametrization of models
    of recognition algorithms based on the evaluation of the features correlations. J.
    Problems of Informatics and Energetics 2-3, 23–27 (2016) (in Russian)
12. Mirzaev, O.N.: Extraction of subsets of strongly correlated features in the con-
    struction of extreme recognition algorithms. J. Bulletin of Tashkent University of
    Information Technologies 3, 145–151 (2015) (in Russian)
13. Draper, N.R., Smith, H.: Applied Regression Analysis. Wiley, New York (1998)
14. Tou, J.T., Gonzalez, R.C.: Principles of Pattern Recognition. Addison Wesley,
    Reading (1974)
15. Ogibalov, P.M., Mirzadjanzade, A.Kh.: Mechanics of Physical Processes. MSU,
    Moscow (1976) (in Russian)
16. Fazilov, Sh.Kh., Rakhmanov, A.T., Mirzaev, O.N.: Determination of the smoothing
    parameter of a potential function in the construction of recognition algorithms. In:
    12th International conference on Informatics. pp. 412–414. Voronezh (2012), (in
    Russian)
17. Cottle, R.W., Thapa, M.N.: Linear and Nonlinear Optimization. Springer, New
    York (2017)
18. Braga-Neto, U.M., Dougherty, E.R.: Error Estimation for Pattern Recognition.
    Springer, New York (2016)
19. Kuharev, G.A., Kamenskaya, E.I., Matveev, Yu.N., Shegoleva N.L.: Methods
    for Processing and Recognizing Faces in Biometrics. Politexnika, St. Petersburg
    (2013), (in Russian)
20. Starovoitov, V., Samal, D.: A geometric approach to face recognition. In: The
    IEEE EURASIP Workshop on Nonlinear Signal and Image Processing. pp. 210–
    213 (1999)
21. Wang, N., Ke, L., Du, Q., Liang, L.: Face recognition based on geometric features.
    In: International Conference on Computational Science and Engineering Applica-
    tions. pp. 641–647. Sanya (2015)

22. Mirzaev, N.M.: Feature extraction in the problem of diagnosing of plants phyto-
    condition by leaves images. J. Vestnik of Ryazan state radioengineering 3 (41),
    21–25 (2012), (in Russian)