<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hyperplane Clustering of the Data in the Vector Space of Features Based on Pseudo Inversion Tools</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Iurii Krak</string-name>
          <email>krak@univ.kiev.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hrygorii Kudin</string-name>
          <email>gkudin@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Veda Kasianiuk</string-name>
          <email>veda.kasianiuk@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykola Efremov</string-name>
          <email>nick.yefremov.in@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Kyiv, 64/13, Volodymyrska str., 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
          ;
          <institution>Glushkov Cybernetics Institute</institution>
          ,
          <addr-line>Kyiv, 40, Glushkov ave., 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A method of hyperplane clustering of data in a vector space of characteristic features is proposed, based on results from the perturbation theory of pseudo-inverse and projective matrices and of solutions of systems of linear algebraic equations (SLAE). An algorithm for hyperplane clustering that verifies a given efficiency criterion of the proposed clustering method is developed. An example of applying the method of scaling characteristic features to recognizing the fingerspelling alphabet of sign language is given.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>One of the important problems of classification and clustering of information is minimizing the
dimension of the feature space and choosing criteria for optimal solutions in practical use. Such
problems are effectively solved by the method of multidimensional scaling of empirical data on the
proximity of objects, which determines the dimension of the space of essential characteristics of the
measured objects and constructs the configuration of object points in this space. This space is a
multidimensional scale, similar to the scales commonly used in various applications, in the sense that
the values of the specially generated essential characteristics of the measured objects correspond to
certain positions on the axes of the new space [1]-[5].</p>
      <p>The purpose of this work is the development of mathematical methods for the synthesis of systems
for solving classification and clustering problems based on information about the characteristic
features of objects [6]-[10]. These problems are solved by constructing hyperplanes in a space derived
from the original feature space, using the perturbation theory of pseudo-inverse and projection
matrices and solutions of systems of linear algebraic equations. The paper proposes a method for
synthesizing a piecewise hyperplane cluster that isolates the most effective characteristic features, and
an algorithm for constructing piecewise hyperplane clusters that finds an effective solution to these
problems. The performance and efficiency of the proposed approach are shown on the example of
scaling characteristic features for recognizing the letters of the fingerspelling alphabet of sign
language [8], [9], [11].</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method of synthesis of the hyperplane cluster concepts</title>
      <p>
        The problem of synthesis of a piecewise hyperplane cluster for a training sample of vectors
Ω0 = {x : x(j) ∈ E^m, j = 1,…,n}, where x(1),…,x(n) are vectors from the Euclidean feature space E^m,
is to build a cluster so that the training sample points in this space are located quite close, in the sense
of a given distance criterion, to some set of hyperplanes that are formed according to this sample.
      </p>
      <p>
        Note that in the formulation of this clustering problem, the components of the set of hyperplanes
are not known in advance. Therefore, for the correct construction of piecewise hyperplane clustering
procedures, it is assumed that the vectors x(1),…,x(n) from the feature space E^m can belong to one of
several hyperplanes L(A(k), b(k)), where A(k) ∈ E^{s×m}, b(k) ∈ E^s, k = 1,2,…, for some given
dimension s (s ≤ m). Here A(k) and b(k) are the matrix and vector parameters, respectively, of a fixed
hyperplane L(A(k), b(k)), k = 1,2,… .
      </p>
      <p>The proposed method of piecewise-hyperplane cluster synthesis is based on representing
hyperplanes by means of the set of solutions (pseudo-solutions) of the systems of algebraic
equations</p>
      <p>A(k)x = b(k), (1)</p>
      <p>L(A(k), b(k)) = {x ∈ E^m : x = A⁺(k)b(k) + Z(A(k))z, z ∈ E^m}. (2)</p>
      <p>Here A⁺ is the pseudo-inverse matrix and Z is the projection matrix.</p>
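      <p>As a quick numerical illustration (not part of the original text; the sizes and random data below are arbitrary assumptions), the solution-set representation (2) can be checked with NumPy:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
s, m = 2, 5                      # s equations defining a hyperplane in E^m (assumed sizes)
A = rng.standard_normal((s, m))  # matrix parameter A(k)
b = rng.standard_normal(s)       # vector parameter b(k)

A_pinv = np.linalg.pinv(A)       # pseudo-inverse A+
Z = np.eye(m) - A_pinv @ A       # projection matrix Z(A) = I - A+A

# Every x = A+ b + Z z solves A x = b (A has full row rank here), for any z in E^m
z = rng.standard_normal(m)
x = A_pinv @ b + Z @ z
assert np.allclose(A @ x, b)
```

      <p>Varying z sweeps out the whole hyperplane, which is exactly what representation (2) asserts.</p>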
      <p>Let us give some mathematical results on the inversion (pseudo inversion) of matrices and the
construction of projection matrices, which are important for solving the problem of synthesizing a
piecewise-hyperplane cluster.</p>
      <p>
        Let a matrix A = (a_ij), i = 1,…,m, j = 1,…,n, be given, and let us write down the representations
of this matrix in columns and in rows, respectively, which are important in further studies:
A = (a(1) … a(n)), a(j) ∈ E^m, j = 1,…,n,
      </p>
      <p>A = (a_(1)^T … a_(m)^T)^T, a_(i) ∈ E^n, i = 1,…,m,
where T is the transpose symbol.</p>
      <p>We consider the singular decomposition of an arbitrary matrix A of dimension m × n and
rank r ≤ min(m, n) in the form A = Σ_{i=1}^{r} σ_i u_i v_i^T, where σ_1² ≥ … ≥ σ_r² &gt; 0 are the
non-zero eigenvalues of the matrices AA^T and A^T A; v_i ∈ E^n, i = 1,…,r, is an orthonormal set of
eigenvectors of the matrix A^T A corresponding to the non-zero eigenvalues σ_i²: A^T A v_i = σ_i² v_i,
i = 1,…,r, v_i^T v_j = δ_ij; u_i ∈ E^m, i = 1,…,r, is an orthonormal set of eigenvectors of the matrix
AA^T, which also correspond to the non-zero eigenvalues σ_i²: AA^T u_i = σ_i² u_i, i = 1,…,r,
u_i^T u_j = δ_ij, where δ_ij is the Kronecker symbol.</p>
      <p>Let us give the definition of a pseudo-inverse matrix in the Penrose optimization form
[12]. For a matrix A ∈ E^{m×n}, the pseudo-inverse matrix A⁺ ∈ E^{n×m} is defined by the relation:
A⁺b = arg min_{x ∈ Ω_A(b)} ‖x‖², b ∈ E^m.
Here Ω_A(b) = Arg min_{x ∈ E^n} ‖Ax − b‖².</p>
      <p>Also, using the singular representation of the matrix A ∈ E^{m×n}, the pseudo-inverse matrix
A⁺ ∈ E^{n×m} can be represented as [13]: A⁺ = Σ_{j=1}^{r} v_j u_j^T σ_j^{−1}.</p>
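      <p>The singular-value formula for A⁺ can be checked numerically (a sketch with an arbitrary random matrix, not data from the paper):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))

# Compact SVD: A = sum_i sigma_i * u_i v_i^T
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# A+ = sum over non-zero singular values of v_j u_j^T / sigma_j
A_pinv = sum(np.outer(Vt[j], U[:, j]) / sigma[j]
             for j in range(len(sigma)) if sigma[j] > 1e-12)

# agrees with the library pseudo-inverse
assert np.allclose(A_pinv, np.linalg.pinv(A))
```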
      <p>We will consider matrices, important for practical application, that are defined and calculated
using the matrices A and A⁺:</p>
      <p>1) the projection matrix P(A) = A⁺A = Σ_{i=1}^{r} v_i v_i^T, which is an orthogonal projector onto the
subspace L_{A^T} generated by the row vectors of the matrix A;</p>
      <p>
        2) the projection matrix Z(A) = I_n − P(A), the orthogonal projector onto the subspace orthogonal
to the subspace L_{A^T}; I_n is the identity matrix.
      </p>
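      <p>A small check of the two projectors (sizes are arbitrary assumptions for illustration):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 6))

P = np.linalg.pinv(A) @ A      # P(A): orthogonal projector onto the row space of A
Z = np.eye(6) - P              # Z(A): projector onto the orthogonal complement

# projector identities: idempotent, symmetric, complementary
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
assert np.allclose(Z @ Z, Z) and np.allclose(P @ Z, 0)
# P reproduces the rows of A; Z annihilates them
assert np.allclose(A @ P, A) and np.allclose(A @ Z, 0)
```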
      <p>Then, for given values A, x(j), j = 1,…,n, the optimal value of the vector of the right-hand side of
the system of equations determining the hyperplane is determined from the conditions
b_opt = A x̂ = arg min_{b ∈ E^s} ρ²(x : x(j), j = 1,…,n, L(A, b)), x̂ = (1/n) Σ_{j=1}^{n} x(j).</p>
      <p>From here, the distance ρ(x : x(j), j = 1,…,n, L(A, b)) of the set of points x(j), j = 1,…,n, to the
hyperplane, with the optimal vector b_opt, is calculated by the following formula:
ρ²(x : x(j), j = 1,…,n, L(A, b_opt)) = tr A⁺A X̃X̃^T,
where X̃ = (x̃(1) … x̃(n)), x̃(j) = x(j) − x̂, j = 1,…,n.</p>
      <p>The optimal matrix A_opt ∈ E^{s×m} is defined as a solution of the problem
A_opt = arg min_A ρ²(x : x(j), j = 1,…,n, L(A, b_opt(A))) = (u_{m−s+1}^T … u_m^T)^T.
Wherein tr A⁺_opt A_opt X̃X̃^T = Σ_{j=m−s+1}^{r} σ_j², (u_1,…,u_m)^T (u_1,…,u_m) = I_m.</p>
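      <p>The optimal-hyperplane construction above (center the sample, take the left singular vectors of the centered matrix with the smallest singular values as the rows of A_opt, and set b_opt = A_opt x̂) can be sketched as follows; the data and sizes are illustrative assumptions, not from the paper:</p>

```python
import numpy as np

def fit_hyperplane(X, s):
    """Rows of A_opt are the s left singular vectors of the centered
    data matrix X~ with the smallest singular values."""
    x_hat = X.mean(axis=1, keepdims=True)          # sample mean x^
    U, _, _ = np.linalg.svd(X - x_hat, full_matrices=True)
    A_opt = U[:, -s:].T                            # u_{m-s+1}, ..., u_m as rows
    b_opt = A_opt @ x_hat[:, 0]                    # b_opt = A_opt x^
    return A_opt, b_opt

# illustrative data: points lying exactly on an (m-s)-dimensional affine subspace
rng = np.random.default_rng(4)
m, n, s = 4, 60, 2
X = (rng.standard_normal((m, m - s)) @ rng.standard_normal((m - s, n))
     + rng.standard_normal((m, 1)))
A_opt, b_opt = fit_hyperplane(X, s)
# every sample point satisfies A_opt x = b_opt (zero residual on exact data)
assert np.allclose(A_opt @ X, b_opt[:, None], atol=1e-8)
```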
      <p>
        Using the above results on pseudo-inversion of matrices and calculation of distances, a piecewise
hyperplane clustering method is proposed. The idea of the method is to perform a sequence of steps,
at each of which the parameters of the hyperplanes L(A(k), b(k)), k = 1,2,…, are found. These
hyperplanes are constructed within the framework of fulfilling the requirements of the implemented
hyperplane clustering efficiency criterion. As the initial assumption of piecewise hyperplane
clustering, it is taken that all the training set vectors x(1),…,x(n) from the feature space E^m are
optimally approximated by the hyperplane L(A_opt(1), b_opt(1)). At these values A_opt(1), b_opt(1),
the efficiency criterion of hyperplane clustering is checked. If the efficiency criterion is satisfied, this
means that the piecewise hyperplane cluster has been constructed with a single hyperplane, and those
characteristic features form the optimal set. If the conditions of the efficiency criterion within this
(first) hyperplane are not met, the transition to the construction of the second hyperplane of the cluster
is carried out. For this purpose, the vectors that cause the non-fulfillment of the efficiency criterion
are excluded from the training sample, that is, a subset Ω1 = {x : x(j1) ∈ E^m, j1 = 1,…,n1} is
formed. Then the optimal approximation of the subset Ω1 by the hyperplane L(A_opt(2), b_opt(2)) is
repeated, and at the new optimal values A_opt(2), b_opt(2) the fulfillment of the clustering efficiency
criterion is checked. When the criterion is met, the construction of the piecewise hyperplane cluster is
complete, and the resulting vectors of characteristic features are added to the optimal set of features.
When it is not met, it is necessary to exclude from the training sample those vectors that cause the
non-fulfillment of the efficiency criterion and repeat the procedure. Note that this finite recurrent
process will always ensure the construction of an optimal piecewise hyperplane cluster.
      </p>
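      <p>A minimal sketch of this recurrent scheme (my illustrative reading, with a simple distance threshold standing in for the paper's efficiency criterion; the data are assumed):</p>

```python
import numpy as np

def piecewise_hyperplane_cluster(X, s, h_min):
    """Repeat: fit an optimal hyperplane to the remaining points, keep the
    points within distance h_min, pass the rest to the next cluster link."""
    links = []
    remaining = X
    while remaining.shape[1] >= 2:
        x_hat = remaining.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(remaining - x_hat, full_matrices=True)
        A = U[:, -s:].T                      # rows: smallest singular directions
        b = A @ x_hat[:, 0]                  # b_opt = A_opt x^
        # rows of A are orthonormal, so ||A x - b|| is the distance to the hyperplane
        dist = np.linalg.norm(A @ remaining - b[:, None], axis=0)
        links.append((A, b))
        far = dist > h_min                   # points violating the distance criterion
        if not far.any() or far.all():
            break                            # criterion met, or no further progress
        remaining = remaining[:, far]
    return links

# points lying exactly on one plane in E^3 -> a single link suffices
rng = np.random.default_rng(5)
X = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 30))
links = piecewise_hyperplane_cluster(X, s=1, h_min=1e-6)
assert len(links) == 1
```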
    </sec>
    <sec id="sec-3">
      <title>3. Algorithm of synthesis of the piecewise hyperplane cluster</title>
      <p>
        Using the proposed method of synthesizing piecewise hyperplane clusters, the algorithm of such a
synthesis can be represented as the following sequence of actions:
1. Formation of a single-link cluster (the number of a link in a cluster is an index in brackets):
1) All vectors of the training sample Ω(0) = {x : x(j) ∈ E^m, j = 1,…,n} from the feature space are
to be optimally approximated by a hyperplane L(A_opt(1), b_opt(1)), which is defined as the set of
solutions (pseudo-solutions) of the systems of algebraic equations (1), (2), where A(k) and b(k) are
respectively the matrix and vector parameters of a certain hyperplane (the basic formulas for
constructing a hyperplane cluster are implemented).
      </p>
      <p>
        2) Form a set Ω(1) = {x : x(j1) ∈ E^m, j1 = 1,…,n1} of points of Ω(0) for which the condition is
satisfied, namely:
(b_opt(1) − A_opt(1)x(j1))^T R(A_opt^T(1)) (b_opt(1) − A_opt(1)x(j1)) &gt; h_min,
where h_min is the allowable distance of vectors from the components of the cluster of hyperplanes.
The linear dependence or independence of the vectors that can be removed from the set makes it
possible to simplify the form of the formulas for the distances of these vectors from the corresponding
hyperplanes.
      </p>
      <p>
        3) The algorithm stops at the stage of the first cluster link when the distance of each of the vectors
of the training sample Ω(0) = {x : x(j) ∈ E^m, j = 1,…,n} to the hyperplane L(A_opt(1), b_opt(1))
does not exceed the allowable distance.
      </p>
      <p>
        4) If the set Ω(1) = {x : x(j1) ∈ E^m, j1 = 1,…,n1} includes at least two vectors, then go on to
form the second link of the cluster.
      </p>
      <p>
        2. Formation of the second cluster link. It consists of the following:
1) From the set Ω(1) = {x : x(j1) ∈ E^m, j1 = 1,…,n1} obtained in the process of building the first
link, a hyperplane L(A_opt(2), b_opt(2)) is constructed, defined as the set of solutions
(pseudo-solutions) of the system of algebraic equations
      </p>
      <p>A_k(2)x = b_k(2), k = 1,2,… . (3)</p>
      <p>
        2) Calculate the optimal A_kopt(2), b_kopt(2) for L(A_k(2), b_k(2)), k = 1,2,…
3) Form sets Ω_0k(2), k = 1,2,…, by the distance of the vector x(j) ∈ Ω_0(2) to each of the
hyperplanes (3):
ρ²(x : x(j), L(A_kopt(2), b_kopt(2))) = (b_kopt(2) − A_kopt(2)x(j))^T R^T(A_kopt(2)) (b_kopt(2) − A_kopt(2)x(j)),
i.e., perform actions similar to paragraph 3 of the construction of the first link.
      </p>
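      <p>The definition of the matrix R did not survive in the text; one reading consistent with the squared distance from a point x to the set {y : Ay = b} is R(A) = (AA^T)⁺, which is an assumption of mine, not a statement of the paper. A sketch under that assumption:</p>

```python
import numpy as np

rng = np.random.default_rng(6)
s, m = 2, 5
A = rng.standard_normal((s, m))
b = rng.standard_normal(s)
x = rng.standard_normal(m)

# assumed reading: R(A) = (A A^T)^+, so that
# rho^2 = (b - A x)^T R(A) (b - A x) is the squared distance to {y : A y = b}
R = np.linalg.pinv(A @ A.T)
r = b - A @ x
rho2 = r @ R @ r

# cross-check: nearest point of the hyperplane via the pseudo-solution
# representation x* = x + A^+(b - A x)
x_star = x + np.linalg.pinv(A) @ r
assert np.allclose(rho2, np.sum((x - x_star) ** 2))
```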
      <p>
        4) Go to step 2 with the new subsets Ω_1^j(1), Ω_2^j(1), j = 1,2,… . The superscript j denotes the
number of the iteration at the second-link stage.
      </p>
      <p>The algorithm stops at the stage of the second cluster link once the distances of each of the
vectors of the corresponding partition element to the corresponding hyperplane are no longer improved.</p>
      <p>The efficiency criterion of the performed hyperplane clustering is then checked (for example, a
required level of compactness of the cluster links). When the criterion is fulfilled, the cluster is
complete; if it is not fulfilled, the transition to the formation of the third cluster link is carried out.
The process then repeats and, since it is finite, a solution to the clustering problem will always be
found.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental studies</title>
      <p>To test the effectiveness of the proposed method of scaling information, we used characteristic
features for recognizing the dactyl letters of the Ukrainian sign language alphabet [6], [7]. 52
characteristic features were taken, divided into 6 groups depending on the way they were obtained.
The construction of piecewise hyperplane clusters was carried out with groups of features that
characterize the geometric-topological parameters of the human hand when showing the letters of the
dactyl alphabet and for which an acceptable recognition quality was obtained [6]. Using the example
of the classification of nine dactylemes (А, Б, В, Г, Ж, І, Є, И, Й) according to three and five
characteristic features, the separation of these dactylemes on the scaling plane was obtained. The
three characteristic features were compactness, directionality, and elongation; in the experiments with
five features they were the ratio of width to height and the values of four angles between vectors
drawn from the center of the hand to its most distant points.</p>
      <p>Experimental results using the three characteristic features are given in Fig. 1, where the image of
the dactylemes' location on the scaling plane is normalized to the interval [0, 1]. Without loss of
generality, dactyleme А was placed at the origin.</p>
      <p>[Figure 1: Location of the dactylemes on the scaling plane for three characteristic features.
Recovered X coordinates: А 0.0000, Б 1.0345, В 0.5065, Г 0.5271, Ж 0.3878, І 0.5353, Є 0.3718,
И 0.5851, Й 0.4585.]</p>
      <p>The results of clustering using five characteristic features are shown in Fig. 2, where dactyleme А
was also placed at the origin.</p>
      <p>The results of the experiments showed that using five characteristic features gives a clearer
separability (the distances of the dactylemes from the scaling plane ranged from 0.1580 to 0.3828),
while with three features the distances from the scaling plane were significantly smaller (ranging
from 0.0306 to 0.1274), that is, three to five times smaller. The exception was dactyleme Б: with five
features (0.0177) and with three (0.0073), its separability from the scaling plane was insignificant in
both cases.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion and conclusions</title>
      <p>
        The paper proposes a method and an algorithm for multidimensional scaling of the characteristic
features of recognition objects using the means of matrix pseudo-inversion, building piecewise
hyperplane clusters that provide a solution of the problem [14]-[18]. The proposed method makes it
possible to analyze information about the set of characteristic features and to identify those that are
essential for solving the recognition problem, which is important given their significant number and
for hard-to-separate classes [19]-[24]. The effectiveness of the proposed approach is shown by the
example of obtaining clusters for the dactylemes of sign language in order to obtain the optimal
number of characteristic features for effective recognition.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>[16] A.V. Barmak, Y.V. Krak, E.A. Manziuk &amp; V.S. Kasianiuk. Information technology of
separating hyperplanes synthesis for linear classifiers. Journal of Automation and Information
Sciences, 51(5) (2019) 54-64. doi: 10.1615/JAutomatinfScien.v51.i5.50.</p>
      <p>[17] D. Oosterlinck, D. F. Benoit &amp; P. Baecke. From one-class to two-class classification by
incorporating expert knowledge: Novelty detection in human behaviour. European Journal of
Operational Research, 282(3) (2020) 1011-1024. https://doi.org/10.1016/j.ejor.2019.10.015.</p>
      <p>[18] D. Abati, A. Porrello, S. Calderara, R. Cucchiara. Latent space autoregression for novelty
detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
(2019) 481-490.</p>
      <p>[19] D. Gong, L. Liu, V. Le, B. Saha, M.R. Mansour, S. Venkatesh, A.v.d. Hengel. Memorizing
normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly
detection. In: Proceedings of the IEEE International Conference on Computer Vision, (2019)
1705-1714.</p>
      <p>[20] C. You, D. P. Robinson, R. Vidal. Provable self-representation based outlier detection in a
union of subspaces. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR
2017, Honolulu, HI, USA, July 21-26, (2017) 4323-4332.</p>
      <p>[21] C.-H. Lai, D. Zou, G. Lerman. Robust subspace recovery layer for unsupervised anomaly
detection. In: Proceedings of the International Conference on Learning Representations, (2020) 1-28.
arXiv:1904.00152.</p>
      <p>[22] Z. Cheng, E. Zhu, S. Wang, P. Zhang &amp; W. Li. Unsupervised Outlier Detection via
Transformation Invariant Autoencoder. IEEE Access, 9, (2021) 43991-44002. doi:
10.1109/ACCESS.2021.3065838.</p>
      <p>[23] H. Wang, M. J. Bah &amp; M. Hammad. Progress in outlier detection techniques: A survey.
IEEE Access, 7, (2019) 107964-108000. doi: 10.1109/ACCESS.2019.2932769.</p>
      <p>[24] L. Ruff, J. R. Kauffmann, R. A. Vandermeulen, G. Montavon, W. Samek, M. Kloft, ... &amp;
K. R. Müller. A unifying review of deep and shallow anomaly detection. Proceedings of the IEEE,
2021. doi: 10.1109/JPROC.2021.3052449.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Devison</surname>
          </string-name>
          .
          <article-title>Multidimensional scaling</article-title>
          .
          <source>Moscow: Finansy I Statistyka</source>
          ,
          <year>1988</year>
          . p.
          <fpage>254</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Perera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Oza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. M.</given-names>
            <surname>Patel</surname>
          </string-name>
          .
          <article-title>One-Class Classification: A Survey</article-title>
          . (
          <year>2021</year>
          ) arXiv preprint arXiv:2101.03064.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. Z.</given-names>
            <surname>Zaheer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Astrid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mahmood</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>Cleaning label noise with clusters for minimally supervised anomaly detection</article-title>
          ,
          <year>2021</year>
          . arXiv preprint arXiv:2104.14770.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W.</given-names>
            <surname>Sultani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>M.</given-names>
            <surname>Shah</surname>
          </string-name>
          .
          <article-title>Real-world anomaly detection in surveillance videos</article-title>
          .
          <source>In Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          , (
          <year>2018</year>
          )
          <fpage>6479</fpage>
          -
          <lpage>6488</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.V.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.I.</given-names>
            <surname>Kudin</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>A.I.</given-names>
            <surname>Kulias</surname>
          </string-name>
          .
          <article-title>Multidimensional Scaling by Means of Pseudoinverse Operations</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>55</volume>
          (
          <issue>1</issue>
          ) (
          <year>2019</year>
          )
          <fpage>22</fpage>
          -
          <lpage>29</lpage>
          . doi: 10.1007/s10559-019-00108-9.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>B.</given-names>
            <surname>Li</surname>
          </string-name>
          .
          <article-title>Outlier Detection Ensemble with Embedded Feature Selection</article-title>
          .
          <source>In Proceedings of the AAAI Conference on Artificial Intelligence</source>
          <volume>34</volume>
          (
          <issue>04</issue>
          ), (
          <year>2020</year>
          )
          <fpage>3503</fpage>
          -
          <lpage>3512</lpage>
          . https://doi.org/10.1609/aaai.v34i04.5755.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. A. N.</given-names>
            <surname>Alexandropoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Kotsiantis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. E.</given-names>
            <surname>Piperigou</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Vrahatis</surname>
          </string-name>
          .
          <article-title>A new ensemble method for outlier identification</article-title>
          .
          <source>In 2020 10th International Conference on Cloud Computing, Data Science &amp; Engineering</source>
          , IEEE, (
          <year>2020</year>
          )
          <fpage>769</fpage>
          -
          <lpage>774</lpage>
          . doi: 10.1109/Confluence47617.2020.9058219.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Iu.G.</given-names>
            <surname>Kryvonos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Iu.V.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.V.</given-names>
            <surname>Barmak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.V.</given-names>
            <surname>Shkilniuk</surname>
          </string-name>
          .
          <article-title>Construction and identification of elements of sign communication</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          .
          <volume>49</volume>
          (
          <issue>2</issue>
          ) (
          <year>2013</year>
          ). p.
          <fpage>163</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Yu.V.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.A.</given-names>
            <surname>Golik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.S.</given-names>
            <surname>Kasianiuk</surname>
          </string-name>
          .
          <article-title>Recognition of dactylemes of Ukrainian sign language based on the geometric characteristics of hand contours defects</article-title>
          .
          <source>Journal of Automation and Information Sciences</source>
          .
          <volume>48</volume>
          (
          <issue>4</issue>
          ) (
          <year>2016</year>
          ). p.
          <fpage>90</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J. T.</given-names>
            <surname>O'Brien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Nelson</surname>
          </string-name>
          .
          <article-title>Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology</article-title>
          .
          <source>Health Security</source>
          ,
          <volume>18</volume>
          (
          <issue>3</issue>
          ), (
          <year>2020</year>
          )
          <fpage>219</fpage>
          -
          <lpage>227</lpage>
          . https://doi.org/10.1089/hs.2019.0122.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>I.V.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.V.</given-names>
            <surname>Barmak</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>S.O.</given-names>
            <surname>Romanyshyn</surname>
          </string-name>
          .
          <article-title>The method of generalized grammar structures for text to gestures computer-aided translation</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>50</volume>
          (
          <issue>1</issue>
          ) (
          <year>2014</year>
          )
          <fpage>116</fpage>
          -
          <lpage>123</lpage>
          . doi: 10.1007/s10559-014-9598-4.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Penrose</surname>
          </string-name>
          .
          <article-title>A generalized inverse for matrices</article-title>
          .
          <source>Proceedings of the Cambridge Philosophical Society</source>
          ,
          <volume>51</volume>
          (
          <year>1955</year>
          ). p.
          <fpage>406</fpage>
          -
          <lpage>413</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ben-Israel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.N.E.</given-names>
            <surname>Greville</surname>
          </string-name>
          .
          <source>Generalized Inverses: Theory and Applications</source>
          . (2nd ed.). Springer-Verlag, New York,
          <year>2003</year>
          . p.
          <fpage>420</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G.</given-names>
            <surname>Markowsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Savenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lysenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nicheporuk</surname>
          </string-name>
          .
          <article-title>The Technique for Metamorphic Viruses' Detection Based on Its Obfuscation Features Analysis</article-title>
          .
          <source>In ICTERI Workshops. CEUR Workshop Proceedings</source>
          , Vol.
          <volume>2104</volume>
          , (
          <year>2018</year>
          ),
          <fpage>680</fpage>
          -
          <lpage>687</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>I.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Barmak</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>E.</given-names>
            <surname>Manziuk</surname>
          </string-name>
          .
          <article-title>Using visual analytics to develop human and machine-centric models: A review of approaches and proposed information technology</article-title>
          ,
          <source>Computational Intelligence</source>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          . https://doi.org/10.1111/coin.12289.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>