<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Formation of the Optimal Plan</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nataliya Boyko</string-name>
          <email>nataliya.i.boyko@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rostyslav Hlynka</string-name>
          <email>hlynka1608@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Profesorska Street 1, Lviv, 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper presents three approaches to data classification and to finding the optimal plan: the quadratic programming problem, the dual problem, and the Support Vector Machine (SVM) method. Linear programming is known to be used for solving resource allocation problems; it is also widely applied to maximizing profit or minimizing cost, inventory management, forming an optimal transportation plan, and so on. An important approach to linear programming problems is the duality principle, which is methodologically related to the theory of systems of dependent inequalities. This aspect explains the concept of duality in linear programming problems with general mathematical rigor.</p>
        <p>COLINS-2021: 5th International Conference on Computational Linguistics and Intelligent Systems, April 22-23, 2021, Kharkiv, Ukraine.</p>
      </abstract>
      <kwd-group>
        <kwd>Data Mining</kwd>
        <kwd>Mathematical Programming</kwd>
        <kwd>Linear Programming</kwd>
        <kwd>Nonlinear Programming</kwd>
        <kwd>Quadratic Programming</kwd>
        <kwd>Problem of the Quadratic Programming</kwd>
        <kwd>Support Vector Machine</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Known methods of transition from a primal problem to a dual one are based on qualitative transformations and are meaningful. Formalizing, and proving the correctness of, an algorithm that constructs the dual of a primal problem given in arbitrary form makes it easy to obtain correct pairs of known dual problems. The relevance of this research stems from the need to simplify the solution of linear programming problems by developing a formal algorithm for transforming a primal problem into a dual linear optimization problem [
        <xref ref-type="bibr" rid="ref1 ref6">1, 6</xref>
        ].
      </p>
      <p>
        Quadratic programming is an area of mathematical programming devoted to the theory and methods of solving problems characterized by a quadratic dependence between variables [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The method remains relevant today, as the use of mathematical models is an important factor in improving company planning. A mathematical representation of the data makes it possible to create and model different options for choosing the optimal solution [
        <xref ref-type="bibr" rid="ref11 ref3 ref9">3, 9, 11</xref>
        ].
      </p>
      <p>
        The paper considers the Support Vector Machine (SVM) method, which learns from examples and is used to classify objects. It has been established that SVM can be successfully used to control complex electromechanical systems: it can make control algorithms adaptive; perform the functions of an observer, an identifier of unknown parameters, or a reference model; and control complex nonlinear objects as well as objects with stochastic parameters [
        <xref ref-type="bibr" rid="ref10 ref17 ref4">4, 10, 17</xref>
        ].
      </p>
      <p>The aim of the work is to solve a dual problem by SVM, to compare it with the primal problem, and to classify the dataset.</p>
      <p>Achieving this goal involves solving specific tasks:
 • determine the problem of the SVM method for the dual problem;
 • compare the dual SVM and the primal one;
 • analyze this method;
 • apply it in practice.</p>
      <p>© 2021 Copyright for this paper by its authors.</p>
      <p>The object of research is to solve a dual problem by the method of SVM.</p>
      <p>The purpose of the study is to apply the problems of linear and nonlinear programming to study
the properties of the studied problems, to determine their advantages and disadvantages.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <p>
        Mathematical programming is an applied mathematical discipline that investigates extrema of functions (maximum or minimum search problems) and develops methods for solving them. Such problems are also called optimization problems [
        <xref ref-type="bibr" rid="ref5 ref7">5, 7</xref>
        ].
      </p>
      <p>Mathematical programming problems can be classified by the type of the objective function and of the system of constraints. As a result, we obtain a division into:
 • Linear programming - the objective function and the constraint functions included in the constraint system are linear (first-order equations).
 • Nonlinear programming - the objective function or at least one of the constraint functions included in the constraint system is nonlinear (higher-order equations).
 • Integer (discrete) programming - at least one variable has an integrality constraint.</p>
      <p>
        Dynamic programming - the parameters of the objective function and/or of the system of constraints change over time, the objective function has an additive/multiplicative form, or the decision-making process itself is multi-step [
        <xref ref-type="bibr" rid="ref12 ref8">8, 12</xref>
        ].
      </p>
      <p>Depending on whether all information about the process is known in advance, the field of mathematical programming is divided into:
 • Stochastic programming - not all information about the process is known in advance: the parameters included in the objective function or in the constraint functions are random, or decisions have to be made under risk.
 • Deterministic programming - all information about the process is known in advance.
Depending on the number of objective functions, problems are divided into:
 • Single-criteria;
 • Multicriteria.</p>
      <p>Optimization problems can also be classified by the properties of the constraint system and, accordingly, of the objective function:
 • Unconditional optimization problems, or problems without restrictions - no restrictions are imposed on the variables.
 • Conditional optimization problems, or constrained problems - the variables are constrained.
 • Optimization problems with incomplete data - the objective function or the system of constraints depends on some parameter p (numerical or vector) whose value is completely undefined at the time of solving the problem.</p>
      <p>The first type includes optimization problems whose task is to minimize or maximize a quadratic function of several variables under linear constraints on these variables.</p>
      <p>
        Quadratic programming problems include a special class of NP problems in which the objective function f(x) is quadratic and concave (or convex) and all constraints are linear [
        <xref ref-type="bibr" rid="ref13 ref15">13, 15</xref>
        ].
      </p>
      <p>Each linear programming problem can be matched with another one that is related in a certain way to the original problem. Such problems are called dual, or conjugate. Joint consideration of a dual pair of problems is very important in the economic analysis of the optimal plan. The correspondence between the original and dual problems consists in building the dual problem on the basis of the first one (either problem of the conjugate pair can be taken as the source). Dual problems are either symmetric or asymmetric.</p>
      <p>Quadratic programming includes the SVM method. It constructs a model in the form of points in space using a binary linear classifier. The model goes through a series of iterations in which new samples are mapped into the given space and assigned to one side of the gap. On the basis of these data, the prediction of whether a sample belongs to a certain category is made.</p>
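      <p>As an illustrative sketch (assuming the scikit-learn library; the toy points are ours, not from the paper), such a binary linear classifier can be trained as follows:</p>

```python
# Minimal sketch of a binary linear SVM classifier (assumes scikit-learn;
# the toy points below are illustrative, not from the paper's dataset).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [1.5, 1.2], [4.0, 4.0], [4.5, 3.8]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6)  # a large C approximates a hard margin
clf.fit(X, y)
print(clf.predict([[1.2, 1.1], [4.2, 4.1]]))  # one point per class
```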
    </sec>
    <sec id="sec-3">
      <title>3. Observation and analysis of the existing methods and means</title>
      <p>Each linear programming problem corresponds to a dual one, formed by certain rules directly from the statement of the primal problem. Comparing the two formulated problems, we conclude that the dual linear programming problem is formed from the primal one by the following rules:
1. Each constraint of the primal problem corresponds to a variable of the dual problem; the number of unknowns of the dual problem equals the number of constraints of the primal problem.
2. Conversely, each variable of the primal problem corresponds to a constraint of the dual problem, and their number determines the number of unknowns of the primal problem.
3. If the objective function of the primal problem is maximized, then the objective function of the dual problem is minimized, and vice versa.
4. In the objective function of the dual problem, the coefficients of the variables are the free terms (right-hand sides) of the system of constraints of the primal problem.
5. The column of free terms of the dual problem consists of the coefficients of the variables in the objective function of the primal problem.
6. The coefficients of the variables in the system of constraints of the primal problem are written as a matrix, which is transposed to determine the coefficients of the constraints of the dual problem.</p>
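      <p>The six rules can be sketched in code for the symmetric dual pair. The helper below is hypothetical and assumes the primal is stated as: maximize c·x subject to A·x ≤ b, x ≥ 0.</p>

```python
# Sketch of rules 1-6 for the symmetric dual pair (hypothetical helper):
# primal:  maximize c·x  subject to A·x ≤ b, x ≥ 0
# dual:    minimize b·y  subject to Aᵀ·y ≥ c, y ≥ 0
import numpy as np

def build_dual(c, A, b):
    A = np.asarray(A)
    # Rule 6: transpose the constraint matrix;
    # rules 4-5: objective coefficients and right-hand sides swap roles;
    # rule 3: max becomes min (handled by whichever solver is used).
    return np.asarray(b), A.T, np.asarray(c)

# Primal: maximize 3x1 + 2x2 s.t. x1 + 2x2 ≤ 4, 3x1 + 4x2 ≤ 6.
obj_dual, A_dual, rhs_dual = build_dual([3, 2], [[1, 2], [3, 4]], [4, 6])
print(A_dual)  # the transposed constraint matrix
```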
      <p>
        As a result of intensive research in the field of machine learning, aimed at improving the quality of
classifiers, a new generation of methods appeared, in particular – SVM. This method refers to
machine learning methods based on vector spatial models, the purpose of which is to find dividing
surfaces between classes as far as possible from all points of the study population (perhaps ignoring
some points such as emissions or noise) [
        <xref ref-type="bibr" rid="ref14 ref16">14, 16</xref>
        ].
      </p>
      <p>If the training set contains two classes of data that admit a linear division, then there is a large number of linear classifiers that can divide the data. It is intuitively clear that the best choice is a dividing surface passing through the middle of the strip separating the two classes. For example, the perceptron finds at least one linear separator, while other methods, such as the naive Bayesian method, find the best linear separator according to a certain criterion. SVM, in particular, assumes that the decision function is completely determined by a subset of the data that affects the position of the delimiter. In vector space, a point can be considered as a vector from the origin. Consider a dataset of n points of the form (x1, y1), ..., (xn, yn), where yᵢ ∈ {−1, 1} identifies the class to which the point xᵢ belongs.</p>
      <p>Each point is a p-dimensional real vector. SVM aims to find the maximum-margin hyperplane that separates the group of points with yᵢ = 1 from those with yᵢ = −1.</p>
      <p>In Equation 1 we write the hyperplane as the set of points satisfying the condition
w · x − b = 0, (1)
where w is the normal vector to the hyperplane. The parameter b / ‖w‖ determines the displacement of the hyperplane from the origin along the normal vector (Figure 1).</p>
      <p>In Equation 2 we define the loss for data that are not linearly separable:
max(0, 1 − yᵢ(w · xᵢ − b)). (2)</p>
      <p>In Equation 3 we minimize the function
(1/n) Σᵢ max(0, 1 − yᵢ(w · xᵢ − b)) + λ‖w‖², (3)
where λ determines the trade-off between the size of the margin and the guarantee that each point lies on the correct side of the margin. Hence, if λ is very small, the second term becomes insignificant, and the function behaves as with a hard margin.</p>
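      <p>The objective of Equation 3 can be evaluated directly; the following sketch uses illustrative data, weights, and λ.</p>

```python
# Direct evaluation of the soft-margin objective (Equation 3); the sample
# points, labels, w, b, and λ below are illustrative only.
import numpy as np

def soft_margin_objective(w, b, X, y, lam):
    # (1/n) Σ max(0, 1 − yᵢ(w·xᵢ − b)) + λ‖w‖²
    hinge = np.maximum(0.0, 1 - y * (X @ w - b))
    return hinge.mean() + lam * np.dot(w, w)

X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([-1.0, 1.0])
w = np.array([0.5, 0.5])
val = soft_margin_objective(w, 2.0, X, y, lam=0.1)
print(round(val, 6))  # hinge terms 0.5 and 0 give 0.25, plus 0.1*0.5 = 0.05
```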
      <p>The calculation of the soft-margin classifier amounts to minimizing the expression of the form (Equation 4):
(1/n) Σᵢ max(0, 1 − yᵢ(w · xᵢ − b)) + λ‖w‖². (4)</p>
      <p>Therefore, in the further study we will consider a classifier with a bounded margin.</p>
    </sec>
    <sec id="sec-4">
      <title>3.1. Primal problem</title>
      <p>Equation 4 presents the minimization of a constrained optimization problem with a differentiable objective function. For each i we introduce a variable eᵢ, the smallest nonnegative number that satisfies
yᵢ(w · xᵢ − b) ≥ 1 − eᵢ and eᵢ ≥ 0.</p>
      <p>Equation 5 presents the optimization problem taking the slack variables into account:
minimize (1/n) Σᵢ eᵢ + λ‖w‖². (5)</p>
      <p>Solving the primal problem through its Lagrangian dual, we obtain a simplified problem (Equation 6):
maximize f(c1, ..., cn) = Σᵢ cᵢ − (1/2) Σᵢ Σⱼ yᵢcᵢ(xᵢ · xⱼ)yⱼcⱼ, (6)
subject to Σᵢ yᵢcᵢ = 0 and 0 ≤ cᵢ ≤ 1/(2nλ).</p>
      <p>The dual is thus the maximization of a quadratic function whose variables satisfy linear constraints. Equations 7-8 determine the variables cᵢ that define the dual problem:
maximize f(c1, ..., cn) = Σᵢ cᵢ − (1/2) Σᵢ Σⱼ yᵢcᵢ(xᵢ · xⱼ)yⱼcⱼ, (7)
w = Σᵢ yᵢcᵢxᵢ. (8)</p>
      <p>Moreover, cᵢ = 0 exactly when the point lies on the correct side of the margin, and 0 &lt; cᵢ &lt; 1/(2nλ) when it lies on the margin boundary. Since w is a linear combination of the support vectors, Equation 9 determines the offset b through a point on the margin boundary:
yᵢ(w · xᵢ − b) = 1 ⇔ b = w · xᵢ − yᵢ. (9)</p>
    </sec>
    <sec id="sec-5">
      <title>3.2. Comparison of problems</title>
      <p>Equation 8 gives the optimal value of w in terms of c. Suppose we have fitted the parameters of our model to the training set and now want to make a prediction at a new point x. We then calculate wᵀx − b and predict y = 1 if and only if this value is greater than zero. But, using Formula 8, this value can also be written as (Equation 10):
wᵀx − b = (Σᵢ yᵢcᵢxᵢ)ᵀx − b = Σᵢ yᵢcᵢ⟨xᵢ, x⟩ − b. (10)</p>
      <p>So, having found the cᵢ, to make a prediction we only have to calculate a value that depends on the inner products between x and the training points. Moreover, we have seen that cᵢ is zero for all points except the support vectors. Thus, many terms in the sum above are zero, and we really only need the inner products between x and the support vectors (of which there are often only a few) to evaluate Formula 10 and make our prediction.</p>
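      <p>The resulting dual-form decision rule can be sketched as follows; the support vectors, coefficients, and offset below are hypothetical.</p>

```python
# Sketch of the dual-form decision function (Equation 10): the prediction
# needs only inner products between x and the support vectors. The support
# vectors, coefficients, and offset b below are hypothetical.
import numpy as np

def dual_decision(x, sv_x, sv_y, sv_c, b):
    # wᵀx − b = Σ yᵢcᵢ⟨xᵢ, x⟩ − b, summed over support vectors only
    return sum(yi * ci * np.dot(xi, x)
               for xi, yi, ci in zip(sv_x, sv_y, sv_c)) - b

sv_x = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
sv_y = [-1.0, 1.0]
sv_c = [0.5, 0.5]
score = dual_decision(np.array([4.0, 4.0]), sv_x, sv_y, sv_c, b=0.0)
print(1 if score > 0 else -1)  # the point falls on the positive side
```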
      <p>By considering the dual form of the optimization problem we gained insight into the structure of the problem and can write the whole algorithm in terms of inner products between input vectors only. This property is important for applying kernels to our classification problem: the resulting algorithm, support vector machines, can learn efficiently in high-dimensional spaces.</p>
      <p>Also, the dual SVM requires fewer kernel evaluations than the primal one. Therefore, it gives a more stable result in less computational time (Table 1).</p>
    </sec>
    <sec id="sec-6">
      <title>4. Experiments</title>
      <p>The study requires solving the dual problem by the support vector method, comparing it with the primal problem, and classifying the dataset.</p>
      <p>Achieving this goal involves solving specific tasks:
 • determine the problem of the support vector method for the dual problem;
 • compare the dual SVM and the primal one;
 • analyze the algorithm of the method;
 • apply it in practice.</p>
      <p>The Iris dataset was chosen for the implementation of the support vector method. This is a well-known dataset used in the area of machine learning.</p>
      <p>Dataset attribute information:
1. Sepal length.
2. Sepal width.
3. Petal length.
4. Petal width.
5. Classes:
 • Iris Setosa;
 • Iris Versicolour;
 • Iris Virginica.</p>
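      <p>A minimal way to load the dataset (assuming the copy bundled with scikit-learn):</p>

```python
# Loading the Iris dataset (assumes the copy bundled with scikit-learn).
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.feature_names)  # sepal length/width and petal length/width, in cm
print(list(iris.target_names))  # the three classes listed above
print(iris.data.shape)  # (150, 4): 150 samples, 4 attributes
```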
      <p>The best way to analyze a large dataset is to visualize it. Data visualization refers to the approaches used to understand data through visual representation.</p>
      <p>The main purpose of visualization is to interpret a large dataset as visual graphics, in order to easily understand complex data relationships and quickly get an impression of the dataset.</p>
      <p>The histogram shown in Figure 2 presents the frequencies in the dataset, which can be used to understand the trend of petal length/width for each plant species.</p>
      <p>From Figure 2 it can be seen that the petals of virginica are much larger than those of the other species, and its sepals mostly are as well; setosa has the smallest sizes and a small range of values, while versicolor is medium-sized.</p>
      <p>The box plot (Figure 3) shows the distribution of the data by quartiles, with the mean values and statistical outliers highlighted. The vertical lines (whiskers) drawn from the rectangles reflect the variability of the values outside the upper and lower quartiles; any point beyond these lines is considered a statistical outlier. The other two species are larger and have longer whiskers, which indicates a larger spread.</p>
      <p>The heatmap (Figure 4) shows the levels of correlation between attributes and their linear dependencies:
 • (-0.09; 0.0), (0.0; 0.09) - linear independence;
 • (-0.3; -0.1), (0.1; 0.3) - low linear dependence;
 • (-0.5; -0.3), (0.3; 0.5) - medium linear dependence;
 • (-1.0; -0.5), (0.5; 1.0) - high linear dependence.</p>
      <p>The values of sepal length depend entirely on the values of sepal width, and vice versa. In the case of the petal, length and width are independent of each other. It is also interesting that petal length depends on the length and width of the sepals.</p>
      <p>In Figure 5 you can see the linear separation between classes.</p>
      <p>Setosa is linearly separable from Versicolor and Virginica, while Versicolor and Virginica are not linearly separable from each other. This means that in the first case SVM with a hard margin can be applied, and in the second case a soft margin is needed.</p>
      <p>From the statistics in Table 2 it can be seen that the sepals are mostly larger than the petals. Also, since the variance and standard deviation characterize the scattering of values around the center of the distribution, it can be concluded that the variation of petal sizes between plant species is much larger than that of the sepals.</p>
      <p>Two arbitrary points from the iris dataset are selected for analysis, and the optimal hyperplane is
calculated.</p>
      <p>Let the first arbitrary point be A(12, 4.8) and the second B(100, 6.3), where x is the index in the dataset and y is the sepal length (Figure 6).</p>
      <p>The point A will be considered negative and B positive, thus yA = −1, yB = 1.</p>
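      <p>Because only two points are involved, the maximum-margin hyperplane is the perpendicular bisector of the segment AB, which gives closed-form values for w and for the (equal) Lagrange multipliers; the sketch below verifies the numbers reported later.</p>

```python
# Verifying the worked example: for two points the maximum-margin hyperplane
# is the perpendicular bisector of the segment AB, which gives closed forms
# for w and for the (equal) Lagrange multipliers.
import numpy as np

A = np.array([12.0, 4.8])   # y_A = −1 (index 12, sepal length 4.8)
B = np.array([100.0, 6.3])  # y_B = +1 (index 100, sepal length 6.3)

d = B - A
w = 2 * d / np.dot(d, d)  # normal vector of the separating hyperplane
a = 2 / np.dot(d, d)      # Lagrange multipliers a1 = a2
print(np.round(w, 8))  # [0.02272067 0.00038728] - matches the program output
print(round(a, 8))     # 0.00025819 - matches the reported coefficients
```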
    </sec>
    <sec id="sec-7">
      <title>Statistical analysis</title>
    </sec>
    <sec id="sec-8">
      <title>5. Results</title>
      <p>The separating hyperplane satisfies w · x − b = 0 (Equation 11), where w is the normal vector to the hyperplane and b / ‖w‖ is the perpendicular distance from the hyperplane to the origin.</p>
      <p>To find w and b, the dual form should be introduced. It contains a quadratic objective function with linear constraints (Equation 12):
Σᵢ aᵢyᵢ = 0 and aᵢ ≥ 0 for all i.</p>
      <p>After data substitution into Equation 12:</p>
      <p>max L_D = Σᵢ aᵢ − (1/2) Σᵢ Σⱼ aᵢaⱼyᵢyⱼ⟨xᵢ, xⱼ⟩, (12)</p>
      <p>max L_D = a1 + a2 − (1/2)(a1a1(−1)(−1)⟨(12, 4.8), (12, 4.8)⟩ + 2a1a2(−1)(+1)⟨(12, 4.8), (100, 6.3)⟩ + a2a2(+1)(+1)⟨(100, 6.3), (100, 6.3)⟩) = a1 + a2 − (1/2)(167.04a1² − 2460.48a1a2 + 10039.69a2²).</p>
      <p>Using the Lagrange function we can solve this problem:
L(x, λ) = x1 + x2 − (1/2)(167.04x1² − 2460.48x1x2 + 10039.69x2²) + λ(x2 − x1).</p>
      <p>Under the condition of the extremum of the Lagrange function, we equate the partial derivatives to zero and solve the resulting system.</p>
      <p>Calculating w and b:
w = Σᵢ aᵢyᵢxᵢ.</p>
      <p>Figure 7 shows the hyperplane that best classifies our data; theoretically, all positive points will be on the left and negative points on the right. The dark blue line indicates the hyperplane itself, and the dotted lines form the margin.</p>
      <p>Comparing the manual result with the SVM of the sklearn library, the program gives the following result:
w = [0.02272067, 0.00038728];</p>
      <sec id="sec-8-3">
        <title>Program output</title>
        <p>Indices of support vectors = [1, 0];
Support vectors = [[100, 6.3], [12, 4.8]];
Number of support vectors for each class = [1, 1];
Coefficients of the support vectors in the decision function = [0.00025819, −0.00025819].</p>
        <p>The values of w and b calculated manually coincide with those obtained by the program, so the calculations are correct. Also, since in this case we have only two points, they act as the support vectors, one for each class. The coefficients of these support vectors in the decision function correspond to the Lagrange multipliers calculated manually.</p>
        <p>Figure 8 shows the results calculated by the program for the entire dataset.</p>
        <p>Figure 8 plots the sepal length y of two linearly separable plant species of the iris dataset, which must be classified programmatically using the SVM algorithm. We see that the data are linearly separable and easy to classify; the problem considered is to find the optimal solution.</p>
        <p>In Figure 9 the two plant species are classified by the hyperplane calculated programmatically. Since one of the classification parameters is the index, and the iris dataset is indexed (setosa - (1; 50), virginica - (50; 100), versicolor - (100; 150)), the support vectors and their number for each class are found correctly. Plants with an index of less than 75 will belong to setosa, and those above it to versicolor. However, it is necessary to keep in mind possible cases where only the sepal_length value is considered.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>6. Conclusion</title>
      <p>The dual support vector method consists in solving the Lagrange problem; with the found Lagrange multipliers it is easy to calculate the normal vector w and draw the separating hyperplane.</p>
      <p>Solving the primal problem, we obtain the optimal w but nothing about the Lagrange multipliers. To classify a point x, we need to explicitly calculate the scalar product wᵀx, which can be very expensive. Solving the dual problem, we obtain the Lagrange multipliers (where aᵢ = 0 for all but a few points, the support vectors). This computation is very efficient if there are few support vectors. Also, with a scalar product that involves only data vectors, the kernel trick can be used for nonlinear problems.</p>
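      <p>A small sketch of the kernel trick on data that no linear separator can classify (assuming scikit-learn; the parameters are illustrative):</p>

```python
# Sketch of the kernel trick on XOR-like data, which no linear separator
# can classify (assumes scikit-learn; parameters are illustrative).
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

clf = SVC(kernel="rbf", gamma=2.0, C=1e3)  # implicit nonlinear feature map
clf.fit(X, y)
print(clf.predict(X))  # all four training points are classified correctly
```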
      <p>The dual SVM is more stable and faster than the primal one in nonlinear problems because it performs fewer kernel evaluations.</p>
      <p>Using the Lagrange function helps to separate the data linearly; the kernel trick is used to separate data that are not linearly separable.</p>
      <p>SVM can be successfully used to control complex electromechanical systems: it can make control algorithms adaptive; perform the functions of an observer, an identifier of unknown parameters, or a reference model; and be used to control complex nonlinear objects.</p>
    </sec>
    <sec id="sec-10">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Estivill-Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Amoeba: Hierarchical clustering based on spatial proximity using Delaunay diagram</article-title>
          ,
          <source>in: 9th Intern. Symp. on spatial data handling</source>
          , Beijing, China,
          <year>2000</year>
          , pp.
          <fpage>26</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Boehm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kailing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kroeger</surname>
          </string-name>
          ,
          <article-title>Density connected clustering with local subspace preferences</article-title>
          ,
          <source>IEEE Computer Society, Proc. of the 4th IEEE Intern. conf. on data mining</source>
          , Los Alamitos,
          <year>2004</year>
          , pp.
          <fpage>27</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boyko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <article-title>Information system of catering selection by using clustering analysis, in: 2018 IEEE Ukraine Student, Young Professional and Women in Engineering Congress (UKRSYW), Kyiv</article-title>
          , Ukraine,
          <year>2018</year>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Harel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Koren</surname>
          </string-name>
          ,
          <article-title>Clustering spatial data using random walks</article-title>
          ,
          <source>in: Proc. of the 7th ACM SIGKDD Intern. conf. on knowledge discovery and data mining</source>
          , San Francisco, California,
          <year>2001</year>
          , pp.
          <fpage>281</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.K.</given-names>
            <surname>Tung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <article-title>Spatial clustering in the presence of obstacles</article-title>
          ,
          <source>in: The 17th Intern. conf. on data engineering (ICDE'01)</source>
          , Heidelberg,
          <year>2001</year>
          , pp.
          <fpage>359</fpage>
          -
          <lpage>367</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boyko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bronetskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <article-title>Application of Artificial Intelligence Algorithms for Image Processing</article-title>
          ,
          <source>in: CEUR Workshop Proceedings of the 8th International Conference on “Mathematics. Information Technologies. Education”, MoMLeT&amp;DS-2019</source>
          , Vol-
          <volume>2386</volume>
          , urn:nbn:de:0074-2386-1, Shatsk, Ukraine, June 2-4,
          <year>2019</year>
          , pp.
          <fpage>194</fpage>
          -
          <lpage>211</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gehrke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Gunopulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Raghava</surname>
          </string-name>
          ,
          <article-title>Automatic sub-space clustering of high dimensional data, Data mining knowledge discovery</article-title>
          , vol.
          <volume>11</volume>
          (
          <issue>1</issue>
          ),
          <year>2005</year>
          , pp.
          <fpage>5</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ankerst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-P.</given-names>
            <surname>Kriegel</surname>
          </string-name>
          ,
          <article-title>Towards an effective cooperation of the user and the computer for classification</article-title>
          ,
          <source>in: Proc. of the 6th ACM SIGKDD Intern. conf. on knowledge discovery and data mining</source>
          , Boston, Massachusetts, USA,
          <year>2000</year>
          , pp.
          <fpage>179</fpage>
          -
          <lpage>188</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Murayama</surname>
          </string-name>
          ,
          <article-title>Testing local spatial autocorrelation using</article-title>
          ,
          <source>Intern. J. of Geogr. Inform. Science</source>
          , vol.
          <volume>14</volume>
          ,
          <year>2000</year>
          , pp.
          <fpage>681</fpage>
          -
          <lpage>692</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Estivill-Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>AMOEBA: Hierarchical clustering based on spatial proximity using Delaunay diagram</article-title>
          ,
          <source>in: 9th Intern. Symp. on spatial data handling</source>
          , Beijing, China,
          <year>2000</year>
          , pp.
          <fpage>26</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.J.</given-names>
            <surname>Peuquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gahegan</surname>
          </string-name>
          ,
          <article-title>ICEAGE: Interactive clustering and exploration of large and high-dimensional geodata</article-title>
          ,
          <source>Geoinformatica</source>
          , vol.
          <volume>7</volume>
          (
          <issue>3</issue>
          ),
          <year>2003</year>
          , pp.
          <fpage>229</fpage>
          -
          <lpage>253</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Boyko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Basystiuk</surname>
          </string-name>
          ,
          <article-title>Comparison of Machine Learning Libraries Performance Used for Machine Translation Based on Recurrent Neural Networks</article-title>
          ,
          <source>in: 2018 IEEE Ukraine Student, Young Professional and Women in Engineering Congress (UKRSYW)</source>
          , Kyiv
          , Ukraine,
          <year>2018</year>
          , pp.
          <fpage>78</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Aggarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Finding generalized projected clusters in high dimensional spaces</article-title>
          ,
          <source>in: ACM SIGMOD Intern. conf. on management of data</source>
          ,
          <year>2000</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Thanki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Borra</surname>
          </string-name>
          ,
          <article-title>Application of Machine Learning Algorithms for Classification and Security of Diagnostic Images</article-title>
          ,
          <source>Machine Learning in Bio-Signal Analysis and Diagnostic Imaging</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D.J.</given-names>
            <surname>Peuquet</surname>
          </string-name>
          ,
          <source>Representations of space and time</source>
          , N. Y.: Guilford Press,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.M.</given-names>
            <surname>Procopiuc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.K.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.M.</given-names>
            <surname>Murali</surname>
          </string-name>
          ,
          <article-title>A Monte Carlo algorithm for fast projective clustering</article-title>
          ,
          <source>in: Intern. conf. on management of data, ACM SIGMOD</source>
          , Madison, Wisconsin, USA,
          <year>2002</year>
          , pp.
          <fpage>418</fpage>
          -
          <lpage>427</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>Chitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maheswari</surname>
          </string-name>
          ,
          <article-title>A Comparative Study of Various Clustering Algorithms in Data Mining</article-title>
          .
          <source>International Journal of Computer Science and Mobile Computing</source>
          , vol.
          <volume>6</volume>
          (
          <issue>8</issue>
          ),
          <year>2017</year>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>115</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>