Creating Neural Network and Single Solution Human-Based
Metaheuristic Methods of Solving the Traveling Salesman
Problem
Anait Karapetyan 1, Eugene Fedorov 1, Oleksii Smirnov 2
1 Cherkasy State Technological University, 460 Shevchenko ave., Cherkasy, 18006, Ukraine
2 Central Ukrainian National Technical University, 8 Universytetskyi ave., Kropyvnytskyi, 25006, Ukraine


    Abstract
    The problem under review is improving the efficiency of the solution search for the Travelling
    Salesman Problem. We propose a training method for a shortest-path neural network that adjusts
    the weights of the winner neuron and its neighbours based on the calculation of the centre of
    gravity, which allows the learning process to be parallelized, and that uses a dynamic effective
    width of the topological neighbourhood (based on simulated annealing) to calculate the
    topological neighbourhood function. A proposed modification of single-solution human-based
    metaheuristics allows for potential integer solutions and uses a 2-opt permutation to create the
    local search neighbourhood and a 4-opt permutation in the case of a global search. This increases
    the efficiency of the search for the optimal path by decreasing computational complexity and
    increasing the accuracy of solutions.

    Keywords
    Travelling salesman problem, single-solution human-based metaheuristics, self-organizing
    feature map, simulated annealing, dynamic effective width of the topological neighbourhood,
    centre-of-gravity calculation rule, 2-opt and 4-opt permutations.

1. Introduction
     The harsh socio-economic environment in Ukraine is a consequence of the war started by Russia,
which has directly caused increasing destruction of civil infrastructure and, as a result, loss of life and
damage to property. Shortages of human and material resources drive the integration of new information
technologies that make it possible to increase the efficiency of rescue units' performance. One of the
tasks that requires new models and methods is optimizing the time for rescue units to travel to the
location of destruction.
     There is an urgent need to develop methods aimed at solving combinatorial optimization problems
used in intelligent computational systems.
     Optimization methods that find the exact solution have high computational complexity. Random
search methods do not guarantee convergence. Optimization methods that find an approximate solution
through local search have a high probability of hitting a local extremum. In this regard, there is a
problem of insufficient efficiency of optimization methods, which needs to be addressed.
     Metaheuristics (or modern heuristics) and artificial neural networks are used to speed up finding a
quasi-optimal solution to optimization problems and reduce the probability of hitting a local extremum.
     The object of inquiry. The process of finding solutions to optimization problems.


     The subject of inquiry. Methods for finding a quasi-optimal solution based on metaheuristics and
artificial neural networks.
     The paper aims to increase the efficiency of the search for a quasi-optimal solution to the travelling
salesman problem using neural network and metaheuristic methods.
     To achieve this goal, it is necessary to solve the following tasks:
    • Create an optimization method based on an artificial neural network.
    • Create an optimization method based on single-solution human-based metaheuristics.
    • Conduct a quantitative study of the proposed optimization methods.

2. Problem statement
   The problem of increasing the efficiency of the solution search for the travelling salesman problem
based on single-solution human-based metaheuristics is presented as the problem of finding local and
global search operators A_local and A_global, respectively, whose application provides a search for a
solution x* such that F(x*) → min and T → min (where T is the solution search time).
   The problem of increasing the efficiency of the solution search for the travelling salesman problem
based on a neural network is reduced to the problem of finding a vector of parameters W that satisfies
the criterion of adequacy of the quasi-optimal solution search model
   F = (1/P) ∑_{μ=1}^{P} (f(x_μ, W) − d_μ)² → min_W,
i.e. the minimum mean square error (the difference between the model output and the desired output),
where P is the cardinality of the training set, x_μ is the μ-th training input value, and d_μ is the μ-th
training output value.
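   For illustration, a minimal sketch of this adequacy criterion is given below; the function name
mse_criterion, the lambda model, and the sample data are hypothetical and only show how the mean
square error over the P training pairs is evaluated for a candidate parameter vector W.

```python
import numpy as np

def mse_criterion(model, w, inputs, targets):
    """Mean square error (1/P) * sum over mu of (f(x_mu, W) - d_mu)^2 for parameters W."""
    return np.mean([(model(x_mu, w) - d_mu) ** 2 for x_mu, d_mu in zip(inputs, targets)])

# Hypothetical linear model f(x, W) = W . x evaluated on two training pairs
w = np.array([0.5, 0.5])
score = mse_criterion(lambda x, w: w @ x, w,
                      [np.array([1.0, 2.0]), np.array([2.0, 0.0])], [1.4, 1.1])
```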


3. Literature Review
    The advantage of single-solution human-based metaheuristics is the ease of implementation, low
computational complexity (no need to calculate a population of potential solutions), a small number of
adjustable parameters and operators, and no requirement to model the behaviour of objects of different
nature [1-3].
    Existing metaheuristics suffer from one or more of the following disadvantages:
    • there is only an abstract description of the method, or the description of the method is focused
   on solving only a specific problem [4-5];
    • the convergence of the method is not guaranteed [6-7];
    • the influence of the iteration number on the process of finding a solution is not taken into
   account [8-9];
    • the procedure for determining the values of parameters is not automated [10-11];
    • there is no possibility of using non-binary potential solutions [12-13];
    • insufficient accuracy of the method [14-15];
    • there is no possibility of solving constrained optimization problems [16-17].
    This raises the problem of constructing efficient metaheuristic optimization methods.
    Existing neural networks suffer from one or more of the following disadvantages:
    • high probability of the learning and adaptation method hitting a local extremum [18-19];
    • difficulty in determining the structure of the network, since there are no algorithms for
   calculating the number of layers and neurons in each layer for specific applications [20-21];
    • inaccessibility to human understanding of the knowledge accumulated by the network (it is
   impossible to represent the relationship between input and output in the form of rules), since it is
   distributed among all elements of the neural network and is presented in the form of its weight
   coefficients [22];
    • the difficulty of forming a representative sample [23].
    In this regard, the problem arises of constructing effective neural network optimization methods.
4. Self-organizing feature map
     A one-dimensional self-organizing feature map (SOFM) [24-25] is a single-layer non-recurrent
network. The neurons of the input layer correspond to the coordinates of the vertices. Before running
the network, the neurons of the output layer correspond to points of a multidimensional sphere (if the
vertices are given on the plane, these neurons correspond to points of a ring), and after running the
network, they correspond to the points of the obtained suboptimal path. In contrast to the classical
learning method, in this paper the weights of the winning neuron and its neighbours are adjusted not
according to the Kohonen rule but based on the calculation of the centre of gravity, which makes it
possible to parallelize the learning process [26-27]. In addition, unlike the classical learning method,
this paper uses a dynamic effective width of the topological neighbourhood (based on simulated
annealing) to calculate the topological neighbourhood function, which makes it possible to explore the
entire search space at the initial iterations and to make the search directed at the final iterations,
ensuring high search accuracy of this neural network.
    The training method consists of the following steps:
    1. Training iteration number n = 1; initialization of all network weights w_ij(n), i ∈ 1,M, j ∈ 1,N^h,
    so that the point with coordinates (w_1j(n), ..., w_Mj(n)) lies on the surface of an M-dimensional
    sphere of radius R ∈ (0,1], where M is the number of input layer neurons (or the length of the vertex
    coordinate vector) and N^h is the number of neurons in the output layer, usually M ≤ N^h ≤ 3M.
    A set of vertex coordinates {x_μ | x_μ ∈ R^M}, μ ∈ 1,M, is specified, where x_μ is the coordinate
    vector of the μ-th vertex.
    Initial shortest distance d(n) = 0.
    The maximum number of iterations N is set.
    2. Calculation of the distance to all neurons of the network.
    The distance d_μj from the μ-th vertex to each neuron j is determined by the formula
    d_μj = ∑_{i=1}^{M} (x_μi − w_ij(n))², μ ∈ 1,M, j ∈ 1,N^h.
    3. Calculation of the shortest distance and selection of the neuron with the shortest distance.
    The smallest distance is calculated as
    d_μ = min_j d_μj, μ ∈ 1,M, j ∈ 1,N^h,                                            (1)
    and the winning neuron j*_μ, for which the distance d_μj is the shortest, is selected:
    j*_μ = arg min_j d_μj, μ ∈ 1,M, j ∈ 1,N^h.
    4. Setting the weights of the winning neuron j*_μ and its neighbours based on the calculation of the
    centre of mass of the j-th cluster:
    w_ij(n + 1) = ∑_{μ=1}^{M} h_{j,j*_μ}(n) x_μi / ∑_{μ=1}^{M} h_{j,j*_μ}(n), i ∈ 1,M, j ∈ 1,N^h,
    where h_{j,j*}(n) is the topological neighbourhood function, which equals 1 for j = j* and decreases
    as the distance between the neurons j and j* in the topological space increases. For example,
    h_{j,j*}(n) = exp(−ρ²(j, j*) / (2σ²(n))) if ρ(j, j*) < σ(n), and 0 if ρ(j, j*) ≥ σ(n),
    ρ(j, j*) = min{ |j − j*|, N^h − |j − j*| },
    σ(n) is the effective width of the topological neighbourhood (the "diameter" of the topological
    neighbourhood function), which decreases with time. In this paper, it is calculated based on
    simulated annealing:
    σ(n) = σ0 exp(−1 / T(n)),
    T(n) = β^n T(0),
    where σ0 is the initial effective width of the topological neighbourhood, T(0) is the start temperature,
    and β is the cooling rate.
    5. Checking the completion condition of training:
    d(n + 1) = (1/M) ∑_{μ=1}^{M} d_μ.
    If n < N, then n = n + 1, go to step 2.
    6. Calculating the distance to all neurons in the network:
    d_μj = ∑_{i=1}^{M} (x_μi − w_ij(n))², μ ∈ 1,M, j ∈ 1,N^h.
    7. Selecting the neuron with the shortest distance:
    j*_μ = arg min_j d_μj, μ ∈ 1,M, j ∈ 1,N^h.
    The result is a vector of pairs ((1, j*_1), ..., (M, j*_M)).
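   To make the training procedure above concrete, a minimal NumPy sketch is given below. It assumes
planar vertex coordinates and a ring-shaped output layer; the function name train_sofm, the vectorized
form of the centre-of-gravity update, and the default parameter values are illustrative assumptions rather
than the authors' implementation.

```python
import numpy as np

def train_sofm(x, n_out=None, n_iter=200, radius=0.5, sigma0=10.0, t0=1e6, beta=0.94):
    """Sketch of the modified SOFM training: winner/neighbour weights are set by a
    centre-of-gravity rule, and the effective neighbourhood width follows simulated annealing."""
    m = x.shape[0]                          # number of vertices (planar coordinates assumed)
    n_out = n_out or 3 * m                  # output ring size, M <= N^h <= 3M
    angles = np.linspace(0.0, 2.0 * np.pi, n_out, endpoint=False)
    w = x.mean(axis=0) + radius * np.c_[np.cos(angles), np.sin(angles)]   # neurons on a ring

    for n in range(n_iter):
        t = (beta ** n) * t0                # T(n) = beta^n * T(0)
        sigma = sigma0 * np.exp(-1.0 / t)   # sigma(n) = sigma0 * exp(-1 / T(n))
        d = ((x[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)   # squared distances, shape (m, n_out)
        winners = d.argmin(axis=1)                               # winner neuron j* for each vertex
        j = np.arange(n_out)[:, None]
        rho = np.minimum(np.abs(j - winners), n_out - np.abs(j - winners))   # ring distance to winners
        h = np.where(rho < sigma, np.exp(-rho ** 2 / (2.0 * sigma ** 2)), 0.0)
        denom = h.sum(axis=1, keepdims=True)
        updated = denom[:, 0] > 0
        w[updated] = (h @ x)[updated] / denom[updated]           # centre-of-gravity update per neuron

    # Read the tour off the ring: order vertices by the index of their winning neuron
    d = ((x[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)
    return np.argsort(d.argmin(axis=1))

# Example on a random planar instance with 52 vertices
cities = np.random.default_rng(1).random((52, 2))
tour = train_sofm(cities)
```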

5. Iterated local search
   Iterated local search was proposed by Stützle [28]. The algorithm consists of two phases:
perturbation and local search. The use of a perturbation makes it possible to escape local optima during
the local search. Too small a perturbation makes the algorithm greedy. In this paper, the perturbation
consists of the formation of a new solution by a permutation called "4-opt" (the current solution is
randomly divided into 4 parts, which are reordered as 1, 4, 3, 2), and the local search consists of the
formation of a new solution by a permutation called "2-opt" (the elements of the new solution located
between two randomly selected vertices are rearranged in reverse order).
   The method consists of the following steps:
   1. Initialization
    1.1. Setting the maximum number of iterations N1; the maximum number of local search iterations
    N2 within which no new best solution is found; the vertex vector length M.
    1.2. An ordered set of vertices V = {1, ..., M} and the edge weight matrix [d_ij], i, j ∈ 1,M, are
    specified.
    1.3. Specifying the cost function (goal function)
    F(x) = d_{x_M,x_1} + ∑_{i=1}^{M−1} d_{x_i,x_{i+1}} → min_x,
    where d_{x_i,x_{i+1}} is the weight of the edge (x_i, x_{i+1}), x_i, x_{i+1} ∈ V,
    x is the vertex vector.
    1.4. The best vertex vector x* is set by randomly ordering the set V.
    1.5. Local search based on 2-opt
    1.5.1. m = 0
    1.5.2. Two vertices c1 and c2 are randomly selected from the vector x*, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 is satisfied.
    1.5.3. Based on the vector
    x* = (x*_1, ..., x*_{c1−1}, x*_{c1}, ..., x*_{c2}, x*_{c2+1}, ..., x*_M)
    the vector
    x = (x*_1, ..., x*_{c1−1}, x*_{c2}, ..., x*_{c1}, x*_{c2+1}, ..., x*_M)
    is created, i.e. the vertices x*_{c1}, ..., x*_{c2} are rearranged in reverse order.
    1.5.4. If F(x) < F(x*), then x* = x, m = 0, otherwise m = m + 1.
    1.5.5. If m < N2, then go to step 1.5.2.
    2. Iteration number n = 1.
    3. Performing a perturbation (generation of a solution x̑ from the solution x* based on 4-opt)
    3.1. Three positions are randomly selected from the vector x*:
    c1 = 2 + ⌊(M/4) · U(0,1)⌋, c2 = c1 + 1 + ⌊(M/4) · U(0,1)⌋, c3 = c2 + 1 + ⌊(M/4) · U(0,1)⌋,
    where U(0,1) is a function that returns a uniformly distributed random number in the range [0,1].
    3.2. Based on the vector
    x* = (x*_1, ..., x*_{c1−1}, x*_{c1}, ..., x*_{c2−1}, x*_{c2}, ..., x*_{c3−1}, x*_{c3}, ..., x*_M)
    the vector
    x̑ = (x*_1, ..., x*_{c1−1}, x*_{c3}, ..., x*_M, x*_{c2}, ..., x*_{c3−1}, x*_{c1}, ..., x*_{c2−1})
    is created, i.e. the four parts of the current solution are reordered as 1, 4, 3, 2.
    4. Local search based on 2-opt
    4.1. m = 0
    4.2. Two vertices c1 and c2 are randomly selected from the vector x̑, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 is satisfied.
    4.3. Based on the vector
    x̑ = (x̑_1, ..., x̑_{c1−1}, x̑_{c1}, ..., x̑_{c2}, x̑_{c2+1}, ..., x̑_M)
    the vector
    x̆ = (x̑_1, ..., x̑_{c1−1}, x̑_{c2}, ..., x̑_{c1}, x̑_{c2+1}, ..., x̑_M)
    is created, i.e. the vertices x̑_{c1}, ..., x̑_{c2} are rearranged in reverse order.
    4.4. If F(x̆) < F(x̑), then x̑ = x̆, m = 0, otherwise m = m + 1.
    4.5. If m < N2, then go to step 4.2.
    5. If F(x̑) < F(x*), then x* = x̑.
    6. If n < N1, then n = n + 1, go to step 3.
    The result is 𝑥 ∗ .
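   A compact Python sketch of this method is given below. It uses 0-based indexing (unlike the 1-based
steps above) and assumes tours of at least 5 vertices; the function names tour_length,
two_opt_local_search, double_bridge and iterated_local_search, as well as the default parameter values,
are illustrative assumptions rather than the authors' code. The helpers are reused by the sketches in the
following sections.

```python
import random

def tour_length(x, d):
    """Cost function F(x): length of the closed tour x over the distance matrix d."""
    return sum(d[x[i]][x[(i + 1) % len(x)]] for i in range(len(x)))

def two_opt_local_search(x, d, n2=100):
    """Random 2-opt moves, accepting improvements, until n2 moves in a row fail to improve."""
    x, m = x[:], 0
    while m < n2:
        c1, c2 = sorted(random.sample(range(len(x)), 2))
        if not (1 < c2 - c1 < len(x) - 1):
            continue
        y = x[:c1] + x[c1:c2 + 1][::-1] + x[c2 + 1:]   # reverse the segment between c1 and c2
        if tour_length(y, d) < tour_length(x, d):
            x, m = y, 0
        else:
            m += 1
    return x

def double_bridge(x):
    """4-opt "double bridge" perturbation: split the tour into 4 parts and reorder them 1, 4, 3, 2."""
    n = len(x)
    c1 = 1 + random.randint(0, n // 4)
    c2 = c1 + 1 + random.randint(0, n // 4)
    c3 = c2 + 1 + random.randint(0, n // 4)
    return x[:c1] + x[c3:] + x[c2:c3] + x[c1:c2]

def iterated_local_search(d, n1=100, n2=100, seed=0):
    """ILS sketch: local search, then repeated perturbation + local search, keeping the best tour."""
    random.seed(seed)
    m = len(d)
    best = two_opt_local_search(random.sample(range(m), m), d, n2)
    for _ in range(n1):
        cand = two_opt_local_search(double_bridge(best), d, n2)
        if tour_length(cand, d) < tour_length(best, d):
            best = cand
    return best
```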

6. Search with variable neighbourhood
   Variable neighbourhood search was proposed by Mladenović and Hansen [28] and uses a local
search in a neighbourhood of growing size.
   This algorithm is based on three principles:
   • a local minimum for one neighbourhood is not necessarily a local minimum for another
   neighbourhood;
   • the global minimum is a local minimum for all possible neighbourhoods;
   • local minima are relatively close to global minima.
   In this paper, the creation of a neighbourhood and the local search consist of the formation of a new
solution by a permutation called "2-opt".
   The method consists of the following steps:
   1. Initialization
    1.1. Setting the maximum number of iterations N1; the maximum number of local search iterations
    N2 within which no new best solution is found; the maximum neighbourhood size N3; the vertex
    vector length M.
    1.2. An ordered set of vertices V = {1, ..., M} and the edge weight matrix [d_ij], i, j ∈ 1,M, are
    specified.
    1.3. Specifying the cost function (goal function)
    F(x) = d_{x_M,x_1} + ∑_{i=1}^{M−1} d_{x_i,x_{i+1}} → min_x,
    where d_{x_i,x_{i+1}} is the weight of the edge (x_i, x_{i+1}), x_i, x_{i+1} ∈ V, and x is the vertex vector.
    1.4. The best vertex vector x* is set by randomly ordering the set V.
    2. Iteration number n = 0.
    3. Neighbourhood size Z = 1.
    4. Creating a neighbourhood U_{x*} of the solution x* based on 2-opt
    4.1. z = 1, U_{x*} = ∅
    4.2. Two vertices c1 and c2 are randomly selected from the vector x*, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 is satisfied.
    4.3. Based on the vector
    x* = (x*_1, ..., x*_{c1−1}, x*_{c1}, ..., x*_{c2}, x*_{c2+1}, ..., x*_M)
    the vector
    x_z = (x*_1, ..., x*_{c1−1}, x*_{c2}, ..., x*_{c1}, x*_{c2+1}, ..., x*_M)
    is created, i.e. the vertices x*_{c1}, ..., x*_{c2} are rearranged in reverse order.
    4.4. If x_z ∉ U_{x*}, then U_{x*} = U_{x*} ∪ {x_z}, z = z + 1.
    4.5. If z ≤ Z, then go to step 4.2.
   5. Vector 𝑥̑ is randomly selected from the neighbourhood 𝑈𝑥 ∗
   6. Local search based on 2-opt
   6.1. 𝑚 = 0
    6.2. Two vertices c1 and c2 are randomly selected from the vector x̑, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 is satisfied.
    6.3. Based on the vector
    x̑ = (x̑_1, ..., x̑_{c1−1}, x̑_{c1}, ..., x̑_{c2}, x̑_{c2+1}, ..., x̑_M)
    the vector
    x̆ = (x̑_1, ..., x̑_{c1−1}, x̑_{c2}, ..., x̑_{c1}, x̑_{c2+1}, ..., x̑_M)
    is created, i.e. the vertices x̑_{c1}, ..., x̑_{c2} are rearranged in reverse order.
   6.4. If 𝐹(𝑥̆) < 𝐹(𝑥̑ ), then 𝑥̑ = 𝑥̆, 𝑚 = 0, otherwise 𝑚 = 𝑚 + 1
   6.5. If 𝑚 < 𝑁2 , then go to step 6.2.
   7. If 𝐹(𝑥̑ ) < 𝐹(𝑥 ∗ ), then 𝑥 ∗ = 𝑥̑ , 𝑛 = 0, go to step 3, otherwise 𝑛 = 𝑛 + 1
   8. If 𝑍 < 𝑁3 , then 𝑍 = 𝑍 + 1, go to step 4
   9. If 𝑛 < 𝑁1 , then go to step 3
   The result is 𝑥 ∗ .
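   A compact sketch of the variable neighbourhood search loop is shown below. It reuses tour_length,
two_opt_local_search and the random module from the iterated local search sketch above; the helper
two_opt_move and the way the counters are reset on improvement are an illustrative reading of steps
3-9, not the authors' code.

```python
def two_opt_move(x):
    """One random 2-opt move: reverse a segment of the tour x (returned as a new list)."""
    while True:
        c1, c2 = sorted(random.sample(range(len(x)), 2))
        if 1 < c2 - c1 < len(x) - 1:
            return x[:c1] + x[c1:c2 + 1][::-1] + x[c2 + 1:]

def variable_neighbourhood_search(d, n1=100, n2=100, n3=50, seed=0):
    """VNS sketch: shake the best tour with a neighbourhood of growing size Z, apply 2-opt local
    search, and restart from the smallest neighbourhood whenever the best tour improves."""
    random.seed(seed)
    best = random.sample(range(len(d)), len(d))
    n = 0
    while n < n1:
        z = 1
        while z <= n3 and n < n1:
            neighbourhood = [two_opt_move(best) for _ in range(z)]   # Z random 2-opt neighbours
            cand = two_opt_local_search(random.choice(neighbourhood), d, n2)
            if tour_length(cand, d) < tour_length(best, d):
                best, n, z = cand, 0, 1     # improvement: reset counters, shrink the neighbourhood
            else:
                n += 1
                z += 1
    return best
```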

7. Greedy randomized adaptive search
    Greedy randomized adaptive search was proposed by Feo and Resende [29]. The algorithm consists
of two phases - a greedy randomized procedure and a local search. The goal of the algorithm is to
repeatedly generate random greedy solutions and then use a local search to bring these solutions closer
to local optima. In this paper, local search consists of the formation of a new solution by a permutation
called "2-opt".
    The method consists of the following steps:
    1. Initialization
     1.1. Setting the greediness parameter α of the greedy randomized procedure, where α ∈ [0,1]
     (α = 0 corresponds to maximum greediness).
     1.2. Setting the maximum number of iterations N1; the maximum number of local search iterations
     N2 within which no new best solution is found; the vertex vector length M.
     1.3. An ordered set of vertices V = {1, ..., M} and the edge weight matrix [d_ij], i, j ∈ 1,M, are
     specified.
     1.4. Specifying the cost function (goal function)
     F(x) = d_{x_M,x_1} + ∑_{i=1}^{M−1} d_{x_i,x_{i+1}} → min_x,
     where d_{x_i,x_{i+1}} is the weight of the edge (x_i, x_{i+1}), x_i, x_{i+1} ∈ V, and x is the vertex vector.
     1.5. The best vertex vector x* is set by randomly ordering the set V.
   2. Iteration number 𝑛 = 1.
    3. Running the greedy randomized procedure
    3.1. From the set of vertices V, a vertex v^cur is randomly selected. The set of forbidden vertices is
    initialized as V^tabu = {v^cur}, the number of forbidden vertices l = 1, x̑_1 = v^cur.
    3.2. Creating the set of allowed vertices Ṽ = V \ V^tabu, initializing the set of restricted
    candidates V^RCL = ∅, vertex number j = 1.
    3.3. d^min = min_{s ∈ 1,|Ṽ|} d_{v^cur, ṽ_s}, d^max = max_{s ∈ 1,|Ṽ|} d_{v^cur, ṽ_s}.
    3.4. If d_{v^cur, ṽ_j} ≤ d^min + α (d^max − d^min), then V^RCL = V^RCL ∪ {ṽ_j}.
    3.5. If j < |Ṽ|, then j = j + 1, go to step 3.3.
    3.6. From the set V^RCL a vertex v^cur is randomly selected, V^tabu = V^tabu ∪ {v^cur}, x̑_{l+1} = v^cur.
    3.7. If l < M − 1, then l = l + 1, go to step 3.2.
   4. Local search based on 2-opt
   4.1. 𝑚 = 0
    4.2. Two vertices c1 and c2 are randomly selected from the vector x̑, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 is satisfied.
    4.3. Based on the vector
    x̑ = (x̑_1, ..., x̑_{c1−1}, x̑_{c1}, ..., x̑_{c2}, x̑_{c2+1}, ..., x̑_M)
    the vector
    x̆ = (x̑_1, ..., x̑_{c1−1}, x̑_{c2}, ..., x̑_{c1}, x̑_{c2+1}, ..., x̑_M)
    is created, i.e. the vertices x̑_{c1}, ..., x̑_{c2} are rearranged in reverse order.
    4.4. If F(x̆) < F(x̑), then x̑ = x̆, m = 0, otherwise m = m + 1.
    4.5. If m < N2, then go to step 4.2.
   5. If F(x̑ ) < F(x ∗ ), then x ∗ = x̑
   6. If n < N1 , then n = n + 1, go to step 3
   The result is 𝑥 ∗ .
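   A sketch of the greedy randomized construction and the surrounding GRASP loop follows; it reuses
tour_length, two_opt_local_search and random from the iterated local search sketch. The restricted
candidate list threshold d^min + α(d^max − d^min) is the standard GRASP rule assumed here; function
names and default values are illustrative.

```python
def grasp_construct(d, alpha, rng):
    """Greedy randomized construction: repeatedly append a random vertex from the restricted
    candidate list (RCL) of vertices whose distance to the current vertex is near-minimal."""
    m = len(d)
    current = rng.randrange(m)
    tour, allowed = [current], set(range(m)) - {current}
    while allowed:
        dists = {v: d[current][v] for v in allowed}
        d_min, d_max = min(dists.values()), max(dists.values())
        threshold = d_min + alpha * (d_max - d_min)        # alpha = 0 -> purely greedy choice
        rcl = [v for v, dist in dists.items() if dist <= threshold]
        current = rng.choice(rcl)
        tour.append(current)
        allowed.remove(current)
    return tour

def grasp(d, alpha=0.3, n1=100, n2=100, seed=0):
    """GRASP sketch: greedy randomized construction followed by 2-opt local search, N1 times."""
    rng = random.Random(seed)
    best = None
    for _ in range(n1):
        cand = two_opt_local_search(grasp_construct(d, alpha, rng), d, n2)
        if best is None or tour_length(cand, d) < tour_length(best, d):
            best = cand
    return best
```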



8. Guided local search

   Guided local search was proposed by Voudouris and Tsang [29]. The algorithm consists of two
phases - local search and penalty modification. The use of a penalty function that increases the value of
the goal function makes it possible to avoid local optima in local search. In this paper, local search
consists of the formation of a new solution by a permutation called "2-opt".
   The method consists of the following steps:
   1. Initialization
    1.1. Setting the scaling factor of the penalty function λ, with λ > 0 (for large λ the entire search
    space is explored, for small λ the search becomes directed).
    1.2. Setting the maximum number of iterations N1; the maximum number of local search iterations
    N2 within which no new best solution is found; the vertex vector length M.
    1.3. An ordered set of vertices V = {1, ..., M} and the edge weight matrix [d_ij], i, j ∈ 1,M, are
    specified.
    1.4. Specifying the cost function (goal function)
    F(x) = d_{x_M,x_1} + ∑_{i=1}^{M−1} d_{x_i,x_{i+1}} → min_x,
    where d_{x_i,x_{i+1}} is the weight of the edge (x_i, x_{i+1}), x_i, x_{i+1} ∈ V,
    x is the vertex vector.
    1.5. Specifying the penalty function
    F_W(x, w) = w_{x_M,x_1} + ∑_{i=1}^{M−1} w_{x_i,x_{i+1}},
    where w_{x_i,x_{i+1}} is the penalty of the edge (x_i, x_{i+1}), x_i, x_{i+1} ∈ V.
    1.6. Specifying the extended cost function (goal function)
    F_A(x, w) = F(x) + λ · F_W(x, w) → min_x.
    1.7. Setting the utility function
    f_U(x, w, i) = d_{x_i,x_{i+1}} / (1 + w_{x_i,x_{i+1}}) for 1 ≤ i < M, and
    f_U(x, w, i) = d_{x_M,x_1} / (1 + w_{x_M,x_1}) for i = M, i ∈ 1,M.
    1.8. The best vertex vector x* is set by randomly ordering the set V.
    1.9. The current solution x̑ is set, with x̑ = x*.
    1.10. Penalties are initialized: w_ij = 0, i, j ∈ 1,M.
   2. Iteration number 𝑛 = 1.
   3. Local search based on 2-opt
   3.1. 𝑚 = 0
    3.2. Two vertices c1 and c2 are randomly selected from the vector x̑, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 is satisfied.
    3.3. Based on the vector
    x̑ = (x̑_1, ..., x̑_{c1−1}, x̑_{c1}, ..., x̑_{c2}, x̑_{c2+1}, ..., x̑_M)
    the vector
    x̆ = (x̑_1, ..., x̑_{c1−1}, x̑_{c2}, ..., x̑_{c1}, x̑_{c2+1}, ..., x̑_M)
    is created, i.e. the vertices x̑_{c1}, ..., x̑_{c2} are rearranged in reverse order.
    3.4. If F_A(x̆, w) < F_A(x̑, w), then x̑ = x̆, m = 0, otherwise m = m + 1.
    3.5. If m < N2, then go to step 3.2.
   4. The penalties are modified
    4.1. i* = arg max_i f_U(x̑, w, i)
    4.2. If 1 ≤ i* < M, then w_{x̑_{i*}, x̑_{i*+1}} = w_{x̑_{i*}, x̑_{i*+1}} + 1;
         if i* = M, then w_{x̑_M, x̑_1} = w_{x̑_M, x̑_1} + 1.
   5. If 𝐹(𝑥̑ ) < 𝐹(𝑥 ∗ ), then 𝑥 ∗ = 𝑥̑ .
   6. If 𝑛 < 𝑁1 , then 𝑛 = 𝑛 + 1, go to step 3
   The result is 𝑥 ∗ .
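   The sketch below illustrates guided local search with edge penalties; tour_length and random come
from the iterated local search sketch. The symmetric penalty update and the parameter defaults
(λ = 70, matching the value used in Section 10) are assumptions made for illustration.

```python
def guided_local_search(d, lam=70.0, n1=100, n2=100, seed=0):
    """GLS sketch: 2-opt local search on the augmented cost F(x) + lambda * F_W(x, w),
    then increase the penalty of the highest-utility edge of the current tour and repeat."""
    random.seed(seed)
    m = len(d)
    w = [[0] * m for _ in range(m)]                        # edge penalties w_ij

    def augmented(x):
        return tour_length(x, d) + lam * sum(w[x[i]][x[(i + 1) % m]] for i in range(m))

    def local_search(x):
        x, fails = x[:], 0
        while fails < n2:
            c1, c2 = sorted(random.sample(range(m), 2))
            if not (1 < c2 - c1 < m - 1):
                continue
            y = x[:c1] + x[c1:c2 + 1][::-1] + x[c2 + 1:]
            if augmented(y) < augmented(x):
                x, fails = y, 0
            else:
                fails += 1
        return x

    best = random.sample(range(m), m)
    cur = best[:]
    for _ in range(n1):
        cur = local_search(cur)
        if tour_length(cur, d) < tour_length(best, d):
            best = cur[:]
        # Utility of each edge of the current tour: d / (1 + w); penalize the largest one
        i_star = max(range(m), key=lambda i: d[cur[i]][cur[(i + 1) % m]] / (1 + w[cur[i]][cur[(i + 1) % m]]))
        a, b = cur[i_star], cur[(i_star + 1) % m]
        w[a][b] += 1
        w[b][a] += 1        # keep penalties symmetric (an assumption for the symmetric TSP)
    return best
```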



9. Partial optimization metaheuristic under special intensification conditions
   Partial optimization metaheuristic under special intensification conditions was proposed by Taillard
and Voss [30]. The algorithm consists of four phases - the choice of the number of the component (part)
of the solution, the creation of a set of components closest to the selected one, the optimization
procedure, and the analysis of the resulting solution. The larger the number of nearest components, the
larger the neighbourhood. In the optimization procedure (for example, tabu search), only the nearest
components of the solution are changed (in the case of the problem of finding the optimal path, these
are vertices). In this paper, the creation of a neighbourhood consists of forming a new solution by a
permutation called "2-opt". The method consists of the following steps:
   1. Initialization
    1.1. Setting the maximum number of nearest components Q; the maximum number of iterations N1;
    the maximum number of tabu search iterations N2; the neighbourhood size Z; the solution length M;
    the maximum length of the tabu list L^max.
    1.2. An ordered set of vertices V = {1, ..., M} and the edge weight matrix [d_ij], i, j ∈ 1,M, are
    specified.
    1.3. Specifying the cost function (goal function)
    F(x) = d_{x_M,x_1} + ∑_{i=1}^{M−1} d_{x_i,x_{i+1}} → min_x,
    where d_{x_i,x_{i+1}} is the weight of the edge (x_i, x_{i+1}), x_i, x_{i+1} ∈ V,
    x is the vertex vector.
    1.4. The best vertex vector x* is set by randomly ordering the set V.
    1.5. The current solution x̑ is set, with x̑ = x*.
    1.6. Initialization of the set of tabu vertices V^tabu = ∅.
   2. Iteration number 𝑛 = 1.
    3. The number j of a solution component is randomly selected such that x̑_j ∉ V^tabu.
    4. Creating a set of vertex numbers V^U
    4.1. All vertices x̑_i are sorted by proximity to the vertex x̑_j.
    4.2. The Q closest vertices are selected, and the set V^U is created from them.
    5. Optimization procedure: tabu search
    5.1. Initializing the tabu list T = ∅.
    5.2. Tabu search iteration number m = 1.
    5.3. Creating a neighbourhood U_x̑ of the solution x̑ based on 2-opt
    5.3.1. z = 1, U_x̑ = ∅
    5.3.2. Two vertices c1 and c2 are randomly selected from the vector x̑, and the selection of these
    vertices continues until the condition 1 < c2 − c1 < M − 1 ∧ c1, c2 ∈ V^U is satisfied.
    5.3.3. Based on the vector
    x̑ = (x̑_1, ..., x̑_{c1−1}, x̑_{c1}, ..., x̑_{c2}, x̑_{c2+1}, ..., x̑_M)
    the vector
    x_z = (x̑_1, ..., x̑_{c1−1}, x̑_{c2}, ..., x̑_{c1}, x̑_{c2+1}, ..., x̑_M)
    is created, i.e. the vertices x̑_{c1}, ..., x̑_{c2} are rearranged in reverse order.
    5.3.4. A pair of tabu edges E_z = ((c1 − 1, c1), (c2 − 1, c2)) is created for the vertex vector x_z.
    5.3.5. j = 1
    5.3.6. If j < M ∧ (x_{zj}, x_{z,j+1}) ∈ T or j = M ∧ (x_{zM}, x_{z1}) ∈ T, then go to step 5.3.2.
    5.3.7. If j < M, then j = j + 1, go to step 5.3.6.
    5.3.8. If x_z ∉ U_x̑, then U_x̑ = U_x̑ ∪ {x_z}, z = z + 1.
    5.3.9. If z ≤ Z, then go to step 5.3.2.
    5.4. From the neighbourhood U_x̑ the solution x_{z*} with the lowest cost is selected, i.e.
    z* = arg min_z F(x_z).
   5.5. If 𝐹(𝑥𝑧 ∗ ) ≥ 𝐹(𝑥̑ ), then go to step 5.9
   5.6. 𝑥̑ = 𝑥𝑧 ∗ .
    5.7. The pair of edges E_{z*} is placed at the start of the tabu list T.
    5.8. If |T| > L^max, then the pair of edges that was tabued earliest is removed from the end of the
    tabu list T.
    5.9. If m < N2, then m = m + 1, go to step 5.3.
    6. If x̑ = x*, then V^tabu = V^tabu ∪ {j} (a stronger version is possible: V^tabu = V^tabu ∪ V^U);
    otherwise x* = x̑, V^tabu = V^tabu \ V^U (a weaker version is possible: V^tabu = ∅).
    7. If n < N1 and |V^tabu| < M, then n = n + 1, go to step 3.
   The result is 𝑥 ∗ .
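   Finally, a sketch of the partial optimization scheme with an embedded tabu search is shown below;
it reuses tour_length and random from the iterated local search sketch. The way the part of Q nearest
vertices is turned into allowed 2-opt cut positions, and the flattened bounded tabu list of edges, are
simplifying assumptions rather than the authors' exact procedure.

```python
def popmusic(d, q=10, n1=100, n2=100, z_size=50, l_max=3, seed=0):
    """POPMUSIC-style sketch: pick a solution component, collect the positions of its Q nearest
    vertices, and improve the tour with a tabu search restricted to 2-opt moves inside that part."""
    random.seed(seed)
    m = len(d)
    best = random.sample(range(m), m)
    tabu_vertices = set()                                  # V^tabu: vertices whose parts gave no gain
    for _ in range(n1):
        free = [p for p in range(m) if best[p] not in tabu_vertices]
        if not free:
            break
        j = random.choice(free)                            # position of the chosen component
        part = sorted(range(m), key=lambda p: d[best[j]][best[p]])[:q]   # positions of Q nearest vertices
        cur, tabu_edges = best[:], []
        for _ in range(n2):
            cands = []
            for _ in range(z_size):                        # build a neighbourhood of 2-opt moves
                c1, c2 = sorted(random.sample(part, 2))
                if not (1 < c2 - c1 < m - 1):
                    continue
                y = cur[:c1] + cur[c1:c2 + 1][::-1] + cur[c2 + 1:]
                y_edges = {frozenset((y[i], y[(i + 1) % m])) for i in range(m)}
                if any(e in y_edges for e in tabu_edges):  # skip moves that recreate tabu edges
                    continue
                removed = [frozenset((cur[c1 - 1], cur[c1])), frozenset((cur[c2], cur[(c2 + 1) % m]))]
                cands.append((y, removed))
            if not cands:
                continue
            y, removed = min(cands, key=lambda c: tour_length(c[0], d))
            if tour_length(y, d) < tour_length(cur, d):
                cur = y
                tabu_edges = (removed + tabu_edges)[:l_max]   # newest bans first, bounded length
        if tour_length(cur, d) < tour_length(best, d):
            best, tabu_vertices = cur, set()               # weaker reset of V^tabu on improvement
        else:
            tabu_vertices.add(best[j])
    return best
```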


10. Quantitative study
    The quantitative study of the proposed optimization methods was carried out using the Matlab
package. For the traveling salesman problem, the search for a solution was carried out on the standard
database berlin52. For this database, the optimal path length is 7542.
    In this paper, for a self-organizing feature map, the annealing temperature decrease function is
determined by the formula T(n) = β^n T(0) and is shown in Fig. 1. In this case, the initial effective width
of the topological neighbourhood is 1, the initial temperature is 10^6, and the cooling coefficient is 0.94.




Figure 1: Annealing temperature decrease function
   The dependence (Fig. 1) of the annealing temperature on the iteration number shows that the
annealing temperature decreases with an increase in the iteration number.
   In this case, the effective width of the topological neighbourhood is calculated according to the
formula σ(n) = σ0 exp(−1 / T(n)) and is presented in Fig. 2.




Figure 2: Effective width of the topological neighbourhood

    The dependence (Fig. 2) of the effective width of the topological neighbourhood on the annealing
temperature shows that the effective width of the topological neighbourhood decreases with decreasing
temperature.
    The results of comparing the proposed SOFM learning method with the existing one based on the
criteria of time and path length are presented in Table 1, where M is the number of input layer neurons
(or the length of the vertex coordinate vector), N^h is the number of output layer neurons, N is the
maximum number of iterations.

Table 1
Comparison of the proposed SOFM training method with the existing one based on the criteria of time
and path length
        Criterion                      Proposed method                        Existing method
          Time                  M ⋅ N (in case of use of GPU)                M ⋅ N ⋅ (M ⋅ N^h)
       Path length                         7919                                    8510

    According to Table 1, the proposed training method gives a faster and more accurate result than the
existing training method.
    In this paper, for single-solution human-based metaheuristics, the maximum number of iterations is
100, and the neighbourhood size is 50. The penalty function scaling factor is 70, the greediness parameter
is 0.3, the maximum number of nearest components is 10, and the maximum length of the tabu list is 3.
    The results of the comparison of the proposed methods with the tabu search method based on the
length of the path are presented in Table 2.
Table 2
Comparison of proposed single-solution human-based metaheuristics with the tabu search method
     No.                        Metaheuristic                          Best path length found
      1                    Iterated local search                               8176
      2             Search with variable neighbourhood                         8154
      3             Greedy randomized adaptive search                          8097
      4                     Guided local search                                8034
      5         Partial optimization metaheuristic under
                                                                                7919
                    special intensification conditions
      6                         Tabu search                                    8296

    According to Table 2, the proposed single-solution human-based metaheuristics give a more
accurate result than the tabu search, and the partial optimization metaheuristic under special
intensification conditions is the most accurate.

11. Conclusions
   1. The urgent task of increasing the efficiency of methods for solving the traveling salesman
   problem was addressed by creating methods based on artificial neural networks and single-solution
   human-based metaheuristics.
   2. The proposed modification of the learning method for a self-organizing feature map uses:
   • setting the weights of the winning neuron and its neighbours based on the calculation of the centre
   of gravity, which accelerates the training of this neural network due to parallelization (in the case
   of GPU use);
   • a dynamic effective width of the topological neighbourhood for calculating the topological
   neighbourhood function, which makes it possible to explore the entire search space at the initial
   iterations and to make the search directed at the final iterations, ensuring high accuracy of the
   search through this neural network.
   3. A modification of single-solution human-based metaheuristics is proposed that allows potential
   integer solutions and uses a 2-opt permutation in the case of local search neighbourhood creation
   and a 4-opt permutation in the case of a global search, which allows adapting these metaheuristics
   to solve the traveling salesman problem.
   The proposed methods make it possible to expand the scope of application of the self-organizing
feature map and single-solution human-based metaheuristics, which is confirmed by their adaptation to
the traveling salesman problem and contributes to improving the efficiency of intelligent computational
systems.
   Prospects for further research are the study of the implementation of the proposed method on a broad
class of artificial intelligence problems.

12. References
[1] Rothlauf F. Design of modern heuristics: Principles and application / F. Rothlauf. – New York:
    Springer-Verlag, 2011. – 267 p.
[2] Nakib A. Metaheuristics for Medicine and Biology / Nakib A., Talbi El-G. – Berlin: Springer-
    Verlag, 2017. – 211 p.
[3] Engelbrecht A.P. Computational Intelligence: an introduction / A.P. Engelbrecht. – Chichester,
    West Sussex: Wiley & Sons, 2007. – 630 p.
[4] Yu X. Introduction to evolutionary algorithms / X. Yu, M. Gen. – London: Springer-Verlag, 2010.
    – 433 p.
[5] Subbotin S. Diagnostic Rule Mining Based on Artificial Immune System for a Case of Uneven
    Distribution of Classes in Sample / S. Subbotin, A. Oliinyk, V. Levashenko, E. Zaitseva //
    Communications. – Vol.3. – 2016. – P.3-11.
[6] Yang X.-S. Nature-inspired Algorithms and Applied Optimization / X.-S. Yang. – Cham:
    Springer, 2018. – 330 p.
[7] Blum C. Hybrid Metaheuristics. Powerful Tools for Optimization / C. Blum, G. R. Raidl. – Cham:
    Springer, 2016. – 157 p.
[8] Martí R. Handbook of Heuristics / R. Martí, P.M. Pardalos, M.G.C. Resende. – Cham: Springer,
    2018. – 1289 p.
[9] Yang X.-S. Optimization Techniques and Applications with Examples / X.-S. Yang. – Hoboken,
     New Jersey: Wiley & Sons, 2018. – 364 p.
[10] Gendreau M. Handbook of Metaheuristics / M. Gendreau, J.-Y. Potvin. – New York: Springer,
     2010. – 640 p.
[11] Doerner K.F. Metaheuristics. Progress in Complex Systems Optimization / K.F. Doerner, M.
     Gendreau, P. Greistorfer, W. Gutjahr, R.F. Hartl, M. Reimann. – New York: Springer, 2007. – 408
     p.
[12] Chopard B. An Introduction to Metaheuristics for Optimization / B. Chopard, M. Tomassini. –
     New York: Springer, 2018. – 230 p.
[13] Bozorg‐Haddad O. Meta-heuristic and Evolutionary Algorithms for Engineering Optimization / O.
     Bozorg‐Haddad, M. Solgi, H. Loaiciga. – Hoboken, New Jersey: Wiley & Sons, 2017. – 293 p.
[14] Babayigit B. A clonal selection algorithm for array pattern nulling by controlling the positions of
     selected elements / B. Babayigit, K. Guney, A. Akdagli // Progress In Electromagnetics Research.
     – Vol. 6. – 2008. – P. 257-66.
[15] Radosavljević J. Metaheuristic Optimization in Power Engineering / J. Radosavljević. – New
     York: The Institution of Engineering and Technology, 2018. – 536 p.
[16] Du K.-L. Search and Optimization by Metaheuristics. Techniques and Algorithms Inspired by
     Nature / K.-L. Du, M.N.S. Swamy. – Cham: Springer, 2016. – 434 p.
[17] Alba E., Nakib A., Siarry P. Metaheuristics for Dynamic Optimization. – Berlin: Springer-Verlag,
     2013. – 398 p.
[18] Kobayashi M. Quaternionic Hopfield neural networks with twin-multistate activation function.
     Neurocomputing 267, 304–310 (2017)
[19] Haykin S. Neural Networks and Learning Machines. Upper Saddle River, New Jersey: Pearson
     Education. Inc., 2009. 906 p.
[20] K.-L. Du, K. M. S. Swamy, Neural Networks and Statistical Learning, Springer-Verlag, London,
     2014.
[21] Russell S. Artificial Intelligence: a Modern Approach / S. Russell, P. Norvig. – Englewood Cliffs,
     NJ: Prentice Hall PTR, 2020. – 1136 p.
[22] M. Tim Jones. Artificial Intelligence: A Systems Approach. Hingham, MA: INFINITY SCIENCE
     PRESS LLC, 2008. – 498 p.
[23] The method of intelligent image processing based on a three-channel purely convolutional neural
     network / Fedorov, E., Lukashenko, V., Patrushev, V., Rudakov, K., Mitsenko, S. // CEUR
     Workshop Proceedings, 2018, 2255, pp. 336–351.
[24] Kohonen T. Self-Organization and Associative Memory, 3rd edn. – New York: Springer, 2012.
[25] T Kohonen. Essentials of the self-organizing map. Neural Networks Volume 37, January 2013,
     Pages 52-65 https://doi.org/10.1016/j.neunet.2012.09.018
[26] Parallel computational algorithms in thermal processes in metallurgy and mining / Shvachych,
     G.G., Ivaschenko, O.V., Busygin, V.V., Fedorov, Ye.Ye. // Naukovyi Visnyk Natsionalnoho
     Hirnychoho Universytetu, 2018, Vol. 4, pp. 129–137
[27] Automated control of temperature regimes of alloyed steel products based on multiprocessors
     computing systems / Shlomchak, G., Shvachych, G., Moroz, B., Fedorov, E., Kozenkov, D. //
     Metalurgija, 2019, 58(3-4), pp. 299–302
[28] Talbi El-G. Metaheuristics: from design to implementation / El-G. Talbi. – Hoboken, New Jersey:
     Wiley & Sons, 2009. – 618 p.
[29] Glover F. Handbook of Metaheuristics / F. Glover, G.A. Kochenberger. – Dordrecht: Kluwer
     Academic Publishers, 2003. – 570 p.
[30] Brownlee J. Clever Algorithms: Nature-Inspired Programming Recipes / J. Brownlee. –
     Melbourne: Brownlee, 2011. – 436 p.