<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Construction and Optimization of Stability Conditions of Learning Processes in Mathematical Models of Neurodynamics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andriy Shatyrko</string-name>
          <email>shatyrko.a@knu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Denys Khusainov</string-name>
          <email>d.y.khusainov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Bychkov</string-name>
          <email>oleksiibychkov@knu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Josef Diblik</string-name>
          <email>diblik@vut.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jaromir Baštinec</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Brno University of Technology</institution>
          ,
          <addr-line>Technická 3058/10, Brno, 61600</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko University of Kyiv</institution>
          ,
          <addr-line>64, Volodymyrska str., Kyiv, 01033</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>42</fpage>
      <lpage>51</lpage>
      <abstract>
        <p>This article is devoted to dynamic processes in the field of artificial intelligence, namely in the problems of neurodynamics: the field of knowledge in which neural networks are considered as nonlinear dynamical systems, with a focus on the problem of stability. The systems under consideration share four common characteristics: a large number of nodes (neurons), nonlinearity, dissipativity, and noise. The purpose of this work is to construct asymptotic stability conditions for a dynamic model of a neural network described by a system of nonlinear ordinary differential equations. The main method of investigation is Lyapunov's direct method. The authors show that the solution of the stated problem can be reduced to a convex optimization problem. Using a Python implementation of the Nelder-Mead algorithm, a number of numerical experiments were conducted to select the optimal parameters of the model.</p>
      </abstract>
      <kwd-group>
        <kwd>Neuronet model</kwd>
        <kwd>differential equation system</kwd>
        <kwd>software</kwd>
        <kwd>stability</kwd>
        <kwd>Lyapunov function</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>To date, it is difficult to overestimate the achievements of neural networks and their contribution to various fields of science, to the development of the world as a whole, and to the lives of each of us in particular. Deep learning is used in a huge number of areas, from image recognition, weather prediction, and text translation to the creation of new medicines and unique works of art [1-5].</p>
      <p>Nevertheless, the apparatus of neural networks is very often not investigated properly, and the programs themselves are used as "black boxes": data is given as input, and the desired prediction is obtained as output. It is clear that it is simply impossible to build a good working architecture without understanding how a neural network works.</p>
      <p>One of the urgent tasks of the modern theory of neural networks is learning. The dynamics of the processes involved, and more precisely of the process of "learning", can be described, by analogy with processes in electric circuits, using the apparatus of ordinary differential equations; in particular, by a system of stationary nonlinear differential equations with a selected linear part and a nonlinearity of a special kind. This direction of research is topical, which is confirmed by many recent scientific works [6-11]. One of the universal tools of the mathematical theory of stability that allows one to study the dynamics of the learning process, i.e. the convergence of the solutions of the system to the steady state, is the second Lyapunov method [12,13].</p>
    </sec>
    <sec id="sec-2">
      <title>2. Formulation of the problem</title>
      <p>Without loss of generality, the authors first consider a dynamic system on the plane. It is a system of two nonlinear differential equations of the form
$$\dot{x}_1 = -a_{11}x_1 + b_{11}f_{11}(x_1) + b_{12}f_{12}(x_2) + c_1,$$
$$\dot{x}_2 = -a_{22}x_2 + b_{21}f_{21}(x_1) + b_{22}f_{22}(x_2) + c_2. \qquad (1)$$</p>
      <p>We assume that $a_{11} &gt; 0$, $a_{22} &gt; 0$, that the functions $f_{ij}(x)$, $i, j = \overline{1,2}$, are monotonic and continuously differentiable, and that the system of differential equations (1) has a single singular point $x^0(x_1^0, x_2^0)$, which is a solution of the system of equations
$$-a_{11}x_1 + b_{11}f_{11}(x_1) + b_{12}f_{12}(x_2) + c_1 = 0,$$
$$-a_{22}x_2 + b_{21}f_{21}(x_1) + b_{22}f_{22}(x_2) + c_2 = 0.$$</p>
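      <p>For a concrete system, the singular point $x^0$ can be found numerically. The following is a minimal sketch using SciPy; the coefficient values in it are illustrative assumptions, not values from this paper:</p>
      <preformat>
import numpy as np
from scipy.optimize import fsolve

# Illustrative coefficients (assumed for this example only).
a11, a22 = 1.0, 1.0
b = np.array([[0.5, -0.3], [0.2, 0.4]])
c = np.array([0.1, -0.2])

def f(x):
    # Activation of the type discussed below: (1 - exp(-x)) / (1 + exp(-x)).
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

def equilibrium(x):
    x1, x2 = x
    return [-a11 * x1 + b[0, 0] * f(x1) + b[0, 1] * f(x2) + c[0],
            -a22 * x2 + b[1, 0] * f(x1) + b[1, 1] * f(x2) + c[1]]

x0 = fsolve(equilibrium, np.zeros(2))   # singular point x^0
print("x^0 =", x0, "residual:", equilibrium(x0))
      </preformat>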
      <p>Let us make a change of variables, a parallel translation that moves the fixed point $x^0(x_1^0, x_2^0)$ to the origin of the coordinates:
$$x_1 = y_1 + x_1^0, \qquad (2)$$
$$x_2 = y_2 + x_2^0. \qquad (3)$$</p>
      <p>Then, taking into account (2) and (3), system (1) is reduced to the form
$$\dot{y}_1 = -a_{11}y_1 + b_{11}g_{11}(y_1) + b_{12}g_{12}(y_2),$$
$$\dot{y}_2 = -a_{22}y_2 + b_{21}g_{21}(y_1) + b_{22}g_{22}(y_2), \qquad (4)$$
where
$$g_{11}(y_1) = f_{11}(y_1 + x_1^0) - f_{11}(x_1^0), \quad g_{12}(y_2) = f_{12}(y_2 + x_2^0) - f_{12}(x_2^0),$$
$$g_{21}(y_1) = f_{21}(y_1 + x_1^0) - f_{21}(x_1^0), \quad g_{22}(y_2) = f_{22}(y_2 + x_2^0) - f_{22}(x_2^0). \qquad (5)$$</p>
      <p>Since
$$g_{11}(0) = 0, \quad g_{12}(0) = 0, \quad g_{21}(0) = 0, \quad g_{22}(0) = 0,$$
the study of the equilibrium position $x^0(x_1^0, x_2^0)$ and of the convergence of solutions to this point is reduced to the study of the stability of the zero equilibrium position $y(0, 0)$ of system (4).</p>
      <p>Let the functions $g_{11}(y_1)$, $g_{12}(y_2)$, $g_{21}(y_1)$, $g_{22}(y_2)$ from (5) satisfy the so-called "sector conditions", which can be written in a compact form as follows:
$$(k_{11}y_1 - g_{11}(y_1))\,g_{11}(y_1) &gt; 0, \quad (k_{12}y_2 - g_{12}(y_2))\,g_{12}(y_2) &gt; 0,$$
$$(k_{21}y_1 - g_{21}(y_1))\,g_{21}(y_1) &gt; 0, \quad (k_{22}y_2 - g_{22}(y_2))\,g_{22}(y_2) &gt; 0. \qquad (6)$$</p>
      <p>In fact, this means that the graphs of these functions are located in the first and third quadrants of the coordinate plane, inside the sector between the coordinate axis and the line $k y$. Such a function is, for example,
$$f(x) = \frac{1 - e^{-x}}{1 + e^{-x}} = \tanh(x/2),$$
which is successfully used in the design of neural networks as an activation function [1].</p>
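      <p>A quick numerical spot-check (a small illustrative sketch, not part of the original program) confirms that this function satisfies the sector condition (6) with the sector bound $k = 1/2$, since $f'(0) = 1/2$ and $|f(x)| \le |x|/2$:</p>
      <preformat>
import numpy as np

def f(x):
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))   # = tanh(x/2)

k = 0.5                                   # sector bound: f'(0) = 1/2
x = np.linspace(-10, 10, 2001)
x = x[x != 0.0]                           # the condition is required for x != 0
sector = (k * x - f(x)) * f(x)            # (k*x - f(x)) * f(x) from (6)
print("sector condition holds:", np.all(sector > 0))   # True
      </preformat>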
      <p>If the conditions (6) are satisfied, the asymptotic stability conditions and convergence estimates can be obtained using a Lyapunov function of the Lurie-Postnikov type [14]. Since the linear part has a diagonal form, we construct the Lyapunov function in the form of a sum of squares and integral terms with constant coefficients:
$$V(y_1, y_2) = h_{11}y_1^2 + h_{22}y_2^2 + \int_0^{y_1}\bigl(\beta_{11}g_{11}(s) + \beta_{21}g_{21}(s)\bigr)\,ds + \int_0^{y_2}\bigl(\beta_{12}g_{12}(s) + \beta_{22}g_{22}(s)\bigr)\,ds, \qquad (7)$$
$$h_{11} &gt; 0, \; h_{22} &gt; 0, \; \beta_{11} &gt; 0, \; \beta_{21} &gt; 0, \; \beta_{12} &gt; 0, \; \beta_{22} &gt; 0.$$</p>
      <p>Let us introduce the vector $z = (y_1, y_2, g_{11}(y_1), g_{12}(y_2), g_{21}(y_1), g_{22}(y_2))^T$ and denote by $S$ and $S_1$ the symmetric matrices, built from the coefficients $a_{ii}$, $b_{ij}$, the sector bounds $k_{ij}$ and the parameters $h_{ii}$, $\beta_{ij}$, $\lambda_{ij}$, that arise below when the total derivative of (7) is collected into a quadratic form.</p>
      <p>Theorem. Let the matrix $S + S_1$ be positive definite. Then the zero equilibrium position of system (4) is asymptotically stable.</p>
      <p>Proof. From dependence (7) it follows that the function $V(y_1, y_2)$ satisfies the following two-sided estimate:
$$h_{11}y_1^2 + h_{22}y_2^2 \le V(y_1, y_2) \le \Bigl(h_{11} + \tfrac{1}{2}\beta_{11}k_{11} + \tfrac{1}{2}\beta_{21}k_{21}\Bigr)y_1^2 + \Bigl(h_{22} + \tfrac{1}{2}\beta_{12}k_{12} + \tfrac{1}{2}\beta_{22}k_{22}\Bigr)y_2^2,$$
i.e. it is a positive definite function.</p>
      <p>Let us calculate its total derivative along the solutions of system (4). We get
$$\dot{V}(y_1, y_2) = 2h_{11}y_1\dot{y}_1 + 2h_{22}y_2\dot{y}_2 + \bigl(\beta_{11}g_{11}(y_1) + \beta_{21}g_{21}(y_1)\bigr)\dot{y}_1 + \bigl(\beta_{12}g_{12}(y_2) + \beta_{22}g_{22}(y_2)\bigr)\dot{y}_2.$$
Collecting the terms into a quadratic form with respect to $z$, we get
$$\dot{V}(y_1, y_2) = -z^T S z.$$</p>
      <p>The matrix $S$ in the last expression is not positive definite. Therefore, in order to make the quadratic form negative definite, we use the "sector conditions" (6): we add and subtract the nonnegative terms
$$\lambda_{11}(k_{11}y_1 - g_{11}(y_1))g_{11}(y_1) + \lambda_{12}(k_{12}y_2 - g_{12}(y_2))g_{12}(y_2) + \lambda_{21}(k_{21}y_1 - g_{21}(y_1))g_{21}(y_1) + \lambda_{22}(k_{22}y_2 - g_{22}(y_2))g_{22}(y_2)$$
and write the total derivative of the Lyapunov function in the form
$$\dot{V}(y_1, y_2) = -z^T (S + S_1) z - \lambda_{11}(k_{11}y_1 - g_{11}(y_1))g_{11}(y_1) - \lambda_{12}(k_{12}y_2 - g_{12}(y_2))g_{12}(y_2) - \lambda_{21}(k_{21}y_1 - g_{21}(y_1))g_{21}(y_1) - \lambda_{22}(k_{22}y_2 - g_{22}(y_2))g_{22}(y_2), \qquad (8)$$
where the matrix $S_1$ absorbs the added terms.</p>
      <p>Since the inequalities (6) hold, we can discard the last four terms and write down the estimate of the total derivative of the Lyapunov function (7):
$$\dot{V}(y_1, y_2) \le -z^T (S + S_1) z.$$</p>
      <p>The condition for the negative definiteness of the total derivative of the Lyapunov function (7) along the trajectories of system (4) is thus the positive definiteness of the matrix $S + S_1$. In this way, it was possible to construct a positive definite Lyapunov function whose total derivative along the system is negative definite, and therefore the zero equilibrium position of system (4) is asymptotically stable, which is what had to be proven. The matrices in (8) depend on the arbitrary parameters $h_{11} &gt; 0$, $h_{22} &gt; 0$, $\beta_{11} &gt; 0$, $\beta_{21} &gt; 0$, $\beta_{12} &gt; 0$, $\beta_{22} &gt; 0$, $\lambda_{11} &gt; 0$, $\lambda_{21} &gt; 0$, $\lambda_{12} &gt; 0$, $\lambda_{22} &gt; 0$. Therefore, the problem of stability research is reduced to the problem of finding values of these parameters for which the matrix $S + S_1$ is positive definite; in this case, the equilibrium position is asymptotically stable. Since the minimum eigenvalue of a symmetric matrix is a concave function of its entries, maximizing it is, in the general case, a convex optimization problem with constraints.</p>
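      <p>The objective of that optimization problem can be written out explicitly. For the one-dimensional analogue $\dot{y} = -ay + bg(y)$ with $V(y) = hy^2 + \beta\int_0^y g(s)\,ds$ and sector bound $k$, collecting $\dot{V}$ into a quadratic form in $z = (y, g(y))^T$ gives a $2 \times 2$ matrix $S + S_1$. The sketch below is our reconstruction of this construction; with the test values of Section 3 it reproduces the matrix with rows $(200,\ 625)$ and $(625,\ 1760)$ and $\lambda_{\min} \approx -19.51$ reported there:</p>
      <preformat>
import numpy as np

A, B, K = 10.0, -50.0, 10.0        # coefficients a, b and sector bound k

def s_matrix(h, beta, lam):
    """S + S1 for the one-dimensional case, where
    Vdot = -z'(S + S1)z - lam*(K*y - g)*g with z = (y, g(y))."""
    m12 = (A * beta - 2.0 * h * B - lam * K) / 2.0
    return np.array([[2.0 * A * h, m12],
                     [m12, lam - B * beta]])

def min_eig(params):
    h, beta, lam = params
    return np.linalg.eigvalsh(s_matrix(h, beta, lam)).min()

print(s_matrix(10.0, 35.0, 10.0))             # [[ 200.  625.] [ 625. 1760.]]
print(round(min_eig((10.0, 35.0, 10.0)), 2))  # -19.51
      </preformat>
      <p>Maximizing min_eig over $h, \beta, \lambda &gt; 0$ is then exactly the constrained problem described above.</p>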
    </sec>
    <sec id="sec-3">
      <title>2.1. Nelder-Mead method for solving the problem</title>
      <p>The use of optimization methods that require the existence of a gradient is not always practical, especially in those cases when the gradient of the function either does not exist or is impractical to calculate. The Nelder-Mead method, or the deformed polyhedron method [15,16], is one of the non-smooth optimization methods used to find the optimum of a function. This method is easy to implement and useful in practice, but, on the other hand, there is no convergence theory for it: the algorithm can diverge even on smooth functions. One of the main features of the Nelder-Mead algorithm is its efficiency when the function is expensive to evaluate: at each step it is necessary to calculate no more than $n + 1$ values of the investigated function, where $n$ is the dimension of the space. The basis of the method is the idea of comparing the values of the function at the $n + 1$ vertices of a constructed simplex and iteratively shifting the simplex in the direction of the optimal value. The graphic representation of the method, demonstrated later in this work, explains another of its interesting names, the "amoeba method". Let there be a real function of many variables $F(x)$, $x \in \mathbb{R}^n$, and let the task of optimizing $F(x) \to \min$ or $F(x) \to \max$ be set, where $F(x)$ does not necessarily have to be smooth, and noise is allowed. The Nelder-Mead method, which will be used for this task, has mandatory parameters and certain stages of work. The parameters $\alpha &gt; 0$, $\beta &gt; 0$, $\gamma &gt; 0$ are closely related to the idea of deformation of the simplex with $n + 1$ vertices and specify, respectively, the reflection, compression and stretching of the polyhedron. In practice, standard values are most often chosen for these parameters:</p>
      <p>$$\alpha = 1, \quad \beta = 0.5, \quad \gamma = 2.$$</p>
      <p>Stages of the algorithm [16]:
1) Preparatory stage. Randomly select $n + 1$ points of the space, $x_i = (x_i^{(1)}, x_i^{(2)}, \ldots, x_i^{(n)})$, $i = 1, \ldots, n + 1$, satisfying all the constraints of the problem. These points form the initial simplex. The values of the function $f_i = F(x_i)$, $i = 1, \ldots, n + 1$, are calculated at these $n + 1$ points.
2) Sorting and centering. The vertices of the simplex determined at the previous stage are sorted in order of increasing value of the function $F(x)$ at them. Among these values, three are chosen: $f_h$, which corresponds to the largest value $f_h = F(x_h)$; $f_g = F(x_g)$, the next after $f_h$; and the smallest, $f_l = F(x_l)$. We find the center of the simplex $x_c$ without the point $x_h$:
$$x_c = \frac{1}{n}\sum_{i = 1,\, i \ne h}^{n + 1} x_i. \qquad (9)$$
3) Stages of deformation: reflection. The point $x_h$ is reflected to $x_r$ relative to the point $x_c$ found by formula (9), with the coefficient $\alpha$:
$$x_r = (1 + \alpha)x_c - \alpha x_h.$$
We find the value of the function $f_r = F(x_r)$ at the point $x_r$.
4) Checking the direction.
4.1. If $f_r &lt; f_l$, then we try to improve the result obtained at the previous step: we get a new point $x_e$ and the value of the function $f_e = F(x_e)$ at it, using the stretching coefficient $\gamma$:
$$x_e = (1 - \gamma)x_c + \gamma x_r.$$
If $f_e &lt; f_r$, that is, the solution found at this step is even better than the previous one, then $x_h$ takes the value of $x_e$; otherwise it takes the value of $x_r$. After that, we move on to the next iteration.
4.2. If $f_l &lt; f_r &lt; f_g$, then $x_h$ takes the value $x_r$. After that, we move on to the next iteration.
4.3. If $f_g &lt; f_r &lt; f_h$, then $x_h$ takes the value $x_r$ (the old worst point and the reflected point swap roles). After that, go to step 5.
4.4. If $f_h &lt; f_r$, then the reflected point $x_r$ is not satisfactory either, which is why we go to step 5.
5) Stages of deformation: compression. When the attempt to stretch the simplex did not give results, we try to compress it using the given parameter $\beta$. We calculate the point $x_s$ and the value of the function $f_s = F(x_s)$ at it:
$$x_s = \beta x_h + (1 - \beta)x_c.$$
6) Checking the direction.
6.1. If $f_s &lt; f_h$, then $x_h$ takes the value $x_s$. We move on to the next iteration.
6.2. If $f_s &gt; f_h$, go to step 7.
7) Stages of deformation: global compression. If the previous steps did not bring improvement in finding the optimum of the function, it means that the initial points turned out to be the best, and we compress the whole simplex toward the point $x_l$:
$$x_i \leftarrow x_l + \frac{x_i - x_l}{2}, \quad i \ne l.$$
8) Verification of convergence and stopping of the algorithm. Conditions for stopping the program are selected: for example, performing a set number of iterations, or achieving a certain accuracy. After the check, we either return to step 2 or stop the algorithm and obtain the desired result. A compact implementation of these stages is sketched after the remark below.</p>
      <p>Remark 1. The algorithm above is described for the problem of minimizing the function $F(x)$. To find the maximum of this function, all values $f_i = F(x_i)$, $i = 1, \ldots, n + 1$, should be compared with the opposite sign.</p>
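      <p>The stages 1)-8) admit a compact implementation. The following is a minimal sketch for the minimization problem with the standard parameters $\alpha = 1$, $\beta = 0.5$, $\gamma = 2$ and a fixed iteration budget in place of the accuracy check of step 8; it is an illustration of the scheme, not the authors' program (the variant that updates the center of mass after every deformation, used in Section 3, is omitted here):</p>
      <preformat>
import numpy as np

def nelder_mead(F, simplex, alpha=1.0, beta=0.5, gamma=2.0, iters=200):
    """Minimize F over an initial simplex of n+1 points (steps 1-8)."""
    pts = [np.asarray(p, dtype=float) for p in simplex]
    for _ in range(iters):
        pts.sort(key=F)                          # step 2: sort by F, ascending
        xl, xg, xh = pts[0], pts[-2], pts[-1]    # best, second-worst, worst
        fl, fg, fh = F(xl), F(xg), F(xh)
        xc = np.mean(pts[:-1], axis=0)           # center without x_h, formula (9)
        xr = (1.0 + alpha) * xc - alpha * xh     # step 3: reflection
        fr = F(xr)
        if fl > fr:                              # step 4.1: f_r beats f_l
            xe = (1.0 - gamma) * xc + gamma * xr # stretching
            pts[-1] = xe if fr > F(xe) else xr
            continue
        if fg > fr:                              # step 4.2: f_r between f_l and f_g
            pts[-1] = xr
            continue
        if fh > fr:                              # step 4.3: swap x_h and x_r
            pts[-1], xh, fh = xr, xr, fr
        xs = beta * xh + (1.0 - beta) * xc       # step 5: compression
        if fh > F(xs):                           # step 6.1: accept x_s
            pts[-1] = xs
        else:                                    # step 7: global compression
            pts = [xl + (p - xl) / 2.0 for p in pts]
    return min(pts, key=F)
      </preformat>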
      <p>For a better explanation, the Nelder-Mead algorithm can be visualized in the two-dimensional case. Then the simplex with $n + 1 = 3$ vertices is a triangle (Fig. 1), and all actions of the algorithm are easy to represent as ordinary transformations of a simple geometric figure: reflection, expansion and contraction of the triangle relative to its center of gravity computed without the worst point (marked in blue in Fig. 2). The first step of the algorithm is to reflect the point with the largest value of the function (marked in red in Fig. 3) relative to this center of gravity. At the new point, the current value of the function is calculated, which determines the next actions.</p>
      <p>If the value at the reflected point is even smaller, that is, better in the context of the minimization problem, then we perform stretching (Fig. 4). Conversely, if the value at the reflected point turned out to be not good enough, we compress the simplex in the direction of the center (Fig. 5). If the previous steps do not contribute to the minimization of the function, then global compression toward the point with the smallest value is performed (Fig. 6). In practice, the cases when it is necessary to apply this last type of polyhedron deformation are very rare.</p>
      <p>Remark 2. The deformation parameters $\alpha$, $\beta$, $\gamma$, chosen before starting the algorithm, determine how strongly the simplex is compressed or stretched at each step. For example, the reflection coefficient $\alpha = 1$ specifies a symmetric reflection relative to the center. The compression parameter $\beta = 0.5$ moves the examined point to half the distance to the center of gravity, and the stretching coefficient $\gamma = 2$ moves it to twice that distance, respectively.</p>
      <p>This method optimizes the objective function quite quickly and efficiently. On the other hand, due to the lack of a convergence theory, in practice the method can give incorrect answers even on smooth (continuously differentiable) functions. A situation is also possible in which the working simplex is far from the optimal point and the algorithm performs a large number of iterations while changing the value of the function only slightly. A heuristic remedy for this problem is to run the algorithm several times and limit the number of iterations. So, having analyzed the advantages and disadvantages of this method of non-smooth optimization, the Nelder-Mead algorithm, we will try to apply it to find the parameters of the Lyapunov function (7).</p>
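      <p>The restart heuristic just mentioned is easy to wrap around any implementation of the method. A small sketch (the names nelder_mead and min_eig refer to the illustrative sketches given earlier, not to the authors' program):</p>
      <preformat>
import numpy as np

def restarted_search(F, n, runs=5, iters=100, scale=10.0, seed=0):
    """Run Nelder-Mead from several random feasible simplexes and keep
    the best point found (heuristic against stagnation and divergence)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(runs):
        base = rng.uniform(0.1, scale, size=n)    # respects the positivity constraints
        simplex = [base] + [base + 0.5 * e for e in np.eye(n)]
        candidate = nelder_mead(F, simplex, iters=iters)
        if best is None or F(best) > F(candidate):
            best = candidate
    return best
      </preformat>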
      <p>The Python programming language, version 3.6.9, was chosen for the numerical experiment. The Google Colab interactive cloud environment [17] served as the development and execution environment. The choice of this language is due to its powerful mathematical functionality, which allows solving a sufficiently wide class of various problems, and to its ability to display the results of the work graphically, which is an important part of any development. The easy-to-learn Google Colab environment is also convenient for mathematical calculations, as it provides free access to powerful GPU and TPU accelerators. Thanks to this, all calculations are carried out quickly and do not load the personal computer. Among the libraries used in the program are NumPy and SciPy for convenient operations with multidimensional arrays, PrettyTable for displaying results in a table, Plotly and Matplotlib for creating two-dimensional and three-dimensional visualizations, as well as additional tools for generating random numbers and measuring execution time. The program itself consists of several parts: the first is the initialization of parameters and the definition of basic functions for the matrices; the second is the implementation of the Nelder-Mead algorithm; the third is a graphical demonstration of the algorithm for the two-dimensional case of Nelder-Mead (simplex: triangle) and numerical comparisons for different initial simplexes and parameters. Testing was carried out for the one-dimensional and two-dimensional cases. Accordingly, the coefficients and the matrices $S + S_1$ from formulas (8) are specified separately for each of these cases.</p>
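      <p>As a usage illustration, the pieces sketched above can be combined for the one-dimensional case of the next section: $\lambda_{\min}(S + S_1)$ is maximized by minimizing its negative. This is again a sketch under the same assumptions; positivity of the parameters is kept here only by the choice of the starting simplex, while the actual program imposes the constraints explicitly:</p>
      <preformat>
import numpy as np

F = lambda p: -min_eig(p)                     # maximize lambda_min

start = np.array([10.0, 35.0, 10.0])          # (h, beta, lambda)
delta = 0.5
simplex = [start + delta * e for e in np.eye(3)] + [start]
best = nelder_mead(F, simplex, iters=200)
print("optimized (h, beta, lambda):", best)
print("lambda_min:", min_eig(best))           # positive value: S + S1 is positive definite
      </preformat>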
    </sec>
    <sec id="sec-4">
      <title>3. Results and comparisons</title>
      <p>Let us test the operation of the algorithm using examples of finding all three parameters $h$, $\beta$, $\lambda$. Choose the initial values $a = 10$, $b = -50$, $h = 10$, $k = 10$, $\lambda = 10$, $\beta = 35$. Then the matrix
$$S + S_1 = \begin{pmatrix} 200 &amp; 625 \\ 625 &amp; 1760 \end{pmatrix}$$
has the minimum eigenvalue $\lambda_{\min} \approx -19.51$. Let us run the program for different initial simplexes, which are formed by using five different values of $\Delta$: 0.01, 0.5, 1, 5, 10. This means that, to form the polyhedron, the value $\Delta$ is added in turn to one of the coordinates for each of the tests. We get the graph in Fig. 7: the minimum eigenvalue is plotted on the ordinate axis, and the number of steps taken by the algorithm is shown on the abscissa axis. The legend shows in color the correspondence between the curves and the choice of the initial simplex, and the final minimum eigenvalue obtained for each of the cases is labeled. The graph shows that the initial simplex formed with $\Delta = 0.5$ gives the best result, and the worst is $\Delta = 0.01$, for which the algorithm could not even find parameters making the matrix positive definite. It can also be seen that the minimum eigenvalue computed for each of the selected examples at each step does not grow indefinitely. For example, for $\Delta = 0.5$ it took about 60 steps for the value to stop changing, while for $\Delta = 1$ it took just over 120 steps.</p>
      <p>The initial simplex for the best of the obtained cases, $\Delta = 0.5$, formed from the initial parameters $h = 10$, $\beta = 35$, $\lambda = 10$, has $n + 1 = 4$ vertices:
$$(10.5;\ 35;\ 10), \quad (10;\ 35.5;\ 10), \quad (10;\ 35;\ 10.5), \quad (10;\ 35;\ 10).$$</p>
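      <p>Forming such an initial simplex (the base point plus the base point shifted by $\Delta$ along each coordinate in turn) takes a few lines; a sketch:</p>
      <preformat>
import numpy as np

def initial_simplex(base, delta):
    """n+1 vertices: base shifted by delta along each axis, plus base itself."""
    base = np.asarray(base, dtype=float)
    return [base + delta * e for e in np.eye(len(base))] + [base]

for v in initial_simplex([10.0, 35.0, 10.0], 0.5):
    print(tuple(v))
# (10.5, 35.0, 10.0), (10.0, 35.5, 10.0), (10.0, 35.0, 10.5), (10.0, 35.0, 10.0)
      </preformat>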
      <p>Accordingly, the optimized parameters are $h \approx 4$, $\beta \approx 0.79$, $\lambda \approx 40.82$, and the resulting matrix $S + S_1$ is positive definite, with $\lambda_{\min} \approx 80.06$.</p>
      <p>Let us try to slightly improve the results by updating the center of mass after each deformation of the polyhedron. We run the program again on the same data and compare the obtained results (Fig. 8) over 20 steps of the algorithm.</p>
      <p>We see that now the minimum eigenvalue of the matrix has become much larger and reaches $\lambda_{\min} \approx 1892.98$ in just 20 steps. In this case, the large values $\Delta = 10$ and $\Delta = 5$ give the best result for the formation of the initial simplex; on the other hand, small $\Delta$ did not increase the minimum eigenvalue much, although they did fulfill the positive definiteness condition $\lambda_{\min} &gt; 0$. It is worth noting that for the experiment in Fig. 8 the value of $\lambda_{\min}$ grows very quickly as the number of steps increases: for 100 steps and $\Delta = 5$, the minimum eigenvalue reaches $\lambda_{\min} \approx 7.07 \times 10^{17}$. Now consider the two-dimensional system of equations, for which the matrix $S + S_1$ has dimension $6 \times 6$. Let the parameters be set as follows: $a_{11} = 6$, $a_{22} = 6$, $b_{11} = -3$, $b_{12} = -2$, $b_{21} = -2$, $b_{22} = -3$, $k_{11} = 2$, $k_{12} = 3$, $k_{21} = 4$, $k_{22} = 7$, $h_{11} = 10$, $h_{22} = 92$, $\beta_{11} = 57$, $\beta_{12} = 2$, $\beta_{21} = 1$, $\beta_{22} = 71$, $\lambda_{11} = 99$, $\lambda_{12} = 74$, $\lambda_{21} = 59$, $\lambda_{22} = 71$.</p>
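      <p>Rather than writing out the $6 \times 6$ matrix by hand, it can be assembled mechanically from the construction of Section 2. The sketch below does this symbolically with SymPy; the ordering of the vector $z$ and the resulting entries are our reconstruction of that construction, and the paper's own matrix layout may differ:</p>
      <preformat>
import numpy as np
import sympy as sp

y1, y2, g11, g12, g21, g22 = sp.symbols('y1 y2 g11 g12 g21 g22')
z = [y1, y2, g11, g12, g21, g22]

# Parameters of the two-dimensional example above.
a11, a22 = 6, 6
b11, b12, b21, b22 = -3, -2, -2, -3
k11, k12, k21, k22 = 2, 3, 4, 7
h11, h22, be11, be12, be21, be22 = 10, 92, 57, 2, 1, 71
la11, la12, la21, la22 = 99, 74, 59, 71

dy1 = -a11 * y1 + b11 * g11 + b12 * g12       # right-hand sides of system (4)
dy2 = -a22 * y2 + b21 * g21 + b22 * g22
vdot = ((2 * h11 * y1 + be11 * g11 + be21 * g21) * dy1
        + (2 * h22 * y2 + be12 * g12 + be22 * g22) * dy2)
sector = (la11 * (k11 * y1 - g11) * g11 + la12 * (k12 * y2 - g12) * g12
          + la21 * (k21 * y1 - g21) * g21 + la22 * (k22 * y2 - g22) * g22)

# vdot = -z'(S + S1)z - sector, so S + S1 = -(1/2) * Hessian(vdot + sector).
M = -sp.Rational(1, 2) * sp.hessian(vdot + sector, z)
Mnum = np.array(M, dtype=float)
print(np.linalg.eigvalsh(Mnum).min())         # check of positive definiteness
      </preformat>
      <p>The printed starting values are then the initial point from which the algorithm tries to make this minimum eigenvalue positive.</p>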
      <p>Then the symmetric $6 \times 6$ matrix $S + S_1$ is assembled from these parameters.</p>
      <p>Let us run the Nelder-Mead algorithm for this case and again analyze the results, which depend on the choice of the initial simplex (Fig. 9).</p>
      <p>This graph shows that, unlike the previous results for the one-dimensional case, not every parameter set was able to maximize the minimum eigenvalue of the matrix. For example, for the small $\Delta = 0.01$, the minimum eigenvalue remained negative. We see that the value $\Delta = 50$ performed best. For it, the result of 200 steps of the algorithm is an optimized positive definite matrix $S + S_1$ whose first two columns are
$$(6.2 \times 10^4,\; 0,\; -5.8 \times 10^3,\; -1.3 \times 10^3,\; -1.8 \times 10^2,\; 0)^T, \quad (0,\; 3.2 \times 10^4,\; 0,\; 2.3 \times 10^2,\; -5.3 \times 10^3,\; -1.2 \times 10^4)^T$$
(the remaining columns are omitted here), and for it $\lambda_{\min} \approx 34.67$ with the parameter values found by the algorithm: $h_{11} \approx 5164.74$, $h_{22} \approx 2693.52$, $\beta_{11} \approx 48.96$, $\beta_{12} \approx 77.44$, $\beta_{21} \approx 59.25$, $\beta_{22} \approx 3248.9$, $\lambda_{11} \approx 5983.56$, $\lambda_{12} \approx 903.42$, $\lambda_{21} \approx 5359.02$, $\lambda_{22} \approx 8521.24$.</p>
      <p>Let us run the program with the same initial parameters, but without updating the center of mass after each operation. We get the following results (Fig. 10):</p>
      <p>Compared with the previous result (Fig. 9) for the two-dimensional case, the experiment gave much worse results; in addition, all the obtained values are approximately the same. We see that $\Delta = 0.5$ and $\Delta = 0.01$ performed best, although the first simplex converged in about 30 steps, while the second only in about 100. We conclude that in this case small values of $\Delta$ are better suited for determining the initial simplex.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusion</title>
      <p>The following results were achieved in this work:
- Conditions for the stability of learning processes in neurodynamics models were obtained, on the example of the Hopfield network, using the second Lyapunov method.</p>
      <p>- A computer program was written that solves the problem of parameter selection in the Lyapunov function using the Nelder-Mead method with constraints for one-dimensional and two-dimensional dynamical systems.</p>
      <p>The analysis of the program's operation leads to the following conclusions. The Nelder-Mead algorithm, or the deformed polyhedron method, used for optimizing a function without a gradient, coped well with the task of maximizing the minimum eigenvalue of a symmetric matrix for one-dimensional and two-dimensional dynamical systems. In particular, parameters yielding a positive minimum eigenvalue were selected in just a few steps. It was also analyzed how the choice of the initial simplex for the Nelder-Mead method affects the obtained results in both cases. As a result, values of $\Delta$ were found for which the minimum eigenvalue increases steadily as the number of steps grows. This means that, depending on the mathematical problem, this program can be used in further studies to select the parameter values needed to obtain a positive definite matrix of any dimension. All results are constructive from the point of view of performing computational experiments. In the future, they can be extended to the case of multidimensional systems. It is also known that Hopfield networks are more adequately described in terms of functional-differential equations with time delay, and all the results presented in this work can be used to continue the authors' original research, started in their works [18,19].</p>
    </sec>
    <sec id="sec-6">
      <title>5. Acknowledgements</title>
      <p>This work was conducted under the Agreement on scientific cooperation between the Faculty of Computer Science and Cybernetics of Taras Shevchenko National University of Kyiv, Ukraine, and the Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic. This research was supported by the project of specific university research at Brno University of Technology FEKT-S-20-6225 and by research project #BF015-04 at Taras Shevchenko National University of Kyiv.</p>
    </sec>
    <sec id="sec-7">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Simon</given-names>
            <surname>Haykin</surname>
          </string-name>
          ,
          <source>Neural Networks and Learning Machines</source>
          , 3rd ed.,
          <year>2009</year>
          ,
          <fpage>937</fpage>
          p.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H</given-names>
            <surname>Akca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R</given-names>
            <surname>Alassar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V</given-names>
            <surname>Covachev</surname>
          </string-name>
          ,
          <string-name>
            <surname>Z Covacheva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>EA</given-names>
            <surname>Al-Zahrani</surname>
          </string-name>
          (
          <year>2004</year>
          )
          <article-title>Continuous- time additive Hopfield-type neural networks with impulses</article-title>
          .
          <source>J Math Anal Appl</source>
          <volume>290</volume>
          (
          <issue>2</issue>
          ):
          <fpage>436</fpage>
          -
          <lpage>451</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <surname>L Lu</surname>
          </string-name>
          (
          <year>2004</year>
          )
          <article-title>Global exponential stability and existence of periodic solutions of Hopfieldtype neural networks with impulses</article-title>
          .
          <source>Physics Letters A</source>
          <volume>333</volume>
          :
          <fpage>62</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Castro</surname>
          </string-name>
          , Fidelis Zanetti de and Marcos Eduardo Valle. “
          <article-title>Continuous-Valued Quaternionic Hopfield Neural Network for Image Retrieval: A Color Space Study</article-title>
          .”
          <source>2017 Brazilian Conference on Intelligent Systems (BRACIS)</source>
          (
          <year>2017</year>
          ):
          <fpage>186</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Perry</surname>
          </string-name>
          , Stuart W. and
          <string-name>
            <surname>Ron</surname>
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Wyber</surname>
          </string-name>
          .
          <article-title>“A Hopfield neural network approach for the reconstruction of wide-bandwidth sonar data</article-title>
          .
          <source>” Neural Networks for Signal Processing X. Proceedings of the 2000 IEEE Signal Processing Society Workshop (Cat. No.00TH8501) 2</source>
          (
          <year>2000</year>
          ):
          <fpage>876</fpage>
          -
          <lpage>885</lpage>
          vol.
          <volume>2</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Kuroe</surname>
            , Yasuaki and
            <given-names>Hitoshi</given-names>
          </string-name>
          <string-name>
            <surname>Iima</surname>
          </string-name>
          .
          <article-title>“A model of Hopfield-type octonion neural networks and existing conditions of energy functions</article-title>
          .
          <source>” 2016 International Joint Conference on Neural Networks (IJCNN)</source>
          (
          <year>2016</year>
          ):
          <fpage>4426</fpage>
          -
          <lpage>4430</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <surname>Yongkun</surname>
          </string-name>
          et al. “
          <article-title>Almost automorphic synchronization of quaternion-valued high-order Hopfield neural networks with time-varying and distributed delays</article-title>
          .
          <source>” IMA J. Math. Control. Inf</source>
          .
          <volume>36</volume>
          (
          <year>2019</year>
          ):
          <fpage>983</fpage>
          -
          <lpage>1013</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Valle</surname>
            ,
            <given-names>Marcos</given-names>
          </string-name>
          <string-name>
            <surname>Eduardo</surname>
          </string-name>
          and Fidelis Zanetti de Castro. “
          <source>On the Dynamics of Hopfield Neural Networks on Unit Quaternions.” IEEE Transactions on Neural Networks and Learning Systems</source>
          <volume>29</volume>
          (
          <year>2018</year>
          ):
          <fpage>2464</fpage>
          -
          <lpage>2471</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Kobayashi</surname>
          </string-name>
          , Masaki. “
          <source>Stability Conditions of Bicomplex-Valued Hopfield Neural Networks.” Neural Computation</source>
          <volume>33</volume>
          (
          <year>2021</year>
          ):
          <fpage>552</fpage>
          -
          <lpage>562</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Complex Dynamics in a Simple Hopfield-Type Neural Network</article-title>
          . In: Wang,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            ,
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <surname>Z</surname>
          </string-name>
          . (eds) Advances
          <source>in Neural Networks - ISNN 2005. ISNN 2005. Lecture Notes in Computer Science</source>
          , vol
          <volume>3496</volume>
          . Springer, Berlin, pp
          <fpage>357</fpage>
          -
          <lpage>362</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Valle and F. Z. de Castro</surname>
          </string-name>
          ,
          <article-title>"On the Dynamics of Hopfield Neural Networks on Unit Quaternions,"</article-title>
          <source>in IEEE Transactions on Neural Networks and Learning Systems</source>
          , vol.
          <volume>29</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>2464</fpage>
          -
          <lpage>2471</lpage>
          ,
          <year>June 2018</year>
          , doi: 10.1109/TNNLS.2017.2691462.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Lakshmikantham</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leela</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <source>Martinyuk A.A. Stability Analysis of Nonlinear Systems</source>
          , Birkhauser,
          <year>2015</year>
          ,
          <fpage>329</fpage>
          p.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Gao</surname>
            , Jin and
            <given-names>Lihua</given-names>
          </string-name>
          <string-name>
            <surname>Dai</surname>
          </string-name>
          . “
          <article-title>Anti-periodic synchronization of quaternion-valued high-order Hopfield neural networks with delays</article-title>
          .
          <source>” AIMS Mathematics</source>
          (
          <year>2022</year>
          ): DOI:10.3934/math.2022775
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>A.I.</surname>
          </string-name>
          <article-title>Lur'e, Some Non-Linear Problems in the Theory of Automatic Control, Her Majesty's Stationery Office</article-title>
          , London,
          <year>1957</year>
          (
          <article-title>a translation from the Russian original: Nekotorye nelineinye zadachi teorii avtomaticheskogo regulirovaniya, Gos</article-title>
          . Isdat. Tekh., Moscow,
          <year>1957</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Nelder</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Mead</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1965</year>
          ),
          <article-title>“A simplex method for function minimization”</article-title>
          ,
          <source>Comput. J., 7</source>
          , pp.
          <fpage>308</fpage>
          -
          <lpage>313</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] Nelder-Mead algorithm [Electronic resource],
          <year>2022</year>
          . Available at: http://www.scholarpedia.org/article/Nelder-Mead_algorithm.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] Welcome to Colaboratory [Electronic resource],
          <year>2022</year>
          . Available at: https://colab.research.google.com
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Khusainov</surname>
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Ya</surname>
          </string-name>
          .,
          <string-name>
            <surname>Diblik</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Bastinec</given-names>
            <surname>Ja</surname>
          </string-name>
          .,
          <string-name>
            <surname>Shatyrko</surname>
            <given-names>A.V.</given-names>
          </string-name>
          <article-title>Investigating Dynamics of One Weakly Nonlinear System with Delay Argument</article-title>
          ,
          <source>Journal of Automation and Information Sciences</source>
          ,
          <volume>50</volume>
          (
          <issue>1</issue>
          )
          <year>2018</year>
          . - pp.
          <fpage>20</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Khusainov</surname>
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Ya</surname>
          </string-name>
          .,
          <string-name>
            <surname>Diblik</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Bastinec</given-names>
            <surname>Ja</surname>
          </string-name>
          .,
          <string-name>
            <surname>Shatyrko</surname>
            <given-names>A.V.</given-names>
          </string-name>
          <article-title>Estimates of Solution Convergence Dynamical Processes in Neuronet with Time Delay</article-title>
          ,
          <source>Conference Proceedings “IEEE ATIT</source>
          <year>2019</year>
          ”, p.
          <fpage>411</fpage>
          -
          <lpage>414</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>