<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Fractal structure of training of a three-layer neural network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name><given-names>Volodymyr</given-names> <surname>Franiv</surname></string-name>
          <email>volodymyr.franiv@lnu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ivan Franko National University of Lviv</institution>
          ,
          <addr-line>Universytetska St.1, 79000 Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The fractal structure of training of a multilayer neural network is studied in this work. The number of neurons in the input and hidden layers corresponded to the size of the input array. A software product for recognizing printed digits was implemented in Python. The training sample for printed digits consisted of 5 representations of each digit plus 4 distorted variants, with an error of ≈15% for the 3x5 digit array and ≈10% for the 4x7 digit array. The heterogeneous sample consisted of 8 items and contained 3 variants that did not correspond to any digit. The fractal structure was studied in three modes of multilayer neural network training: undertraining, satisfactory training, and retraining. It was established that the appearance of the fractal structure is caused by the retraining of neurons. Retraining of neurons causes local minima to appear on the objective function of the training error. This leads to an increase in the error in the formation of the value of the correction function of the training weights. The transition of the neural network from the retraining mode to the chaotic mode is determined by the process of doubling the number of local minima on the objective function of the training error. Since the causes of the retraining and chaotic regimes are the same, the formation of the fractal structure in them is similar. Heterogeneity of the input array negatively affects the formation of the fractal structure of the training process.</p>
      </abstract>
      <kwd-group>
        <kwd>multilayer neural network</kwd>
        <kwd>fractal structure</kwd>
        <kwd>Adam optimization method</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The process of formation of the objective function of the training error of a neuron is
determined by the contribution of each neuron from the previous layer [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Such a process
of formation of the neural network training error indicates that the objective function of
the training error should be considered as a set of periodic functions that determine the
existence of neural network training modes, namely undertraining, satisfactory
training, and retraining. It is known [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] that the process of retraining is
associated with the appearance of local minima on the objective error function, and causes
an increase in the value of the training error. The appearance of local minima, according to
the work [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], is described in a first approximation by a logistic function of the doubling of
their number with the training step. Doubling the number of existing local minima
ultimately leads to the emergence of a chaotic mode of neural network training. In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the
process of training a neural network is compared with the dynamics of an incommensurate
superstructure. In particular, that work noted that the sinusoidal
mode of an incommensurate superstructure is characterized by the absence of harmonics.
An increase in the magnitude of the anisotropic interaction is accompanied by the
appearance of harmonics of an incommensurate superstructure, which leads to the soliton
regime of an incommensurate superstructure. A further increase in the anisotropic
interaction causes the appearance of a block structure characterized by different
periodicity, and the average value of the wave vector of an incommensurate
superstructure for a given ensemble can take an incommensurate value. At the same time,
the formation of a chaotic phase can be traced. A similar picture can be traced in the
process of training a neural network. So, with an increase in the training step, in the mode
of retraining individual neurons, the appearance of local minima can be traced. An
increase in the number of local minima in the first approximation is described by the
process of doubling their number with an increase in the training step. This process is
described by a recurrence relation of the form:
      </p>
      <p>x<sub>n+1</sub> = α − x<sub>n</sub> − x<sub>n</sub><sup>2</sup></p>
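      <p>To illustrate this recurrence, the following minimal Python sketch (illustrative, not part of the original software) iterates the map for several values of the training step and reports the values on which the orbit settles; the number of such values doubles as the training step grows, ending in a chaotic set:</p>
      <preformat>
# Iterate x_{n+1} = alpha - x_n - x_n^2 and report the settled orbit.
def attractor(alpha, x0=0.1, burn_in=1000, sample=16):
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = alpha - x - x * x
    orbit = set()
    for _ in range(sample):             # collect the settled orbit
        x = alpha - x - x * x
        orbit.add(round(x, 6))
    return sorted(orbit)

for alpha in (0.3, 0.55, 0.7):          # illustrative training steps
    print(alpha, attractor(alpha))      # the number of settled values doubles, then becomes chaotic
</preformat>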
      <p>
        This is confirmed by the results of the Fourier analysis of the objective function of
the training error and by the appearance of branching (bifurcation) diagrams [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>The fractality of an incommensurate superstructure is determined both by the
processes of harmonic occurrence and by the processes of nucleation and annihilation of
solitons. A number of works are devoted to the analysis of fractality of an incommensurate
superstructure. The fractality of the neural network training process may be determined
by the appearance of local minima on the objective function of the training error. That is,
the process of retraining a neural network may be fractal in nature. Thus, the study of the
fractal structure of the training error function in different modes of neural network
training will confirm or identify new mechanisms for the formation of training error.</p>
      <p>The study of fractal structure in stochastic systems is important for understanding and
predicting their dynamics. It can assist in explaining complex structures and interactions
in chaotic systems, as well as in developing effective methods for modeling and controlling
such systems. Therefore, the task of this work is to establish a picture of the neural
network training process, and the features of the formation of training error in the
retraining mode.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>Since the appearance of local minima is described by the process of doubling their number
on the training error objective function, the fractality will be mapped with the help
of a logistic function that describes the doubling process, that is, a complex map of the
form:</p>
      <p>z<sub>n+1</sub> = −z<sub>n</sub> − z<sub>n</sub><sup>2</sup></p>
      <p>This mapping has the peculiarity that the variable z<sub>n</sub> is a
complex quantity whose real part is the training step (alpha) and whose imaginary part
is the value of the weight correction function (w). The fractal structure was imaged in the
coordinates of the training-weight correction w and the
training step alpha; the speed with which points not belonging to the solution of this
system move away from the solution is represented by different colors in the figures.</p>
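      <p>A minimal escape-time sketch of this imaging procedure, under the assumptions stated above (each pixel starts from z<sub>0</sub> = alpha + iw, the map z<sub>n+1</sub> = −z<sub>n</sub> − z<sub>n</sub><sup>2</sup> is iterated, and the color encodes how quickly the point moves away; the grid bounds and escape radius are illustrative):</p>
      <preformat>
import numpy as np
import matplotlib.pyplot as plt

alphas = np.linspace(0.1, 0.7, 800)     # real axis: training step alpha
ws = np.linspace(-1.0, 1.0, 800)        # imaginary axis: weight correction w
A, W = np.meshgrid(alphas, ws)
z = A + 1j * W                          # one starting point per pixel
escape = np.zeros(z.shape, dtype=int)   # iteration at which a point diverges

for n in range(1, 101):
    z = -z - z * z                      # the complex map z_{n+1} = -z_n - z_n^2
    gone = (np.abs(z) &gt; 2.0) &amp; (escape == 0)
    escape[gone] = n                    # speed of moving away from the solution
    z[gone] = 0.0                       # freeze diverged points

plt.imshow(escape, extent=[0.1, 0.7, -1.0, 1.0], origin="lower", aspect="auto")
plt.xlabel("training step alpha")
plt.ylabel("weight correction w")
plt.show()
</preformat>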
      <p>
        The dynamics of the fractal structure was investigated as a function of the training
parameters of the neural network, in particular the number of iterations, the
dimension and homogeneity of the input array, the training step, and the parameters for
optimizing the training process. The study of the fractal structure was performed for a
multilayer neural network. The number of neurons in the input and hidden layer
corresponded to the size of the input array. The program was written in Python and
performed the recognition of printed numbers. The training data for printed digits were 5
representations of each digit plus 4 distorted variants of the digit, with an error of
≈15% for the 3x5 digit array and ≈10% for the 4x7 digit array. We used the
optimization method of training Adam [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which is characterized by a monotonic
process of training a neural network. On the basis of this architecture, a study of the fractal
structure was carried out when applying this method of training optimization to the
objective function of the training error. When studying the dependence of the fractal structure on the
neural network training parameters, in particular the number of iterations, the
size and homogeneity of the input array, and the training step, the optimization
parameters for this method were selected according to [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and
were β1 = 0.9 and β2 = 0.999.
      </p>
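      <p>As a minimal sketch of this setup (the random stand-in digit bitmaps, layer sizes, learning rate, and iteration count below are illustrative assumptions, not the original data), a three-layer network with the input and hidden layers sized to a 3x5 input array can be trained with the Adam method at β1 = 0.9 and β2 = 0.999 as follows:</p>
      <preformat>
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 3x5 bitmaps flattened to 15 inputs, 9 variants per digit
X = rng.integers(0, 2, size=(90, 15)).astype(float)
Y = np.eye(10)[np.repeat(np.arange(10), 9)]           # one-hot digit targets

# Three-layer network: 15 input, 15 hidden, 10 output neurons
W1 = rng.normal(0.0, 0.1, (15, 15))
W2 = rng.normal(0.0, 0.1, (15, 10))
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

beta1, beta2, alpha, eps = 0.9, 0.999, 0.3, 1e-8      # Adam parameters from the text
m = [np.zeros_like(W1), np.zeros_like(W2)]            # first-moment estimates
v = [np.zeros_like(W1), np.zeros_like(W2)]            # second-moment estimates

for step in range(1, 501):                            # number of iterations
    h = sigmoid(X @ W1)                               # hidden layer
    out = sigmoid(h @ W2)                             # output layer
    delta2 = (out - Y) * out * (1 - out)              # output-layer error signal
    delta1 = (delta2 @ W2.T) * h * (1 - h)            # backpropagated to hidden layer
    for i, (W, g) in enumerate(((W1, X.T @ delta1), (W2, h.T @ delta2))):
        m[i] = beta1 * m[i] + (1 - beta1) * g         # decay of past gradients
        v[i] = beta2 * v[i] + (1 - beta2) * g * g     # decay of past squared gradients
        m_hat = m[i] / (1 - beta1 ** step)            # bias correction
        v_hat = v[i] / (1 - beta2 ** step)
        W -= alpha * m_hat / (np.sqrt(v_hat) + eps)   # Adam weight update
</preformat>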
    </sec>
    <sec id="sec-3">
      <title>3. Homogeneous training data</title>
      <p>
        According to the Fourier spectra of the training error function, the retraining process
begins to be traced in the vicinity of the training step alpha = 0.45. Therefore, let's
consider the formation of a fractal structure when changing the alpha training step in the
range of 0.1 - 0.7. Fig.1 shows the fractal structure as a function of the number of
iterations, provided that the Adam training optimization method is applied with the
optimization parameters β1=0.9 and β2=0.999. For 10, 100, and 500
iterations, the image of the fractal structure for the digit "0" (Fig.1) in the retraining mode
of the neural network demonstrates a complex boundary, which gradually reveals
smaller and smaller recursive details when zoomed in. The set boundary is made up of
smaller versions of the basic form, so the fractal property of self-similarity refers to the
entire set, not just a part of it. With an increase in the number of iterations, there is a
change in the fractal picture. Namely, the appearance of smaller recursive details can be
traced. It is known [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] that with an increase in the number of iterations, the process of
retraining can be traced, which may explain the change in the fractal picture when the
number of iterations changes. Similar fractal structures were obtained for the other
printed digits. Thus, the presence of a fractal structure indicates that in the mode of
retraining the neural network, and taking into account the Fourier spectra of the error
function, there is an increase in the number of local minima, and in the first approximation
this process is described by the function of doubling their number. The mechanism of
transition to the chaotic mode of neural network training is described by the process of
doubling the number of local minima.
      </p>
      <p>It is known that for this multilayer neural network, according to Fourier spectra, the
process of retraining begins to be traced in the vicinity of the value of the training step
0.45. With a further increase in the training step, the chaotic training mode of the neural
network begins to manifest itself. In order to identify the manifestation of the neural
network retraining process in the fractal structure, a study of the fractal structure from
the size of the training step was carried out. Fig.2 shows the fractal structure
when changing the training step alpha. Starting from alpha = 0.3, the recurrence system
describes the magnitude of the training error as a function of the training step and
demonstrates the absence of a solution of the system. Under the condition alpha = 0.4, this system has a
single solution, and is characterized by almost no retraining process. That is, this mode of
neural network training demonstrates a satisfactory training process. A further change in
the training step leads to the appearance of two, four, and so on stable solutions, followed
by a transition to the mode described by the doubling process and a transition to a chaotic
training mode. Under these conditions, this mode of training is described by a fractal
structure (alpha=0.4÷0.7). According to the obtained fractal structure in Fig.2, the chaotic
training mode of the neural network is described by the appearance of additional small
details of different levels. Thus, the process of transition from the retraining mode to the
chaotic training mode of the neural network is accompanied by an increase in the number
of local minima, and therefore the appearance of additional small details of the fractal
structure. Similar results were obtained for the other digits.</p>
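      <p>This doubling sequence can be sketched directly from the recurrence: sweeping the training step and recording the settled orbit of the map yields the branching (bifurcation) diagram described above. A minimal sketch, with illustrative sweep bounds:</p>
      <preformat>
import numpy as np
import matplotlib.pyplot as plt

for alpha in np.linspace(0.01, 0.7, 700):   # sweep of the training step
    x = 0.1
    for _ in range(500):                    # discard the transient
        x = alpha - x - x * x
    xs = []
    for _ in range(64):                     # sample the settled orbit
        x = alpha - x - x * x
        xs.append(x)
    # one branch per stable solution; branches double and then form a chaotic band
    plt.plot([alpha] * len(xs), xs, ",k")

plt.xlabel("training step alpha")
plt.ylabel("settled values of x")
plt.show()
</preformat>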
      <p>
        The key quantity that describes a fractal quantitatively is the "fractal dimension".
However, in different sources, this term is understood as different quantities: the
Minkowski dimension, the Hausdorff-Bezikovich dimension, the self-similarity dimension.
The Hausdorff-Bezikovich dimension DH is obtained by dividing an object into
parts of size r and counting the number N(r) of parts covering the object under
study [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Fig.3 shows the results of the fractal dimension calculated by the
Hausdorff-Bezikovich method when changing the training step. The dependence of the fractal
dimension on the training step also indicates the emergence of a satisfactory neural
network training mode in the vicinity of alpha = 0.3, and the appearance of the neural
network retraining process at alpha &gt; 0.3. A further increase in the training step leads to
an increase in the fractal dimension, which indicates the transition to a chaotic mode [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] of
neural network training. Similar dependencies were obtained for the other digits.
      </p>
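      <p>A minimal box-counting sketch of this estimate, assuming the fractal structure is given as a binary image (for example, the escape-time image rendered above): the object is covered with boxes of size r, the covering boxes N(r) are counted, and DH is taken as the slope of log N(r) against log(1/r):</p>
      <preformat>
import numpy as np

def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the Hausdorff-Bezikovich (box-counting) dimension of a binary image."""
    counts = []
    for r in sizes:
        n = 0
        for i in range(0, image.shape[0], r):        # tile the image with r x r boxes
            for j in range(0, image.shape[1], r):
                if image[i:i + r, j:j + r].any():    # count boxes that cover the object
                    n += 1
        counts.append(n)
    # DH is the slope of log N(r) versus log(1/r)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Example use: box_counting_dimension(escape &gt; 0) for the escape-time image above
</preformat>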
      <p>It is known that an increase in the number of iterations can also lead to overtraining of
the neural network. To confirm this statement, consider the effect of the number of
iterations on the shape of the fractal structure. For this purpose, at the alpha=0.3 training
step, a study of the impact of the number of iterations on the neural network training
process was conducted. The training step was chosen close to the value at which the
process of retraining the neurons of the neural network begins to be traced. That is, the
fractal structure was investigated at the training step alpha = 0.3, provided that the
number of iterations increased. At a given value of the training step of a multilayer neural
network, a certain number of iterations are required to achieve the retraining mode. Fig.4
shows the fractal structure for the digit "0" given by a 3x5 array, provided alpha = 0.3,
when changing the number of iterations and applying the Adam optimization method to
the training error function.</p>
      <p>According to Fig.4, under the condition of 10 iterations, the mode of undertraining can
be traced. At 100 iterations, a satisfactory training mode can be traced, with the
emergence of a retraining mode. At 500 iterations and above, the mode of retraining the
neural network with the formation of a fractal structure is clearly manifested. Thus, the
emergence of a fractal structure is due to the process of retraining the neural network.
Similar patterns of the fractal structure versus the number of iterations were obtained for the other
digits. That is, with an increase in the number of iterations, the formation of a fractal
structure can be traced, which indicates that starting with a certain number of iterations,
the process of retraining can be traced, and this process is associated with the process of
the emergence of local minima and doubling of their number.</p>
      <p>Fig.5 shows the dependence of the fractal dimension calculated by the
Hausdorff-Bezikovich method on the number of iterations, at the training step alpha = 0.3, for the digit
"0" given by a 3x5 array. According to Fig.5, the dependence of the fractal dimension on
the number of iterations is characterized by a minimum at 100 iterations, and then
increases with an increase in the number of iterations. Taking into account the results
given in Fig.4, in the vicinity of 100 iterations, this system is characterized by a
satisfactory training mode with no retraining mode of the neural network. With an
increase in the number of iterations, there is a retraining of neurons, with the formation of
a fractal structure (Fig. 4), and an increase in the fractal dimension (Fig. 5).</p>
    </sec>
    <sec id="sec-4">
      <title>4. Heterogeneous training data</title>
      <p>Let us consider the influence of the parameter β2 on the formation of the fractal
structure. Fig.6 shows pictures of the fractal structure depending on the parameter β2,
which characterizes the degree of attenuation of the previous values of the square of the
gradient of the objective function of the training error.</p>
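      <p>For concreteness, in the Adam method β2 is the decay rate of an exponential moving average of the squared gradient, v<sub>t</sub> = β2·v<sub>t−1</sub> + (1 − β2)·g<sub>t</sub><sup>2</sup>, so its effective memory is roughly 1/(1 − β2) training steps. A minimal sketch with illustrative gradient values:</p>
      <preformat>
# Effective memory of the squared-gradient average for different beta2 values
for beta2 in (0.1, 0.9, 0.999, 0.9999):
    v = 0.0
    for g in [1.0] * 10 + [0.0] * 10:   # a burst of gradients, then silence
        v = beta2 * v + (1.0 - beta2) * g * g
    # large beta2 -> v still remembers the burst; small beta2 -> it has decayed
    print(beta2, round(v, 6), "~window:", round(1.0 / (1.0 - beta2)))
</preformat>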
      <p>
        According to [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], this parameter is decisive in the process of training a neural
network, and its optimal value is equal to 0.999. Fig.6 shows pictures of the fractal
structure when changing the optimization parameter β2 in the range of 0.1÷0.9999. The
obtained fractal structures in Fig.6 do not undergo a significant change when the β2
parameter changes. Although it should be noted that with an increase in the value of β2,
the picture of the fractal structure becomes richer in smaller fragments.
The values of the fractal dimension given in Table 1 also demonstrate the above pattern.
That is, there are no qualitative changes in the vicinity of the value of β2 = 0.999. It is
possible that the tendencies of change of the training error with the value of β2, which were
reported in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], can be manifested with larger sizes of the input array, or for more
heterogeneous arrays.
      </p>
      <p>
        The dynamics of the fractal structure for this input array versus the training step (Fig.7)
is similar to that of the array with inhomogeneity ≈15%
(Fig.2). In the vicinity of alpha=0.3, a homogeneous training process can be traced with
almost no retraining mode of the neural network. A further increase in the training step
causes the emergence of a retraining mode, which subsequently makes the transition to a
chaotic training mode. According to Fourier studies of the spectra of the training error
function [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], already at alpha&gt;0.5, the emergence of a chaotic training mode of the neural
network can be traced. According to Fig.8, the fractal structure does not show changes
when changing the training mode from retraining to chaotic.
      </p>
      <p>Although the magnitude of the fractal dimension from the training step, in this range of
changes alpha is characterized by an increase. This indicates an increase in the
randomness of the training mode at alpha&gt; 0.4. It is possible that the transition to a
chaotic mode of training is associated with a change in the smaller fractal structure. To
confirm or refute this assumption, let us consider the influence on the fractal structure of
such training parameters as the number of iterations and the optimization parameter β2.
An increase in the number of iterations at the beginning leads to a transition to the
retraining mode of neural networks (100 iterations), and subsequently to a chaotic
training mode. The transition from the retraining mode to the chaotic mode is
accompanied by an increase in the fine structure of the lower (first and second) level.</p>
      <p>Comparing the fractal structure obtained under the condition of non-homogeneity of
the input array of ≈40% with that obtained under the
condition of non-homogeneity of the input array of ≈15% (Fig. 4), it can be argued that an
increase in the heterogeneity of the input array leads to a decrease in the number of small
details on the fractal structure.</p>
      <p>
        It is known [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] that the parameter β2 affects the degree of attenuation of the previous
values of the square of the gradient of the objective function of the training error, and is
characterized by the minimum value of the training error at β2 = 0.999. Fig.10 shows
fractal structures at different values of the optimization parameter β2, provided that the
maximum training step is alpha = 0.5, with 100 iterations, for the digit "0" and an input-array
heterogeneity of ≈40%.
      </p>
      <p>When the parameter β2 is changed in the range of 0.1÷0.99, no changes in the fractal
structure can be observed. At β2 = 0.999, a change in the small details of the fractal
structure begins to be traced. Namely, the spatial areas of their existence are beginning to
increase. A further change in the β2 parameter leads to a more pronounced picture of such
changes. The calculated fractal dimension according to the Hausdorff-Bezikovich method
(Table 2) shows a similar dynamics versus the optimization parameter β2. Namely, in the
interval of change of the optimization parameter β2 = 0.1÷0.999, there is a decrease in the
value of the fractal dimension, reaching the smallest value at β2 = 0.999. A further change
in β2 leads to a sharp increase in the fractal dimension.</p>
      <p>Let's consider the effect of the size of the input array on the fractal structure of the
neural network. Fig.11 shows the fractal structure for different numbers of iterations
for the 4x7 digit array, with sample heterogeneity of ≈10% (Fig.11,a)
and ≈40% (Fig.11,b), versus the training step alpha = 0.01÷0.7, with the
optimization parameter β2=0.999. An increase in the number of iterations is accompanied
by a slight change in the fractal structure due to its shift along the real axis. The real axis in
our case corresponds to the change in the training step. Therefore, a shift to the region of
higher values of the real part may indicate a redistribution of contributions of a particular
mode of neural network training. As is known, with an increase in the number of
iterations, the role of the retraining mode increases, and subsequently that
of the chaotic training mode. An increase in sample heterogeneity (≈40%) does not lead to
a change in the overall picture of the fractal structure (Fig. 11, b). As noted above, an
increase in the heterogeneity of the figure sample leads to a decrease in changes in the
fractal pattern of the structure. Comparing samples with heterogeneity of ≈10% and
≈40%, a similar pattern can be noted.</p>
      <p>Figure 11: Fractal structure of training of a three-layer neural network when recognizing printed numbers depending on the training step, when representing a digit in a 4x7 array, with heterogeneity of the input array ≈10% (a) and ≈40% (b), for the digit "0"; panels a) 10, 100, 500 iterations; panels b) 10, 100, 1000, 5000 iterations.</p>
        <p>Taking into account the above-mentioned dependencies of changes in the fractal
structure on the training parameters (number of iterations, training step, optimization
parameter β2), a common pattern can be noted. An increase in the value of the training
parameters causes a shift of the picture of the fractal structure toward higher
values of the real part of its representation. In our opinion, this indicates that the
retraining mode of the neural network and the chaotic mode are equally involved in the
formation of the fractal structure. This is not surprising, since the reasons for the
occurrence of retraining mode and chaotic mode are the same. If there are differences in
fractal structures that describe the retraining mode and the chaotic mode, they are related
to fine details.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Summarizing the above, it can be noted that the fractal structure of the neural network
training process is due to the retraining of neurons. Retraining of neurons causes the
appearance of local minima on the objective function of the training error. This leads to an
increase in the error in the formation of the value of the correction function of the training
weights. The transition of the neural network from the retraining mode to the
chaotic mode is due to the process of doubling the number of local minima on the objective
function of the training error. Since the reasons for the occurrence of the retraining mode
and the chaotic mode are the same, the retraining mode of the neural network and the
chaotic mode are equally involved in the formation of the fractal structure in the process
of training the neural network. Heterogeneity of the input array has a negative impact on the
formation of the fractal structure of the training process.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Subbotin</surname>
          </string-name>
          ,
          <article-title>Neural networks: theory and practice: a tutorial</article-title>
          , (Ed.
          <string-name>
            <given-names>O. O.</given-names>
            <surname>Evenok</surname>
          </string-name>
          ), Zhytomyr,
          <year>2020</year>
          , 184 p.
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Melnyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sveleba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Katerynchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kuno</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Franiv</surname>
          </string-name>
          ,
          <article-title>Multilayer Neural Network Training Error when AMSGrad, Adam, AdamMax Methods Used</article-title>
          .
          <source>COLINS 2024: 8th International Conference on Computational Linguistics and Intelligent Systems</source>
          , April 12-13, Lviv, Ukraine,
          <year>2024</year>
          , pp.
          <fpage>232</fpage>
          -
          <lpage>254</lpage>
          URL: https://ceur-ws.org/Vol-3664/paper17.pdf
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Kingma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ba</surname>
          </string-name>
          ,
          <article-title>Adam: A Method for Stochastic Optimization</article-title>
          ,
          <source>3rd International Conference on Learning Representations</source>
          , San Diego,
          <year>2015</year>
          , URL: https://doi.org/10.48550/arXiv.1412.6980
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yarats</surname>
          </string-name>
          ,
          <article-title>Quasi-hyperbolic momentum and Adam for deep learning</article-title>
          ,
          <source>7th ICLR, New Orleans</source>
          , LA, USA,
          <year>2019</year>
          , pp.
          <fpage>19</fpage>
          -21. URL: https://arxiv.org/abs/1810.06801
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kawaguchi</surname>
          </string-name>
          ,
          <article-title>Effect of Depth and Width on Local Minima in Deep Learning</article-title>
          ,
          <source>Neural Computation</source>
          , MIT Press, Volume
          <volume>31</volume>
          <issue>Issue 7</issue>
          ,
          <year>2019</year>
          , pp.
          <fpage>1462</fpage>
          -
          <lpage>1498</lpage>
          . URL: https://doi.org/10.1162/neco_a_01195
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O. V.</given-names>
            <surname>Kapustyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Pichkur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Sobchuk</surname>
          </string-name>
          ,
          <source>Theory of Dynamic Systems</source>
          , Vezha-Druk, Lutsk, Ukraine,
          <year>2020</year>
          , 348 p.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>I. Y.</given-names>
            <surname>Adashevska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. O.</given-names>
            <surname>Kraievska</surname>
          </string-name>
          ,
          <article-title>Self-similarity as a characteristic property of fractal. Fractal (fractional) dimension of Hausdorff</article-title>
          ,
          <source>Scientific achievements of modern society: abstr. of 4th Intern. Sci. and Practical Conf.</source>
          , Liverpool, United Kingdom, 4-6 December
          <year>2019</year>
          , pp.
          <fpage>603</fpage>
          -
          <lpage>612</lpage>
          . URL: http://sci-conf.com.ua/wpcontent/uploads/2019/12/scientific-achievements-of-modern-society_4-6.12.19.pdf
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ahn</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <article-title>An Effective Optimization Method for Machine Learning Based on ADAM</article-title>
          ,
          <source>Appl. Sci.</source>
          , Volume
          <volume>10</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>1073</fpage>
          -
          <lpage>1093</lpage>
          , URL: https://www.mdpi.com/2076-3417/10/3/1073
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>AdaMax Online Training for Speech Recognition</article-title>
          ,
          <source>CSLT TECHNICAL REPORT-20150032</source>
          ,
          <year>2016</year>
          , URL: http://www.cslt.org/mediawiki/images/d/df/Adamax_Online_Training_for_Speech_Recognition.pdf
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Reddi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kale</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <source>On the Convergence of Adam and Beyond. 6th ICLR</source>
          , Vancouver, BC, Canada,
          <year>2018</year>
          , pp.
          <fpage>23</fpage>
          -35. URL: https://arxiv.org/abs/1904.09237
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>