<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Methods in the Short-Term and Middle-Term Forecasting in Financial Sphere</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yuriy Zaychenko</string-name>
          <email>zaychenkoyuri@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Helen Zaichenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Kuzmenko</string-name>
          <email>oleksii.kuzmenko@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <postal-code>03056</postal-code>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute for Applied System Analysis, Igor Sikorsky Kyiv Polytechnic Institute</institution>
          ,
          <addr-line>Peremogy avenue 37, Kyiv</addr-line>
        </aff>
      </contrib-group>
      <fpage>80</fpage>
      <lpage>89</lpage>
      <abstract>
        <p>In this paper the problems of short-term and middle-term forecasting in the financial sphere are considered. For this problem intelligent forecasting methods are suggested: LSTM and hybrid deep learning networks based on GMDH. The optimal parameters of LSTM and hybrid networks were found, and optimal structures of hybrid networks were constructed for short-term and middle-term forecasting. Experimental investigations of LSTM and hybrid DL networks were carried out and their accuracy was compared. The fields of preferable application of LSTM and hybrid DL networks to forecasting problems in finance are determined.</p>
      </abstract>
      <kwd-group>
        <kwd>short-term and middle-term financial forecasting</kwd>
        <kwd>LSTM</kwd>
        <kwd>hybrid DL network</kwd>
        <kwd>optimization</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction. Analysis of previous works</title>
      <p>
        Problems of forecasting share prices and market indices at stock exchanges attract great attention
of investors and various money funds. For their solution, powerful intelligent methods and
technologies were developed and investigated, among them neural networks and fuzzy logic systems.
A promising direction of recent years is hybrid DL networks based on the GMDH
        method [
        <xref ref-type="bibr" rid="ref15">11</xref>
        ]. The application of self-organization in these
networks makes it possible not only to train the neuron weights but also to construct the optimal
structure of the network. Due to the training method used in these networks, the weights are adjusted
not simultaneously but layer after layer. This prevents the phenomenon of vanishing or exploding
gradients, which is very important for networks with many layers.
      </p>
      <p>
        The first works in this field used Wang-Mendel neurons with two inputs as nodes of the hybrid
network [
        <xref ref-type="bibr" rid="ref15">11</xref>
        ]. A drawback of such neurons is the need to train not only the neural weights but also the
parameters of the fuzzy sets in the antecedents of the rules. This requires considerable computational
expense and long training time. Therefore, DL neo-fuzzy networks were later developed, which use
neo-fuzzy neurons by Yamakawa as nodes [
        <xref ref-type="bibr" rid="ref16 ref17">12,13</xref>
        ]. The main property of such neurons is that only the
neuron weights need to be trained, not the fuzzy sets. This demands fewer computations compared
with Wang-Mendel neurons and significantly cuts overall training time. Both classes of hybrid DL
networks were investigated and their forecasting efficiency in the financial sphere
was compared in [
        <xref ref-type="bibr" rid="ref17">13</xref>
        ]. It is therefore of great interest to compare the efficiency of hybrid DL
networks and LSTM on forecasting problems in the financial sphere. The goal of this paper is to
investigate the accuracy of hybrid DL networks and LSTM on the problem of forecasting market
indices at the stock exchange, to compare their efficiency over different intervals, and to determine the
classes of forecasting problems for which the application of the corresponding computational
intelligence technologies is most promising.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The description of the evolving hybrid GMDH-neo-fuzzy network</title>
      <p>The evolving hybrid DL-network architecture is presented in Fig.1. An (n × 1)-dimensional
vector of input signals is fed to the system's input layer. After that this signal is transferred to the first
hidden layer. This layer contains nodes, each of which has only two inputs.</p>
      <p>At the outputs of the first hidden layer the output signals ŷ[1] are formed. These signals are fed
to the selection block of the first hidden layer. It selects, among the output signals ŷ[1], the F* most
precise signals by some chosen criterion (mostly by the mean squared error σ²), where F* is the
so-called freedom of choice.</p>
      <p>From the F* best outputs of the first hidden layer, pairwise combinations (ŷ_i[1]*, ŷ_j[1]*) are
formed. These signals are fed to the second hidden layer, which is formed by the neurons N[2]. After
these neurons are trained, the output signals of this layer ŷ[2] are transferred to the selection block
SB[2], which chooses the F* best neurons by accuracy (e.g. by the value of σ²) if the best signal of the
second layer is better than the best signal of the first hidden layer ŷ[1]*. The other hidden layers form
their signals similarly. The system evolution process continues until the best signal of the selection
block SB[s+1] appears to be worse than the best signal of the previous s-th layer. Then we return to
the previous layer and choose its best node neuron N[s] with output signal ŷ[s]. Moving from this
neuron (node) backwards along its connections and sequentially passing all the previous layers, we
finally obtain the structure of the GMDH-neo-fuzzy network.</p>
      <p>It should be noted that in such a way not only can the optimal structure of the network be
constructed, but the network is also well trained due to the GMDH algorithm. Besides, since the
training is performed sequentially layer by layer, the problems of high dimensionality as well as of
vanishing or exploding gradients are avoided.</p>
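      <p>The layer-wise growth-and-selection procedure described above can be sketched as follows. This
is a minimal illustrative sketch, not the authors' implementation: the names (grow_gmdh, F,
train_node), the train_node interface and the initialization of the candidate pool are assumptions.</p>

```python
import itertools
import numpy as np

def grow_gmdh(X, y, X_val, y_val, F, train_node, max_layers=10):
    """Layer-wise GMDH structure growth with a selection block per layer.

    train_node(Z, y) must fit a two-input node on the columns of Z and
    return a callable predictor; F is the "freedom of choice" (how many
    of the most precise signals survive each layer).  Growth stops when
    the best signal of a new layer is worse than the previous best.
    """
    signals = [X[:, i] for i in range(X.shape[1])]
    val_signals = [X_val[:, i] for i in range(X_val.shape[1])]
    best_err, best_model = np.inf, None
    for _ in range(max_layers):
        candidates = []
        for i, j in itertools.combinations(range(len(signals)), 2):
            model = train_node(np.column_stack([signals[i], signals[j]]), y)
            pred_val = model(np.column_stack([val_signals[i], val_signals[j]]))
            err = np.mean((y_val - pred_val) ** 2)  # selection criterion (MSE)
            pred_tr = model(np.column_stack([signals[i], signals[j]]))
            candidates.append((err, pred_tr, pred_val, model))
        candidates.sort(key=lambda c: c[0])
        survivors = candidates[:F]          # the selection block keeps F best
        if survivors[0][0] >= best_err:     # new layer is worse: stop growing
            break
        best_err, best_model = survivors[0][0], survivors[0][3]
        signals = [c[1] for c in survivors]
        val_signals = [c[2] for c in survivors]
    return best_model, best_err
</code>

      <p>In the full algorithm the final network structure is extracted by tracing the best node's
connections backwards through the layers; in this sketch the chaining is implicit, since each layer's
surviving signals are already the predictions of the trained nodes below it.</p>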
    </sec>
    <sec id="sec-3">
      <title>2.1. Neo-fuzzy neuron as a node of hybrid GMDH-system</title>
      <p>
        Let’s introduce into consideration the architecture of the node that is presented in Fig.2 and is
suggested as a neuron of the proposed GMDH-system. As a node of this structure a neo-fuzzy neuron
(NFN) by Takeshi Yamakawa and co-authors in [
        <xref ref-type="bibr" rid="ref13">9</xref>
        ] is used. The neo-fuzzy neuron is a nonlinear
multi-input single-output system shown in Fig.2. The main difference of this node from the general
neo-fuzzy neuron structure is that each node uses only two inputs. It realizes the following mapping:
ŷ = f_1(x_1) + f_2(x_2), (1)
where x_i is the input (i = 1, 2) and ŷ is the system output. The structural blocks of the neo-fuzzy
neuron are the nonlinear synapses f_i, which perform a transformation of the input signal
and realize the fuzzy inference: if x_i is A_ij,
      </p>
      <p>
        then the output is w_ij, where A_ij is a fuzzy set whose
membership function is μ_ij and w_ij is a synaptic weight in the consequent [
        <xref ref-type="bibr" rid="ref15">11</xref>
        ].
      </p>
      <p>Each nonlinear synapse thus realizes the transformation
f_i(x_i) = Σ_j w_ij μ_ij(x_i). (2)</p>
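      <p>The mapping (1)-(2) can be illustrated in code with triangular membership functions. This is a
sketch under the assumption of a uniform triangular partition, in which neighbouring membership
degrees sum to one; the helper names (tri_memberships, neo_fuzzy_output) are illustrative.</p>

```python
import numpy as np

def tri_memberships(x, centers):
    """Membership degrees of scalar x for a triangular partition.

    At most two neighbouring triangles fire for any x, and their
    degrees sum to one inside the grid of centers."""
    c = np.asarray(centers, dtype=float)
    x = float(np.clip(x, c[0], c[-1]))
    j = int(np.searchsorted(c, x))      # first center >= x
    mu = np.zeros(len(c))
    if j == 0:
        mu[0] = 1.0
    else:
        t = (x - c[j - 1]) / (c[j] - c[j - 1])
        mu[j - 1] = 1.0 - t
        mu[j] = t
    return mu

def neo_fuzzy_output(x, centers, W):
    """Neo-fuzzy neuron mapping: y = sum_i sum_j w_ij * mu_ij(x_i).

    x is the input vector; W[i, j] is the synaptic weight of the j-th
    rule (fuzzy set) of the i-th input."""
    return sum(W[i] @ tri_memberships(x[i], centers) for i in range(len(x)))
</code>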
    </sec>
    <sec id="sec-4">
      <title>2.2. The neo-fuzzy neuron learning algorithm</title>
      <p>The learning criterion (goal function) is the standard local quadratic error function:
E(k) = (1/2)(y(k) − ŷ(k))² = (1/2)e(k)² = (1/2)(y(k) − Σ_i Σ_j w_ij μ_ij(x_i(k)))²,
where y(k) denotes the external reference signal (real value). The synaptic weights may be estimated
with the conventional least squares method (LSM) [
        <xref ref-type="bibr" rid="ref16">12</xref>
        ]:
w(k) = (Σ_{τ=1..k} μ(x(τ)) μᵀ(x(τ)))⁺ Σ_{τ=1..k} μ(x(τ)) y(τ),
where (•)⁺ denotes the Moore-Penrose pseudoinverse.</p>
      <p>If training observations are fed sequentially in on-line mode, the recurrent form of the LSM can
be used [
        <xref ref-type="bibr" rid="ref15 ref16">11,12</xref>
        ]:
w(k) = w(k−1) + P(k−1)(y(k) − wᵀ(k−1)μ(x(k))) μ(x(k)) / (1 + μᵀ(x(k)) P(k−1) μ(x(k))),
P(k) = P(k−1) − P(k−1) μ(x(k)) μᵀ(x(k)) P(k−1) / (1 + μᵀ(x(k)) P(k−1) μ(x(k))).</p>
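      <p>The recurrent LSM maps directly to code. This is a minimal sketch: the function name is
illustrative, and initializing P as a large multiple of the identity is a common but here assumed
choice.</p>

```python
import numpy as np

def rls_update(w, P, mu, y):
    """One step of the recurrent least squares method.

    w  : current weight vector w(k-1)
    P  : current matrix P(k-1)
    mu : vector of membership activations mu(x(k))
    y  : reference (target) value y(k)
    Returns the updated pair (w(k), P(k)).
    """
    Pmu = P @ mu
    denom = 1.0 + mu @ Pmu              # 1 + mu^T P mu
    e = y - w @ mu                      # a priori prediction error
    w_new = w + Pmu * e / denom
    P_new = P - np.outer(Pmu, Pmu) / denom
    return w_new, P_new
</code>

      <p>Each call costs only a matrix-vector product, which is what makes the on-line mode cheap
compared with recomputing the batch pseudoinverse at every step.</p>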
    </sec>
    <sec id="sec-5">
      <title>3. LSTM model and architecture</title>
      <p>
        Recurrent neural networks (RNNs) are based on the idea of passing through a sequence of time
steps with derivatives that neither explode nor vanish. The idea is that a gated self-loop can be
introduced that makes it possible to decide what information should be forgotten, saved, and kept.
That decision is based on features that the neural network learns during training. One of the most
successful architectures implementing this idea is the Long Short-Term Memory (LSTM) recurrent
neural network [
        <xref ref-type="bibr" rid="ref5 ref6 ref7 ref8 ref9">1-5</xref>
        ].
      </p>
      <p>
        LSTM replaces the regular RNN unit with an LSTM block that has its own internal memory and
consists of several interacting blocks. LSTM has two types of connections: external
connections that are similar to the recurrent connections between RNN hidden units, and an internal
state s(t) that has a recurrent internal connection to itself. In addition, the LSTM block has three main
gates that control the information flow and decide whether new information should be forgotten or
saved and whether we need to keep old information in memory [
        <xref ref-type="bibr" rid="ref9">5</xref>
        ]. An example of LSTM architecture is
shown in Fig.3. The forget gate is computed as
f(t) = σ(b_f + Σ_j U_f,j x_j(t) + Σ_j W_f,j h_j(t−1)),
where h(t−1) contains information from the LSTM block outputs at the previous time steps; b_f is the
forget gate bias vector; U_f is the matrix of input weights for the forget gate; W_f is the matrix of
recurrent weights for the forget gate.
      </p>
      <p>
        The next step for the LSTM block consists of several intermediate steps. First, the input gate
decides which information in the internal state should be updated with new data. Then, the network
creates a list of new elements that reflect the new information to be added to the internal state. Finally,
the network combines all the information from the previous steps and updates the internal state s(t).
All these operations are described by the following equation [
        <xref ref-type="bibr" rid="ref8 ref9">4, 5</xref>
        ]:
s(t) = f(t) s(t−1) + g(t) σ(b + Σ_j U_j x_j(t) + Σ_j W_j h_j(t−1)),
where g(t) is the input (update) gate signal.
      </p>
      <p>The last step of the LSTM block decides which information should be returned as output. The
output value is calculated using the output gate mechanism:
h(t) = q(t) tanh(s(t)),
q(t) = σ(b_q + Σ_j U_q,j x_j(t) + Σ_j W_q,j h_j(t−1)),
where b_q, U_q, W_q are, respectively, the bias vector and the input and recurrent weight matrices of
the output gate.</p>
      <p>
        For training LSTM, the stochastic gradient method and its modern modifications are used. The
LSTM architecture has been successful on real-world tasks in different domains and handles
long-term dependencies much better than plain RNNs [
        <xref ref-type="bibr" rid="ref8 ref9">4, 5</xref>
        ].
      </p>
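      <p>The gate equations above can be combined into a single-step LSTM cell. This is a NumPy sketch:
the parameter layout and the use of a tanh candidate update (a common variant of the formulation
given above) are assumptions, and the names are illustrative.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, s_prev, params):
    """One LSTM time step.

    params maps gate names to (U, W, b) triples: "f" (forget), "g" (input),
    "q" (output) and "c" (candidate update); names are illustrative.
    """
    Uf, Wf, bf = params["f"]
    Ug, Wg, bg = params["g"]
    Uq, Wq, bq = params["q"]
    Uc, Wc, bc = params["c"]
    f = sigmoid(bf + Uf @ x + Wf @ h_prev)   # forget gate
    g = sigmoid(bg + Ug @ x + Wg @ h_prev)   # input (update) gate
    q = sigmoid(bq + Uq @ x + Wq @ h_prev)   # output gate
    c = np.tanh(bc + Uc @ x + Wc @ h_prev)   # candidate state
    s = f * s_prev + g * c                   # internal state update
    h = q * np.tanh(s)                       # block output
    return h, s
</code>

      <p>Because q(t) lies in (0, 1) and tanh is bounded, the block output h(t) stays in (−1, 1), while the
internal state s(t) can accumulate information over many time steps through the gated self-loop.</p>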
    </sec>
    <sec id="sec-6">
      <title>4. Experimental investigation and results analysis</title>
    </sec>
    <sec id="sec-7">
      <title>4.1. Data set</title>
      <p>As input data, the corporate Emerging Markets Bond Total Return Index (EMBRI) at the
NASDAQ stock exchange over the period from January till August 2022 was taken. The sample
instances were divided into training and test subsamples.</p>
      <p>The chart of EMBRI is presented in Fig. 4.</p>
    </sec>
    <sec id="sec-8">
      <title>4.2. Experimental investigations of hybrid DL networks</title>
      <p>The first series of experiments was performed with the hybrid deep learning network with
neo-fuzzy neurons as nodes. In the experiments the following parameters were varied: the ratio of
training to test sample, the number of inputs (3-5), the number of fuzzy sets per variable (3-5) and the
membership functions (Bell, Gaussian and triangular). The goal of the experiments was to find the
optimal parameter values. The forecast period was taken as 5 days, and MSE and MAPE were taken
as the accuracy metrics. In the first experiment the Bell MF was explored. After the experiment the
optimal parameters were found for the hybrid DL network: number of inputs 3, number of fuzzy sets
3, ratio 0.8. With these parameter values the best accuracy on the test sample was attained:
MSE = 0.424, MAPE = 0.155.</p>
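      <p>The two accuracy metrics used throughout the experiments can be computed as follows. This is a
sketch; since the text does not state whether the reported MAPE values are fractions or percentages,
this version returns a fraction.</p>

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared forecast error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error, returned as a fraction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))
</code>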
      <p>The next experiments were performed for the hybrid network with Gaussian MF. After the
experiments the optimal parameters of the hybrid DL network were found: number of inputs 3,
number of rules 4, ratio training/test 0.6. With these parameters the forecasting results are presented
in Table 1.</p>
      <p>In Table 2 the process of structure generation of the best network with optimal parameters is
presented. As follows from Table 2, the optimal structure consists of 3 layers: 3 nodes in the first
layer, 3 nodes in the second layer and 1 node in the third layer. The flow chart of forecasting with this
network structure is presented in Fig. 5.</p>
      <sec id="sec-8-1">
        <title>Table 1. The best forecast with Gaussian MF (inputs: 3; rules: 4; ratio: 0.6)</title>
        <p>The next experiment was carried out with the hybrid DL network with triangular MF. After the
experiment the optimal parameters and structure of the hybrid network were found: 5 inputs, 3 MFs
and a 0.8 ratio. After the experiments the accuracy of hybrid networks with different MFs was
compared. The results are presented in Fig. 6. In the next series of experiments LSTM networks were
investigated. The goal of the experiments was to find the optimal parameters. The following
parameters were varied: number of inputs 3-5, ratio training/test 0.6, 0.7, 0.8. The optimal parameter
values were found: n = 3, ratio 0.7. After that the LSTM with optimal parameters was applied for
training and forecasting. The results are presented in Table 3. The flow charts of training and
validation are presented in Fig. 7.</p>
        <p>In the next experiments the efficiency of the best models of the hybrid DL network and LSTM
was investigated and compared for different ratios of training to test sample. The corresponding
results for different inputs are presented in Tables 4-6. Analyzing these results, we may conclude that
the GMDH-neo-fuzzy network has better forecasting accuracy at the interval of 5 days than LSTM for
various ratios. In the next experiments the forecasting efficiency of hybrid DL networks and LSTM at
different forecasting intervals was investigated. In Table 7 the forecasting results of the
GMDH-neo-fuzzy network and LSTM at the interval of 7 days are given, and in Fig. 8 the forecasting
accuracy is presented.</p>
        <p>As follows from the presented results, the hybrid neo-fuzzy network has better accuracy than
LSTM at the interval of 7 days. In the succeeding experiments the forecasting accuracy of both
networks was explored for middle-term forecasting with an interval of 20 days. The accuracy by
MAPE is presented in Fig. 9.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>4.3. Comparative experiments of hybrid DL networks and LSTM</title>
      <p>In the final experiments the forecasting accuracy of the hybrid GMDH network (GMDH-nf) and
LSTM was compared at different forecasting intervals (short-term and middle-term). The
corresponding results by the MSE and MAPE criteria are presented in Table 8 and Table 9,
respectively, where the point denotes the size of the test sample.</p>
      <p>Analysis of these results shows that on the whole the hybrid DL network has better accuracy
than LSTM at the different short-term and middle-term forecasting intervals (3, 5, 7, 20 days).</p>
    </sec>
    <sec id="sec-10">
      <title>5. Conclusion</title>
      <p>1. In this paper the problem of forecasting at the financial market with different forecasting
intervals (short-term and middle-term) was considered. For its solution it was suggested to apply
hybrid deep learning (DL) networks based on GMDH, and LSTM networks.</p>
      <p>2. The experimental investigations were performed on the problem of forecasting the Emerging
Markets Bond Total Return Index (EMBRI) at the NASDAQ stock exchange over the period from
January till August 2022.</p>
      <p>3. Optimization of the parameters of the LSTM and hybrid networks was performed during the
experiments. The optimal structure of the hybrid DL network was constructed using the GMDH
method.</p>
      <p>4. The experimental investigations of the optimized LSTM and hybrid networks were carried out
at different forecasting intervals and their accuracy was compared. As a result it was established that
hybrid DL networks have much better accuracy than LSTM for short-term and middle-term
forecasting at stock exchanges.</p>
    </sec>
    <sec id="sec-11">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref5">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hochreiter</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>"Long short-term memory"</article-title>
          ,
          <source>Neural Computation</source>
          , vol.
          <volume>9</volume>
          ,
          <year>1997</year>
          , pp.
          <fpage>1735</fpage>
          -
          <lpage>1780</lpage>
          . doi: 10.1162/neco.1997.9.8.1735.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hammer</surname>
          </string-name>
          ,
          <article-title>"On the approximation capability of recurrent neural networks"</article-title>
          ,
          <source>Neurocomputing</source>
          , vol.
          <volume>31</volume>
          ,
          <year>1998</year>
          , pp.
          <fpage>107</fpage>
          -
          <lpage>123</lpage>
          . doi: 10.1016/S0925-2312(99)00174-5.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Olah</surname>
          </string-name>
          ,
          <article-title>"</article-title>
          <source>Understanding lstm networks"</source>
          ,
          <year>2020</year>
          . URL: https://colah.github.io/posts/2015-08- Understanding-LSTMs/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Graves</surname>
          </string-name>
          ,
          <article-title>"Generating sequences with recurrent neural networks"</article-title>
          ,
          <source>CoRR</source>
          , vol.
          <source>abs/1308.0850</source>
          ,
          <year>2013</year>
          . doi: 10.48550/arXiv.1308.0850.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Graves</surname>
          </string-name>
          ,
          <article-title>Supervised Sequence Labelling with Recurrent Neural Networks</article-title>
          , Verlag, Berlin, Heidelberg: Springer,
          <year>2012</year>
          . doi: 10.1007/978-3-642-24797-2.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.S.</given-names>
            <surname>Haykin</surname>
          </string-name>
          ,
          <article-title>Neural networks: a comprehensive foundation</article-title>
          , 2nd ed. Upper Saddle River,
          <string-name>
            <surname>N.J</surname>
          </string-name>
          : Prentice Hall,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Ossovsky</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Neural networks for information processing, transl</article-title>
          .
          <source>from Polish</source>
          . Moscow: Finance and Statistics, 2002, 344 p. (in Russian).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Wang</surname>
            <given-names>F</given-names>
          </string-name>
          .
          <source>Neural Networks Genetic Algorithms and Fuzzy Logic for Forecasting // Proc. Intern. Conf. Advanced Trading Technologies</source>
          . - New York,
          <year>1992</year>
          , pp.
          <fpage>504</fpage>
          -
          <lpage>532</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Yamakawa</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uchino</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miki</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kusanagi H</surname>
          </string-name>
          .
          <article-title>A neo-fuzzy neuron and its applications to system identification and prediction of the system behavior //</article-title>
          <source>Proc. 2nd Intеrn. Conf. Fuzzy Logic and Neural Networks «LIZUKA-92». - Lizuka</source>
          ,
          <year>1992</year>
          , pp.
          <fpage>477</fpage>
          -
          <lpage>483</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          , Deep Learning, MIT PRESS,
          <year>2016</year>
          . URL: http://www.deeplearningbook.org.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Yuriy</surname>
            <given-names>Zaychenko</given-names>
          </string-name>
          , Yevgeniy Bodyanskiy, Oleksii Tyshchenko, Olena Boiko,
          <string-name>
            <given-names>Galib</given-names>
            <surname>Hamidov</surname>
          </string-name>
          .
          <article-title>Hybrid GMDH-neuro-fuzzy system and its training scheme</article-title>
          .
          <source>Int. Journal Information theories and Applications</source>
          ,
          <year>2018</year>
          . vol.
          <volume>24</volume>
          , Number 2, pp.
          <fpage>156</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Yu</surname>
            . Zaychenko,
            <given-names>Galib</given-names>
          </string-name>
          <string-name>
            <surname>Hamidov</surname>
          </string-name>
          .
          <article-title>The Hybrid Deep Learning GMDH-neo-fuzzy Neural Network and Its Applications</article-title>
          .
          <source>Proceedings of 13-th IEEE Intern Conference Application of Information and Communication Technologies-AICT2019</source>
          .
          23-25 October 2019
          , Baku, pp.
          <fpage>72</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Evgeniy</surname>
            <given-names>Bodyanskiy</given-names>
          </string-name>
          , Yuriy Zaychenko, Olena Boiko, Galib Hamidov,
          <string-name>
            <given-names>Anna</given-names>
            <surname>Zelikman</surname>
          </string-name>
          .
          <article-title>Structure Optimization and Investigations of Hybrid GMDH-Neo-fuzzy Neural Networks in Forecasting Problems</article-title>
          .
          <source>System Analysis &amp; Intelligent Computing. Ed. Michael Zgurovsky</source>
          ,
          <string-name>
            <given-names>Natalia</given-names>
            <surname>Pankratova</surname>
          </string-name>
          .
          <source>Studies in Computational Intelligence, SCI</source>
          , vol.
          <volume>1022</volume>
          . Springer,
          <year>2022</year>
          , pp.
          <fpage>209</fpage>
          -
          <lpage>228</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>