<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Prediction of Data Transmission Route Congestion in Telecommunication Systems Based on a Modified Elman Neural Network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eduard Bovda</string-name>
          <xref ref-type="aff" rid="aff0">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuriy Samokhvalov</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>1</label>
          <institution>Military Institute of Telecommunications and Informatization named after Heroes of Kruty</institution>
          ,
          <addr-line>Knyaziv Ostrozkyh Street 45/1, Kyiv, 01011</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Taras Shevchenko National University</institution>
          ,
          <addr-line>Volodymyrska Street 64/13, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <fpage>20</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The article analyzes existing approaches and methods of forecasting abnormal situations in telecommunication systems. The importance of the problem of forecasting congestion of data transmission routes is shown, and it is proposed to use the Elman neural network for its solution. A modification of this network and a method of predicting congestion of data transmission routes in the telecommunications network, which is based on a modified Elman neural network, are given. This method makes it possible to increase the accuracy and speed of forecasting route congestion in the network by increasing the bandwidth of the network and reducing the complexity of calculations. Keywords: data transmission routes, forecasting, telecommunication network, neural network, Elman network, stochastic time efficiency.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Telecommunication networks form the basis of modern distributed systems; they are complex
technical systems that usually operate in dynamic environments [1, 2]. The management of such
networks must solve tasks that guarantee data transmission with a given quality [2]. Given this,
telecommunication network management systems should include a subsystem for predicting abnormal
situations (overload of data transmission routes, errors, etc.), which will allow the network administrator
to take timely preventive measures. Therefore, forecasting the state of telecommunications networks is
an important task of network administration.</p>
      <p>
        A lot of research has been devoted to predicting the states of complex technical systems. Among
them, the following methods and approaches are most often used. Thus, in [
        <xref ref-type="bibr" rid="ref1">3</xref>
        ], a method for predicting
computer network states based on biometric algorithms is considered. In [
        <xref ref-type="bibr" rid="ref2 ref3">4, 5</xref>
        ], the method of temporal
extrapolation, in [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">6, 7, 8</xref>
        ], the method of spatial extrapolation, in [
        <xref ref-type="bibr" rid="ref7">9</xref>
        ], the method of causal relationship
and expert methods, and in [
        <xref ref-type="bibr" rid="ref8">10</xref>
        ], a method is proposed in which data on the behavior of an object whose
features are related to time are presented as the results of observations at uniform time intervals and are
represented by a time series. You can also use the method of paired comparisons, which is considered
in the work [
        <xref ref-type="bibr" rid="ref9">11</xref>
        ]. In addition, recently, neural network-based approaches have been widely used to
predict the states of telecommunication networks and have shown their effectiveness. Such approaches
are discussed in [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13">12-15</xref>
        ]. Papers [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">12, 13, 14</xref>
        ] consider neural networks that obtain the desired
results without human intervention and at low computational cost; papers [
        <xref ref-type="bibr" rid="ref13 ref14">15, 16</xref>
        ] consider hybrid neural
networks that assess and predict the state of computer networks with high classification accuracy for
both the current and the predicted state; and [
        <xref ref-type="bibr" rid="ref15">17</xref>
        ] considers the use of
a probabilistic neural network to solve the problems of classifying and predicting the state of the network
transport environment.
      </p>
      <p>
        Since the forecasting problem is a special case of the regression problem, the
following types of neural networks can be used to solve it: multilayer perceptrons, radial basis networks,
generalized regression networks, Volterra networks, and Elman networks. The analysis [
        <xref ref-type="bibr" rid="ref16">18</xref>
        ] of the use
of such networks in forecasting problems indicates that it is expedient to base time series
forecasting on the Elman neural network.
      </p>
      <p>At the same time, the direct use of this network increases the load on the telecommunications network
as a whole, as well as the computational complexity. This makes it impossible to predict the network state in real
time. Therefore, the question arises of creating a model that solves this problem.</p>
      <p>The article proposes a modification of the Elman network and a method for predicting the overload
of telecommunication network data transmission routes based on it, which allows for effective
management of a telecommunication network in conditions of high dynamics and complexity of
connections between nodes.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Modified Elman neural network</title>
      <p>
        An Elman neural network is a type of recurrent network: a multilayer perceptron with feedback.
This feedback makes it possible to take previous states into account and to accumulate information for
supporting management decisions based on time series forecasting. In other words, time series
forecasting is reduced to interpolation (determining intermediate values) of a function of many variables
and to approximation (reduction to a simplified form) of a multidimensional function, which inherently
affects the quality of forecasting. Figure 1 shows a diagram of the Elman neural network, which consists
of three layers: the input (distribution) layer, the hidden layer, and the output (processing) layer; the
hidden layer has a feedback connection onto itself [
        <xref ref-type="bibr" rid="ref16 ref17">18, 19</xref>
        ].
      </p>
      <p>Figure 1: Diagram of the Elman neural network (input X, weight matrices W, V, U, context C, hidden layer H, output layer O, output Y), where:
X is the input signal;
Y is the output of the neural network;
C is the context state at iteration t for input X;
W is the weight matrix of the input layer;
V is the weight matrix of the hidden layer feedback;
U is the weight matrix connecting the output of the hidden layer with the input of the output layer;
X(t-1) is the input signal at the previous iteration;
C(t-1) is the state of the context at the previous iteration;
f is the vector of the activation function;
H is the hidden layer of neurons, where each input X is connected to each neuron of the hidden layer;
O is the output layer of neurons.</p>
      <p>The operation of the network is described by the relations

H(t) = f(W X(t) + V C(t)), C(t) = H(t-1), (1)

Y(t) = f(U H(t)). (2)</p>
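      <p>To illustrate relations (1) and (2), a minimal sketch of the classical Elman forward pass is given below (Python with NumPy; the layer sizes and the choice of sigmoid for the output are illustrative assumptions, not taken from the article):</p>
      <preformat>
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanCell:
    """Classical Elman cell: H(t) = f(W X(t) + V C(t)), Y(t) = f(U H(t)), C(t) = H(t-1)."""
    def __init__(self, n_in, n_hidden, n_out, rng=np.random.default_rng(0)):
        self.W = rng.uniform(-1, 1, (n_hidden, n_in))      # input-layer weights
        self.V = rng.uniform(-1, 1, (n_hidden, n_hidden))  # feedback (context) weights
        self.U = rng.uniform(-1, 1, (n_out, n_hidden))     # hidden-to-output weights
        self.C = np.zeros(n_hidden)                        # context state C(t)

    def step(self, x):
        h = sigmoid(self.W @ x + self.V @ self.C)  # hidden state H(t), relation (1)
        y = sigmoid(self.U @ h)                    # network output Y(t), relation (2)
        self.C = h                                 # context for the next iteration
        return y

# usage: feed a sequence of 8-parameter observations one step at a time
cell = ElmanCell(n_in=8, n_hidden=12, n_out=1)
for x_t in np.random.default_rng(1).uniform(0, 1, (5, 8)):
    y_t = cell.step(x_t)
      </preformat>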
      <p>A telecommunication network can be considered as a set of its elements: information directions,
routes, nodes, channels, and service quality characteristics. Therefore, as the input of the neural network,
we take a set of parameters of the network directions in the form of an input signal X =
{x1, x2, x3, x4, x5, x6, x7, x8}, where x1 is the type of traffic (voice, video, data) being transmitted; x2 is the
volume of service traffic between nodes; x3 is the information throughput capacity; x4 is the delay of packets
in the information direction; x5 is the value of jitter in the information direction; x6 is the quality of routes
between nodes; x7 is the number of packets with errors (IPER); x8 is the number of packets lost (IPLR). The
input signal is formed as a result of monitoring the elements of the telecommunications network.</p>
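      <p>For clarity, a small sketch of how such an input vector might be assembled from monitoring data is shown below; the field names, value ranges and the min-max scaling are illustrative assumptions, since the article only fixes the meaning of x1-x8:</p>
      <preformat>
import numpy as np

# Hypothetical monitoring record for one information direction (names are illustrative).
sample = {
    "traffic_type": 2,             # x1: voice=0, video=1, data=2 (coded)
    "service_traffic_mbps": 34.0,  # x2: volume of service traffic between nodes
    "throughput_mbps": 87.0,       # x3: information throughput capacity
    "packet_delay_ms": 41.0,       # x4: packet delay in the information direction
    "jitter_ms": 6.5,              # x5: jitter value
    "route_quality": 0.8,          # x6: quality of routes between nodes
    "ip_error_ratio": 0.012,       # x7: IPER
    "ip_loss_ratio": 0.004,        # x8: IPLR
}

# Assumed value ranges used for min-max normalization to [0, 1].
ranges = {
    "traffic_type": (0, 2), "service_traffic_mbps": (0, 100), "throughput_mbps": (0, 100),
    "packet_delay_ms": (0, 200), "jitter_ms": (0, 50), "route_quality": (0, 1),
    "ip_error_ratio": (0, 0.05), "ip_loss_ratio": (0, 0.05),
}

def to_input_vector(record):
    """Return the normalized input signal X = {x1, ..., x8} as a NumPy array."""
    return np.array([(record[k] - lo) / (hi - lo) for k, (lo, hi) in ranges.items()])

x = to_input_vector(sample)   # shape (8,), ready to be fed to the network
      </preformat>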
      <p>The output of the neural network Y is an output neuron (adder) that calculates the
deviation of the values detected by the neurons from the value of the operating state of the
telecommunications network routes.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Prediction of telecommunication network route overload using a modified Elman network</title>
      <p>The essence of the forecasting is to determine the value of congestion of telecommunication network
routes, which is calculated from the parameters of the information direction, in order to meet the
requirements of network optimization and quality of service for packets of various types. The
architecture of the modified Elman recurrent neural network is shown in Fig. 2; it contains m neurons
in the input layer, n neurons in the hidden and recurrent layers, and one output block. Let x_it
(i = 1, 2, ..., m) denote the set of input vectors at a given time t, y_{t+1} the network output at time t+1,
u_jt (j = 1, 2, ..., n) the outputs of the hidden layer neurons at time t, and c_jt (j = 1, 2, ..., n) and s_jt
(j = 1, 2, ..., n) the neurons of the recurrent layer; w_ij is the weight that connects node i of the input
layer to node j of the hidden layer, and v_j and q_j are the weights that connect unit j of the hidden layer
with a node of the recurrent layer.</p>
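      <p>A minimal sketch of the parameter shapes implied by this description is given below (Python/NumPy; the concrete sizes m and n are illustrative assumptions):</p>
      <preformat>
import numpy as np

m, n = 8, 12   # input size and hidden/recurrent layer size (illustrative)
rng = np.random.default_rng(0)

w = rng.uniform(-1, 1, (n, m))   # w_ij: input node i to hidden node j
v = rng.uniform(-1, 1, (n, n))   # v: recurrent context c to hidden layer
q = rng.uniform(-1, 1, (n, n))   # q: recurrent context s to hidden layer
z = rng.uniform(-1, 1, n)        # z_j: hidden node j to the output block

c = np.zeros(n)   # c_jt: first context layer (previous hidden outputs)
s = np.zeros(n)   # s_jt: second context layer (previous c values)
      </preformat>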
      <p>The hidden layer works as follows: the inputs of all neurons in the hidden layer are given by

NET_jt(k) = Σ_{i=1}^{m} w_ij x_it(k-1) + Σ_{i=1}^{n} v_ij c_it(k) + Σ_{i=1}^{n} q_ij s_it(k), (3)

where c_jt(k) = u_jt(k-1) and s_jt(k) = c_jt(k-1), i = 1, 2, ..., n, j = 1, 2, ..., m.</p>
      <p>The outputs of the hidden neurons are obtained from the expression

u_jt(k) = f_H( Σ_{i=1}^{m} w_ij x_it(k) + Σ_{i=1}^{n} v_ij c_it(k) + Σ_{i=1}^{n} q_ij s_it(k) ), (4)

where the sigmoid function f_H(x) = 1 / (1 + e^(-x)) is selected as the activation function of the hidden layer.</p>
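      <p>A sketch of this hidden-layer computation is given below; it reuses the parameter shapes from the previous fragment and treats expressions (3)-(4) as one time step of the forward pass (an interpretation of the reconstructed formulas, not code taken from the article):</p>
      <preformat>
import numpy as np

def f_H(x):
    """Sigmoid activation of the hidden layer, f_H(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def hidden_step(x_t, state, params):
    """One step of the modified hidden layer, expressions (3)-(4)."""
    w, v, q = params["w"], params["v"], params["q"]
    c, s = state["c"], state["s"]
    net = w @ x_t + v @ c + q @ s                 # NET_jt, expression (3)
    u = f_H(net)                                  # hidden outputs u_jt, expression (4)
    new_state = {"c": u.copy(), "s": c.copy()}    # c_jt(k) = u_jt(k-1), s_jt(k) = c_jt(k-1)
    return u, net, new_state
      </preformat>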
      <p>The output signal of the network is defined as follows:

y_{t+1} = f_T( Σ_{j=1}^{n} z_j u_jt ), (5)

where z_j are the weights connecting the hidden layer to the output block and f_T(x) is the activation function of the output layer.</p>
      <p>Figure 2: Architecture of the modified Elman recurrent neural network (inputs x_1t, x_2t, x_3t, ..., hidden outputs u_jt, output weights z_j, network output y_{t+1}).</p>
      <p>4. An algorithm for training an Elman network with stochastic time efficiency</p>
      <p>
        Stochastic time efficiency is the minimization of time when training a neural network (in our case,
an Elman network) based on error correction (supervised learning). The backpropagation algorithm is a
supervised learning algorithm that minimizes the global error using the gradient descent method [
        <xref ref-type="bibr" rid="ref18">20</xref>
        ].
For the model of stochastic time efficiency of the Elman network, we assume that the resulting output
error is e_tn = d_tn - y_tn, and the error for sample n is defined as:
      </p>
      <p>E(t_n) = 0.5 φ(t_n) (d_tn - y_tn)^2, (6)

where t_n is the response time of sample n (n = 1, 2, ..., N), d_tn is the actual value, y_tn is the output at
time t_n, and φ(t_n) is the effective function of stochastic time. Let us define φ(t_n) in this way:

φ(t_n) = (1/β) exp( ∫_{t_0}^{t_n} μ(t) dt + ∫_{t_0}^{t_n} σ(t) dB(t) ), (7)

where β is the time strength coefficient, μ(t) is the drift function, σ(t) is the function that characterizes the
trend of changes in the network state, B(t) is standard Brownian motion, and the effective time function of
the data is considered as a function of the time variable. The corresponding error over all the data on which
the network is retrained is then determined as

E = (1/N) Σ_{n=1}^{N} E(t_n) = (1/(2N)) Σ_{n=1}^{N} φ(t_n) (d_tn - y_tn)^2. (8)</p>
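      <p>To make formulas (6)-(8) concrete, a sketch of the stochastic time effective weighting is shown below; the constant drift mu, constant volatility sigma, the value of beta and the discretization of the Brownian integral are illustrative assumptions:</p>
      <preformat>
import numpy as np

def phi(t_n, t0=0.0, beta=1.0, mu=0.05, sigma=0.1, n_steps=100, rng=np.random.default_rng(0)):
    """Effective function of stochastic time, formula (7), with constant mu(t) and sigma(t)."""
    dt = (t_n - t0) / n_steps
    drift = mu * (t_n - t0)                                             # integral of mu(t) dt
    diffusion = sigma * np.sum(rng.normal(0.0, np.sqrt(dt), n_steps))   # integral of sigma(t) dB(t)
    return np.exp(drift + diffusion) / beta

def weighted_error(t, d, y):
    """Total error over N samples, formula (8): mean of 0.5 * phi(t_n) * (d_n - y_n)^2."""
    t, d, y = map(np.asarray, (t, d, y))
    return np.mean([0.5 * phi(tn) * (dn - yn) ** 2 for tn, dn, yn in zip(t, d, y)])

# usage: later samples (larger t_n) tend to receive larger weights when mu is positive
E = weighted_error(t=[1.0, 2.0, 3.0], d=[0.3, 0.5, 0.7], y=[0.25, 0.55, 0.6])
      </preformat>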
      <p>The main point of the learning algorithm is to minimize the value of the network route congestion
function by repeated training until it reaches the specified minimum value ξ. At each repetition, the
value of the network route congestion function is calculated and the global error is obtained. The gradient
of the network route congestion function is defined as ∇E = ∂E/∂W. For nodes in the input layer, the
weight gradient Δw_ij is given by the formula

Δw_ij = -η ∂E(t_n)/∂w_ij = η φ(t_n) e_tn z_j f'_H(NET_jtn) x_itn;

for nodes in the recurrent layer, the weight gradients are given by the formulas

Δv_j = -η ∂E(t_n)/∂v_ij = η φ(t_n) e_tn z_j f'_H(NET_jtn) c_itn,

Δq_j = -η ∂E(t_n)/∂q_ij = η φ(t_n) e_tn z_j f'_H(NET_jtn) s_itn;

and for the weights of the hidden layer the gradient Δz_j is given by the formula

Δz_j = -η ∂E(t_n)/∂z_j = η φ(t_n) e_tn f_H(NET_jtn),

where η is the learning rate and f'_H(NET_jtn) is the derivative of the activation function.</p>
      <p>Based on this, the update rules for the weights w_ij, v_j, q_j and z_j are given by the formulas

w_ij^(k+1) = w_ij^k + Δw_ij^k,

v_j^(k+1) = v_j^k + Δv_j^k,

q_j^(k+1) = q_j^k + Δq_j^k,

z_j^(k+1) = z_j^k + Δz_j^k.</p>
      <p>
        The Elman neural network should change the weights to minimize the error between the network's
prediction and the prediction target. Such a procedure can be effectively implemented using the methods of
mathematical logic, in particular the method [
        <xref ref-type="bibr" rid="ref19">21</xref>
        ].
      </p>
      <p>
        The Elman network training algorithm includes the following steps [
        <xref ref-type="bibr" rid="ref18 ref20">20, 22, 23</xref>
        ]:
      </p>
      <p>Step 1. Normalize the input data. In the Elman neural network, we select 8 parameters as input values
in the input layer. Then we define the network parameters, such as the learning rate η, which is between
0 and 1, the maximum number of iterations and the initial weights.</p>
      <p>Step 2. First, the initial weights w_ij, v_j, q_j and z_j are drawn from a uniform distribution on the interval (-1, 1).</p>
      <p>Step 3. Stochastic time efficiency is introduced into the error function E through the function φ(t).
Select the drift function μ(t) and the function σ(t), a statistical indicator that characterizes the trend of
changes in the network state.</p>
      <p>Step 4. Set the minimum error ξ of route congestion. Based on the network training goal, the value of
the route congestion function is calculated.</p>
      <p>Step 5. Change the connecting weights: calculate the gradients Δw_ij, Δv_j, Δq_j and Δz_j of the
connecting weights, and then update the weights from the current iteration to the next one, obtaining
w_ij^(k+1), v_j^(k+1), q_j^(k+1), z_j^(k+1).</p>
      <p>When predicting the congestion of data transmission routes in a network, the problem of so-called
"dead neurons" may arise. One of the limitations of any competitive layer is that some neurons may never
be involved: neurons whose initial weight vectors are far removed from the input vectors never win the
competition, regardless of the length of training. As a result, such vectors are not used in training and the
corresponding neurons never win (they are dead). Therefore, in order to enable such neurons to win, the
learning algorithm provides for the possibility of a "winning neuron" losing its activity. For this purpose,
neuronal activity is recorded based on the calculation of the potential of each neuron in the process of
predicting the performance of data transmission routes and of training the neurons.</p>
      <p>First, the layer neurons are assigned a potential p_i(0) = 1/c, where c is the number of neurons
(clusters). Then:
• if the value of the potential p_i becomes less than the level p_min, then the neuron is excluded from
consideration;
• if p_min = 0, then the neuron is considered;
• if p_min = 1, the neurons win in turn, since in each cycle of searching for a "winning neuron" only
one of them is ready to be considered for the possibility of defeating the others.</p>
      <p>On the k-th training cycle, the potential is calculated according to the rule:

p_i(k) = p_i(k-1) + 1/c, if i ≠ j,
p_i(k) = p_i(k-1) - p_min, if i = j, (17)

where j is the number of the "winning neuron".</p>
      <p>After providing equal opportunities for neurons to win and calculating the error, the neuron with the
number k is determined by the formula

d_k = min_j d_j. (18)</p>
      <p>The potentials of the neurons of this layer are set according to the above rules (see formula (17)). The
output value of the layer is the total potential of all "winning neurons" according to the network
direction parameters based on the input values in the input layer.</p>
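      <p>A sketch of this potential mechanism for avoiding "dead neurons" is shown below; the value of p_min and the distance-based winner selection are illustrative assumptions:</p>
      <preformat>
import numpy as np

def update_potentials(p, winner, p_min=0.1):
    """Potential update, formula (17): losers gain 1/c, the winner pays p_min."""
    c = p.size
    p = p + 1.0 / c          # all neurons i != j gain 1/c
    p[winner] -= 1.0 / c     # undo the gain for the winner ...
    p[winner] -= p_min       # ... and apply p_j(k) = p_j(k-1) - p_min
    return p

def pick_winner(distances, p, p_min=0.1):
    """Choose the closest neuron among those whose potential is still at least p_min."""
    d = np.where(p >= p_min, distances, np.inf)   # exclude neurons with low potential
    return int(np.argmin(d))                      # formula (18): d_k = min_j d_j

c = 5
p = np.full(c, 1.0 / c)            # initial potentials p_i(0) = 1/c
distances = np.array([0.4, 0.2, 0.9, 0.5, 0.3])
j = pick_winner(distances, p)
p = update_potentials(p, j)
      </preformat>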
      <p>Step 6. The value of the route congestion function is calculated:

y_{t+1} = f_T( Σ_{j=1}^{n} z_j f_H( Σ_{i=1}^{m} w_ij x_it + Σ_{i=1}^{n} v_ij c_it + Σ_{i=1}^{n} q_ij s_it ) ). (19)

The learning process ends when this value is equal to the specified minimum value.</p>
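      <p>Finally, a compact sketch of a training loop corresponding to Steps 1-6 is given below; it reuses hidden_step, phi and update_step from the fragments above, and the stopping threshold xi, the learning rate and the epoch count are illustrative assumptions:</p>
      <preformat>
import numpy as np

def f_T(x):
    return 1.0 / (1.0 + np.exp(-x))     # output-block activation (assumed sigmoid)

def train(samples, targets, params, xi=1e-3, eta=0.1, max_epochs=200):
    """Steps 1-6: iterate until the congestion error falls below the minimum value xi."""
    for epoch in range(max_epochs):
        state = {"c": np.zeros(params["v"].shape[1]), "s": np.zeros(params["q"].shape[1])}
        errors = []
        for t_n, (x_t, d) in enumerate(zip(samples, targets), start=1):
            c_prev, s_prev = state["c"], state["s"]
            u, _net, state = hidden_step(x_t, state, params)          # expressions (3)-(4)
            y = f_T(params["z"] @ u)                                  # formula (19), predicted congestion
            params = update_step(params, x_t, c_prev, s_prev, u, y, d, t_n, eta)
            errors.append(0.5 * phi(t_n) * (d - y) ** 2)              # formula (6)
        if xi >= np.mean(errors):                                     # Step 6: stop at the minimum error
            break
    return params
      </preformat>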
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>A method for predicting the overload of data transmission routes in a telecommunications network
based on a modified Elman neural network is presented. The peculiarity of this method is that it takes into
account the characteristics of the network by calculating the potentials of the network neurons.</p>
      <p>This makes it possible to increase the accuracy and speed of predicting route congestion in the
network by increasing the network capacity and reducing the computational complexity of the neural
network. The work of the Elman network algorithm with stochastic time efficiency is considered.</p>
      <p>The proposed method for predicting the congestion of data transmission routes in a
telecommunications network can also be used to predict other computer network states such as data
throughput and delay.</p>
      <p>6. References</p>
      <p>[1] The Law of Ukraine on Electronic Communications No. 1089-IX, 16.12.2020.</p>
      <p>[2] Bovda E.M., Romaniuk V.A., Pluhovyi Y.A. Conceptual bases of synthesis of an automated
communication control system for military purposes. Collection of scientific works of VITI, 2016,
No. 1, pp. 6-18.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[3] Stallings W. Network Security Essentials (2nd Edition). Prentice Hall, 2002. 432 p.; Chubukova O., Ruban V., Antoshkina L., et al. Economic cybernetics: a textbook. YugoVostok, 2014. 454 p.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[4] Kaufman C., Perlman R., Speciner M. Network Security: Private Communications in a Public World. Pearson Education, Limited, 2021. 752 p.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[5] Aggarwal C. C. Neural Networks and Deep Learning. Cham: Springer International Publishing, 2023. URL: https://doi.org/10.1007/978-3-031-29642-0 (date of access: 12.02.2024).</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[6] Anand A., Ram M. Systems Performance Modeling. Berlin, Boston: De Gruyter, 2021. 181 p.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[7] Additive Manufacturing, Design, Functionally Graded Additive Manufacturing. 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959: ASTM International, 2021. URL: https://doi.org/10.1520/iso/astmtr52912-eb.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[8] Divitsky A.S., Borovyk L.V., Salnyk S.V., Gol V.D. Analysis of methods for predicting changes in data transmission routes in wireless self-organized networks. Collection of scientific papers of Kharkiv National Air Force University, 2020, 1(63).</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[9] Divitskyi A., Salnyk S., Hol V., Storchak A. Method of identification of data routes in wireless self-organized networks. Information Technology and Security, 2021. URL: https://doi.org/10.20535/2411-1031.2021.9.1.249839.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[10] Chen C. H. Handbook of Pattern Recognition and Computer Vision. 5th ed. University of Massachusetts Dartmouth, USA: World Scientific, 2020. 584 p.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[11] Awan Z. K., Khan A., Iftikhar A. Hybrid Neural Networks: From Application Point of View. LAP Lambert Academic Publishing, 2012.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[12] Chen Y., Kak S., Wang L. Hybrid neural network architecture for on-line learning. Intelligent Information Management, 2010. Vol. 2. P. 253-261.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[13] Wan L., Zhu L., Fergus R. A hybrid neural network-latent topic model. Proc. of the 15th Intern. Conf. on Artificial Intelligence and Statistics (AISTATS). La Palma, Canary Islands, 2012. Vol. 22. P. 1287-1294.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[14] Zhong X., Enke D. Predicting the daily return direction of the stock market using hybrid machine learning algorithms. Financial Innovation, 2019, vol. 5(1), pp. 1-20.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[15] Karpenko Y. Formation of the Enterprise Strategy based on the Industry Life Cycle. Independent Journal of Management &amp; Production, 2021. Vol. 12, no. 3. P. s262-s280. URL: https://doi.org/10.14807/ijmp.v12i3.1537.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[16] Borah S., Panigrahi R. Applied Soft Computing: Techniques and Applications. Florida, United States: Apple Academic Press, 2022. 286 p.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[17] Elman J.L. Finding Structure in Time. Cognitive Science, 1990, Vol. 14. P. 179-211.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[18] Lorentz C. Supervised Learning Techniques. Time Series Forecasting. Examples with Neural Networks and MATLAB. Independently Published, 2020. 277 p.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[19] Strong C., Barrett C., Liu C., Arnon T., Lazarus C., et al. Algorithms for Verifying Deep Neural Networks. Stanford University, USA: Foundations and Trends in Optimization, 2021. 404 p.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[20] Samokhvalov Y.Y. Problem-oriented theorem-proving method in fuzzy logic (pomethod). Cybern Syst Anal, 1995, Vol. 31. P. 682-690.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[21] Rotstein A.P. Intelligent identification technologies. Vinnytsia: Universum-Vinnytsia, 1999. 320 p.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[22] Ghiasi M., Niknam T., Wang Z., Dehghani M., Ghadimi N., Mehrandezh M. Electric Power Systems Research. University of Lisbon Higher Technical Institute, Lisboa, Portugal, 2023. URL: https://doi.org/10.1016/j.epsr.2022.108975.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>