<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Prediction of Data Transmission Route Congestion in Telecommunication Systems Based on a Modified Elman Neural Network</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Eduard</forename><surname>Bovda</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Military Institute of Telecommunications and Informatization named after Heroes of Kruty</orgName>
								<address>
									<addrLine>Knyaziv Ostrozkyh Street 45/1</addrLine>
									<postCode>01011</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yuriy</forename><surname>Samokhvalov</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Taras Shevchenko National University</orgName>
								<address>
									<addrLine>Volodymyrska Street 64/13</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Prediction of Data Transmission Route Congestion in Telecommunication Systems Based on a Modified Elman Neural Network</title>
					</analytic>
					<monogr>
						<title level="m">Information Technology and Implementation (IT&amp;I-2023)</title>
						<meeting>Information Technology and Implementation (IT&amp;I-2023), November 20-21, 2023<address><addrLine>Kyiv, Ukraine</addrLine></address></meeting>
						<imprint>
							<date type="published" when="2023">2023</date>
						</imprint>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">A230F149FCDFD28E37944B627B2188AF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T20:01+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Data transmission routes</term>
					<term>forecasting</term>
					<term>telecommunication network</term>
					<term>neural network</term>
					<term>Elman network</term>
					<term>stochastic time efficiency</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article analyzes existing approaches and methods for forecasting abnormal situations in telecommunication systems. The importance of the problem of forecasting the congestion of data transmission routes is shown, and the Elman neural network is proposed for its solution. A modification of this network is given, together with a method of predicting the congestion of data transmission routes in the telecommunications network based on it. The method makes it possible to increase the accuracy and speed of forecasting route congestion in the network by increasing the network bandwidth and reducing the complexity of the calculations.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Telecommunication networks form the basis of modern distributed systems; they are complex technical systems that usually operate in dynamic environments <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. At the same time, the management of such networks must ensure data transmission with a given quality <ref type="bibr" target="#b1">[2]</ref>. Given this, telecommunication network management systems should include a subsystem for predicting abnormal situations (overload of data transmission routes, errors, etc.), which allows the network administrator to take timely preventive measures. Forecasting the state of a telecommunication network is therefore an important network administration task.</p><p>A great deal of research has been devoted to predicting the states of complex technical systems, most often using the following methods and approaches. In <ref type="bibr" target="#b2">[3]</ref>, a method for predicting computer network states based on biometric algorithms is considered; <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref> use the method of temporal extrapolation; <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref> use the method of spatial extrapolation; <ref type="bibr" target="#b8">[9]</ref> uses causal relationships and expert methods; and <ref type="bibr" target="#b9">[10]</ref> proposes a method in which data on the behavior of an object, whose features are related to time, are presented as the results of observations at uniform time intervals and represented by a time series. The method of paired comparisons, considered in <ref type="bibr" target="#b10">[11]</ref>, can also be used. In addition, neural network approaches have recently been widely and effectively used to predict the states of telecommunication networks; such approaches are discussed in <ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref><ref type="bibr" target="#b14">[15]</ref>. Papers <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14]</ref> consider neural networks that obtain the desired results without human intervention and at low computational cost; <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref> consider hybrid neural networks that assess and predict the state of computer networks with high classification accuracy for both the current and the predicted state; and <ref type="bibr" target="#b16">[17]</ref> considers the use of a probabilistic neural network for classifying and predicting the state of the network transport environment.</p><p>Since the forecasting problem is a special case of the regression problem, the following types of neural networks can be used to solve it: multilayer perceptrons, radial basis networks, generalized regression networks, Volterra networks, and Elman networks. The analysis in <ref type="bibr" target="#b17">[18]</ref> of such networks as applied to forecasting problems indicates that time-series forecasting based on the Elman neural network is the most expedient choice.</p><p>At the same time, the direct use of this network increases the load on the telecommunication network as a whole, as well as the computational complexity, which makes real-time prediction of the network state impossible. Therefore, the question arises of creating a model that solves this problem.</p><p>The article proposes a modification of the Elman network and, based on it, a method for predicting the overload of telecommunication network data transmission routes that enables effective management of a telecommunication network under high dynamics and complex connections between nodes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Modified Elman neural network</head><p>The Elman neural network is a type of recurrent network: a multilayer perceptron with feedback. The feedback makes it possible to take previous states into account and to accumulate information to support management decision-making based on time-series forecasting. In other words, time-series forecasting is reduced to the task of interpolation (determining intermediate values) of a function of many variables and to the problem of approximation (reduction to a simplified form) of a multidimensional function, which inherently affects the quality of forecasting. Figure <ref type="figure" target="#fig_0">1</ref> shows a diagram of the Elman neural network, which consists of three layers: the input (distribution) layer, the hidden layer, and the output (processing) layer. The hidden layer has feedback onto itself <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>. In this diagram, X is the input of the neural network; Y is the output of the neural network; C is the context state for the input X; W is the weight matrix of the input layer; V is the weight matrix of the hidden-layer feedback; U is the weight matrix connecting the output of the hidden layer with the input of the output layer; H is the hidden layer of neurons, where each input X is connected to each neuron of the hidden layer; and O is the output layer of neurons.</p><p>In the Elman network, the forecasting process is modeled as the output signal of a nonlinear dynamic system that depends on a number of factors, including past states of the system. Elman proposed introducing an additional feedback layer into the network, called the contextual or state layer. This layer receives signals from the output of the hidden layer and, through the delay elements C, feeds them back to the input layer, thus preserving the processed information from previous cycles within the network <ref type="bibr" target="#b17">[18]</ref>.</p><p>Unlike a conventional feed-forward network, the input image of a recurrent network is not a single vector but a sequence of input vectors fed to the input in a given order, with the new state of the hidden layer depending on its previous states. The Elman network can then be described in matrix form by the following relations:</p><formula xml:id="formula_0">Y_t = F(U \times F(W \times X_t + V \times C_t)) \quad (1)
C_t = F(W \times X_{t-1} + V \times C_{t-1}) \quad (2)</formula><p>where X_t is the input signal; Y_t is the output of the neural network; C_t is the context state at iteration t for the input X; W is the weight matrix of the input layer; V is the weight matrix of the hidden-layer feedback; U is the weight matrix connecting the output of the hidden layer with the input of the output layer; X_{t-1} is the input signal at the previous iteration; C_{t-1} is the context state at the previous iteration; and F is the vector activation function.</p><p>A telecommunication network can be considered as a set of its elements: information directions, routes, nodes, channels, and service quality characteristics. 
Therefore, the input of the neural network is a set of parameters of the network's information directions in the form of an input signal X_t = {x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8}, where x_1 is the type of traffic being transmitted (voice, video, data); x_2 is the volume of service traffic between nodes; x_3 is the information throughput capacity; x_4 is the packet delay in the information direction; x_5 is the jitter value in the information direction; x_6 is the quality of the routes between nodes; x_7 is the number of packets with errors (IPER); and x_8 is the number of lost packets (IPLR). The input signal is formed as a result of monitoring the elements of the telecommunication network.</p><p>The output of the neural network, Y_t, is an output neuron (adder) that calculates the deviation of the values detected by the neurons from the value corresponding to the normal operating state of the telecommunication network routes.</p></div>
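<div xmlns="http://www.tei-c.org/ns/1.0"><p>As an illustration of relations (1) and (2), the following minimal NumPy sketch performs one forward pass for the eight monitored route parameters. The layer sizes, the random weights, and the sample monitoring vector are assumptions made for the demonstration and are not taken from the article.</p><code lang="python">
# Minimal sketch of the Elman forward pass, equations (1)-(2).
# Dimensions and input values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 12                        # 8 route parameters x1..x8
W = rng.uniform(-1, 1, (n_hidden, n_in))      # input-layer weights W
V = rng.uniform(-1, 1, (n_hidden, n_hidden))  # context feedback weights V
U = rng.uniform(-1, 1, (1, n_hidden))         # hidden-to-output weights U

def F(a):
    """Sigmoidal activation applied element-wise."""
    return 1.0 / (1.0 + np.exp(-a))

def elman_step(x_t, c_t):
    """One iteration of (1)-(2): output y_t and the next context state."""
    h_t = F(W @ x_t + V @ c_t)   # hidden-layer state
    y_t = F(U @ h_t)[0]          # network output, equation (1)
    return y_t, h_t              # h_t becomes the context C for the next step

# One monitoring sample: [traffic type code, service traffic volume,
# throughput, delay, jitter, route quality, IPER, IPLR] -- made-up values.
x = np.array([1.0, 0.4, 0.8, 0.05, 0.02, 0.9, 0.001, 0.003])
c = np.zeros(n_hidden)           # initial context state
y, c = elman_step(x, c)
print("predicted deviation from the operating state:", y)
</code></div>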
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Prediction of telecommunication network route overload using a modified Elman network</head><p>The essence of the forecasting is to determine the value of congestion of telecommunication network routes, calculated from the parameters of the information direction, in order to meet the requirements of network optimization and quality of service for packets of various types. The architecture of the modified Elman recurrent neural network is shown in Fig. <ref type="figure" target="#fig_2">2</ref>.</p><p>Figure <ref type="figure" target="#fig_2">2</ref> shows a multi-input Elman network with m neurons in the input layer, n neurons in the hidden layer, and one output block. Here w_{ij} are the weights connecting neuron i of the input layer to node j of the hidden layer, while v_j and q_j are the weights that connect unit j of the hidden layer with a node of the recurrent layer.</p><p>The net inputs of all neurons in the hidden layer are given by:</p><formula xml:id="formula_1">NET_{jt}(k) = \sum_{i=1}^{m} w_{ij}\, x_{it}(k) + \sum_{j=1}^{n} v_{ij}\, c_{it}(k-1) + \sum_{g=1}^{l} q_{ij}\, s_{it}(k) \quad (3)
where \; c_{ji}(k) = u_{jt}(k-1), \; i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, m; \quad q_{ji}(k) = c_{jt}(k-1), \; i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, m.</formula><p>The outputs of the hidden neurons are obtained from the expression:</p><formula xml:id="formula_2">u_{jt}(k) = f_H\!\left(\sum_{i=1}^{m} w_{ij}\, x_{it}(k) + \sum_{j=1}^{n} v_{ij}\, c_{it}(k) + \sum_{g=1}^{l} q_{ij}\, s_{it}(k)\right) \quad (4)</formula><p>where the sigmoidal function f_H(x) = 1/(1 + e^{-x}) is selected as the activation function in the hidden layer.</p></div>
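<div xmlns="http://www.tei-c.org/ns/1.0"><p>A hedged sketch of the modified hidden layer of formulas (3)-(4) follows: each hidden neuron combines the inputs x, the context signal c, and the additional recurrent signal s through the weight sets w, v, and q. The shapes and the origin of s are assumptions made for illustration only.</p><code lang="python">
# Sketch of the modified hidden layer, equations (3)-(4).
import numpy as np

rng = np.random.default_rng(1)
m, n, l = 8, 12, 12               # input, hidden, extra recurrent sizes (assumed)
w = rng.uniform(-1, 1, (n, m))    # w_ij: input layer -> hidden layer
v = rng.uniform(-1, 1, (n, n))    # v_ij: context (previous hidden outputs) -> hidden
q = rng.uniform(-1, 1, (n, l))    # q_ij: second recurrent connection -> hidden

def f_H(a):
    """Sigmoid chosen in the article as the hidden-layer activation."""
    return 1.0 / (1.0 + np.exp(-a))

def hidden_layer(x_k, c_k, s_k):
    """Net input (3) and hidden output (4) at training cycle k."""
    net = w @ x_k + v @ c_k + q @ s_k    # equation (3)
    return f_H(net)                      # equation (4)

# Usage with a random input and zero recurrent signals:
u = hidden_layer(rng.random(m), np.zeros(n), np.zeros(l))
</code></div>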
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">An algorithm for training an Elman network with stochastic time efficiency</head><p>Stochastic time efficiency means minimizing time when training a neural network (in our case, an Elman network) based on error correction (supervised learning). The backpropagation algorithm is a supervised learning algorithm that minimizes the global error using the gradient descent method <ref type="bibr" target="#b19">[20]</ref>. For the stochastic time efficiency model of the Elman network, we assume that the resulting output error is</p><formula xml:id="formula_3">e_{t_n} = d_{t_n} - y_{t_n}</formula><p>and the error of sample n is defined as:</p><formula xml:id="formula_4">E_n(t_n) = 0.5\,\varphi(t_n)\left(d_{t_n} - y_{t_n}\right)^2 \quad (5)</formula><p>where</p><formula xml:id="formula_5">\varphi(t) = \frac{1}{1 + \exp\left(\mu(t) + \sigma(t)\right)} \quad (7)</formula><p>is the effective time function of the data, considered as a function of the time variable. The corresponding error over all the data on which the network is retrained is then determined as:</p><formula xml:id="formula_6">E = \frac{1}{N}\sum_{n=1}^{N} E_n(t_n) = \frac{1}{2N}\sum_{n=1}^{N}\varphi(t_n)\left(d_{t_n} - y_{t_n}\right)^2 \quad (8)</formula><p>The main point of the learning algorithm is to minimize the value of the network route congestion function by repeated training until it reaches the specified minimum value ξ. At each repetition, the value of the network route congestion function is calculated and the global error is obtained. The gradient of the network route congestion function is defined as</p><formula xml:id="formula_7">\nabla E = \partial E / \partial W.</formula><p>For nodes in the input layer, the weight gradient \Delta w_{ij} is given by the formula:</p><formula xml:id="formula_8">\Delta w_{ij}^{n} = -\eta\,\frac{\partial E_n(t)}{\partial w_{ij}} = \eta\, z_j(t)\, f'_H\!\left(NET_{jt}^{n}\right) x_{it}^{n} \quad (9)</formula><p>for nodes in the recurrent layer, the weight gradients are given by the formulas:</p><formula xml:id="formula_9">\Delta v_{j}^{n} = -\eta\,\frac{\partial E_n(t)}{\partial v_{ij}} = \eta\, z_j(t)\, f'_H\!\left(NET_{jt}^{n}\right) c_{it}^{n}, \quad (10)</formula><formula xml:id="formula_10">\Delta q_{j}^{n} = -\eta\,\frac{\partial E_n(t)}{\partial q_{ij}} = \eta\, q_j(t)\, f'_H\!\left(NET_{jt}^{n}\right) s_{it}^{n},</formula><p>and for weight nodes in the hidden layer, the weight gradient \Delta z_j is given by the formula:</p><formula xml:id="formula_11">\Delta z_{j}^{n} = -\eta\,\frac{\partial E_n(t)}{\partial z_j} = \eta\, z_j(t)\, f_H\!\left(NET_{jt}^{n}\right), \quad (11)</formula><p>where η is the learning rate. Based on this, the update rules for the weights w_{ij}, v_j, q_j and z_j are given by the formulas:</p><formula xml:id="formula_13">w_{ij}^{k+1} = w_{ij}^{k} + \Delta w_{ij}^{k} \quad (12)
v_{j}^{k+1} = v_{j}^{k} + \Delta v_{j}^{k} \quad (13)
q_{j}^{k+1} = q_{j}^{k} + \Delta q_{j}^{k} \quad (14)
z_{j}^{k+1} = z_{j}^{k} + \Delta z_{j}^{k} \quad (15)</formula><p>The Elman neural network should change the weights so as to minimize the error between the network's prediction and the prediction target. Such a procedure can be effectively implemented using the methods of mathematical logic, in particular the method of <ref type="bibr" target="#b20">[21]</ref>.</p><p>The Elman network training algorithm includes the following steps <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr">23]</ref>:</p><p>Step 1. Normalize the input data. In the Elman neural network, we select 8 parameters as input values in the input layer. Then we define the network parameters, such as the learning rate η, which lies between 0 and 1, the maximum number of iterations, and the initial weights.</p><p>Step 2. 
The initial weights w_{ij}, v_j, q_j and z_j are drawn from a uniform distribution on the interval (-1, 1).</p><p>Step 3. Introduce the stochastic time efficiency through the function \varphi(t) of formula (7), applied to the error function E: select the drift function \mu(t) and the function of a static indicator that characterizes the trend of changes in the network state, \sigma(t).</p><p>Step 4. Set the minimum route congestion error ξ. Based on the network training goal, the value of the route congestion function is calculated:</p><formula xml:id="formula_14">E = \frac{1}{N}\sum_{n=1}^{N} E_n(t) \quad (16)</formula><p>If the value of E is greater than the specified minimum error, we go to Step 5; otherwise, we go to Step 6.</p><p>Step 5. Change the connection weights: calculate the gradients of the connection weights \Delta w_{ij}, \Delta v_j, \Delta q_j, \Delta z_j by formulas (9)-(11). Then the weights are updated from the current layer to the previous layer according to formulas (12)-(15), giving w_{ij}^{k+1}, v_j^{k+1}, q_j^{k+1}, z_j^{k+1}.</p><p>When predicting the congestion of data transmission routes in a network, the problem of so-called "dead neurons" may arise. One of the limitations of any competitive layer is that some neurons may never be involved: neurons whose initial weight vectors are far removed from the input vectors never win the competition, regardless of the length of training. As a result, such vectors are not used in training, and the corresponding neurons never win (they are dead). Therefore, to enable such neurons to win, the learning algorithm provides for the possibility of a "winning neuron" losing its activity. For this purpose, neuronal activity is tracked by calculating the potential of each neuron in the process of predicting the performance of data transmission routes and training the neurons.</p><p>First, the neurons of the layer are assigned the potential p_i(0) = 1/c, where c is the number of neurons (clusters). Then:</p><p>• if the value of the potential p_i becomes less than the level p_{min}, the neuron is excluded from consideration;</p><p>• if p_{min} = 0, all neurons remain under consideration;</p><p>• if p_{min} = 1, the neurons win in turn, since in each cycle of searching for a "winning neuron" only one of them is ready to be considered for the possibility of defeating the others.</p><p>On the k-th training cycle, the potential is calculated according to the rule:</p><formula xml:id="formula_19">p_i(k) = \begin{cases} p_i(k-1) + \dfrac{1}{c}, &amp; i \neq j, \\ p_i(k-1) - p_{min}, &amp; i = j, \end{cases} \quad (17)</formula><p>where j is the number of the "winning neuron".</p><p>After the neurons have been given equal opportunities to win and the error has been calculated, the winning neuron with number k is determined by the formula:</p><formula xml:id="formula_21">d_k = \min_j d_j \quad (18)</formula><p>The neurons of this layer form sets that are used according to the above rules (see formula (17)). The output value of the layer is the total potential of all "winning neurons" according to the network direction parameters, based on the input values in the input layer.</p><p>Step 6. The value of the route congestion function is calculated:</p><formula xml:id="formula_23">y_t = f_T\!\left(\sum_{j=1}^{m} v_j\, f_H\!\left(\sum_{i=1}^{m} w_{ij} x_{it} + \sum_{j=1}^{n} z_j c_{jt} + \sum_{g=1}^{l} q_j s_{jt}\right)\right) \quad (19)</formula><p>The learning process ends when this value is equal to the specified minimum value. A schematic code sketch of one training iteration is given below.</p></div>
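<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make Steps 1-6 concrete, the sketch below performs one training iteration for a single-output network: the sample error (5) is weighted by a time-effective function φ(t), the weights are corrected following the gradient rules (9)-(15), and the neuron potentials are updated by rule (17). The concrete form of φ(t), with drift μ(t) and indicator σ(t) taken as constants, and the simplified network without the q-term are assumptions made for illustration, not the article's exact choices.</p><code lang="python">
# Hedged sketch of one training iteration of the algorithm in section 4.
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 12
w = rng.uniform(-1, 1, (n, m))   # Step 2: initial weights drawn on (-1, 1)
v = rng.uniform(-1, 1, (n, n))
z = rng.uniform(-1, 1, n)        # output weights z_j
eta = 0.1                        # learning rate, between 0 and 1 (Step 1)

def f(a):
    return 1.0 / (1.0 + np.exp(-a))

def df(a):
    return f(a) * (1.0 - f(a))   # derivative of the sigmoid

def phi(t, mu=0.01, sigma=0.05):
    """Assumed time-effective weighting in the spirit of equation (7)."""
    return 1.0 / (1.0 + np.exp(mu * t + sigma))

def train_step(w, v, z, x, c, d, t):
    """One gradient step on the time-weighted error E_n(t), Steps 4-5."""
    net = w @ x + v @ c                  # equation (3) without the q-term
    u = f(net)                           # hidden outputs, equation (4)
    y = z @ u                            # network output
    e = d - y                            # output error d_t - y_t
    E_n = 0.5 * phi(t) * e ** 2          # time-weighted sample error (5)
    delta = phi(t) * e                   # common backpropagated factor
    grad_hidden = delta * z * df(net)    # error routed through the old z_j
    z = z + eta * delta * u                      # update per (15)
    w = w + eta * np.outer(grad_hidden, x)       # update per (12)
    v = v + eta * np.outer(grad_hidden, c)       # update per (13)
    return w, v, z, E_n

def update_potentials(p, j, p_min=0.75):
    """Rule (17): losers gain 1/c, the winning neuron j loses p_min."""
    c_num = p.size
    out = p + 1.0 / c_num
    out[j] = p[j] - p_min     # neurons whose potential falls below p_min
    return out                # are excluded from the next competition

# Usage: one iteration with a made-up sample and target, then rule (17).
w, v, z, E_n = train_step(w, v, z, rng.random(m), np.zeros(n), 0.2, t=1)
p = update_potentials(np.full(n, 1.0 / n), j=3)   # p_i(0) = 1/c
</code></div>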
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>A method for predicting the overload of data transmission routes in a telecommunications network based on a modified Elman neural network has been presented. The peculiarity of this method is that it takes the characteristics of the network into account by calculating the potentials of the network neurons.</p><p>This makes it possible to increase the accuracy and speed of predicting route congestion in the network by increasing the usable network capacity and reducing the computational complexity of the neural network. The operation of the Elman network training algorithm with stochastic time efficiency has also been examined.</p><p>The proposed method for predicting the congestion of data transmission routes in a telecommunications network can also be used to predict other computer network states, such as data throughput and delay.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Schematic of the Elman neural network</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Elman network for predicting route congestion</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">The Law of Ukraine on Electronic Communications</title>
		<imprint>
			<date type="published" when="2020">1089. 2020</date>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page">12</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Bovda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Romaniuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">A</forename><surname>Pluhovyi</surname></persName>
		</author>
		<title level="m">Conceptual bases of synthesis of an automated communication control system for military purposes</title>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="6" to="18" />
		</imprint>
	</monogr>
	<note>// Collection of scientific works of VITI</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Network Security Essentials</title>
		<author>
			<persName><forename type="first">W</forename><surname>Stallings</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Economic cybernetics: a textbook</title>
				<editor>
			<persName><forename type="first">O</forename><surname>Chubukova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Ruban</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Antoshkina</surname></persName>
		</editor>
		<imprint>
			<publisher>YugoVostok</publisher>
			<date type="published" when="2002">2002. 2014</date>
			<biblScope unit="page">454</biblScope>
		</imprint>
	</monogr>
	<note>2nd Edition</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Network Security: Private Communications in a Public World</title>
		<author>
			<persName><forename type="first">C</forename><surname>Kaufman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Perlman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Speciner</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Pearson Education, Limited</publisher>
			<biblScope unit="page">752</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Neural Networks and Deep Learning</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-29642-0</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
			<publisher>Springer International Publishing</publisher>
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Systems Performance Modeling</title>
		<author>
			<persName><forename type="first">Adarsh</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mangey</forename><surname>Ram</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>De Gruyter</publisher>
			<biblScope unit="page">181</biblScope>
			<pubPlace>Berlin, Boston</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Additive Manufacturing, Design, Functionally Graded Additive Manufacturing</title>
		<idno type="DOI">10.1520/iso/astmtr52912-eb</idno>
		<ptr target="https://doi.org/10.1520/iso/astmtr52912-eb" />
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>ASTM International</publisher>
			<biblScope unit="volume">100</biblScope>
			<pubPlace>West Conshohocken, PA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Analysis of methods for predicting changes in data transmission routes in wireless self-organized networks</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Divitsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">V</forename><surname>Borovyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Salnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">D</forename><surname>Gol</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Collection of papers of Kharkiv National Air Force University</title>
		<imprint>
			<biblScope unit="issue">63</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Method of identification of data routes in wireless self-organized networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Divitskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Salnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Hol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Storchak</surname></persName>
		</author>
		<idno type="DOI">10.20535/2411-1031.2021.9.1.249839</idno>
		<ptr target="https://doi.org/10.20535/2411-1031.2021.9.1.249839" />
	</analytic>
	<monogr>
		<title level="m">Information Technology and Security</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
		<respStmt>
			<orgName>Information Technology and Security</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Handbook of Pattern Recognition and Computer Vision</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Chen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>WORLD SCIENTIFIC</publisher>
			<biblScope unit="page">584</biblScope>
			<pubPlace>USA</pubPlace>
		</imprint>
		<respStmt>
			<orgName>University of Massachusetts Dartmouth</orgName>
		</respStmt>
	</monogr>
	<note>5th ed</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Hybrid Neural Networks: From Application Point of View</title>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">K</forename><surname>Awan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Iftikhar</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>LAP Lambert Academic Publishing</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Hybrid neural network architecture for on-line learning // Intelligent Information Management</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="253" to="261" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A Hybrid neural network-latent topic model</title>
		<author>
			<persName><forename type="first">L</forename><surname>Wan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 15th Intern. Conf. on Artificial Intelligence and Statistics (AISTATS)</title>
				<meeting>of the 15th Intern. Conf. on Artificial Intelligence and Statistics (AISTATS)<address><addrLine>La Palma, Canary Islands</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="1287" to="1294" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Predicting the daily return direction of the stock market using hybrid machine learning algorithms</title>
		<author>
			<persName><forename type="first">X</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Enke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Financial Innovation</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Formation of the Enterprise Strategy based on the Industry Life Cycle</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Karpenko</surname></persName>
		</author>
		<idno type="DOI">10.14807/ijmp.v12i3.1537</idno>
		<ptr target="https://doi.org/10.14807/ijmp.v12i3.1537" />
	</analytic>
	<monogr>
		<title level="j">Independent Journal of Management &amp; Production</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="262" to="280" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Borah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Panigrahi</surname></persName>
		</author>
		<title level="m">Applied Soft Computing: Techniques and Applications</title>
				<meeting><address><addrLine>Florida, United States</addrLine></address></meeting>
		<imprint>
			<publisher>Apple Academic Press</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page">286</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Finding Structure in Time // Cognitive science</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Elman</surname></persName>
		</author>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="179" to="211" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Lorentz</surname></persName>
		</author>
		<title level="m">Time Series Forecasting: Supervised Techniques. Examples with Neural Networks and MATLAB</title>
				<imprint>
			<publisher>Independently Published</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page">277</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Algorithms for Verifying Deep Neural Networks</title>
		<author>
			<persName><forename type="first">C</forename><surname>Strong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Barrett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Arnon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lazarus</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Foundations and Trends in Optimization</title>
				<meeting><address><addrLine>USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page">404</biblScope>
		</imprint>
		<respStmt>
			<orgName>Stanford University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Problem-oriented theorem-proving method in fuzzy logic (PO-method)</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">Y</forename><surname>Samokhvalov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybern Syst Anal</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="682" to="690" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Intelligent identification technologies</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Rotstein</surname></persName>
		</author>
		<imprint>
			<pubPlace>Vinnytsia</pubPlace>
			<date type="published" when="1999">1999</date>
			<publisher>Universum-Vinnytsia</publisher>
			<biblScope unit="page">320</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Ghiasi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Niknam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dehghani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ghadimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mehrandezh</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.epsr.2022.108975</idno>
		<ptr target="https://doi.org/10.1016/j.epsr.2022.108975" />
		<title level="j">Electric Power Systems Research</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
