<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The simulator and neuro-controller for small satellite attitude development</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nataliya</forename><surname>Shakhovska</surname></persName>
							<email>nataliya.b.shakhovska@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dmytro</forename><surname>Kozii</surname></persName>
							<email>dmytruto@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pavlo</forename><surname>Mukalov</surname></persName>
							<email>pmykalov@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The simulator and neuro-controller for small satellite attitude development</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9859FEE86BD44B7948F80A208DA93C71</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>satellite</term>
					<term>neuro-controller</term>
					<term>learning rate</term>
					<term>attitude</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The paper describes the realization of a simulator and a neuro-controller for small satellite attitude control. The main types of neuro-controllers are analyzed, as is the problem of choosing a proper neuro-emulator for neuro-controller training. A new criterion, based on the analysis of local control gradients for the neuro-emulator's input neurons, is proposed. Results of numerical simulations of neuro-controller training by a gradient descent method are given.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Neural control is a kind of adaptive control in which artificial neural networks (NNs) are used as building blocks of control systems. Neural networks have a number of unique properties that make them a powerful tool for building control systems: the ability to learn from examples and to generalize data, the ability to adapt to changing properties of the controlled object and the environment, and their suitability for the synthesis of nonlinear regulators. Over the past 20 years, a large number of neurocontrol methods have been developed; the most popular among them are Model Reference Adaptive Neurocontrol and Adaptive Critics <ref type="bibr" target="#b1">[2]</ref>.</p><p>The method of neural control with a reference model, also known as the "scheme with a neuro-emulator and a neuro-controller" or "backpropagation through time," was proposed in the early 1990s <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr">[3 -5]</ref>. This method does not require knowledge of a mathematical model of the controlled object. Instead, a separate neural network, a neuro-emulator, learns the direct dynamics of the controlled object and is then used to calculate derivatives when training the neuro-controller. The trained neuro-emulator with the lowest mean squared error of simulating the controlled object is usually chosen from the set of trained neuro-emulators. However, is this criterion optimal if the neural network is used for further training of another neural network connected in series with the first, rather than for actually modeling the controlled object?</p><p>The paper presents the development of a neuro-controller for satellite rotation control.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>State of the art</head><p>NNs were proposed in 1943 by McCulloch and Pitts as the result of studying the structure and activity of biological neurons.</p><p>A typical structure of an automatic control system with a PID regulator and an NN as an automatic tuning unit is considered in <ref type="bibr" target="#b5">[6]</ref>. The NN acts as a functional transformation which, for each set of input signals, produces the coefficients of the PID regulator. The most complicated part of the design of an NN-based regulator is the training procedure, which reduces to the identification of unknown NN parameters such as the weighting factors and biases of the neurons. For NN training, a gradient search method minimizes a criterion function that depends on the parameters of the neurons. The search process is iterative: at each iteration, all coefficients of the network are updated, first in the output layer, then in the previous one, and so on down to the first.</p><p>The length of the learning process is a key issue when using NN methods for PID regulators <ref type="bibr" target="#b6">[7]</ref>. In addition, when applying NNs there are difficulties caused by the impossibility of predicting regulation errors for input actions that were not included in the set of training sequences, given the chosen structure of the network, the duration of training, and the range and number of training actions.</p><p>The main purpose of NN training is to choose the weighting factors of the network so as to ensure consistency between input and output values. The neuron with the input p = {p 1 , p 2 , ..., p R } is shown in Fig. <ref type="figure">1</ref>. 
The net input is equal to the scalar product of the weight vector W and the input vector p; the bias value b is added to the weighted sum of inputs <ref type="bibr" target="#b7">[8]</ref>.</p><p>The output signal is a = f(n), where</p><formula xml:id="formula_0">n = w 11 p 1 + w 12 p 2 + ... + w 1R p R + b.<label>(1)</label></formula><p>The choice of an NN architecture consists in determining the number of layers, the number of neurons in each layer, the form of the activation function of each layer, and the topological links between the neurons. Single-layer NNs are not suitable for solving complex problems <ref type="bibr" target="#b8">[9]</ref>, but combining several neurons into one or more layers has great potential. A two-layer NN with a sigmoidal activation function in the first layer and a linear one in the second can be trained to approximate any function with a finite number of breakpoints with arbitrary accuracy <ref type="bibr" target="#b8">[9]</ref>.</p><p>The purpose of identification is to determine the operator of the model that converts the input action of the controlled object into the output value. Different identification methods are possible depending on the various forms of representation of mathematical models: ordinary differential equations, difference equations, convolution equations <ref type="bibr" target="#b9">[10]</ref>, and others. However, none of the proposed methods is universal.</p><p>The paper <ref type="bibr" target="#b10">[11]</ref> considers the use of NNs as an alternative tool for the identification of dynamic objects. The use of NNs is motivated by the fact that in practice modern electric drives are multi-mass systems with nonlinear links. The corresponding linearized models, built on the basis of transfer functions, cannot always adequately reflect the state of the electric drive in all modes of its operation. 
The equivalence of a nonlinear system and its linear approximation holds only over a limited time interval; when the system transitions from one mode to another, it is expedient to apply the linearization method again and obtain a new linear system.</p><p>The paper <ref type="bibr" target="#b11">[12]</ref> proposed the use of a recurrent multi-layer NN with external inputs (NARX).</p></div>
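As an illustration of Eq. (1), the output of the single neuron of Fig. 1 can be sketched in C++. This is a minimal sketch, assuming a sigmoid activation function; the function name is ours, since the paper gives no implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Neuron output: the net input n = w11*p1 + w12*p2 + ... + w1R*pR + b
// (the bias b is added to the weighted sum of inputs), passed through
// an activation function f -- a sigmoid here.
double neuron_output(const std::vector<double>& w,
                     const std::vector<double>& p, double b) {
    assert(w.size() == p.size());
    double n = b;                       // start from the bias
    for (std::size_t i = 0; i < w.size(); ++i)
        n += w[i] * p[i];               // scalar product W . p
    return 1.0 / (1.0 + std::exp(-n));  // a = f(n), sigmoid activation
}
```

With zero weights and zero bias the net input is 0, so the sigmoid output is exactly 0.5.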
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 2. The recurrent multilayer neural network NARX</head><p>The training model is given as</p><formula xml:id="formula_2">y(n + 1) = f(y(n), ..., y(n - q + 1), u(n), ..., u(n - q + 1)),<label>(2)</label></formula><p>where y(n) is the output vector, u(n) is the input vector, n is the discrete time moment, and q is the order of the system. Such an NN, which has feedback with unit delays, allows constructing on its basis a model of a dynamic object of arbitrary complexity. Using this method requires verification of the trained NN for adequacy on new data not included in the training sample, since such an NN carries the risk of over-training <ref type="bibr" target="#b11">[12]</ref>.</p><p>The Matlab <ref type="bibr" target="#b12">[13]</ref> Neural Network Toolbox application suite contains the most popular neurocontrollers: Neural Predictive Control (NPC), the Nonlinear Auto Regressive Moving Average (NARMA-L2) model, and the Model Reference Controller (MRC).</p><p>In <ref type="bibr" target="#b13">[14]</ref>, a mathematical description of predictive neurocontrol using MATLAB system tools is presented. In <ref type="bibr" target="#b14">[15]</ref>, the NARMA-L2 controller is used for automatic control of a vessel on a variable course. When solving the problem of guidance and stabilization of the armament of a light armored vehicle, the NARMA-L2 neuro-regulator is used in the speed contour. As the authors note, NARMA-L2 acts as a relay regulator whose output switches between opposite limits, resulting in significant fluctuations in speed (up to 40% of the maximum). 
However, these neuro-regulators are not connected with a physical model of the object.</p><p>The purpose of this work is to build a model and a neuro-controller to control a small satellite with a given number of reaction wheels.</p></div>
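The NARX recursion of Eq. (2) can be sketched as a one-step predictor with tapped delay lines. This is an illustrative sketch, not the paper's code: the class name is ours, and the approximator f (a trained recurrent NN in the paper) is abstracted as an arbitrary callable:

```cpp
#include <deque>
#include <functional>
#include <vector>

// One-step NARX predictor, Eq. (2): the next output y(n+1) depends on
// the last q outputs y(n)...y(n-q+1) and the last q inputs
// u(n)...u(n-q+1), mapped through an approximator f.
class NarxModel {
public:
    NarxModel(std::size_t q,
              std::function<double(const std::vector<double>&)> f)
        : f_(std::move(f)), y_(q, 0.0), u_(q, 0.0) {}

    // Feed the current input u(n), return the predicted y(n+1),
    // and shift the z^-1 ... z^-q delay lines.
    double step(double u) {
        u_.push_front(u); u_.pop_back();
        std::vector<double> regressors(y_.begin(), y_.end());
        regressors.insert(regressors.end(), u_.begin(), u_.end());
        double y_next = f_(regressors);
        y_.push_front(y_next); y_.pop_back();  // output feedback, unit delay
        return y_next;
    }
private:
    std::function<double(const std::vector<double>&)> f_;
    std::deque<double> y_, u_;  // tapped delay lines of length q
};
```

The feedback through `y_` is what makes the model recurrent: each prediction becomes a regressor for the next step, so the trained network must be verified on held-out data, as the text notes.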
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Materials and methods</head><p>The main tasks of the paper are to create:</p><p>1. A simple simulator of satellite rotations, controlled by 3 or 4 reaction wheels placed in different configurations. The simulation model will be configurable and easy to read. 2. An Artificial Intelligence (AI) learning module, which will drive the simulator and learn autonomously, from the behavior of the simulated satellite, how to control its rotations. 3. The AI module, after being trained for different configurations of wheels, will receive commands with desired 3D rotation speeds and control the wheels to achieve the desired rotation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Satellite simulator design</head><p>The simulator is developed using the C++ programming language. The satellite simulator is created to solve the following tasks:</p><p> to provide a physical model of a material object;  to provide a physical model of a satellite with reaction wheels for rotation control;  to provide the possibility to control the satellite using the reaction wheels during simulation.</p><p>The simulator is divided into two layers of logical implementation: the core of the simulation and the satellite simulation.</p><p>The core is a general simulation layer that provides encapsulated logic for creating and moving material objects. It also allows configuring the simulation and logging information about all objects in it. The satellite simulation extends the material-object logic with reaction wheels and physical effects (friction, gravity, the gyroscopic effect, etc.). The class diagram is given in Fig. <ref type="figure" target="#fig_1">3</ref>. </p></div>
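The key physical interaction in the satellite layer is the exchange of angular momentum between the body and a reaction wheel. The following single-axis sketch borrows the ReactionWheel/Satellite names from the class diagram of Fig. 3, but the fields and the `command` method are our illustrative assumptions (the actual simulator works in 3D with quaternions via Eigen):

```cpp
// Single-axis sketch of the body/wheel interaction. By conservation of
// angular momentum, the motor torque that spins the wheel up exerts an
// equal and opposite torque on the satellite body.
struct ReactionWheel {
    double inertia = 1.0;  // wheel moment of inertia, kg*m^2
    double speed = 0.0;    // wheel angular speed, rad/s
};

struct Satellite {
    double inertia = 1.0;  // body moment of inertia about this axis
    double speed = 0.0;    // body angular speed, rad/s
    ReactionWheel wheel;

    // Apply a motor torque to the wheel for dt seconds (Euler step).
    void command(double torque, double dt) {
        wheel.speed += torque / wheel.inertia * dt;  // wheel spins up
        speed       -= torque / inertia * dt;        // body reacts
    }
};
```

Because the body's inertia is much larger than the wheel's, a large change in wheel speed produces a small, controllable change in the satellite's rotation speed.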
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Entities:</head><p> Point -provides an abstract point for further implementations;  MassPoint -a point which has mass and a movement vector;  Object -provides an enumeration of points which interact with each other;  ReactionWheel -inherited from MassPoint, is used for changing the rotation speed of the satellite by changing its angular momentum;  Satellite -inherited from Object, provides a simulated satellite of arbitrary form, which moves and rotates using thrusters (ForcePoint) and reaction wheels;  Simulator -provides an enumeration of Object instances and the configuration of the scenario of their behavior.</p><p>The sphere in Fig. <ref type="figure" target="#fig_2">4</ref> is a space which limits the set of material points of the object. The center of mass is not the center of the sphere, because its coordinates depend on the coordinates and masses of the other points. During training, the neural network must monitor and remember the dependence of the control signal u(k-1) on the next value of the reaction of the controlled object that was previously in the state X(k-1). The values of the control signals and the responses of the object are recorded and, on this basis, a training sample is formed:</p><formula xml:id="formula_6">U M = {(P i , T i )}: P i = (y(i), X(i-1)) T , T i = u(i-1),<label>(3)</label></formula><p>where each input P i combines the observed reaction and the preceding state, and the target T i is the corresponding desired control signal.</p><p>In the training mode the neural network must find and remember the dependence of the control signal u(k-1) on the preceding state S(k-1). When the object is controlled, the inverse neuro-emulator is connected as a controller, and it receives the rr(k) value from the input r(k+1):</p><formula xml:id="formula_8">rr(k) = (r(k+1), X(k)) T .<label>(4)</label></formula><p>The class diagram is given in Fig. <ref type="figure">5</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 5. Class controller diagram</head><p>The inputs to the control network are the satellite state (the speed for each axis). The output is the control signal (torque) u(t), i.e. the energy level for each reaction wheel. We used the mini-batch gradient descent algorithm for neural network training.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The structure of neural network</head><p>The neural network structure for this task is as follows:  input layer -3 neurons (for the speed along x, y, z),  hidden layer -15 fully connected neurons with a sigmoid activation function,  output layer -n neurons with the predicted energy levels, where n equals the number of reaction wheels,  a bias term is used as well.</p><p>The architecture of the neuro-controller was chosen experimentally and is given in Fig. <ref type="figure" target="#fig_3">6</ref>. The goal of the algorithm is to find model parameters (e.g. coefficients or weights) that minimize the error of the model on the training dataset. It does this by making changes to the model that move it along a gradient, or slope, of errors down toward a minimum error value. This gives the algorithm its name of "gradient descent."</p><p>Mini-batch gradient descent is a trade-off between stochastic gradient descent and batch gradient descent. In mini-batch gradient descent, the cost function (and therefore the gradient) is averaged over a small number of samples, from around 10 to 500. This is opposed to the SGD batch size of 1 sample and the BGD size of all the training samples.</p><p>Mini-batch gradient descent thus takes the best of both worlds and performs an update for every mini-batch of n training examples:</p><formula xml:id="formula_10">θ = θ − η⋅∇ θ J(θ; x(i:i+n); y(i:i+n)).<label>(5)</label></formula><p>This allows us: ─ to reduce the variance of the parameter updates, which can lead to more stable convergence; ─ to make use of the highly optimized matrix operations common to state-of-the-art deep learning libraries, which make computing the gradient w.r.t. a mini-batch very efficient. Common mini-batch sizes range between 50 and 256, but can vary for different applications.</p></div>
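The mini-batch update of Eq. (5) can be sketched for the simplest case, a one-parameter linear model with squared loss. The function name and the model are our illustrative assumptions; the point is only that the gradient is averaged over the mini-batch rather than computed from a single sample (SGD) or the whole set (BGD):

```cpp
#include <cstddef>
#include <vector>

// One mini-batch gradient descent update, Eq. (5), for the model
// y_hat = theta * x with loss J = 0.5 * (theta*x - y)^2.
// The gradient dJ/dtheta = (theta*x - y) * x is averaged over the
// n samples of the mini-batch starting at index `start`.
double minibatch_step(double theta, double eta,
                      const std::vector<double>& x,
                      const std::vector<double>& y,
                      std::size_t start, std::size_t n) {
    double grad = 0.0;
    for (std::size_t i = start; i < start + n && i < x.size(); ++i)
        grad += (theta * x[i] - y[i]) * x[i];  // per-sample gradient
    grad /= static_cast<double>(n);            // average over the batch
    return theta - eta * grad;                 // theta := theta - eta*grad
}
```

Training then loops over the dataset in chunks of the batch size (200 in this paper), calling the update once per mini-batch.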
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Stack of technologies</head><p>For the neuro-controller realization we used: 1. Eigen, to provide vectors, matrices, and quaternions of different dimensions and operations on them (it was mostly used in the simulator) <ref type="bibr" target="#b15">[16]</ref>. 2. MiniDnn, to provide the neural network for creating the controller of the satellite. The parameters of the NN are saved in NeuralConfig.h. These neural network parameters were chosen experimentally: we performed more than 500 training experiments with different neural network configurations. In the best attempts the mean loss was equal to 0.013, with the parameters given in Table 1.</p></div>
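A sketch of what the NeuralConfig.h header could look like, using the parameter names and values reported in Table 1 (the layout and `constexpr` style are our assumptions; the EPOCH value is not reported in the paper and is left as a placeholder):

```cpp
// NeuralConfig.h -- hyperparameters of the neuro-controller network,
// chosen experimentally over ~500 training runs (best mean loss 0.013).
#pragma once

constexpr int    NUMBEROFSAMPLES      = 1000;
constexpr int    NUMBEROFHIDDENLAYERS = 1;
constexpr int    HIDDENLAYERSLENGTH   = 15;     // neurons in the hidden layer
constexpr double LEARNINGRATE         = 0.0007; // eta in Eq. (5)
constexpr int    BATCHSIZE            = 200;    // mini-batch size
// constexpr int EPOCH = ...;  // value not given in the source
```

Keeping the hyperparameters in a single header makes it cheap to rebuild and rerun the training experiments with a different configuration.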
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusions</head><p>To sum up, this article described how neural networks can be used for controlling satellites. Neural controllers are a very powerful method that allows us to automate different processes and improve the accuracy of their results.</p><p>An experimental study of the proposed criterion on 500 neuro-controllers was conducted, which showed its effectiveness compared to the traditional method (the loss function value is less than 0.05) of selecting neuro-emulators based on the least mean squared error on the test data set.</p><p>In the framework of further research, it is planned to test this criterion along with other methods of neuro-control that include a stage of preliminary neuro-identification of the controlled object: predictive-model neurocontrol and hybrid neuro-PID control, as well as using the cubature Kalman filter.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Neuron structure</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. Satellite simulator class diagram</figDesc><graphic coords="5,126.36,239.52,342.48,270.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. The center of mass explanation</figDesc><graphic coords="6,185.40,275.88,224.28,171.48" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. Neural network architecture</figDesc><graphic coords="8,157.56,263.64,279.96,332.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Training results. Training time is approximately 7 hours 20 minutes. The computer configuration: Intel Core i3 (3.4 GHz), 2 cores, NVidia GeForce GT630, 2 GB.</figDesc><table><row><cell>NUMBEROFSAMPLES</cell><cell>1000</cell></row><row><cell>NUMBEROFHIDDENLAYERS</cell><cell>1</cell></row><row><cell>HIDDENLAYERSLENGTH</cell><cell>15</cell></row><row><cell>LEARNINGRATE</cell><cell>0.0007</cell></row><row><cell>BATCHSIZE</cell><cell>200</cell></row><row><cell>EPOCH</cell><cell></cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Identification and control of dynamical systems using neural networks</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Narendra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Parthasarathy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on neural networks</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="4" to="27" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Adaptive critic designs</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Prokhorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Wunsch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="997" to="1007" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Training controllers for robustness: multi-stream DEKF</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Feldkamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">V</forename><surname>Puskorius</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN&apos;94)</title>
				<meeting>1994 IEEE International Conference on Neural Networks (ICNN&apos;94)</meeting>
		<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="2377" to="2382" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Toyota Prius HEV neurocontrol and diagnostics</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Prokhorov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="458" to="465" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Haykin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Haykin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Haykin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Elektroingenieur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Haykin</surname></persName>
		</author>
		<title level="m">Neural networks and learning machines</title>
				<meeting><address><addrLine>Upper Saddle River</addrLine></address></meeting>
		<imprint>
			<publisher>Pearson</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">3</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Self-tuning PID control of a flexible micro-actuator using neural networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kawafuku</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sasaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kato</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SMC&apos;98 Conference Proceedings. 1998 IEEE International Conference on Systems</title>
				<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page" from="3067" to="3072" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Neuro-PID control for nonlinear plants with variable parameters</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Burakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Kurbanov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ARPN Journal of Engineering and Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1226" to="1229" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Stochastic representations of ion channel kinetics and exact stochastic simulation of neuronal dynamics</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">F</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ermentrout</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Thomas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of computational neuroscience</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="67" to="82" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Adaptive Kernel Data Streams Clustering Based on Neural Networks Ensembles in Conditions of Uncertainty About Amount and Shapes of Clusters</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Y</forename><surname>Zhernova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">O</forename><surname>Deineko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">V</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">O</forename><surname>Riepin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Second International Conference on Data Stream Mining &amp; Processing (DSMP)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="7" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Evolving GMDH-neuro-fuzzy system with small number of tuning parameters</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Boiko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zaychenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hamidov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD)</title>
				<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="1321" to="1326" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Load frequency control of a dynamic interconnected power system using generalised Hopfield neural network based self-adaptive PID controller</title>
		<author>
			<persName><forename type="first">R</forename><surname>Ramachandran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Madasamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Veerasamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Saravanan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IET Generation, Transmission &amp; Distribution</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">21</biblScope>
			<biblScope unit="page" from="5713" to="5722" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The development of a non-linear autoregressive model with exogenous input (NARX) to model climate-water clarity relationships: reconstructing a historical water clarity index for the coastal waters of the southeastern USA</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Sheridan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">B</forename><surname>Barnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Pirhalla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ransibrahmanakul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theoretical and Applied Climatology</title>
		<imprint>
			<biblScope unit="volume">130</biblScope>
			<biblScope unit="issue">1-2</biblScope>
			<biblScope unit="page" from="557" to="569" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Medvedev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">G</forename><surname>Potjomkin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Nejronnye seti. MATLAB 6</title>
				<meeting><address><addrLine>Moscow</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page">496</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Recurrent-neural-network-based multivariable adaptive control for a class of nonlinear dynamic systems with time-varying delay</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">L</forename><surname>Hwang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE transactions on neural networks and learning systems</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="388" to="401" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Data-driven identification and control of nonlinear systems using multiple NARMA-L2 models</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">H</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Robust and Nonlinear Control</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="3806" to="3833" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Galerkin Method and Qualitative Approach for the Investigation and Numerical Analysis of Some Dissipative Nonlinear Physical Systems</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pukach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Il'kiv</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Nytrebych</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Vovk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shakhovska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pukach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="143" to="146" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
