<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The simulator and neuro-controller for small satellite attitude development</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nataliya Shakhovska</string-name>
          <email>nataliya.b.shakhovska@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Kozii</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo Mukalov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Lviv 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper describes the implementation of a simulator and a neuro-controller for small satellite attitude control. The main types of neuro-controllers are analyzed, as is the problem of choosing a proper neuro-emulator for neuro-controller training. A new criterion based on the analysis of local control gradients for the neuro-emulator's input neurons is proposed. Results of numerical simulations of neuro-controller training by a gradient descent method are given.</p>
      </abstract>
      <kwd-group>
        <kwd>satellite</kwd>
        <kwd>neuro-controller</kwd>
        <kwd>learning rate</kwd>
        <kwd>attitude</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Neural control is a kind of adaptive control in which artificial neural networks (NN) are used as building blocks of control systems. Neural networks have a number of unique properties that make them a powerful tool for building control systems: the ability to learn from examples and to generalize from data, the ability to adapt to changing properties of the control object and the environment, and their suitability for the synthesis of nonlinear regulators. Over the past 20 years, a large number of neurocontrol methods have been developed; the most popular among them are Model Reference Adaptive Neurocontrol and Adaptive Critics [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        The method of neural control with a reference model, also known as a "circuit with a neuro-emulator and a neuro-controller" or "backpropagation through time," was proposed in the early 1990s [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3 – 5</xref>
        ]. This method does not require knowledge of the mathematical model of the control object. Instead, a separate neural network, a neuro-emulator, learns the direct dynamics of the control object and is then used to calculate derivatives when training a neuro-controller. Usually, the trained neuro-emulator with the lowest mean square error of simulating the control object is chosen from the set of trained neuro-emulators. However, is this criterion the best one if the neural network is used for further training of another neural network connected sequentially to the first, and not actually for modeling the control object?
      </p>
      <p>The paper presents the development of a neuro-controller for satellite rotation control.</p>
    </sec>
    <sec id="sec-2">
      <title>State of the art</title>
      <p>NN were proposed in 1943 by McCulloch and Pitts as the result of studying the structure and activity of biological neurons.</p>
      <p>
        A typical structure of an automatic control system with a PID regulator and an NN as an automatic adjustment unit is considered in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The NN acts as a functional transformation that, for each set of input signals, produces the coefficients of the PID regulator. The most complicated part of the design of an NN-based regulator is the training procedure, which reduces to the identification of unknown NN parameters, such as the weighting factors and biases of the neurons. For NN training, a gradient search method is used to minimize a criterion function that depends on the parameters of the neurons. The search process is iterative: at each iteration all coefficients of the network are updated, first for the output layer, then for the previous one, and so on down to the first.
      </p>
      <p>
        The length of the learning process is a key issue when using NN methods for PID regulators [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In addition, when applying NN, difficulties arise because regulation errors cannot be predicted for input actions that were not included in the set of training sequences, given the structure of the neurons in the network, the duration of training, and the range and number of training actions.
      </p>
      <p>
        The main purpose of NN training is to choose the weighting factors of the network so as to ensure consistency between input and output values. A neuron with input p = {p1, p2, ..., pR} is shown in Fig. 1. The output value is equal to the scalar product of the weight vector W and the input vector p; the bias value b is added to the weighted sum of inputs [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        The output signal is:
n = w11 · p1 + w12 · p2 + ... + w1R · pR + b.
(1)
      </p>
      <p>
        The choice of NN architecture consists in determining the number of layers, the number of neurons in each layer, the form of the activation function of each layer, and the topological links between the neurons. Single-layer NNs are not suitable for solving complex problems [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], but combining several neurons into one or more layers has great potential. A two-layer NN that contains a sigmoidal activation function in the first layer and a linear one in the second can be trained to approximate any function with a finite number of breakpoints with arbitrary accuracy [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
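      <p>As a minimal sketch (illustrative names, not the paper's code), the neuron output of Eq. (1) can be computed in C++ as:</p>

```cpp
#include <cstddef>
#include <vector>

// Output of a single neuron per Eq. (1): n = w11*p1 + ... + w1R*pR + b.
// The names neuron_output, w, p, b are illustrative assumptions.
double neuron_output(const std::vector<double>& w,
                     const std::vector<double>& p, double b) {
    double n = b;                                   // bias added to the weighted sum
    for (std::size_t i = 0; i < w.size(); ++i)
        n += w[i] * p[i];                           // scalar product of W and p
    return n;
}
```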
      <p>
        The purpose of identification is to determine the operator of the model, which converts the input action of the controlled object into the output value. Different identification methods are possible depending on the form of representation of the mathematical model: ordinary differential equations, difference equations, convolution equations [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and others. However, none of the proposed methods is universal.
      </p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] considers the use of NN as an alternative tool for the identification of dynamic objects. The use of NN is based on the fact that, in practice, modern electric drives are multi-mass systems with nonlinear links. The corresponding linearized models, built on transfer functions, cannot always adequately reflect the state of the electric drive in all modes of its operation. The equivalence of a nonlinear system and its linear approximation holds only in a limited time interval, and when the system transitions from one mode to another, it is expedient to apply the linearization method again and obtain a new linear system.
      </p>
      <p>
        The paper [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] proposed the use of a recurrent multi-layer NN with external inputs (NARX):
      </p>
      <p>y(n + 1) = f(y(n), ..., y(n − q + 1), u(n), ..., u(n − q + 1)),
(2)
where y(n) is the output vector, u(n) is the input vector, n is the discrete time moment, and q is the order of the system.</p>
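      <p>A one-step NARX prediction per Eq. (2) can be sketched as follows, with the trained NN stood in for by an arbitrary function f (the names narx_step, y_hist, u_hist are illustrative):</p>

```cpp
#include <functional>
#include <vector>

// One-step NARX prediction per Eq. (2):
// y(n+1) = f(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1)).
// f stands in for the trained NN mapping the regressor to the next output.
double narx_step(const std::function<double(const std::vector<double>&)>& f,
                 const std::vector<double>& y_hist,   // y(n-q+1) .. y(n)
                 const std::vector<double>& u_hist) { // u(n-q+1) .. u(n)
    std::vector<double> regressor(y_hist);
    regressor.insert(regressor.end(), u_hist.begin(), u_hist.end());
    return f(regressor);
}
```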
      <p>
        Such an NN, which has feedback through unit delays, allows constructing on its basis a model of a dynamic object of arbitrary complexity. Using this method requires verification of the trained NN for adequacy with the use of new data not included in the training sample, since such an NN is prone to overtraining [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        The Matlab [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] Neural Network Toolbox application suite contains the most popular neuro-controllers:
- Neural Predictive Control (NPC),
- the Nonlinear Auto Regressive Moving Average (NARMA-L2) model,
- the Model Reference Controller (MRC).
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], a mathematical description of predictive neurocontrol using MATLAB system tools is presented. In [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the NARMA-L2 controller is used for automatic control of a vessel on a variable course. When solving the problem of guidance and stabilization of the armament of a light armored vehicle, the NARMA-L2 neuro-regulator is used in the speed contour. As the authors note, NARMA-L2 acts as a relay regulator whose output switches between opposite limits, resulting in significant fluctuations in speed (up to 40% of the maximum). However, these neuro-regulators are not connected with a physical model of the object.
      </p>
      <p>The purpose of this work is to build a model and a neuro-controller to control a small satellite with a given number of reaction wheels.</p>
    </sec>
    <sec id="sec-3">
      <title>Materials and methods</title>
      <p>The main tasks of the paper are to create:
1. A simple simulator of satellite rotations controlled by 3 or 4 reaction wheels placed in different configurations. The simulation model will be configurable and easy to read.
2. An Artificial Intelligence (AI) learning module which will drive the simulator and learn autonomously, from the behavior of the simulated satellite, how to control its rotations.
3. An AI module that, after being trained for different configurations of wheels, will receive commands with desired 3D rotation speeds and control the wheels to achieve the desired rotation.</p>
      <sec id="sec-3-1">
        <title>Satellite simulator design</title>
        <p>The simulator is developed using the C++ programming language.</p>
        <p>The satellite simulator is created to solve the following tasks:
- to provide a physical model of a material object;
- to provide a physical model of a satellite with reaction wheels for rotation control;
- to provide the possibility to control the satellite using reaction wheels during simulation.
The simulator is divided into the following layers of logical implementation:
- the core of the simulation,
- the satellite simulation.</p>
        <p>The Core is a general simulation layer that provides encapsulated logic for creating and moving material objects. It also allows configuring the simulation and logging information about all objects in the simulation. The satellite simulation extends the material-object logic with reaction wheels and physical effects (friction, gravity, the gyroscopic effect, etc.).</p>
        <p>The class diagram is given in Fig. 3.
- Contracts shows the main entities of the simulator and grants low coupling between their implementations. Contracts consists of abstractions.
- Core implements Contracts. It contains the primary physical model and the Simulation entity.
- Satellite simulation extends Core with the dynamics of the reaction wheels and the satellite.
Entities:
- Point - provides an abstract point for further implementations;
- MassPoint - a point which has mass and a movement vector;
- Object - provides an enumeration of points which interact with each other;
- ReactionWheel - inherited from MassPoint, is used for changing the rotation speed of the satellite by changing its angular momentum;
- Satellite - inherited from Object, provides a simulated satellite of arbitrary form, which moves and rotates using thrusters (ForcePoint) and reaction wheels;
- Simulator - provides an enumeration of Object instances and the configuration of the scenario of their behavior.</p>
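        <p>The entity list above can be sketched as a C++ class skeleton; the members are illustrative assumptions, since the paper does not give the actual interfaces:</p>

```cpp
#include <vector>

// Illustrative skeleton of the simulator entities; real interfaces differ.
struct Point { double x = 0, y = 0, z = 0; };

struct MassPoint : Point {
    double mass = 1.0;
    double vx = 0, vy = 0, vz = 0;  // movement vector
};

struct ReactionWheel : MassPoint {
    double angular_momentum = 0.0;
    // Changing the wheel's angular momentum changes the satellite's rotation speed.
    void apply_torque(double torque, double dt) { angular_momentum += torque * dt; }
};

struct Object { std::vector<MassPoint> points; };

struct Satellite : Object { std::vector<ReactionWheel> wheels; };

struct Simulator { std::vector<Satellite> objects; };
```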
        <p>The sphere in Fig. 4 is the space which limits the set of material points of the object. The center of mass is not the center of the sphere, because its coordinates depend on the coordinates and masses of the other points.</p>
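        <p>This dependence of the center of mass on the other points can be sketched as follows (illustrative code, assuming a simple MassPoint aggregate):</p>

```cpp
#include <vector>

struct MassPoint { double x, y, z, mass; };  // illustrative aggregate

// Center of mass: r_c = (sum of m_i * r_i) / (sum of m_i), which is why it
// generally differs from the sphere's geometric center.
MassPoint center_of_mass(const std::vector<MassPoint>& pts) {
    MassPoint c{0, 0, 0, 0};
    for (const auto& p : pts) {
        c.x += p.mass * p.x;
        c.y += p.mass * p.y;
        c.z += p.mass * p.z;
        c.mass += p.mass;
    }
    c.x /= c.mass; c.y /= c.mass; c.z /= c.mass;
    return c;
}
```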
        <p>During training, the neural network must monitor and remember the dependence of the control signal u(k − 1) on the next value of the reaction of the control object that was previously in the state X(k − 1). The values of the control signals and the responses of the object are recorded, and on this basis a training sample is formed:</p>
        <p>M : P_i = {y(i), X(i − 1)^T}, T_i = u(i),
U = {P_i, T_i}, i = 1, 2, ...
(3)</p>
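        <p>Forming one training pair per Eq. (3) might look like this (Sample and record are hypothetical names; the pattern is the object's reaction y(i) together with the previous state X(i − 1), the target is the control u(i)):</p>

```cpp
#include <vector>

// One training pair per Eq. (3). Names are illustrative assumptions.
struct Sample {
    std::vector<double> pattern;  // [y(i), X(i-1)]
    std::vector<double> target;   // u(i)
};

Sample record(const std::vector<double>& y_i,
              const std::vector<double>& x_prev,
              const std::vector<double>& u_i) {
    Sample s;
    s.pattern = y_i;
    s.pattern.insert(s.pattern.end(), x_prev.begin(), x_prev.end());
    s.target = u_i;
    return s;
}
```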
        <sec id="sec-3-1-1">
          <title>We used and desired reaction.</title>
          <p>In the training mode, the neural network must find and remember the dependence of the control signal u(k − 1) on the preceding state S(k − 1). When the object is controlled, the inverse neuro-emulator is connected as a controller, and it receives the rr(k) value from the input r(k + 1):
rr(k) = {r(k + 1), X(k)^T}.
(4)</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>The class diagram is given in Fig. 5.</title>
          <p>The inputs to the control network are the satellite state (the speed for each axis). The output is the control signal (torque) u(t), i.e., the energy level for each reaction wheel.</p>
          <p>We used the mini-batch gradient descent algorithm for neural network training.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>The structure of neural network</title>
        <p>The neural network structure for this task is as follows:
- input layer: 3 neurons (for the speed along x, y, z),
- hidden layer: 15 fully connected neurons with a sigmoid activation function,
- output layer: n neurons with the predicted energy levels, where n equals the number of reaction wheels,
- biases are used as well.</p>
        <p>The architecture of the neuro-controller is chosen experimentally and given in Fig. 6.</p>
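        <p>A forward pass of the described 3-15-n network could be sketched as follows (the weight layout and names are assumptions, not the paper's implementation):</p>

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Forward pass of the 3-15-n network described above: sigmoid hidden layer,
// linear output layer, biases included. Weight layout is an assumption.
std::vector<double> forward(const std::vector<double>& speeds,          // 3 inputs
                            const std::vector<std::vector<double>>& W1, // 15 x 3
                            const std::vector<double>& b1,              // 15
                            const std::vector<std::vector<double>>& W2, // n x 15
                            const std::vector<double>& b2) {            // n
    std::vector<double> hidden(W1.size());
    for (std::size_t j = 0; j < W1.size(); ++j) {
        double s = b1[j];
        for (std::size_t i = 0; i < speeds.size(); ++i) s += W1[j][i] * speeds[i];
        hidden[j] = 1.0 / (1.0 + std::exp(-s));   // sigmoid activation
    }
    std::vector<double> out(W2.size());
    for (std::size_t k = 0; k < W2.size(); ++k) {
        double s = b2[k];
        for (std::size_t j = 0; j < hidden.size(); ++j) s += W2[k][j] * hidden[j];
        out[k] = s;                               // predicted energy level per wheel
    }
    return out;
}
```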
        <sec id="sec-3-2-1">
          <title>We used mini-batch gradient descent for NN training.</title>
          <p>The goal of the algorithm is to find model parameters (e.g. coefficients or weights)
that minimize the error of the model on the training dataset. It does this by making
changes to the model that move it along a gradient or slope of errors down toward a
minimum error value. This gives the algorithm its name of “gradient descent.”</p>
          <p>Mini-batch gradient descent is a trade-off between stochastic gradient descent (SGD) and batch gradient descent (BGD). In mini-batch gradient descent, the cost function (and therefore the gradient) is averaged over a small number of samples, from around 10 to 500. This is opposed to the SGD batch size of 1 sample and the BGD size of all the training samples.</p>
          <p>Mini-batch gradient descent thus takes the best of both worlds and performs an update for every mini-batch of n training examples:
θ = θ − η · ∇θ J(θ; x(i:i+n); y(i:i+n)).
(5)</p>
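          <p>One update step of Eq. (5) can be sketched as follows (the per-sample gradients are assumed to be supplied by backpropagation elsewhere; names are illustrative):</p>

```cpp
#include <cstddef>
#include <vector>

// One mini-batch update per Eq. (5): theta <- theta - eta * grad(J),
// where the gradient is averaged over the samples of the mini-batch.
void minibatch_update(std::vector<double>& theta,
                      const std::vector<std::vector<double>>& batch_grads,
                      double eta) {
    for (std::size_t j = 0; j < theta.size(); ++j) {
        double g = 0.0;
        for (const auto& grad : batch_grads) g += grad[j];
        g /= batch_grads.size();   // average over the mini-batch
        theta[j] -= eta * g;       // gradient descent step
    }
}
```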
          <p>This allows us:
─ to reduce the variance of the parameter updates, which can lead to more stable convergence;
─ to make use of highly optimized matrix operations common to state-of-the-art deep learning libraries that make computing the gradient w.r.t. a mini-batch very efficient.
Common mini-batch sizes range between 50 and 256, but can vary for different applications.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <sec id="sec-4-1">
        <title>Stack of technologies</title>
        <sec id="sec-4-1-1">
          <title>For neuro-controller realization we used</title>
          <p>
            1. Eigen, to provide vectors, matrices, and quaternions of different dimensions and operations on them (it was mostly used in the simulator) [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ].
2. MiniDNN, to provide the neural network for creating the controller of the satellite.
The parameters of the NN are saved in NeuralConfig.h. These neural network parameters were chosen experimentally: we performed more than 500 training experiments with different neural network configurations. In the best attempts the mean loss was equal to 0.013. The experiments were run on an Intel Core i3 (3.4 GHz, 2 cores) with an NVidia GeForce GT630 (2 GB).
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>To sum up, this article described how neural networks can be used for controlling satellites. Neural controllers are a very powerful method that allows us to automate different processes and improve the accuracy of their results.</p>
      <p>An experimental study of the proposed criterion on 500 neuro-controllers was conducted, which showed its effectiveness (the loss function value is less than 0.05) compared to the traditional method of selecting neuro-emulators based on the least mean square error on the test data set.</p>
      <p>In the framework of further research, it is planned to test this criterion along with other methods of neuro-control which include a stage of preliminary neuro-identification of the control object: predictive model neurocontrol and hybrid neuro-PID control, as well as using the cubature Kalman filter.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Narendra</surname>
            ,
            <given-names>K. S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Parthasarathy</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Identification and control of dynamical systems using neural networks</article-title>
          .
          <source>IEEE Transactions on neural networks</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <fpage>4</fpage>
          -
          <lpage>27</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Prokhorov</surname>
            ,
            <given-names>D. V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wunsch</surname>
            ,
            <given-names>D. C.</given-names>
          </string-name>
          :
          <article-title>Adaptive critic designs</article-title>
          .
          <source>IEEE transactions on Neural Networks</source>
          ,
          <volume>8</volume>
          (
          <issue>5</issue>
          ),
          <fpage>997</fpage>
          -
          <lpage>1007</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Feldkamp</surname>
            ,
            <given-names>L. A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Puskorius</surname>
            ,
            <given-names>G. V.</given-names>
          </string-name>
          <article-title>Training controllers for robustness: multi-stream DEKF</article-title>
          .
          <source>In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94) 4</source>
          ,
          <fpage>2377</fpage>
          -
          <lpage>2382</lpage>
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Prokhorov</surname>
            ,
            <given-names>D. V.</given-names>
          </string-name>
          :
          <article-title>Toyota Prius HEV neurocontrol and diagnostics</article-title>
          .
          <source>Neural Networks</source>
          ,
          <volume>21</volume>
          (
          <issue>2-3</issue>
          ),
          <fpage>458</fpage>
          -
          <lpage>465</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Haykin</surname>
            ,
            <given-names>S. S.</given-names>
          </string-name>
          :
          <article-title>Neural networks and learning machines</article-title>
          (Vol.
          <volume>3</volume>
          ). Upper Saddle River: Pearson. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kawafuku</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sasaki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kato</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Self-tuning PID control of a flexible micro-actuator using neural networks</article-title>
          .
          <source>In SMC'98 Conference Proceedings. 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No. 98CH36218)</source>
          Vol.
          <volume>3</volume>
          ,
          <fpage>3067</fpage>
          -
          <lpage>3072</lpage>
          (
          <year>1998</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Burakov</surname>
            ,
            <given-names>M. V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kurbanov</surname>
            ,
            <given-names>V. G.</given-names>
          </string-name>
          :
          <article-title>Neuro-PID control for nonlinear plants with variable parameters</article-title>
          .
          <source>ARPN Journal of Engineering and Applied Sciences</source>
          ,
          <volume>12</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1226</fpage>
          -
          <lpage>1229</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>D. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ermentrout</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Thomas</surname>
            ,
            <given-names>P. J.:</given-names>
          </string-name>
          <article-title>Stochastic representations of ion channel kinetics and exact stochastic simulation of neuronal dynamics</article-title>
          .
          <source>Journal of computational neuroscience</source>
          ,
          <volume>38</volume>
          (
          <issue>1</issue>
          ),
          <fpage>67</fpage>
          -
          <lpage>82</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Zhernova</surname>
            ,
            <given-names>P. Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deineko</surname>
            ,
            <given-names>A. O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bodyanskiy</surname>
            ,
            <given-names>Y. V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Riepin</surname>
            ,
            <given-names>V. O.</given-names>
          </string-name>
          :
          <article-title>Adaptive Kernel Data Streams Clustering Based on Neural Networks Ensembles in Conditions of Uncertainty About Amount and Shapes of Clusters</article-title>
          .
          <source>In 2018 IEEE Second International Conference on Data Stream Mining &amp; Processing (DSMP)</source>
          <fpage>7</fpage>
          -
          <lpage>12</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Bodyanskiy</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boiko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaychenko</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hamidov</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Evolving GMDH-neuro-fuzzy system with small number of tuning parameters</article-title>
          .
          <source>In 2017 13th International Conference on Natural Computation</source>
          ,
          <article-title>Fuzzy Systems and Knowledge Discovery (ICNC-</article-title>
          FSKD)
          <fpage>1321</fpage>
          -
          <lpage>1326</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Ramachandran</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Madasamy</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veerasamy</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Saravanan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Load frequency control of a dynamic interconnected power system using generalised Hopfield neural network based self-adaptive PID controller</article-title>
          .
          <source>IET Generation, Transmission &amp; Distribution</source>
          ,
          <volume>12</volume>
          (
          <issue>21</issue>
          ),
          <fpage>5713</fpage>
          -
          <lpage>5722</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>C. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheridan</surname>
            ,
            <given-names>S. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barnes</surname>
            ,
            <given-names>B. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pirhalla</surname>
            ,
            <given-names>D. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ransibrahmanakul</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Shein</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>The development of a non-linear autoregressive model with exogenous input (NARX) to model climate-water clarity relationships: reconstructing a historical water clarity index for the coastal waters of the southeastern USA</article-title>
          .
          <source>Theoretical and Applied Climatology</source>
          ,
          <volume>130</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>557</fpage>
          -
          <lpage>569</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Medvedev</surname>
            <given-names>V. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Potjomkin</surname>
            <given-names>V. G.</given-names>
          </string-name>
          :
          <article-title>Nejronnye seti</article-title>
          .
          <source>MATLAB 6</source>
          . Moscow, Dialog-MIFI, 496 p (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Hwang</surname>
            ,
            <given-names>C. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Recurrent-neural-network-based multivariable adaptive control for a class of nonlinear dynamic systems with time-varying delay</article-title>
          .
          <source>IEEE transactions on neural networks and learning systems</source>
          ,
          <volume>27</volume>
          (
          <issue>2</issue>
          ),
          <fpage>388</fpage>
          -
          <lpage>401</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiang</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>T. H.</given-names>
          </string-name>
          :
          <article-title>Data‐driven identification and control of nonlinear systems using multiple NARMA‐L2 models</article-title>
          .
          <source>International Journal of Robust and Nonlinear Control</source>
          ,
          <volume>28</volume>
          (
          <issue>12</issue>
          ),
          <fpage>3806</fpage>
          -
          <lpage>3833</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Pukach</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , Il'kiv, V.,
          <string-name>
            <surname>Nytrebych</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vovk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shakhovska</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pukach</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Galerkin Method and Qualitative Approach for the Investigation and Numerical Analysis of Some Dissipative Nonlinear Physical Systems</article-title>
          .
          <source>In 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</source>
          Vol.
          <volume>1</volume>
          ,.
          <fpage>143</fpage>
          -
          <lpage>146</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>