<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IDDM'</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vasyl Sheketa</string-name>
          <email>vasylsheketa@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykola Pasieka</string-name>
          <email>pms.mykola@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nelly Lysenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandra Lysenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nadia Pasieka</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Tech. University of Oil &amp; Gas</institution>
          ,
          <addr-line>Ivano-Frankivsk, 76068</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vasyl Stefanyk Precarpathian National University</institution>
          ,
          <addr-line>Ivano-Frankivsk, 76000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>3</volume>
      <fpage>19</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The main purpose of this work was to consider neural networks and their application, particularly for data management and control in the medical industry. A software product implementing a neural network was developed and studied: it analyzes and processes unstructured and poorly structured medical data from sets of user-defined information flows to support reliable decision-making. On the basis of this scientific task, a program training algorithm was developed that provides comprehensive decision support based on what the network has learned. The developed application is cross-platform, and its graphical interface is implemented using JavaFX. The software product provides an error back-propagation network (BackPropagation) and a directed random search network (Directed Random Search). The designed neural network is trained to recognize the type of distribution (uniform or normal) from the specified characteristics, and the "3 Sigma" rule is used to generate synthetic data. The study shows that the Directed Random Search learning algorithm, although more complex to implement for the search of relevant medical documents, works much faster than classical back propagation. Keywords: neural network, mathematical models, systems architecture, software applications, CEUR-WS</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>1.1. Basic concepts of neural networks</title>
      <p>
        Artificial neural networks – mathematical models, as well as their software and hardware
implementation, built on the principle of biological neural networks - networks of nerve cells of a living
organism. Systems, architecture and principle of operation are based on analogy with the brain of living
beings. The key element of these systems is an artificial neuron as an imitation model of the brain nerve
cell, a biological neuron. This term arose when studying the processes occurring in the brain and
attempting to simulate these processes. The first such attempt was the McCulloch and Pitts neural
networks [
        <xref ref-type="bibr" rid="ref1 ref11 ref12 ref18 ref23 ref4 ref6">1, 4, 6, 11, 12, 18, 23, 27, 34, 37</xref>
        ]. As a consequence, after the development of training
algorithms, the obtained models were used for practical purposes: in forecasting tasks, for pattern
recognition, in control tasks, and others [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Neural networks can be classified by the type of input information:
analog neural networks (use information in the form of real numbers);
binary neural networks (operate with information presented in binary form).
      </p>
      <sec id="sec-2-1">
        <title>Character of learning:</title>
        <p> learning with a teacher - the desired neural network output is known;
 learning without a teacher - the neural network processes only the input of unstructured and poorly
structured medical data and generates the output results itself; such networks are called
self-organizing;
 learning with teacher support - a system of fines and incentives from the environment.</p>
        <p>By the nature of synapse setting, networks are divided into:
 fixed-link networks (the neural network weights are selected immediately from the
conditions of the task, with dW/dt = 0, where W are the network weights);
 networks with dynamic links (for them, the synaptic links are set up in the course of training,
i.e. dW/dt ≠ 0, where W are the weights of the network).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>1.2. Back propagation network</title>
      <p>
        Backpropagation is a method of training a multilayer perceptron. The method was first described in
1974 by A. I. Galushkin and, independently and simultaneously, by Paul J. Werbos. It was further
developed in 1986 by David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams and, independently and
simultaneously, by S. I. Bartsev and V. A. Okhonin (the Krasnoyarsk group). It is an iterative
gradient algorithm used to minimize the operating error of a multilayer perceptron and to obtain
the desired output. The main idea of this method is to propagate error signals from the network outputs
to its inputs, in the direction opposite to the forward propagation of signals in normal operation. Bartsev and
Okhonin proposed a general method (the «duality principle»), applicable to a wider class of systems,
including systems with delay, distributed systems, etc. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. To be able to apply the error back propagation method, the neuron transfer function must be differentiable [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. The method is a
modification of the classical gradient descent method. The error back propagation algorithm is one of the
methods of training multilayer forward-propagation neural networks (multilayer perceptrons) [
        <xref ref-type="bibr" rid="ref2 ref7">2, 7, 25, 30, 36, 40</xref>
        ]. Training by the error back propagation method involves two passes through all layers
of the network: forward and reverse. In a forward pass, the input vector is fed to the input layer of the
neural network and then propagated through the network from layer to layer. As a result, a set of output
signals is generated, which is the actual reaction of the network to this input image. All synaptic weights
of the network are fixed during the direct pass. During the reverse pass, all synaptic weights are adjusted
according to the error correction rule, namely: the actual network output is subtracted from the desired
one, resulting in an error signal. This signal is then propagated through the network in the opposite
direction of the synaptic links. Hence the name: the method of backward error propagation. Synaptic
weights are adjusted to bring the network output as close to the desired one as possible [
        <xref ref-type="bibr" rid="ref10 ref13 ref9">9, 10, 13</xref>
        ]. The
appearance of the reverse propagation algorithm has become a landmark event in the field of neural
network development, as it implements a computationally efficient method of multilayer perceptron
training. It would be wrong to say that the error reverse propagation algorithm offers a really optimal
solution to all potential problems, but it has dispelled the pessimism about multilayer machine learning
[26, 29, 33]. Let us consider the work of the algorithm in more detail. Assume that the neural network
shown in Fig. 1 is to be trained by applying the error back propagation algorithm.
The following notation is used in this figure:
 each layer of the neural network is denoted by its own letter, e.g. the input layer by the letter a
and the output layer by c;
 all neurons of each layer are numbered with Arabic numerals;
 w(a1-b1) is the synaptic weight between neurons a1 and b1.
      </p>
      <p>As an activation function in multilayer perceptrons, a sigmoid activation function, in particular
the logistic one, is usually used:</p>
      <p>OUT = 1 / (1 + exp(−a·NET)) (1)
where a is the slope parameter of the sigmoid function.</p>
      <p>By changing this parameter, one can build functions with different steepness. Let us agree that in
all subsequent considerations exactly the logistic activation function (Fig. 2), given by
formula (1), will be used.</p>
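      <p>As an illustration, formula (1) can be sketched in Java (the language the application itself is written in); the class and method names are illustrative, not taken from the paper's code:</p>

```java
// Logistic (sigmoid) activation from formula (1): OUT = 1 / (1 + exp(-a * net)).
public class Logistic {

    // a is the slope (steepness) parameter of the sigmoid.
    public static double out(double net, double a) {
        return 1.0 / (1.0 + Math.exp(-a * net));
    }

    // Back propagation needs the derivative; for the logistic function it can
    // be expressed through the output itself: a * OUT * (1 - OUT).
    public static double derivative(double net, double a) {
        double y = out(net, a);
        return a * y * (1.0 - y);
    }
}
```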
      <p>The sigmoid narrows the range of change so that the OUT value lies between zero and one.
Multilayer neural networks have greater representational power than single-layer networks only when
nonlinearity is present. The compression function provides the necessary non-linearity. In fact, there are
many functions that could be used. For the error back propagation algorithm, it is only necessary
that the function be differentiable everywhere. The sigmoid meets this requirement. Its additional
advantage is the automatic gain control. For weak signals (i.e. when OUT is close to zero) the
input/output curve has a strong slope which gives a high gain. When the signal becomes larger the gain
drops. Thus large signals are perceived by the network without saturation, while weak signals pass
through the network without excessive attenuation. The purpose of back propagation training is
to adjust the network weights so that a given set of inputs leads to the required set of outputs.
For short, these sets of inputs and outputs are called vectors. During training it is assumed that for each
input vector there is a target vector paired to it, which sets the required output. Together, they are called
a training pair. The network learns on many pairs.</p>
      <p>The algorithm of error back propagation is as follows:
1. Initialize synaptic weights with small random values.
2. Select the next training pair from the training set; submit the input vector to the network input.
3. Calculate the network output.</p>
      <p>4. Calculate the difference between a network output and the required output (target vector of a
training pair).</p>
      <p>5. Adjust the network weights to minimize the error.</p>
      <p>6. Repeat steps 2 to 5 for each vector of the training set until the error over the whole set reaches an
admissible level.</p>
      <p>The operations performed in steps 2 and 3 are similar to those performed by an already
trained network: an input vector is supplied and the output is calculated. Calculations are performed
layer by layer. In Fig. 1, the outputs of the neurons of layer B are calculated first (layer A is the input layer,
so no calculations are performed in it), then they are used as inputs of layer C, and the outputs OUT (CN)
of the neurons of layer C are calculated, which form the output vector OUT of the network. Steps 2
and 3 form the so-called «forward pass», because the signal propagates through the network from input to
output.</p>
      <p>Steps 4 and 5 make up the «backward pass»: here the calculated error signal propagates back through the
network and is used to adjust the weights. Let us consider step 5, the correction of the network weights, in detail. Two cases should be highlighted here.</p>
      <sec id="sec-3-1">
        <title>Case 1. Correction of synaptic weights of the output layer</title>
        <p>
          For example, for the neural network model in Fig. 1, these weights have the following designations:
w(B1-C1) and w(B2-C1). Let us define that the index p denotes the neuron from which the synaptic
weight leaves, and q the neuron it enters [
          <xref ref-type="bibr" rid="ref19 ref8">8, 19</xref>
          ].
        </p>
        <p>Let us introduce the value Δ, equal to the difference between the required and the actual
outputs multiplied by the derivative of the logistic activation function (formula 1):</p>
      </sec>
      <sec id="sec-3-2">
        <title>Then, the weights of the output layer after the correction will be equal:</title>
        <p>Δq = OUTq(1 – OUTq)(T – OUTq) (2)
w(p−q)(i + 1) = w(p−q)(i) + nΔqOUTp (3)
where i is the number of the current learning iteration; w(p−q) is the value of the synaptic weight
connecting neuron p with neuron q; n is the «learning rate» coefficient, which allows controlling the
average value of the weight change; OUTp is the output of neuron p.</p>
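        <p>A minimal Java sketch of formulas (2) and (3), assuming scalar values for a single output neuron (names are illustrative, not the paper's code):</p>

```java
// Output-layer correction: formula (2) computes the local error delta,
// formula (3) applies the update w(i+1) = w(i) + n * delta * OUT_p.
public class OutputLayerUpdate {

    // delta = OUT_q * (1 - OUT_q) * (T - OUT_q), per formula (2)
    public static double delta(double outQ, double target) {
        return outQ * (1.0 - outQ) * (target - outQ);
    }

    // per formula (3); n is the learning-rate coefficient
    public static double updatedWeight(double w, double n, double delta, double outP) {
        return w + n * delta * outP;
    }
}
```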
        <sec id="sec-3-2-1">
          <title>Here is an example of the calculation for synaptic weight w(b1−c1):</title>
          <p>Δc1 = OUTc1(1 – OUTc1)(T – OUTc1) (4)
w(b1−c1)(i + 1) = w(b1−c1)(i) + nΔc1OUTb1 (5)</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>Case 2. Correction of synaptic weights of the hidden layer. For the neural network model in Fig. 1, these are the weights between layers A and B. Let us define that the index p denotes the neuron from which the synaptic weight leaves, and q the neuron it enters (note the appearance of a new variable k):</title>
      </sec>
      <sec id="sec-3-4">
        <title>Then, the weights of the hidden layer after the correction will be equal:</title>
        <sec id="sec-3-4-1">
          <title>Here is an example of a calculation for synaptic weight   1− 1:</title>
          <p>− (i + 1) =   − (i) + ⁡n∆ OUT ⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡(7)</p>
          <p>∆ 1⁡= ⁡ OUT 1(1– OUT 1)⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡(8)
  1− 1(i + 1) =   1− 1(i) + ⁡n∆ 1OUT 1⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡⁡(9)</p>
          <p>For each neuron in the hidden layer, Δ must be calculated and all weights associated with this layer
must be adjusted. This process is repeated layer by layer until all weights have been corrected.</p>
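          <p>The layer-by-layer correction described above can be sketched as follows; this is a minimal Java illustration of formulas (6) and (7) for one hidden neuron, not the paper's actual code:</p>

```java
// Hidden-layer correction: the delta of a hidden neuron q sums the deltas of
// the next layer weighted by the outgoing weights w(q-k), per formula (6);
// formula (7) then updates the incoming weight exactly as in the output layer.
public class HiddenLayerUpdate {

    public static double delta(double outQ, double[] nextDeltas, double[] outgoingWeights) {
        double sum = 0.0;
        for (int k = 0; k < nextDeltas.length; k++) {
            sum += nextDeltas[k] * outgoingWeights[k];
        }
        return outQ * (1.0 - outQ) * sum;
    }

    public static double updatedWeight(double w, double n, double delta, double outP) {
        return w + n * delta * outP;
    }
}
```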
          <p>Despite the many successful applications of back propagation, it is not a universal solution. What
causes most trouble is the indefinitely long learning process. In complex tasks, it may take days or even
weeks to train a network, and it may not learn at all. The reason may be one of the following.</p>
          <p>Network paralysis. In the process of training, the weights may become very large
as a result of correction. This can cause all or most neurons to operate at very large OUT values, in a
region where the derivative of the compression function is very small. Since the error sent back in the learning
process is proportional to this derivative, the learning process can almost freeze. In theory, this problem
is poorly understood. It is usually avoided by reducing the step size η, but this increases the
learning time. Different heuristics have been used to prevent paralysis or to recover from it, but so far
they can only be regarded as experimental.</p>
          <p>Local minima. Back propagation uses a kind of gradient descent, i.e. a descent down the
error surface with continuous adjustment of the weights in the direction of the minimum. The error surface of
a complex network is heavily rugged and consists of hills, valleys and folds in a space of high
dimensionality. A network can fall into a local minimum (a shallow valley) when there is a much deeper
minimum nearby. At the point of a local minimum, all directions lead upwards, and the network is
unable to get out of it. The main difficulty in training neural networks lies precisely in the methods of
escaping local minima: each time a local minimum is left, the next local minimum is again searched for
by the same error back propagation method until it is no longer possible to find an exit.</p>
          <p>Step size. A careful analysis of the convergence proofs shows that the weight corrections
are assumed to be infinitesimal. This is clearly not feasible in practice, as it leads to
infinite learning time. The step size should therefore be finite. If the step size is fixed and very
small, the convergence is too slow; if it is fixed and too large, paralysis or constant
instability may occur. It is effective to increase the step as long as the evaluation keeps improving in
the direction of the anti-gradient and to decrease it if no such improvement occurs. P. D. Wasserman described an
adaptive step-selection algorithm that automatically corrects the step size during training. The book by
A. N. Gorban offers a branched technology of learning optimization.</p>
          <p>
            It should also be noted that network overtraining (overfitting) is rather a result of erroneous
design of its topology [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]. If there are too many neurons, the network's ability to generalize information
is lost. The whole set of images provided for training will be memorized by the network, but any other
images, even very similar ones, may be classified incorrectly [
            <xref ref-type="bibr" rid="ref14 ref15 ref3">3, 14, 15</xref>
            ].
          </p>
          <p>1.2.1. Extracting knowledge from a dataset to determine the distribution type (Normal,
uniform)</p>
          <p>
            The results of any measurement or observation, presented as numbers, can be considered random
values governed by probabilistic laws. "Probabilistic" here means that it is fundamentally impossible
to obtain the actual value of the parameter of interest: too many factors affect it, the process
changes over time, and so on. We can only get closer to the actual value and estimate the interval
into which it falls. All conclusions made when working with random variables are therefore not
deterministic but probabilistic. An arbitrary random value is most fully described by its distribution
function, which determines the probability that, as a result of a single experiment, the value will be
less than or equal to a given one. If a random variable is continuous, the derivative of its distribution
function is the probability density function; calculating it explicitly for every natural object is
impossible because of the enormous variety of such objects. It is known from the long experience of
applying mathematical statistics that the absolute majority of random phenomena in nature are described
with high accuracy by functions of only a few types [31]. They are well known; they are called the basic
laws of distribution of random variables and are described in detail. Among them there is one, the normal
law of distribution: it is one of the most common models, and the specific features of the function that
describes it make this law the main one in applied statistical methods.</p>
          <p>The normal distribution density graph is characterized by symmetry. This means that
deviations from the most probable value of a random value are equally likely to be greater or lesser, a
property that simplifies calculations. Most of the described methods assume that the random value under
study is distributed according to the normal law. Therefore, at the beginning of any statistical processing
of unstructured and poorly structured medical data, one should at least approximately determine the
distribution law and estimate the degree of its deviation from the normal one [
            <xref ref-type="bibr" rid="ref20">20, 38, 39</xref>
            ]. There are many methods that
solve this problem. The simplest is based on the visual evaluation of histograms and of the asymmetry
and excess coefficients. The histogram is a simplified model of the density curve of the random value's
distribution. By constructing it and comparing it with reference plots of the basic laws, we can roughly
judge the degree of similarity between them. To build a histogram, the range of the random value is
divided into a certain number of bins (grouping intervals), and the number of values falling into each
bin is counted. Then the bin boundaries are placed on the abscissa axis and the frequencies corresponding
to them on the ordinate axis. There are no absolutely strict methods of determining the number of bins;
mostly 8-12 bins are used. Histograms of real random variables never match the reference exactly. By the
shape of the built histogram we can judge the degree of deviation from the normal distribution. If the
histogram is symmetric with respect to the vertical axis passing through its apex, we can speak of a
possible approximation to the normal law. The mode of a discrete random value is its most probable value,
while for a continuous one it is the value where the distribution density is maximal. If the curve of the
distribution law has more than one maximum, the distribution is called bimodal or polymodal, respectively.
The median of a random value is the value relative to which the random value is equally likely to be
greater or less.
          </p>
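          <p>The histogram construction described above can be sketched in Java; the bin count and names are illustrative:</p>

```java
import java.util.Arrays;

// Builds a histogram: the value range is divided into a fixed number of
// bins (grouping intervals) and the samples falling into each are counted.
public class Histogram {

    public static int[] build(double[] data, int bins) {
        double min = Arrays.stream(data).min().orElse(0.0);
        double max = Arrays.stream(data).max().orElse(1.0);
        double width = (max - min) / bins;
        int[] counts = new int[bins];
        for (double v : data) {
            int bin = width == 0.0 ? 0 : (int) ((v - min) / width);
            if (bin >= bins) bin = bins - 1; // the maximum value falls into the last bin
            counts[bin]++;
        }
        return counts;
    }
}
```

          <p>Comparing such a histogram with reference plots of the basic distribution laws gives the rough visual judgment described in the text; mostly 8-12 bins are used.</p>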
          <p>1.2.2. Rule "3 sigma"</p>
          <p>A normal distribution, also called a Gaussian distribution, is a probability distribution defined
by the probability density function that coincides with the Gauss function:</p>
          <p>f(x) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²)) (10)
where µ is the mathematical expectation and σ² is the variance of the random variable.</p>
          <p>The central limit theorem states that the normal distribution arises when a given random
variable is the sum of a large number of independent random variables, each of which plays an
insignificant role in the formation of the whole sum. For example, the deviation of projectile
hits from the target over a large number of shots is characterized by the normal distribution. The
standard normal distribution is the normal distribution with mathematical expectation
µ = 0 and standard deviation σ = 1.</p>
          <p>The rule of 3 sigma (3σ): almost all values of a normally distributed random variable
lie in the interval [µ − 3σ; µ + 3σ] (Fig. 3).</p>
          <p>To be more precise: with at least 99.7% reliability, the value of a normally distributed random variable
lies within the specified interval (provided the value of σ is precisely known and not obtained as a result
of sample processing). If the true value of σ is unknown, then s should be used instead of σ. Thus,
the rule of 3σ turns into the rule of 3s.</p>
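          <p>A hedged sketch of how the 3-sigma rule can drive synthetic data generation, as the application does for testing; the clipping approach and the names are assumptions, not the paper's exact code:</p>

```java
import java.util.Random;

// Generates normally distributed synthetic values clipped to the 3-sigma
// interval [mu - 3*sigma, mu + 3*sigma], which holds about 99.7% of the mass.
public class ThreeSigmaGenerator {

    public static double[] generate(int n, double mu, double sigma, long seed) {
        Random rnd = new Random(seed);
        double[] values = new double[n];
        for (int i = 0; i < n; i++) {
            double v = mu + sigma * rnd.nextGaussian();
            // clip rare outliers back into the 3-sigma interval
            values[i] = Math.max(mu - 3.0 * sigma, Math.min(mu + 3.0 * sigma, v));
        }
        return values;
    }
}
```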
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>2. Practical Implementation of the</title>
    </sec>
    <sec id="sec-5">
      <title>Method</title>
      <p>of Searching for</p>
    </sec>
    <sec id="sec-6">
      <title>Weakly</title>
    </sec>
    <sec id="sec-7">
      <title>Structured Information 2.1.</title>
    </sec>
    <sec id="sec-8">
      <title>UML class diagrams</title>
      <p>
        The GUI package (classes that implement the interface) includes two classes: MainController and
Main. MainController is the class responsible for user interaction; it contains event handlers and the main
interface elements (buttons, tables, combo boxes), as well as a reference to the Main class. The Main class is
responsible for initializing the main window of the program and loading the interface structure from an
FXML file [
        <xref ref-type="bibr" rid="ref16">16, 32</xref>
        ].
      </p>
      <p>The neuralNets package contains the NeuralNet interface and the DirectedRandomSearchNet and
BackPropagationNeuralNet classes (Fig. 5). The NeuralNet interface is used (in line with SOLID
principles) to provide flexibility of the program architecture. It contains signatures of the following
methods: train - training of the neural network; solve - calculation of results by an already trained neural
network; loadWeights - loading of weights from a file; saveWeights - saving of weights to a file; getWeights
- returning the weights; and others.</p>
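      <p>From that description, the NeuralNet interface can be sketched as follows; the exact parameter types are not given in the text, so these signatures are plausible assumptions rather than the project's actual code:</p>

```java
// A sketch of the NeuralNet interface implemented by both
// BackPropagationNeuralNet and DirectedRandomSearchNet.
public interface NeuralNet {

    // train the network on input/target pairs with the given parameters
    void train(double[][] inputs, double[][] targets,
               double learningRate, int maxIterations, double maxError);

    // calculate results with an already trained network
    double[] solve(double[] input);

    // persistence of the weights
    void loadWeights(String path);
    void saveWeights(String path);
    double[][] getWeights();
}
```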
    </sec>
    <sec id="sec-9">
      <title>2.2. Description of neural network structure</title>
      <p>
        To accomplish this task, a neural network (multilayer perceptron) with 3 layers was designed. The
input layer consists of 4 neurons (due to the size of the input image), the hidden layer consists of 10
neurons (the size is chosen empirically) and the output layer contains one neuron (Fig. 6) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <sec id="sec-9-1">
        <title>For the hidden layer, the logistic unipolar function - the sigmoid (11) - is used.</title>
        <p>f(x) = 1 / (1 + e^(−x)) (11)</p>
        <p>The outputs of the bias neurons are set to one. Initial values of the weights are small random numbers within
[-0.3; 0.3].</p>
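        <p>The initialization just described can be sketched as follows (sizes from the text: 4 input, 10 hidden and 1 output neuron; the method names are illustrative):</p>

```java
import java.util.Random;

// Creates a weight matrix for one layer with values drawn
// uniformly from the interval [-0.3, 0.3], as described above.
public class WeightInit {

    public static double[][] randomLayer(int inputs, int neurons, Random rnd) {
        double[][] w = new double[neurons][inputs];
        for (int q = 0; q < neurons; q++) {
            for (int p = 0; p < inputs; p++) {
                w[q][p] = -0.3 + 0.6 * rnd.nextDouble();
            }
        }
        return w;
    }
}
```

        <p>For the 4-10-1 perceptron this gives a 10x4 hidden-layer matrix and a 1x10 output-layer matrix.</p>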
      </sec>
    </sec>
    <sec id="sec-10">
      <title>2.3. GUI Description</title>
      <p>The application is focused on cross-platform use, so it was developed by means of Java SE 14 and JavaFX
(note: to run the application, JRE 10.0.2 must be installed). To get acquainted with the functionality of
the application, consider the main window of the program (Fig. 7).
1 – Menu file
2 – Selection of neural network type
3 – Button to start the training of the neural network
4 – Training data table
5 – Table of forecasted data
6 – Button for generation of synthetic data
7 – The button to start the process of classifying objects by the neural network
8 – Text field of execution results
The file menu includes the following sub-items:
1 - (New training data) - clearing the "Training data" table;
2 - (Download training data) - uploading data to the "Training data" table from the CSV file;
3 - (Save training data) - save data from "Training data" table to CSV-file;
4 - (New data) - clearing the "Real data" table;
5 - (Download data) - upload data to the table "Real data" from the CSV-file;
6 - (Save data) - save data from table "Real data" to CSV-file;
7 - (New weights) - clear weights for neural network;
8 - (Download Weights) - download weights from the file;
9 - (Save Weights) - save weights to a file;
10 - (Clear Results) - clear the text field results;
11 - (Exit) - finish the program;</p>
      <p>To increase the functionality of the application, dynamic tables were implemented: you can add
new records to a table, edit and delete existing ones, and upload and save unstructured and
poorly structured medical data to a file [28, 35]. To load data into a table, go to the menu "File" -&gt;
("Download training data" or "Download data") and select the file you want to upload (only CSV files
are supported). Data and weights are saved in the same way: "File" -&gt; ("Save Training Data", "Save
Data" or "Save Weights" respectively).</p>
      <p>To train a neural network, first you need to select the algorithm by which it will learn
(Fig. 9).</p>
      <p>After that press the "Train" button and set the neural network training parameters, such as: training
speed, maximum number of iterations and maximum permissible error (Fig. 10).</p>
      <p>After setting the training parameters for the neural network, you need to press the "Yes" key to start
training. When the neural network finishes training, the user will see the graph of error change (Fig. 11,
Fig. 12), as well as information about training results (Fig. 13) and final weights, which can be saved
to a file.</p>
      <p>According to the error graphs (Fig. 11, Fig. 12) and the number of iterations, it can be concluded
that the Directed Random Search learning algorithm, although more complex to implement, works
much faster than classic Back Propagation. With the same error and learning rate, training the
network with Directed Random Search completed on the twentieth iteration, while Back
Propagation required nine hundred and forty.</p>
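      <p>For intuition, Directed Random Search can be sketched as follows; this is an illustrative reconstruction of the general technique (perturb the weights, keep only improving moves, bias the next perturbation toward the last successful direction), not the paper's exact implementation:</p>

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Minimizes an error function over a weight vector by directed random search.
public class DirectedRandomSearchSketch {

    public static double[] minimize(ToDoubleFunction<double[]> error, double[] w,
                                    int iterations, double step, long seed) {
        Random rnd = new Random(seed);
        double best = error.applyAsDouble(w);
        double[] direction = new double[w.length]; // remembered successful direction
        for (int it = 0; it < iterations; it++) {
            double[] candidate = w.clone();
            for (int i = 0; i < w.length; i++) {
                // random perturbation plus the directed component
                candidate[i] += step * rnd.nextGaussian() + direction[i];
            }
            double e = error.applyAsDouble(candidate);
            if (e < best) { // keep only moves that reduce the error
                for (int i = 0; i < w.length; i++) {
                    direction[i] = 0.5 * (candidate[i] - w[i]);
                }
                w = candidate;
                best = e;
            }
        }
        return w;
    }
}
```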
      <p>Synthetic analytical processing of unstructured and poorly structured medical data generated by the
"3 sigma" rule is used to test the neural network operation. To generate artificial medical data, press the
"Generate" key, specify the number of columns to be filled with artificial medical data and press the
"Yes" key (Fig. 14).</p>
      <p>After the generation process is complete, the table will be filled with the corresponding number of
columns with artificial data (Fig. 15, Fig. 16).</p>
      <p>The main objective of this work was to review the problem of neural networks and their applications,
especially for management and control. A software product implementing the neural network was
developed and trained on user-defined medical data. After training, the program provides support
for decision-making based on what it has learned. The program is cross-platform, implemented by
means of Java SE 14, with a graphical interface built in JavaFX. The software
product implements both an error back-propagation network (BackPropagation) and
a directed random search network (Directed Random Search). The neural network is trained to
recognize the type of distribution (uniform or normal) from the specified characteristics. The "3
Sigma" rule is used for the generation of synthetic unstructured and poorly structured medical data.
According to the research, we can conclude that the Directed Random Search learning algorithm,
although more difficult to implement, works much faster than classic Back Propagation.
With the same error and learning rate, training the network with Directed Random Search can be
several times faster than with Back Propagation.</p>
      <p>
[25] S. Ying and Q. Jianguo, "A Method of Arc Priority Determination Based on Back-Propagation
Neural Network," 2017 4th International Conference on Information Science and Control
Engineering (ICISCE), Changsha, 2017, pp. 38-41, doi: 10.1109/ICISCE.2017.18.
[26] Seungwan Seo, Czangyeob Kim, Haedong Kim, Kyounghyun Mo, Pilsung Kang, "Comparative
Study of Deep Learning-Based Sentiment Classification", Access IEEE, vol. 8, pp. 6861-6875,
2020.
[27] T. Dong and T. Huang, "Neural Cryptography Based on Complex-Valued Neural Network," in
2019 IEEE Transactions on Neural Networks and Learning Systems, doi:
10.1109/TNNLS.2019.2955165.
[28] Tao Dong, Qinqin Zhang, "Dynamics of a Hybrid Circuit System With Lossless Transmission Line", Access IEEE, vol. 8, pp. 92969-92976, 2020.</p>
      <p>
[29] Tianyu Gao, Jin Yang, Wenjun Peng, Luyu Jiang, Yihao Sun, Fangchuan Li, "A Content-Based
Method for Sybil Detection in Online Social Networks via Deep Learning", Access IEEE, vol. 8,
pp. 38753-38766, 2020.
[30] Ting He, Ying Liu, Chengyi Xu, Xiaolin Zhou, Zhongkang Hu, Jianan Fan, "A Fully
Convolutional Neural Network for Wood Defect Location and Identification", IEEE Access, vol.
7, pp. 123453-123462, 2019.
[31] Tkachenko, R., Izonin, I., Kryvinska, N., Dronyuk, I., &amp; Zub, K. (2020). An approach towards
increasing prediction accuracy for the recovery of missing IoT data based on the GRNN-SGTM
ensemble. Sensors (Switzerland), 20(9). doi:10.3390/s20092625
[32] Tkachenko, R., Izonin, I., Vitynskyi, P., Lotoshynska, N., &amp; Pavlyuk, O. (2018). Development of
the non-iterative supervised learning predictor based on the Ito decomposition and SGTM
neural-like structure for managing medical insurance costs. Data, 3(4). doi:10.3390/data3040046
[33] V. Sheketa, L. Poteriailo, Y. Romanyshyn, V. Pikh, M. Pasyeka and M. Chesanovskyy, "Case-Based
Notations for Technological Problems Solving in the Knowledge-Based Environment,"
2019 IEEE 14th International Conference on Computer Sciences and Information Technologies
(CSIT), Lviv, Ukraine, 2019, pp. 10-14, doi: 10.1109/STC-CSIT.2019.8929784.
[34] W. Huang, S. Oh and W. Pedrycz, "Hybrid Fuzzy Wavelet Neural Networks Architecture Based
on Polynomial Neural Networks and Fuzzy Set/Relation Inference-Based Wavelet Neurons," in
IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 8, pp. 3452-3462, Aug.
2018, doi: 10.1109/TNNLS.2017.2729589.
[35] Xiangyu Bu, Tao Dong, "Differential Privacy Optimal Consensus for Multiagent System by Using
Functional Perturbation", Information Cybernetics and Computational Social Systems (ICCSS)
2019 6th International Conference on, pp. 157-162, 2019.
[36] Y. Huang, L. F. Capretz and D. Ho, "Neural Network Models for Stock Selection Based on
Fundamental Analysis," 2019 IEEE Canadian Conference of Electrical and Computer Engineering
(CCECE), Edmonton, AB, Canada, 2019, pp. 1-4, doi: 10.1109/CCECE.2019.8861550.
[37] Y. Lin, C. Chou, S. Yang, H. Lai, Y. Lo and Y. Chen, "Neural Decoding Forelimb Trajectory
Using Evolutionary Neural Networks with Feedback-Error-Learning Schemes," 2018 40th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),
Honolulu, HI, 2018, pp. 2539-2542, doi: 10.1109/EMBC.2018.8512775.
[38] Yan Cheng, Leibo Yao, Guoxiong Xiang, Guanghe Zhang, Tianwei Tang, Linhui Zhong, "Text
Sentiment Orientation Analysis Based on Multi-Channel CNN and Bidirectional GRU With
Attention Mechanism", IEEE Access, vol. 8, pp. 134964-134975, 2020.
[39] Z. Mohammadi, A. Klug, C. Liu and T. C. Lei, "Data reduction for real-time enhanced growing
neural gas spike sorting with multiple recording channels," 2019 9th International IEEE/EMBS
Conference on Neural Engineering (NER), San Francisco, CA, USA, 2019, pp. 1084-1087, doi:
10.1109/NER.2019.8717062.
[40] Z. Ying, Z. Xing, C. Jian and S. Hui, "Processor Free Time Forecasting Based on Convolutional
Neural Network," 2018 37th Chinese Control Conference (CCC), Wuhan, 2018, pp. 9331-9336,
doi: 10.23919/ChiCC.2018.8483132.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lisovskaya</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Skripnik</surname>
          </string-name>
          ,
          <article-title>"Processing of Neural System Information with the Use of Artificial Spiking Neural Networks,"</article-title>
          <source>2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus)</source>
          , Saint Petersburg and Moscow, Russia,
          <year>2019</year>
          , pp.
          <fpage>1183</fpage>
          -
          <lpage>1186</lpage>
          , doi: 10.1109/EIConRus.2019.8656651.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Sun</surname>
          </string-name>
          and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>"A Deep Fully Convolution Neural Network for Semantic Segmentation Based on Adaptive Feature Fusion,"</article-title>
          <source>2018 5th International Conference on Information Science and Control Engineering (ICISCE)</source>
          , Zhengzhou,
          <year>2018</year>
          , pp.
          <fpage>16</fpage>
          -
          <lpage>20</lpage>
          , doi: 10.1109/ICISCE.2018.00013.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Andrunyk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vasevych</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chyrun</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chernovol</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Antonyuk</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gozhyj</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gozhyj</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalinina</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Korobchynskyi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Development of information system for aggregation and ranking of news taking into account the user needs</article-title>
          .
          <source>Paper presented at the CEUR Workshop Proceedings</source>
          ,
          <volume>2604</volume>
          ,
          <fpage>1127</fpage>
          -
          <lpage>1171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Isaac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kinjo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nakazono</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Oshiro</surname>
          </string-name>
          ,
          <article-title>"Suitable Activity Function of Neural Networks for Data Enlargement,"</article-title>
          <source>2018 18th International Conference on Control, Automation and Systems (ICCAS)</source>
          , Daegwallyeong,
          <year>2018</year>
          , pp.
          <fpage>392</fpage>
          -
          <lpage>397</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ageyev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohsin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Radivilova</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <article-title>"Infocommunication Networks Design with Self-Similar Traffic,"</article-title>
          <source>IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM)</source>
          , Polyana, Ukraine,
          <year>2019</year>
          , pp.
          <fpage>24</fpage>
          -
          <lpage>27</lpage>
          , doi: 10.1109/CADSM.2019.8779314.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Lotfi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ajallooeian</surname>
          </string-name>
          and
          <string-name>
            <given-names>H. D.</given-names>
            <surname>Taghirad</surname>
          </string-name>
          ,
          <article-title>"Robust Object Tracking Based on Recurrent Neural Networks,"</article-title>
          <source>2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM)</source>
          , Tehran, Iran,
          <year>2018</year>
          , pp.
          <fpage>507</fpage>
          -
          <lpage>511</lpage>
          , doi: 10.1109/ICRoM.2018.8657608.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Qiao</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <article-title>"Hierarchical Attention-based Fuzzy Neural Network for Subject Classification of Power Customer Service Work Orders,"</article-title>
          <source>2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)</source>
          , New Orleans, LA, USA,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/FUZZ-IEEE.2019.8858852.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Hengliang</given-names>
            <surname>Tang</surname>
          </string-name>
          , Yuan Mi, Fei Xue,
          <string-name>
            <given-names>Yang</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <article-title>"An Integration Model Based on Graph Convolutional Network for Text Classification"</article-title>
          ,
          <source>IEEE Access</source>
          , vol.
          <volume>8</volume>
          , pp.
          <fpage>148865</fpage>
          -
          <lpage>148876</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>I.</given-names>
            <surname>Dronyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Fedevych</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Kryvinska</surname>
          </string-name>
          ,
          <article-title>"High Quality Video Traffic Ateb-Forecasting and Fuzzy Logic Management,"</article-title>
          <source>2019 7th International Conference on Future Internet of Things and Cloud (FiCloud)</source>
          , Istanbul, Turkey,
          <year>2019</year>
          , pp.
          <fpage>308</fpage>
          -
          <lpage>311</lpage>
          , doi: 10.1109/FiCloud.2019.00051.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>I.</given-names>
            <surname>Dronyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Klishch</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Chupakhina</surname>
          </string-name>
          ,
          <article-title>"Developing Fuzzy Traffic Management for Telecommunication Network Services,"</article-title>
          <source>2019 IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM)</source>
          , Polyana, Ukraine,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          , doi: 10.1109/CADSM.2019.8779323.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Heck</surname>
          </string-name>
          and
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Salem</surname>
          </string-name>
          ,
          <article-title>"Simplified minimal gated unit variations for recurrent neural networks,"</article-title>
          <source>2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS)</source>
          , Boston, MA,
          <year>2017</year>
          , pp.
          <fpage>1593</fpage>
          -
          <lpage>1596</lpage>
          , doi: 10.1109/MWSCAS.2017.8053242.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Vulinović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ivković</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Petrović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Skračić</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Pale</surname>
          </string-name>
          ,
          <article-title>"Neural Networks for File Fragment Classification,"</article-title>
          <source>2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)</source>
          , Opatija, Croatia,
          <year>2019</year>
          , pp.
          <fpage>1194</fpage>
          -
          <lpage>1198</lpage>
          , doi: 10.23919/MIPRO.2019.8756878.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Benyamini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Nason</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Chestek</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Zacksenhouse</surname>
          </string-name>
          ,
          <article-title>"Neural Correlates of error processing during grasping with invasive brain-machine interfaces,"</article-title>
          <source>2019 9th International IEEE/EMBS Conference on Neural Engineering (NER)</source>
          , San Francisco, CA, USA,
          <year>2019</year>
          , pp.
          <fpage>215</fpage>
          -
          <lpage>218</lpage>
          , doi: 10.1109/NER.2019.8717020.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pasyeka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sheketa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Pasieka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chupakhina</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Dronyuk</surname>
          </string-name>
          ,
          <article-title>"System Analysis of Caching Requests on Network Computing Nodes,"</article-title>
          <source>2019 3rd International Conference on Advanced Information and Communications Technologies (AICT)</source>
          , Lviv, Ukraine,
          <year>2019</year>
          , pp.
          <fpage>216</fpage>
          -
          <lpage>222</lpage>
          , doi: 10.1109/AIACT.2019.8847909.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Medykovskyy</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasyeka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasyeka</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Turchyn</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Scientific research of life cycle performance of information technology</article-title>
          .
          <source>Paper presented at the Proceedings of the 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies</source>
          , CSIT
          <year>2017</year>
          , 1,
          <fpage>425</fpage>
          -
          <lpage>428</lpage>
          . doi: 10.1109/STC-CSIT.2017.809882
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Mishchuk</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tkachenko</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>One-step prediction of air pollution control parameters using neural-like structure based on geometric data transformations</article-title>
          .
          <source>Paper presented at the 2019 11th International Scientific and Practical Conference on Electronics and Information Technologies, ELIT 2019 - Proceedings</source>
          ,
          <fpage>192</fpage>
          -
          <lpage>196</lpage>
          . doi: 10.1109/ELIT.2019.8892333
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Nazarkevych</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lotoshynska</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brytkovskyi</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dmytruk</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dordiak</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pikh</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Biometric identification system with ateb-gabor filtering</article-title>
          .
          <source>Paper presented at the 2019 11th International Scientific and Practical Conference on Electronics and Information Technologies, ELIT 2019 - Proceedings</source>
          ,
          <fpage>15</fpage>
          -
          <lpage>18</lpage>
          . doi: 10.1109/ELIT.2019.8892282
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Ö. F.</given-names>
            <surname>Ertuğrul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tekin</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kaya</surname>
          </string-name>
          ,
          <article-title>"Randomized feed-forward artificial neural networks in estimating short-term power load of a small house: A case study,"</article-title>
          <source>2017 International Artificial Intelligence and Data Processing Symposium (IDAP)</source>
          , Malatya,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/IDAP.2017.8090344.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Pasieka</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheketa</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romanyshyn</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasieka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domska</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Struk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Models, methods and algorithms of web system architecture optimization</article-title>
          .
          <source>Paper presented at the 2019 IEEE International Scientific-Practical Conference: Problems of Infocommunications Science and Technology, PIC S and T 2019 - Proceedings</source>
          ,
          <fpage>147</fpage>
          -
          <lpage>152</lpage>
          . doi: 10.1109/PICST47496.2019.9061539
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Pasyeka</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheketa</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasieka</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chupakhina</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dronyuk</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>System analysis of caching requests on network computing nodes</article-title>
          .
          <source>Paper presented at the 2019 3rd International Conference on Advanced Information and Communications Technologies, AICT 2019 - Proceedings</source>
          ,
          <fpage>216</fpage>
          -
          <lpage>222</lpage>
          . doi: 10.1109/AIACT.2019.8847909
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Iwaihara</surname>
          </string-name>
          ,
          <article-title>"Deep Neural Architectures for Joint Named Entity Recognition and Disambiguation,"</article-title>
          <source>2019 IEEE International Conference on Big Data and Smart Computing (BigComp)</source>
          , Kyoto, Japan,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          , doi: 10.1109/BIGCOMP.2019.8679233.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>R.</given-names>
            <surname>Jozefowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zaremba</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <article-title>"An empirical exploration of recurrent network architectures"</article-title>
          ,
          <source>Proc. Int'l Conf. on Machine Learning</source>
          , pp.
          <fpage>2342</fpage>
          -
          <lpage>2350</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>R.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>"Convolutional Recurrent Neural Networks for Text Classification,"</article-title>
          <source>2019 International Joint Conference on Neural Networks (IJCNN)</source>
          , Budapest, Hungary,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/IJCNN.2019.8852406.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Romanyshyn</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheketa</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pikh</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poteriailo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalambet</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Pasieka</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Social-communication web technologies in the higher education as means of knowledge transfer</article-title>
          .
          <source>Paper presented at the IEEE 2019 14th International Scientific and Technical Conference on Computer Sciences and Information Technologies</source>
          , CSIT 2019 - Proceedings,
          <volume>3</volume>
          ,
          <fpage>35</fpage>
          -
          <lpage>38</lpage>
          . doi: 10.1109/STC-CSIT.2019.8929753
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>