<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Improvement of decision support subsystems of information-analytical systems on the basis of neural network ensembles</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mikryukov Andrey</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>PhD, associate professor, Plekhanov University</institution>
          ,
          <addr-line>89166019804</addr-line>
        </aff>
      </contrib-group>
      <fpage>371</fpage>
      <lpage>375</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>At present, neural network approaches, in particular ensembles of neural
networks, which are an example of collective problem solving, are widely used
for modeling complex sociotechnical systems that are poorly structured and
difficult to formalize.</p>
      <p>The quality of the solution of a specific task (data mining, forecasting, pattern
recognition, classification, etc.) can be significantly improved by using neural network
ensembles, in which a finite set of neural networks is formed and trained and the
results of the individual networks are combined into the overall solution. The
individual decisions are coordinated in such a way that the final joint decision is the
best one.</p>
      <p>
        One of the fundamental problems in improving the functioning of an ensemble of
neural networks in the decision support subsystems of information-analytical
systems (DSS IAS), in terms of increasing their accuracy and reliability, is the
generation of ensemble diversity (the differences among the individual models) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Aggregating
similar models in an ensemble cannot lead to a significant improvement in the
quality of the solution.
      </p>
      <p>
        To resolve this contradiction, it is proposed to use approaches based on a
collective of neural networks built from a new class of artificial neurons, the
so-called selective neurons, which differ from classical neurons by a more
efficient method of processing input information that approximates the biological neuron [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Due to the peculiarities of their construction, training and functioning,
selective neurons provide, on the one hand, more efficient neural networks for
solving the tasks posed and, on the other hand, greater diversity of the neural
network ensemble.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Neural network ensembles based on selective neural networks</title>
      <p>
        The neural network models used in DSS IAS have a number of features and
advantages. They are adaptive self-learning systems that are difficult, and often
simply impossible, to simulate dynamically, because they typically contain a
significant number of hidden, uncontrolled, incomplete and noisy parameters and
mutual connections between them. Their use makes it possible to solve problems that
are difficult or impossible to solve by traditional methods due to the absence of a
formalized mathematical description of the functioning of the object under investigation.
Neural network models have associative memory, and in the course of their work they
accumulate and generalize information, so their effectiveness increases with
time. Their use is based on training the neural network to extract information from
experimental data, which ensures the objectivity of the results and increases their
validity and reliability [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>When constructing an ensemble of neural networks, a finite set of previously
trained neural networks is used simultaneously, and their output signals are
combined into a joint estimate whose quality exceeds that of the results obtained by
the individual networks entering the ensemble.</p>
      <p>
        The ensemble H(x̄) of the models hi(x̄) (i = 1, 2, ..., N) is a composition of the
algorithmic operators hi: R^d → R and a corrective operation F: R^N → R, which puts the
final estimate H(x̄) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in correspondence with
the set of estimates h1(x̄), h2(x̄), ..., hN(x̄):
      </p>
      <p>H(x̄) = F(h1(x̄), h2(x̄), ..., hN(x̄)). (1)</p>
      <p>
        As is known, the fundamental task in the construction of ensembles is the
generation of ensemble diversity (the differences among the individual models) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
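      <p>The composition (1) can be illustrated with a short sketch. The individual
models hi and the corrective operation F are left abstract in the text, so the code
below assumes, purely for illustration, simple linear models and takes F to be
averaging; all names are hypothetical.</p>

```python
import numpy as np

# Sketch of H(x) = F(h_1(x), ..., h_N(x)) from (1).
# Assumption: each h_i is a fixed random linear model R^d -> R and F is averaging.
rng = np.random.default_rng(0)
d, N = 3, 5
W = rng.normal(size=(N, d))  # one weight vector per model (hypothetical)

def h(i, x):
    """Estimate h_i(x) of the i-th individual model."""
    return float(W[i] @ x)

def H(x):
    """Ensemble estimate H(x) = F(h_1(x), ..., h_N(x)) with F = mean."""
    return sum(h(i, x) for i in range(N)) / N

x = np.ones(d)
print(H(x))  # the joint estimate combining all N individual estimates
```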
      <p>Obviously, the aggregation of similar models in an ensemble cannot lead to a
significant improvement in the quality of the solution.</p>
      <p>The problem of generating diversity lies in the fact that the individual models of
classical neural networks based on the McCulloch-Pitts neuron are trained to solve one
task on one training sample and, as a result, are usually quite strongly correlated,
which affects the accuracy of the obtained solution.</p>
      <p>
        Selective networks are built on the basis of selective neurons, whose model is close
to the model of a biological neuron. A selective neural network is trained
not by changing the weight coefficients of the synaptic connections, but by
changing the quantity and quality of the inhibitory and excitatory dendrites (signal
transmitters) from which the selective neurons are constructed [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
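      <p>This training principle can be sketched in a few lines. The concrete rule for
forming the dendrite set is not specified in the text, so the procedure below (keep
the channels marked by the training code combination, block the rest) is a
simplifying assumption, not the authors' algorithm.</p>

```python
def train_selective(code):
    """Form a cluster of channels from a binary training code combination.

    Assumption: channels at positions where the code has 1 are kept as
    excitatory; all remaining channels are blocked.
    """
    return {i for i, bit in enumerate(code) if bit == 1}

def respond(cluster, code, threshold):
    """Sum the signals passing through the retained channels and apply a threshold."""
    s = sum(code[i] for i in cluster)
    return 1 if s >= threshold else 0

cluster = train_selective([1, 0, 1, 1, 0])
print(respond(cluster, [1, 0, 1, 1, 0], threshold=3))  # trained pattern -> 1
print(respond(cluster, [0, 1, 0, 0, 1], threshold=3))  # foreign pattern -> 0
```

No real-valued weights appear anywhere: "training" only changes which channels exist, mirroring the change in the quantity of dendrites described above.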
      <p>Mathematically, an artificial neuron is usually represented as a nonlinear function
of a linear combination of signals from a limited number of inputs. The result of the
function is sent to the single output of the neuron, and then to an input of a
neuron in the next layer. Weights characterize the communication channels through which
the signals are received. The task of network training is to select, for each
connection, weights that minimize the final value of the output error. However, the
classical types of neural networks have a serious drawback: susceptibility to overfitting. If
the number of weights in the network is large, then summing their linear combination
becomes computationally expensive, and overfitting is likely to arise, in which the
network recognizes images in the training set with extreme accuracy but shows a large
error percentage on the test input data.</p>
      <p>
        Thus, the shortcomings of classical neural networks are a consequence of the use
of weighting coefficients for synaptic connections. Indeed, a biological neuron
has no gradation of weights; this is an artificial device. The main tasks of
processing the input signal are solved by changing the clustering configuration
and the number of dendrites at the input of the biological neuron [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        The analysis showed that, by analogy with biological mechanisms, it is advisable
to use a neural network that includes neurons with controlled clustering of the input
channels (synapses), first by quantity and then by quality (excitatory or
inhibitory). In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] it was suggested to use a
neural network based on selective neurons as an effective image classifier. The
selective neuron has no weight coefficients of synaptic connections and is close in
its properties to its biological analogue (Fig. 1).
      </p>
      <p>
        Cluster formation includes the blocking of non-informative communication channels
that do not conduct excitatory or inhibitory signals [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Thus, after training, each neuron
has a cluster with an individual transfer characteristic, in which some inputs are
blocked for signals that are obviously different from the training code combination.
      </p>
      <p>In Fig. 1: x1, ..., xn – input signals in the form of a binary code;
K – a cluster of communication channels formed in accordance with the binary
code at the input;
Σ – the adder;
F – a nonlinear threshold function.</p>
    </sec>
    <sec id="sec-3">
      <title>Mathematical model of the selective neuron</title>
      <p>
        Possible characteristic code combinations of the investigated objects at the input of
the selective neuron can be represented in the form of vectors [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]:
      </p>
      <p>x1 = (x11, ..., x1n); ... ; xm = (xm1, ..., xmn), (2)
where n is the number of elements of a code combination and m is the number of
objects to be examined. All possible code combinations of the input objects form the
matrix A, which can be represented as</p>
      <p>A = [x11, x12, ..., x1n; ... ; xm1, xm2, ..., xmn]. (3)</p>
      <p>Let a particular selective neuron contain a cluster of connections characterized
by the code combination xi = (xi1, ..., xin). When a code combination arrives at the
input of the neuron, we obtain the sums</p>
      <p>Sij = Σ(k=1..n) xik·xjk = (xi, xj). (4)</p>
      <p>The values Sij are the elements of the matrix B:</p>
      <p>B = A·Aᵀ, (5)</p>
      <p>where Aᵀ is the transposed matrix A. In total we obtain m × m sums. The largest
of them is the sum</p>
      <p>Sii = Σ(j=1..n) xij·xij = (xi, xi) = Ni = N, (6)</p>
      <p>where Ni is the number of ones in the code combination xi = (xi1, ..., xin). The
property of the sums that Sij &lt; N for j ≠ i is used to decide on the recognition of
input signals.</p>
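      <p>The recognition property of the sums Sij can be checked numerically. The binary
code matrix below is invented for illustration; the sketch computes B = A·Aᵀ and
confirms that each diagonal element Sii equals Ni, the number of ones in row i, while
the off-diagonal sums stay smaller.</p>

```python
import numpy as np

# Invented binary code combinations of three objects (rows of A).
A = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 0, 0]])

# B = A @ A.T: element S_ij is the scalar product of code combinations i and j.
B = A @ A.T
print(B)

for i in range(len(A)):
    # Diagonal element S_ii equals N_i, the number of ones in row i ...
    assert B[i, i] == A[i].sum()
    # ... and here exceeds every off-diagonal S_ij in its row, which is the
    # property used to recognize the input signal.
    assert all(B[i, j] < B[i, i] for j in range(len(A)) if j != i)
```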
      <p>Due to the peculiarities of its construction and functioning, the selective neural
network provides:
• selective recognition of input signals without the use of weight coefficients of
synaptic connections;
• the possibility of encoding an input signal of a certain type by the channel number
or by the number of the registering neuron;
• compression of the input information, due to the preservation of only the
information about objects that falls into a given channel or registering neuron;
• an increase in the speed of operation;
• an increase in the reliability of object recognition when the number of objects is
large;
• a much greater adequacy of the selective neuron, and of the single-layer perceptron
built on it, to the biological analogue.</p>
      <p>Simulation of the time series forecasting process based on an ensemble of
selective neural networks has shown that the average forecasting error of the
individual selective neural network models decreased by 8-10%. The diversity of the
selective neural network models, calculated as the difference between the squared
error of the ensemble and the average error of the individual models, improved
(increased) by 10-14%.</p>
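      <p>The diversity measure used here, the difference between the ensemble's squared
error and the average error of the individual models, corresponds to the ambiguity
decomposition for averaging ensembles. The sketch below verifies the identity on
synthetic data (an illustration, not the paper's experiment).</p>

```python
import numpy as np

# Synthetic illustration: 5 individual models, each predicting 100 target
# values with independent noise.
rng = np.random.default_rng(1)
y = rng.normal(size=100)                            # true values
preds = y + rng.normal(scale=0.5, size=(5, 100))    # individual model outputs

H = preds.mean(axis=0)                              # averaging ensemble estimate

avg_individual_mse = np.mean((preds - y) ** 2)      # average error of individual models
ensemble_mse = np.mean((H - y) ** 2)                # squared error of the ensemble
diversity = np.mean((preds - H) ** 2)               # spread of models around the ensemble

# Ambiguity decomposition: average individual error = ensemble error + diversity,
# so at a fixed individual error, greater diversity means a better ensemble.
assert abs(avg_individual_mse - (ensemble_mse + diversity)) < 1e-10
print(ensemble_mse, avg_individual_mse, diversity)
```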
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>Studies have shown that the use of a new class of neural networks based on selective
neurons for the construction of neural network ensembles in the decision support
subsystems of information-analytical systems makes it possible to significantly
improve the accuracy and reliability of the decisions made by an ensemble of
neural networks.</p>
      <p>The key difference between the classical McCulloch-Pitts neuron and the
selective one is that the model of the latter is close to the model of a biological
neuron. The operation of selective neural networks is based on controlled clustering
of the input channels, individual for each neuron, which in turn ensures an increase
in the accuracy and reliability of their operation.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Kuncheva</surname>
            <given-names>L.I.</given-names>
          </string-name>
          <article-title>Combining Pattern Classifiers: Methods and algorithms</article-title>
          . John Wiley &amp; Sons, Hoboken, NJ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Mazurov</surname>
            <given-names>M.E.</given-names>
          </string-name>
          <article-title>Single-layer perceptron based on selective neurons</article-title>
          .
          <source>Patent for invention No2597497 dated January 13</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>A.A.</given-names>
            <surname>Morozov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.P.</given-names>
            <surname>Klimenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.L.</given-names>
            <surname>Lyakhov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.P.</given-names>
            <surname>Alyoshin</surname>
          </string-name>
          <article-title>State and prospects of neural network modeling of DSS in complex sociotechnical systems</article-title>
          //
          <source>Mathematical machines and systems</source>
          .
          <year>2010</year>
          . No. 1. P.
          <fpage>127</fpage>
          -
          <lpage>149</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Zhuravlev</surname>
            <given-names>Yu.I.</given-names>
          </string-name>
          <article-title>On the Algebraic Approach to the Solution of the Problems of Recognition and Classification</article-title>
          //
          <source>Problems of Cybernetics</source>
          .
          <year>1978</year>
          . Vol.
          <volume>33</volume>
          . P.
          <fpage>5</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Zhou</surname>
            <given-names>Z.-H.</given-names>
          </string-name>
          <article-title>Ensemble Methods: Foundations and Algorithms</article-title>
          .
          <source>Chapman &amp; Hall / CRC Machine Learning &amp; Pattern Recognition</source>
          .
          <year>2012</year>
          . 236 p.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Gutman</surname>
            <given-names>A.</given-names>
          </string-name>
          <article-title>Dendrites of nerve cells: theory, electrophysiology, function</article-title>
          .
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>