<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A.N. Danilenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>34 Moskovskoe Shosse, 443086, Samara</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <fpage>296</fpage>
      <lpage>299</lpage>
      <abstract>
<p>This paper introduces a hybrid model of a neuro-fuzzy classifier with an integrated prediction of pilots' mistakes. Experiments and studies of the network were conducted on real and test samples. The upgraded hybrid neuro-fuzzy classifier structure and the learning algorithm can solve the problem of the need for multiple individual performance measurements, whose dynamics make it possible to build a trend and to solve the problem on small samples. Used in organizational and management activities, this principle can help in predicting the danger caused by the human factor.</p>
      </abstract>
      <kwd-group>
        <kwd>forecast</kwd>
        <kwd>wrong actions of the pilot</kwd>
        <kwd>intellectual support</kwd>
        <kwd>hybrid neuro-fuzzy classifier</kwd>
        <kwd>two-layer perceptron</kwd>
        <kwd>small samples</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>where Rl is the degree to which the rule belongs to the class Rl.</p>
    </sec>
    <sec id="sec-2">
      <title>3. Methods</title>
<p>Regression analysis is the traditional method of forecasting in psychology. It is assumed that the values of the time series are a random function of time, and the task is to identify the correct model. The choice of the form of the function is not formalized and depends entirely on the expert's experience. A neural network, by contrast, acts as a universal approximator of the training data, so the use of neural networks for prediction is very promising.</p>
<p>In addition, the neural network can be seen as an adaptive model, since it can develop as it gains new information. Human behavior is evolutionary by nature, and the use of static models leads to a deterioration of forecast quality.</p>
<p>Another problem we faced was the need for a large amount of input data for network training. It is usually assumed that the time series contains at least hundreds of values, and it was impossible for us to collect that many observations. However, there are ways to train the neural network on small amounts of input data: repeated learning on the same examples is used, as well as various methods of time-series processing that allow the training set to be extended.</p>
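<p>As an illustration (not the author's implementation), one common way to extend a short time series into a training set is a sliding window; the function below, with hypothetical names and values, turns a series of n measurements into overlapping (input, target) pairs:</p>

```python
def sliding_windows(series, window=3):
    """Split a short time series into overlapping (input, target) pairs.

    Each training example uses `window` consecutive values as input
    and the next value as the prediction target, so even a series of
    a dozen measurements yields several training examples.
    """
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

# A series of 6 bimonthly measurements yields 3 training pairs for window=3.
examples = sliding_windows([0.7, 0.8, 0.6, 0.9, 0.85, 0.8], window=3)
```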
<p>The peculiarity of the problem lies in the fact that self-assessment and stress levels cannot serve as the input vector for the prediction network. The input is the vector of professional-suitability dynamics values for a period from six months to two years (that is, from 3 to 12 measurements, with testing conducted no more frequently than once every two months). By the professional suitability dynamics of the candidate we mean the degree of affiliation to one of four classes: the candidate fully meets the requirements of the specialty, basically corresponds, partially meets, or does not meet; this degree, in turn, is obtained by analysis of 55 psychological characteristics.</p>
<p>Since the information on which a decision about the professional suitability of a candidate is based is the result of various psychological techniques, the classified data may be inaccurate or poorly defined. For this reason, it is necessary to use fuzzy logic and fuzzy set theory as an effective approach to this problem.</p>
      <p>To solve all the problems above, a modified hybrid model of neuro-fuzzy classifier with an integrated forecast function has
been developed.</p>
      <sec id="sec-2-1">
        <title>3.1. Hybrid neuro-fuzzy classifier</title>
        <p>
          Sun and Jang proposed an architecture for solving the fuzzy classification problem [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. One possible structure of a hybrid neuro-fuzzy classifier is shown in Fig. 1.
        </p>
<p>The neuro-fuzzy network consists of four layers.</p>
<p>First-layer elements implement the fuzzification operation; in other words, they form the degrees of membership of the input data in the defined fuzzy sets Aij:</p>
<p>μAij(x′j) = exp( −(1/2) · ((x′j − cij) / σij)² )
where cij, σij are the parameters of the bell-shaped membership function.</p>
        <p>The initial values of these parameters are set so as membership function satisfies the completeness, normality and convexity
properties. Values should be equally distributed in the input vectors X. The values of these parameters can be adjusted in the
process of the network education, which is based on the gradient method.</p>
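<p>A minimal sketch of the bell-shaped (Gaussian) membership function described above; the parameter values are illustrative, not taken from the paper:</p>

```python
import math

def gaussian_membership(x, c, sigma):
    """Degree of membership of x in a fuzzy set with center c and width sigma:
    mu(x) = exp(-0.5 * ((x - c) / sigma) ** 2)."""
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

# At the center the degree is exactly 1; it falls off symmetrically around c.
assert gaussian_membership(0.5, 0.5, 0.1) == 1.0
```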
<p>Each element of the second layer is an "AND" neuron: it aggregates the truth degrees of the premises of each rule in the rule base according to the T-norm (min) operation, using the following formulas:
α1 = min{ μA11(x1), μA12(x2), …, μA1n(xn) }
α2 = min{ μA21(x1), μA22(x2), …, μA2n(xn) }
…
αn = min{ μAn1(x1), μAn2(x2), …, μAnn(xn) }</p>
<p>Third-layer elements perform the aggregation of the rules' truth degrees according to the S-norm operation.</p>
<p>To classify candidates for vacancies on the basis of psycho-diagnostics, the input volume is quite small, about 50 values on average. To speed up the network training algorithm and to simplify it, the third-layer neurons should be replaced with neurons that perform normalization and calculate the following values:
ᾱk = αk / (α1 + α2 + … + αn), k = 1, …, n</p>
<p>These outputs are interpreted as the degrees of membership of the object in the corresponding classes. Since the hybrid neuro-fuzzy classifier is represented as a multi-layer structure with feed-forward signal propagation, and the output value can be changed by adjusting the parameters of the elements in the layers, gradient algorithms can be used to train the network.</p>
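<p>The forward pass through the layers described above (fuzzification, "AND" aggregation by min, normalization) can be sketched as follows; the membership parameters and input vector are illustrative assumptions, not values from the paper:</p>

```python
import math

def forward(x, centers, sigmas):
    """Forward pass of the simplified neuro-fuzzy classifier.

    x        -- input vector of n values
    centers  -- centers[k][j]: center c_kj of fuzzy set A_kj for rule k
    sigmas   -- sigmas[k][j]: width sigma_kj of fuzzy set A_kj
    Returns normalized rule activations, interpreted as class membership degrees.
    """
    # Layer 1: fuzzification with bell-shaped membership functions.
    mu = [[math.exp(-0.5 * ((xj - c) / s) ** 2)
           for xj, c, s in zip(x, row_c, row_s)]
          for row_c, row_s in zip(centers, sigmas)]
    # Layer 2: "AND" neurons, T-norm (min) over each rule's premises.
    alpha = [min(row) for row in mu]
    # Layer 3 (modified): normalization instead of S-norm aggregation.
    total = sum(alpha)
    return [a / total for a in alpha]

degrees = forward([0.4, 0.7],
                  centers=[[0.3, 0.6], [0.8, 0.9]],
                  sigmas=[[0.2, 0.2], [0.2, 0.2]])
# The degrees sum to 1; the first rule matches this input better.
```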
<p>Using this neuro-fuzzy network model, the classification problem can be solved; its results form the input vector for the prediction network.</p>
      </sec>
      <sec id="sec-2-2">
        <title>3.2. The network structure</title>
        <p>
          Then, a conventional two-layer perceptron can be added to a modified hybrid neuro-fuzzy classifier [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] with an additional
neuron, which accumulates the input values into the classification prediction vector and is, in fact, a Grossberg star [5] (Fig. 2).
The two-layer perceptron was implemented without any changes.
        </p>
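<p>A minimal sketch (not the paper's implementation) of a conventional two-layer perceptron consuming the accumulated classification vector; the weights, layer sizes, and input are illustrative assumptions:</p>

```python
import math
import random

def perceptron_forward(x, w_hidden, w_out):
    """Two-layer perceptron: one hidden layer with sigmoid activations,
    one output neuron producing the predicted suitability value."""
    sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)))

random.seed(0)
x = [0.87, 0.13, 0.0, 0.0]  # class membership degrees from the classifier
w_hidden = [[random.uniform(-1, 1) for _ in x] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
y = perceptron_forward(x, w_hidden, w_out)  # prediction in (0, 1)
```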
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Results and discussion</title>
<p>The Tsukamoto algorithm was implemented in the hybrid neuro-fuzzy classifier, with backpropagation used as the learning algorithm. The influence of the hybrid neuro-fuzzy classifier was also examined. The optimal network structure was selected: a training sample of 35 examples, a network learning step h = 0.45, a Gaussian fuzzification function, and a defuzzification method based on r.m.s. deviations.</p>
<p>A study of the prediction quality of the constructed neural network was conducted on test and real time series. For each time series, a network structure providing the best forecasting quality was chosen. The results are shown in Table 1.</p>
<p>The prediction is considered sufficiently accurate if the prediction error does not exceed 20%.</p>
<p>Fig. 3 shows the dependence of the prediction accuracy on noise effects for different methods.</p>
<p>The "ideal" forecast is a forecast whose deviations from the actual values are caused only by random factors. If the prediction error differs only slightly from the error of the "ideal" forecast, the forecast can be considered accurate. Two quality measures were used: K1, the noise/signal coefficient, which is the ratio of the noise power to the power of the desired signal, and K2, the inconsistency coefficient (the second Theil coefficient), which estimates the forecast accuracy.</p>
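<p>The second Theil coefficient can be computed as in this illustrative sketch (the data values are invented for the example):</p>

```python
import math

def theil_k2(actual, predicted):
    """Second Theil (inconsistency) coefficient:
    K2 = sqrt(sum (Y_t - Yhat_t)^2) / (sqrt(sum Y_t^2) + sqrt(sum Yhat_t^2)).
    K2 is 0 for a perfect forecast; values near 1 indicate a useless one."""
    num = math.sqrt(sum((y - p) ** 2 for y, p in zip(actual, predicted)))
    den = (math.sqrt(sum(y ** 2 for y in actual))
           + math.sqrt(sum(p ** 2 for p in predicted)))
    return num / den

# A perfect forecast gives K2 = 0.
assert theil_k2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```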
      <p>K1 = (σ² / D²) · 100%</p>
      <p>K2 = √( Σt=n+1..n+l (Yt − Ŷt)² ) / ( √( Σt=n+1..n+l Yt² ) + √( Σt=n+1..n+l Ŷt² ) )</p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
<p>According to the study, it can be concluded that predictions obtained using the neural network have a high level of accuracy and, for many types of dynamics, are significantly superior to those obtained using the regression model.</p>
      <p>Moreover, the upgraded hybrid neuro-fuzzy classifier structure and the learning algorithm can solve the problem of the need
for multiple individual performance measurements, the dynamics of which would make it possible to build a trend and solve the
problem on small samples.</p>
<p>This approach makes it possible to calculate, with a certain degree of probability, a predisposition to wrong actions in each individual case. Used in organizational and management activities, this principle can help in predicting danger caused by the human factor.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Danilenko</surname>
            <given-names>AN</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ihsanova</surname>
            <given-names>SG</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Komakov</surname>
            <given-names>VV</given-names>
          </string-name>
          .
          <article-title>The diagnostic prediction of the professional performance of the pilot</article-title>
          .
          <source>Moscow: Mechanical engineering Flight</source>
          <year>2012</year>
          ;
          <volume>7</volume>
          :
          <fpage>53</fpage>
          -
          <lpage>60</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Novak</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perfilieva</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mochkorzh</surname>
            <given-names>I</given-names>
          </string-name>
          .
          <source>Mathematical Principles of Fuzzy Logic</source>
          . Moscow: Fizmatlit,
          <year>2006</year>
          ; 252 p.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Borisov</surname>
            <given-names>VV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kruglov</surname>
            <given-names>VV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fedulov</surname>
            <given-names>AS</given-names>
          </string-name>
          .
          <article-title>Indistinct models and networks</article-title>
          .
          <source>Moscow: The hot line -Telecom</source>
          ,
          <year>2007</year>
          ; 284 p.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Osovsky</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Neural networks for information processing</article-title>
          . Moscow: Finance and Statistics,
          <year>2002</year>
          ; 344 p.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>