<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Neo-Fuzzy Autoencoder for Adaptive Deep Neural Systems and its Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevgeniy Bodyanskiy</string-name>
          <email>yevgeniy.bodyanskiy@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iryna Pliss</string-name>
          <email>iryna.pliss@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Vynokurova</string-name>
          <email>vynokurova@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Control Systems Research Laboratory, Kharkiv National University of Radio Electronics, UKRAINE</institution>,
          <addr-line>Kharkiv, 14 Nauky Ave.</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>1</fpage>
      <lpage>3</lpage>
      <abstract>
        <p>In this paper, an autoencoder based on generalized neo-fuzzy neurons is proposed, together with a fast algorithm for its learning. Such a system can be used as part of deep neural networks. The proposed neo-fuzzy autoencoder is characterized by a high learning speed and a smaller number of tuned parameters in comparison with well-known approaches. The efficiency of the proposed approach has been examined on different benchmarks and real data sets.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        The task of compressing information for further processing is one of the main problems solved in Data Mining. Many approaches [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5">1-5</xref>
        ] have been proposed for this problem; most of these methods process information in batch mode, where a fixed-size data set is processed many times.
      </p>
      <p>
        It is very important that the loss of information during compression is minimal. Nowadays, approaches based on deep neural networks (DNNs) [
        <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6-9</xref>
        ] are widely used for solving many tasks connected with the analysis of Big Data. As can be seen from many studies, DNNs provide significantly better results than conventional shallow neural networks.
      </p>
      <p>An inherent part of a DNN is the so-called autoencoder, which compresses the input data and forms the input layers of the neural network.</p>
      <p>Multilayer autoassociative “bottleneck” perceptrons or restricted Boltzmann machines, whose nodes are elementary Rosenblatt perceptrons with sigmoidal activation functions, are often used as such autoencoders.</p>
      <p>Unfortunately, the learning process of such autoencoders is very time-consuming and cannot be implemented in online mode.</p>
      <p>In connection with the intensive development of Data Mining, Data Stream Mining, and Web Mining in recent years, the development of high-speed information compression systems has become an important problem. Such systems have to process data in sequential mode (possibly online), as real information processing systems require.</p>
    </sec>
    <sec id="sec-2">
      <title>II. THE ARCHITECTURE OF THE NEO-FUZZY AUTOENCODER</title>
      <p>
        Fig. 1 shows the architecture of the proposed autoencoder, which is an autoassociative “bottleneck” modification of Kolmogorov’s neuro-fuzzy network [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14">10-14</xref>
        ] that implements the multiresolution approach and is a universal approximator according to the Kolmogorov-Arnold and Yam-Nguyen-Kreinovich theorems.
      </p>
      <p>
        It should be noticed that in [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
        ] an architecture whose nodes are the neo-fuzzy neurons (NFNs) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is considered. In spite of the simplicity of its learning algorithm for the synaptic weights, such a system is redundant in the sense of the number of membership functions.
      </p>
      <p>
        Using generalized neo-fuzzy neurons [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] instead of conventional NFNs allows the number of membership functions to be reduced significantly and a stacked NN [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] to be introduced. Such a stacked NN simplifies the architecture of the autoencoder and thereby speeds up the learning process.
      </p>
      <p>
        Therefore, the autoencoder consists of two sequentially connected layers, which are implemented with the generalized neo-fuzzy neurons GNFN^{[1]} and GNFN^{[2]} (the bracketed superscripts denote the layer number).
      </p>
      <p>
        The sequence of input signals to be compressed, $x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T \in R^n$ (where $k = 1, 2, \ldots$ is the number of the observation or the current instant of time), is fed to GNFN^{[1]}.
      </p>
      <p>
        GNFN^{[1]} consists of $n$ multidimensional nonlinear synapses $MNS_i^{[1]}$, $i = 1, 2, \ldots, n$, each of which has one input, $m$ outputs, $h$ membership functions $\mu_{li}^{[1]}(x_i(k))$, $l = 1, 2, \ldots, h$, and $mh$ tuned synaptic weights $w_{jli}^{[1]}$, $j = 1, 2, \ldots, m$.
      </p>
      <p>
        The output of GNFN^{[1]} is the compressed vector of signals $y(k) = (y_1(k), \ldots, y_j(k), \ldots, y_m(k))^T \in R^m$, $m &lt; n$, which is simultaneously the output of the autoencoder. The signal $y(k)$ is fed to the inputs of GNFN^{[2]}, which contains $m$ inputs and $m$ multidimensional nonlinear synapses $MNS_j^{[2]}$, each of which has one input, $n$ outputs, $h$ membership functions $\mu_{lj}^{[2]}(y_j(k))$, $l = 1, 2, \ldots, h$, and $nh$ synaptic weights $w_{ilj}^{[2]}$.
      </p>
      <p>
        Thus, the considered autoencoder contains $2nmh$ tuned synaptic weights and $(n + m)h$ membership functions, which is significantly fewer than in the architecture in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] (for example, for $n = 13$, $m = 2$, $h = 3$ this gives 156 weights and 45 membership functions).
      </p>
      <p>
        At the outputs of GNFN^{[2]} the recovered signal $\hat{x}(k) = (\hat{x}_1(k), \ldots, \hat{x}_i(k), \ldots, \hat{x}_n(k))^T$ is formed. In this manner, the autoencoder is an autoassociative hybrid neo-fuzzy system of computational intelligence.
      </p>
      <p>Fig. 1. The architecture of the proposed neo-fuzzy autoencoder.</p>
      <p>
        The proposed system implements a nonlinear mapping of the form
        $$\hat{x}_i(k) = \sum_{j=1}^{m}\sum_{l=1}^{h} w_{ilj}^{[2]}\,\mu_{lj}^{[2]}\left(\sum_{i=1}^{n}\sum_{l=1}^{h} w_{jli}^{[1]}\mu_{li}^{[1]}(x_i(k))\right),$$
        or, in matrix form,
        $$\hat{x}(k) = W^{[2]}\mu^{[2]}(W^{[1]}\mu^{[1]}(x(k))),$$
        where $W^{[1]} = \{w_{jli}^{[1]}\}$ is the $(m \times hn)$ matrix of synaptic weights of the first layer, $W^{[2]} = \{w_{ilj}^{[2]}\}$ is the $(n \times hm)$ matrix of synaptic weights of the second layer, and $\mu^{[1]}(x(k)) = (\mu_{11}^{[1]}(x_1(k)), \mu_{21}^{[1]}(x_1(k)), \ldots, \mu_{h1}^{[1]}(x_1(k)), \ldots, \mu_{hn}^{[1]}(x_n(k)))^T$ is the $(hn \times 1)$ vector of membership function values (the vector $\mu^{[2]}(y(k))$ is defined analogously).
      </p>
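      <p>
        To make this mapping concrete, the following minimal Python/NumPy sketch evaluates the forward pass. It assumes triangular membership functions with centers uniformly spaced on [0, 1] (the choice used for the learning rule in the next section) and inputs normalized to [0, 1]; the names tri_mu, gnfn_forward, and autoencoder_forward are illustrative, not the authors' implementation.
      </p>
      <preformat>
import numpy as np

def tri_mu(x, h):
    """Values of h triangular membership functions with centers
    uniformly spaced on [0, 1], evaluated at the scalar input x."""
    c = np.linspace(0.0, 1.0, h)
    width = c[1] - c[0]
    return np.maximum(0.0, 1.0 - np.abs(x - c) / width)   # shape (h,)

def gnfn_forward(u, W, h):
    """Generalized neo-fuzzy neuron layer: out[q] = sum over p and l
    of W[q, l, p] * mu_l(u[p]), i.e. the sum of the outputs of all
    multidimensional nonlinear synapses."""
    mu = np.stack([tri_mu(up, h) for up in u], axis=1)    # shape (h, dim_in)
    return np.einsum('qlp,lp->q', W, mu)

def autoencoder_forward(x, W1, W2, h):
    """x_hat(k) = W2 mu2(W1 mu1(x(k))): compress x to the code y,
    then reconstruct x_hat from y."""
    y = gnfn_forward(x, W1, h)       # compressed code, dimension m
    x_hat = gnfn_forward(y, W2, h)   # reconstruction, dimension n
    return y, x_hat
      </preformat>
      <p>
        Here W1 has shape (m, h, n) and W2 has shape (n, h, m), which matches the $2nmh$ weight count above.
      </p>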
    </sec>
    <sec id="sec-2a">
      <title>III. THE LEARNING ALGORITHM FOR SYNAPTIC WEIGHTS OF THE NEO-FUZZY AUTOENCODER</title>
      <p>
        For tuning the synaptic weights of GNFN^{[2]}, we can use the gradient procedure minimizing the quadratic criterion of the reconstruction error $e_i(k) = x_i(k) - \hat{x}_i(k)$, which takes the form
        $$w_{ilj}^{[2]}(k) = w_{ilj}^{[2]}(k-1) - \eta^{[2]}(k)\frac{\partial e_i^2(k)}{\partial w_{ilj}^{[2]}} = w_{ilj}^{[2]}(k-1) + \eta^{[2]}(k)\,e_i(k)\,\mu_{lj}^{[2]}(y_j(k)),$$
        where $\eta^{[2]}(k)$ is the learning rate parameter of the output layer, which is chosen according to the condition in [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ]:
        $$\eta^{[2]}(k) = (r^{[2]}(k))^{-1}; \quad r^{[2]}(k) = \alpha\, r^{[2]}(k-1) + \|\mu^{[2]}(y(k))\|^2,$$
        where $0 \le \alpha \le 1$ is a forgetting factor.
      </p>
      <p>
        For tuning the synaptic weights of GNFN^{[1]}, an optimized error backpropagation procedure is used, which for triangular membership functions uniformly distributed along the abscissa axis with centers $x_{l,i}^{[1]}$, $y_{l,j}^{[2]}$ can be written in the form
        $$w_{jli}^{[1]}(k) = w_{jli}^{[1]}(k-1) + \eta^{[1]}(k)\,e_i(k)\,\mu_{li}^{[1]}(x_i(k))\,w_{ij}^{[2]}(k),$$
        where
        $$\eta^{[1]}(k) = (r^{[1]}(k))^{-1}; \quad r^{[1]}(k) = \alpha\, r^{[1]}(k-1) + \|\mu^{[1]}(x(k))\|^2,$$
        $$w_{ij}^{[2]}(k) = \sum_{l=1}^{h} w_{ilj}^{[2]}(k)\begin{cases}(y_{l,j}^{[2]} - y_{l-1,j}^{[2]})^{-1}, \text{ if } y_j(k) \in [y_{l-1,j}^{[2]}, y_{l,j}^{[2]}], \\ (y_{l,j}^{[2]} - y_{l+1,j}^{[2]})^{-1}, \text{ if } y_j(k) \in [y_{l,j}^{[2]}, y_{l+1,j}^{[2]}], \\ 0 \text{ otherwise.}\end{cases}$$
      </p>
      <p>The proposed learning algorithm for the synaptic weights of the autoencoder is characterized by high speed and combines tracking and filtering properties.</p>
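      <p>
        As an illustration, the following NumPy sketch performs one online training step implementing the two update rules above, under the same assumptions as the forward-pass sketch in Section II (uniform triangular membership functions on [0, 1], so the derivative factors in $w_{ij}^{[2]}(k)$ reduce to plus or minus the reciprocal triangle width). The names and the clipping of the code $y$ are illustrative simplifications, not the authors' implementation.
      </p>
      <preformat>
import numpy as np

def tri_mu(x, h):
    # repeated from the Section II sketch for self-containedness
    c = np.linspace(0.0, 1.0, h)
    width = c[1] - c[0]
    return np.maximum(0.0, 1.0 - np.abs(x - c) / width)

def tri_mu_grad(x, h):
    # derivative of tri_mu with respect to x: +1/width on the rising
    # edge of each active triangle, -1/width on the falling edge
    c = np.linspace(0.0, 1.0, h)
    width = c[1] - c[0]
    return np.where(np.less(np.abs(x - c), width), -np.sign(x - c) / width, 0.0)

def train_step(x, W1, W2, r1, r2, h, alpha=0.97):
    """One online update of both GNFN layers on the sample x,
    with adaptive learning rates eta = 1/r as in the text."""
    mu1 = np.stack([tri_mu(xi, h) for xi in x], axis=1)     # (h, n)
    y = np.clip(np.einsum('jli,li->j', W1, mu1), 0.0, 1.0)  # code, clipped to
                                                            # the membership support
    mu2 = np.stack([tri_mu(yj, h) for yj in y], axis=1)     # (h, m)
    x_hat = np.einsum('ilj,lj->i', W2, mu2)                 # reconstruction (n,)
    e = x - x_hat                                           # error e_i(k)

    # output layer: w_ilj += eta2 * e_i * mu_lj(y_j)
    r2 = alpha * r2 + np.sum(mu2 ** 2)
    W2 = W2 + (1.0 / r2) * np.einsum('i,lj->ilj', e, mu2)

    # backpropagated factor: w_ij = sum over l of w_ilj * d mu_lj / d y_j
    dmu2 = np.stack([tri_mu_grad(yj, h) for yj in y], axis=1)
    w_bp = np.einsum('ilj,lj->ij', W2, dmu2)                # (n, m)

    # hidden layer: w_jli += eta1 * e_i * mu_li(x_i) * w_ij
    r1 = alpha * r1 + np.sum(mu1 ** 2)
    W1 = W1 + (1.0 / r1) * np.einsum('ij,li->jli', e[:, None] * w_bp, mu1)
    return W1, W2, r1, r2, e
      </preformat>
      <p>
        The forgetting factor $\alpha$ trades the filtering behavior (values near 1, slowly decaying rates) against the tracking of nonstationary data (smaller values), i.e., the following and filtering properties mentioned above.
      </p>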
    </sec>
    <sec id="sec-3">
      <title>IV. EXPERIMENTS</title>
      <p>
        For effectiveness verification of the proposed neo-fuzzy
autoencoder, the data sets were taken from UCI Repository
[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]: Iris, Wine, Hayes-roth. Data set “Iris” contains 150
observations (Number of Attributes: 4) of 3 classes, Data set
“Wine” contains 178 observations (Number of Attributes: 13)
of 3 classes, data set “Hayes-roth” contains 160 observations
(Number of Attributes: 5) of 3 classes.
      </p>
      <p>It can be seen from Fig. 2 that the data compressed using the neo-fuzzy autoencoder form more compact clusters than the data compressed by the autoassociative multilayer “bottleneck” neural network.</p>
      <p>Fig. 2. Data set Hayes-Roth after compression based on the autoassociative multilayer “bottleneck” neural network (a) and the neo-fuzzy autoencoder (b).</p>
      <p>The results obtained using the proposed neo-fuzzy autoencoder were compared with the results of the autoassociative multilayer “bottleneck” neural network (Table I). The data were compressed to 2 components. The simulation was performed 20 times with different initial conditions, and the results were averaged.</p>
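      <p>
        A sketch of this experimental protocol, reusing tri_mu and train_step from the sketches above: scikit-learn's bundled copy of Iris stands in for the UCI download, the features are min-max normalized to [0, 1] as the membership grids assume, and the epoch count and weight initialization are illustrative assumptions, not values taken from the paper.
      </p>
      <preformat>
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler

# Iris: 150 observations, n = 4 attributes, compressed to m = 2 components
X = MinMaxScaler().fit_transform(load_iris().data)
n, m, h = X.shape[1], 2, 3

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, size=(m, h, n))   # GNFN^[1] weights
W2 = rng.normal(0.0, 0.1, size=(n, h, m))   # GNFN^[2] weights
r1 = r2 = 1.0

for epoch in range(20):      # illustrative; samples are processed sequentially
    for x in X:
        W1, W2, r1, r2, e = train_step(x, W1, W2, r1, r2, h)

# two-dimensional codes for scatter plotting, as in Fig. 2
codes = np.array([np.einsum('jli,li->j', W1,
                            np.stack([tri_mu(xi, h) for xi in x], axis=1))
                  for x in X])
      </preformat>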
    </sec>
    <sec id="sec-4">
      <title>V. CONCLUSION</title>
      <p>The architecture of a “bottleneck” two-layer autoencoder and its learning algorithm have been proposed. The system is based on generalized neo-fuzzy neurons and is an autoassociative “bottleneck” modification of Kolmogorov’s neuro-fuzzy network. The proposed hybrid neo-fuzzy system of computational intelligence provides high-quality compression of information that is fed sequentially for processing. It is characterized by computational simplicity and a high speed of the learning process.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Han and M. Kamber</surname>
          </string-name>
          , “
          <article-title>Data Mining: Concepts and Techniques”</article-title>
          . Amsterdam: Morgan Kaufman Publ.,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.C.</given-names>
            <surname>Aggarwal</surname>
          </string-name>
          , “Data Mining”, N.Y.: Springer,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gavaldà</surname>
          </string-name>
          , G. Holmes, and
          <string-name>
            <given-names>B.</given-names>
            <surname>Pfahringer</surname>
          </string-name>
          ,
          <article-title>Machine Learning for Data Streams with Practical Examples in MOA</article-title>
          . The MIT Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          , Adaptive Stream Mining:
          <article-title>Pattern Learning and Mining from Evolving Data Streams</article-title>
          . Amsterdam: IOS Press,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Menshawy</surname>
          </string-name>
          ,
          <article-title>Deep Learning By Example: A hands-on guide to implementing advanced machine learning algorithms and neural networks</article-title>
          .
          <source>Packt Publishing Limited</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fullan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Quinn</surname>
          </string-name>
          , and
          <string-name>
            <surname>J. McEachen</surname>
          </string-name>
          ,
          <source>Deep Learning: Engage the World Change the World. Corwin</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Caterini</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <source>Deep Neural Networks in a Mathematical Framework</source>
          . Springer,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Y.</given-names>
            <surname>LeCun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.E.</given-names>
            <surname>Hinton</surname>
          </string-name>
          , “
          <article-title>Deep Learning”</article-title>
          .
          <source>Nature</source>
          ,
          <year>2015</year>
          , v.
          <volume>521</volume>
          , pp.
          <fpage>436</fpage>
          -
          <lpage>444</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Graupe</surname>
          </string-name>
          , “
          <article-title>Deep Learning Neural Networks: Design and Case Studies”</article-title>
          . World Scientific Publishing Company,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kolodyazhniy</surname>
          </string-name>
          and Ye. Bodyanskiy, “
          <article-title>Fuzzy Kolmogorov's Network,”</article-title>
          <source>in Lecture Notes in Computer Science</source>
          , vol.
          <volume>3214</volume>
          ,
          <string-name>
            <surname>M.G. Negoita</surname>
          </string-name>
          et al., Eds.,
          <source>Springer-Verlag</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>764</fpage>
          -
          <lpage>771</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Ye. Bodyanskiy</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Kolodyazhniy</surname>
            and
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Otto</surname>
          </string-name>
          , “
          <article-title>Neurofuzzy Kolmogorov's network for time-series prediction and pattern classification</article-title>
          ,
          <source>” in Lecture Notes in Artificial Intelligence</source>
          , vol. 3698, U. Furbach, Ed., Heidelberg: Springer -Verlag,
          <year>2005</year>
          , pp.
          <fpage>191</fpage>
          -
          <lpage>202</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kolodyazhniy</surname>
          </string-name>
          , Ye. Bodyanskiy and
          <string-name>
            <given-names>P.</given-names>
            <surname>Otto</surname>
          </string-name>
          , “
          <article-title>Universal approximator employing neo-fuzzy neurons,” in Computational Intelligence Theory</article-title>
          and Applications, Ed. B. Reusch, Ed., Berlin-Heidelberg: Springer,
          <year>2005</year>
          , pp.
          <fpage>631</fpage>
          -
          <lpage>640</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kolodyazhniy</surname>
          </string-name>
          , Ye. Bodyanskiy,
          <string-name>
            <given-names>V.</given-names>
            <surname>Poyedyntseva</surname>
          </string-name>
          ,
          <article-title>and</article-title>
          <string-name>
            <given-names>A.</given-names>
            <surname>Stephan</surname>
          </string-name>
          <article-title>“Neuro-fuzzy Kolmogorov's network with a modified perceptron learning rule for classification problems</article-title>
          ,” in
          <source>Advances in Soft Computing</source>
          , vol.
          <volume>38</volume>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reuch</surname>
          </string-name>
          , Ed., Berlin-Heidelberg: Springer-Verlag,
          <year>2006</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Ye</surname>
            . Bodyanskiy, Ye. Gorshkov,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Kolodyazhniy</surname>
            , and
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Poyedyntseva</surname>
          </string-name>
          “
          <article-title>Neuro-fuzzy Kolmogorov's network,”</article-title>
          <source>in Lecture Notes in Computer Science</source>
          , vol.
          <volume>3697</volume>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Duch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kacprzyk</surname>
          </string-name>
          , E. Oja, and S. Zadrozny, Eds., Berlin-Heidelberg: Springer-Verlag,
          <year>2005</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kolodyazhniy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Klawonn</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Tschumitschew</surname>
          </string-name>
          , “
          <article-title>A neuro-fuzzy model for dimensionality reduction</article-title>
          and its application”
          <source>International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems</source>
          vol.
          <volume>15</volume>
          , is. 05,
          <year>October 2007</year>
          , pp.
          <fpage>571</fpage>
          -
          <lpage>593</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Vynokurova</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bodyanskiy</surname>
            <given-names>Ye.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pliss</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peleshko</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Rashkevych</given-names>
            <surname>Yu</surname>
          </string-name>
          . “
          <article-title>Neo-fuzzy encoder and its adaptive learning for Big Data processing</article-title>
          .”
          <source>Scientific Journal of RTU</source>
          , Series “Computer Science” Volume “
          <source>Information Technology and Management Science” 2017</source>
          , vol.
          <volume>20</volume>
          , pp.
          <fpage>6</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Yamakawa</surname>
          </string-name>
          , E. Uchino,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miki</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Kusanagi</surname>
          </string-name>
          , “
          <article-title>A neo-fuzzy neuron and its applications to system identification and prediction of the system behavior,”</article-title>
          <source>in Proc. of 2-nd Int. Conf. on Fuzzy Logic and Neural Networks “IIZUKA-92”</source>
          , Iizuka, Japan, pp.
          <fpage>477</fpage>
          -
          <lpage>483</lpage>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.P.</given-names>
            <surname>Landim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.R.</given-names>
            <surname>Silva</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.M.</given-names>
            <surname>Caminhas</surname>
          </string-name>
          , “
          <article-title>A neo-fuzzy-neuron with real time training applied to flux observer for an induction motor”</article-title>
          .
          <source>In: Proceedings of IEEE Vth Brazilian Symposium on Neural Networks, Belo Horizonte</source>
          ,
          <fpage>9</fpage>
          -11
          <source>Dec</source>
          <year>1998</year>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          , “
          <article-title>Deep learning in neural networks: An overview</article-title>
          ,”
          <source>Neural Networks</source>
          , vol.
          <volume>61</volume>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>117</lpage>
          , Jan.
          <year>2015</year>
          . (doi: 10.1016/j.neunet.2014.09.003)
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Ye. Bodyanskiy</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <string-name>
            <surname>Kokshenev</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Kolodyazhniy</surname>
          </string-name>
          , “
          <article-title>An adaptive learning algorithm for a neo fuzzy neuron,”</article-title>
          <source>in Proc. 3rd Int. Conf. of European Union Society for Fuzzy Logic and Technology (EUSFLAT</source>
          <year>2003</year>
          ), Zittau,
          <year>2003</year>
          , pp.
          <fpage>375</fpage>
          -
          <lpage>379</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>P.</given-names>
            <surname>Otto</surname>
          </string-name>
          , Ye. Bodyanskiy,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kolodyazhniy</surname>
          </string-name>
          , “
          <article-title>A new learning algorithm for a forecasting neuro-fuzzy network,” Integrated Computer-Aided Engineering</article-title>
          , vol.
          <volume>10</volume>
          , pp.
          <fpage>399</fpage>
          -
          <lpage>409</lpage>
          , Dec. 2003
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <article-title>UCI Repository of machine learning databases</article-title>
          . CA: University of California, Department of Information and Computer Science. [Online]. Available: http://www.ics.uci.edu/~mlearn/MLRepository.html
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>