<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Simple neuro-fuzzy system with combined learning for pattern recognition under conditions of short training set in medical diagnostics tasks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevgeniy Bodyanskiy</string-name>
          <email>yevgeniy.bodyanskiy@nure.ua</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olha Chala</string-name>
          <email>olha.chala@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Izonin</string-name>
          <email>ivan.v.izonin@lpnu.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergiy Popov</string-name>
          <email>serhii.popov@nure.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial intelligence department, Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Control systems research laboratory, Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Department of Artificial Intelligence of Lviv Polytechnic National University</institution>
          ,
          <addr-line>Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Institute of Theological Studies of Our Lady of Immaculate Conception</institution>
          ,
<addr-line>Gorodok, Khmelnytska obl.</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The deficit of training datasets is a widespread problem in medical diagnostics tasks. Medical data is hard to collect clinically and to process into a ready-to-use dataset for supervised learning, which leads to difficulties in achieving computer-aided detection and diagnosis. Traditional approaches can work with sufficiently big training datasets; however, they cannot show their efficiency under conditions of a limited number of samples. We propose a system that combines various learning paradigms and a neuro-fuzzy approach to solve medical classification problems under conditions of a limited number of training observations-images. The distinctive feature of the proposed system is the usage of “scatter partitioning” of the input space, which provides better system performance both in learning and classification. The results of a computer experiment proved the effectiveness of the proposed system in solving image recognition in the medical diagnostics task. The computational experiment showed that the proposed model works better with limited training datasets than the advanced systems; however, it yields to them when a bigger amount of training observations is available.</p>
      </abstract>
      <kwd-group>
<kwd>neuro-fuzzy system</kwd>
        <kwd>combined learning</kwd>
        <kwd>medical diagnostics</kwd>
        <kwd>short training set</kwd>
        <kwd>overlapping classes</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The task of image classification-recognition is one of the primary ones in the Data Mining domain.
Many approaches to its solution have been developed, and Deep Neural Networks (DNN) [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ] are
quite prominent among them; they demonstrate truly impressive results in terms of accuracy, but
not without significant drawbacks. First of all, there is the requirement of large volumes of training data,
which are not always available in practical situations. At the same time, the use of transfer learning
does not always overcome this problem. Secondly, deep neural networks are rather
“slow” systems that require a lot of time for their training, hence online training is not possible in this
case. Additionally, there are some numerical implementation problems, the “vanishing gradient”
problem in the first place. It can be overcome either by employing piecewise linear activation
functions, which leads to an increase in the number of adjustable synaptic weights, or by
techniques like “dropout” or “shortcut”/“skip” connections that complicate, i.e. lengthen, the
processes of synaptic weights adjustment, whose number in modern DNNs is on the order of billions
or more.
      </p>
      <p>
        Neuro-fuzzy systems [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
        ], which belong to the class of so-called hybrid systems of computational
intelligence, are free of most of the abovementioned drawbacks [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. These systems have their
drawbacks too. NFS with grid partitioning, which from a formal point of view are analogs of
radial-basis function neural networks (RBFN) [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ], also require significant volumes of training data. In
addition, the absolute majority of known NFS solve the problems of approximation-extrapolation, but
not classification directly, in contrast to the widely used convolutional neural networks.
      </p>
<p>It is possible to improve the quality of NFS in pattern recognition tasks by using combined
learning/self-learning of both synaptic weights and membership functions based on objective
functions directly related to the classification task, which requires the modification of both the
algorithms for system parameters adjustment and the NFS architecture.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Architecture of a neuro-fuzzy system with combined learning</title>
<p>The input information for the considered NFS training is a classified set of observations
$X = \{x(1), x(2), \ldots, x(k), \ldots, x(N)\}$, $x(k) = (x_1(k), \ldots, x_i(k), \ldots, x_n(k))^T \in R^n$ (here $k = 1, 2, \ldots, N$ is either
the current observation index in the training set, or the current discrete time if training is implemented
in online mode). It is assumed that all components of $x(k)$ are pre-coded on some interval, usually in
fuzzy systems $0 \le x_i(k) \le 1$, $i = 1, 2, \ldots, n$; $k = 1, 2, \ldots, N$. The input information enters the first layer of the
system, which is formed by one-dimensional membership functions $\mu_{li}(x_i, c_{li}, \sigma_{li})$ (here $c_{li}$ is the
center of the corresponding function, $\sigma_{li}$ is the parameter determining its width). Usually a
one-dimensional Gaussian is used for this purpose:
$$\mu_{li}(x_i(k)) = \exp\left(-\frac{(x_i(k) - c_{li})^2}{2\sigma^2}\right). \tag{1}$$</p>
      <p>Figure 1: Architecture of the proposed neuro-fuzzy system (softmax outputs $\hat y_j(k)$ computed from the signals $o_j(k)$ through the synaptic weights $w_{jl}$).</p>
      <p>The width parameter $\sigma$ is usually chosen to be equal for all functions. The total number of
membership functions in the system is $hn$, i.e. $h$ functions at each input. The centers of these
functions are, as a rule, distributed uniformly along the abscissa axis, hence the distance between two
adjacent centers is defined as
$$\Delta c = c_{l+1,i} - c_{li} = \frac{1}{h-1}. \tag{2}$$</p>
      <p>Signals from the first layer proceed to the second hidden layer of aggregation, which is formed by $h$
elementary multiplication blocks yielding the following outputs:
$$u_l(k) = \prod_{i=1}^{n} \mu_{li}(x_i(k)) = \exp\left(-\frac{\|x(k) - c_l\|^2}{2\sigma^2}\right), \tag{3}$$
where $c_l = (c_{l1}, \ldots, c_{ln})^T$. Thus, the first two layers calculate the signals formed at the outputs of
multidimensional RBFN activation functions.</p>
<p>It should be noted that the uniform placement of membership functions along all coordinates leads
to the so-called diagonal partitioning (Figure 2(b)) of the input space, i.e. the centers of the multidimensional
Gaussians are located along the diagonal of a unit hypercube. This, in turn, leads to the fact that
observations to be classified that are located far from this diagonal will be processed with rather poor
accuracy. This undesirable effect can be avoided by using grid partitioning (Figure 2(a)); however, the
number of multidimensional Gaussians in the second layer will be $h^n$, leading to the so-called “curse
of dimensionality” (for example, with $n = 20$ inputs and $h = 5$ functions per axis, grid partitioning
already requires $5^{20} \approx 10^{14}$ multidimensional Gaussians). This effect can be avoided by using the so-called “scatter
partitioning” (Figure 2(c)), but in this situation the issue of placing the membership functions along
the feature axes remains open.</p>
<p>Figure 2: Partitioning of the input space: (a) grid partitioning, (b) diagonal partitioning, (c) scatter partitioning.</p>
      <p>The third hidden layer is formed by $(h+1)m$ adjustable synaptic weights $w_{jl}$ (here
$j = 1, 2, \ldots, m$; $m$ is the number of available classes in the processed dataset; $l = 0, 1, \ldots, h$), which
are adjusted in the process of learning by optimizing the adopted goal function.</p>
<p>The fourth hidden layer is formed by $m$ elementary accumulators that calculate the values
$$o_j(k) = w_{j0} + \sum_{l=1}^{h} w_{jl} u_l(k) = w_{j0} + \sum_{l=1}^{h} w_{jl} \prod_{i=1}^{n} \exp\left(-\frac{(x_i(k) - c_{li})^2}{2\sigma^2}\right) = w_{j0} + \sum_{l=1}^{h} w_{jl} \exp\left(-\frac{\|x(k) - c_l\|^2}{2\sigma^2}\right) = w_j^T u(k), \tag{4}$$
where $w_j = (w_{j0}, w_{j1}, \ldots, w_{jh})^T$, $u(k) = (1, u_1(k), \ldots, u_h(k))^T$ (the unit component of $u(k)$ is the bias signal).</p>
<p>The signals $o_j(k)$ are then fed to the output layer of the system, formed by $m$ softmax activation
functions that calculate the NFS output signals in the form
$$\hat y_j(k) = \frac{\exp(o_j(k))}{\sum_{j=1}^{m} \exp(o_j(k))}. \tag{5}$$
The maximum value of the signal $\hat y_j(k)$ determines whether the input observation belongs to a
specific class $j = 1, 2, \ldots, m$, as well as the level of its fuzzy membership in this class.</p>
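      <p>For illustration, a minimal numerical sketch of the forward pass (3)-(5) is given below; the array shapes, the shared width $\sigma$, and all names are our assumptions, not part of the original system specification.</p>
      <preformat>import numpy as np

# Minimal sketch of the NFS forward pass, assuming centers C of shape (h, n),
# a shared width sigma, and synaptic weights W of shape (m, h + 1).
def nfs_forward(x, C, W, sigma):
    # Layers 1-2: multidimensional Gaussian activations u_l(k), eq. (3)
    u = np.exp(-np.sum((x - C) ** 2, axis=1) / (2.0 * sigma ** 2))
    u = np.concatenate(([1.0], u))   # prepend the bias signal, u(k) of eq. (4)
    o = W @ u                        # fourth layer: o_j(k) = w_j^T u(k), eq. (4)
    e = np.exp(o - o.max())          # softmax, eq. (5), shifted for numerical stability
    return e / e.sum()               # fuzzy membership levels y_hat_j(k)</preformat>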
    </sec>
    <sec id="sec-3">
      <title>3. Combined learning of the neuro-fuzzy system</title>
      <p>
        It is easy to see that the proposed NFS architecture is quite close to the Wang-Mendel system [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
but the main difference is that the Wang-Mendel system is focused on solving approximation
problems and is tuned based on the quadratic criterion, while our NFS is focused on solving
classification problems, has multiple outputs and softmax output activation functions, and bases its
learning on the cross-entropy criterion
      </p>
<p>$$E = -\sum_{k=1}^{N} \sum_{j=1}^{m} y_j(k) \ln \hat y_j(k), \tag{6}$$
where $y_j(k)$ is the external reference signal that can take only two values, 0 or 1 (the so-called “one-hot
coding”).</p>
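      <p>A minimal sketch of evaluating criterion (6), assuming Y holds the one-hot targets $y_j(k)$ row-wise and Y_hat the corresponding NFS softmax outputs (the names and the small stabilizing constant are our assumptions):</p>
      <preformat>import numpy as np

# Cross-entropy criterion (6) over the whole training set:
# E = -sum_k sum_j y_j(k) * ln y_hat_j(k)
def cross_entropy(Y, Y_hat, eps=1e-12):
    return -np.sum(Y * np.log(Y_hat + eps))  # eps guards against log(0)</preformat>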
<p>The combined NFS learning is performed in two stages: setting the centers of the membership
functions $c_{li}$, $l = 1, 2, \ldots, h$; $i = 1, 2, \ldots, n$, and tuning of the synaptic weights $w_{jl}$, $j = 1, 2, \ldots, m$; $l = 0, 1, \ldots, h$.</p>
      <p>
        Setting the centers of membership functions is made according to the principle of “neurons at data
points” [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] based on “just in time models” [
        <xref ref-type="bibr" rid="ref11">11</xref>
]. In the simplest case, when $h = N$, the components
of the input signals are designated as centers. In the case of $h &lt; N$, some
indistinguishability threshold $\delta$ is introduced, and if $\|x(k+1) - x(k)\| \le \delta$ the observation $x(k+1)$ is
ignored and a new center is not formed.
      </p>
<p>In such a way, “scatter partitioning” of the input space is implemented, i.e. the input space is
completely covered by the multidimensional activation functions (3). If the recognition quality is
insufficient, the threshold $\delta$ can be reduced, which automatically leads to an increase in the number of
membership-activation functions.</p>
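      <p>A minimal sketch of this center-setting stage is given below; it checks a candidate against all already selected centers, a slightly more conservative variant than the consecutive-observation test above, and all names are our assumptions.</p>
      <preformat>import numpy as np

# "Neurons at data points" with indistinguishability threshold delta
# ("scatter partitioning"); X is the training set with observations x(k) as rows.
def select_centers(X, delta):
    centers = [X[0]]
    for x in X[1:]:
        # ignore observations closer than delta to an existing center
        if min(np.linalg.norm(x - c) for c in centers) > delta:
            centers.append(x)
    return np.stack(centers)  # centers c_l of the membership functions</preformat>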
      <p>
        A modified optimization procedure of criterion (6) can be used for synaptic weights tuning, which
in this case has the form [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]:
$$\left\{\begin{array}{l} o_j(k) = w_j^T(k-1)\,u(k), \\ \hat y_j(k) = \exp(o_j(k))\left(\sum_{j=1}^{m} \exp(o_j(k))\right)^{-1}, \\ w_j(k) = w_j(k-1) + r^{-1}(k)\,(y_j(k) - \hat y_j(k))\,u(k), \\ r(k) = \alpha r(k-1) + \|u(k)\|^2, \quad 0 \le \alpha \le 1, \end{array}\right. \tag{7}$$
where $\alpha$ is a forgetting factor.
      </p>
<p>It is easy to see that, if $\alpha = 0$, (7) takes the form of the optimal Kaczmarz algorithm [13] and, if
$\alpha = 1$, the form of the adaptive Goodwin-Ramadge-Caines algorithm [14] for online identification of
stochastic objects. Thus, (7) provides online tuning of the NFS synaptic weights. The smaller the value
of the forgetting factor $\alpha$, the higher the learning process convergence rate.</p>
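      <p>A minimal sketch of one step of the online tuning procedure (7) follows; the default value of the forgetting factor and all names are our assumptions.</p>
      <preformat>import numpy as np

# One online step of procedure (7): u is the extended activation vector
# (1, u_1(k), ..., u_h(k)), y the one-hot target, W the (m, h + 1) weight matrix.
def train_step(W, r, u, y, alpha=0.9):
    o = W @ u
    e = np.exp(o - o.max())
    y_hat = e / e.sum()                  # softmax output, as in (7)
    r = alpha * r + u @ u                # r(k) = alpha * r(k-1) + ||u(k)||^2
    W = W + np.outer(y - y_hat, u) / r   # w_j(k) = w_j(k-1) + r^-1(k)(y_j - y_hat_j)u(k)
    return W, r  # alpha = 0 gives the Kaczmarz form, alpha = 1 the Goodwin-Ramadge-Caines form</preformat>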
    </sec>
    <sec id="sec-4">
      <title>4. Experiment results</title>
<p>The performance of the proposed neuro-fuzzy system with one-dimensional Gaussian membership
functions and combined learning was compared to convolutional neural networks, namely LeNet-5
[15] and ResNet-20 [16] with transfer learning and a modified classifier. We also experimentally tested
the advantage of scatter partitioning over the grid and diagonal ones.</p>
<p>The LeNet-5 has three sets of convolution and max pooling, with kernel sizes 5×5 and 2×2
respectively, and a ReLU activation function. Additionally, it has two fully connected layers, where the
first FC layer has a ReLU activation function and the second one a softmax classifier.</p>
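      <p>A minimal sketch of this LeNet-5 variant follows, written with tf.keras as an assumed framework; the input shape, class count, and filter counts are illustrative placeholders not specified in the text.</p>
      <preformat>import tensorflow as tf

# LeNet-5 variant as described above: three convolution (5x5) + max-pooling (2x2)
# stages with ReLU, then two fully connected layers ending in a softmax classifier.
def build_lenet5(input_shape=(128, 128, 1), num_classes=9):
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=input_shape)])
    for filters in (6, 16, 32):  # filter counts are assumptions
        model.add(tf.keras.layers.Conv2D(filters, (5, 5), activation="relu", padding="same"))
        model.add(tf.keras.layers.MaxPooling2D((2, 2)))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(120, activation="relu"))  # first FC layer, ReLU
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))  # softmax classifier
    return model</preformat>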
<p>The ResNet-20 was taken as a baseline for transfer learning, so the original architecture
remained unchanged; however, a support vector machine with a radial-basis kernel
(RBF-SVM) [17] was chosen as the classifier.</p>
    </sec>
    <sec id="sec-5">
<title>4.1. Dataset</title>
<p>One of the most frequent diagnostic-examination image modalities is the chest X-ray, but X-rays in clinical
diagnosis are challenging and can occasionally be much harder to read than CT scans of
the chest. Such data is hard to obtain clinically, so few annotated datasets are available
online, which leads to difficulties in achieving computer-aided detection and diagnosis. Thus,
there are only a few datasets with X-ray images, one of which is OpenAI with 4 143 images available,
and the second is the NIH Chest X-ray Dataset [18].</p>
<p>Figure 3: Observations from the NIH Chest X-ray Dataset labeled as (a) Cardiomegaly, (b) Normal.</p>
<p>The second dataset contains 112 120 X-ray images, some of which are presented in Figure 3, with
various disease labels (Atelectasis, Cardiomegaly, Effusion, Infiltration, Mass, Nodule, Pneumonia,
Pneumothorax, Normal) from 30 805 unique patients. To label the obtained observations, the authors
processed the associated radiological reports using Natural Language Processing, whose accuracy was
more than 90% and suitable for supervised learning. The dataset was split into training and validation
sets in the ratio of 80% to 20%.</p>
    </sec>
    <sec id="sec-6">
<title>4.2. The results analysis</title>
<p>We track the accuracy of each version of the proposed neuro-fuzzy system (the original
proposed system and its variants with grid partitioning and diagonal partitioning) and of the deep
neural networks LeNet-5 and ResNet-20 with the RBF-SVM classifier. The spread-width
parameter in the proposed neuro-fuzzy system was chosen as 0.33 using the “three sigma” rule (with
inputs pre-coded onto [0, 1], choosing $\sigma$ so that $3\sigma \approx 1$ gives $\sigma \approx 0.33$). Since we
consider the situation of training dataset deficit, to simulate this problem we randomly chose
a similar number of observations of each class, forming a subset with 10 000 observations, but we
also tested these systems on the full dataset. The results of the comparison are shown in Table 1.</p>
<p>As we can see, the maximum accuracy achieved by the convolutional neural networks is
comparable to that of the proposed neuro-fuzzy system. At the same time, the modifications of the
neuro-fuzzy system with grid and diagonal partitioning showed poor performance.</p>
<p>Performing grid partitioning, we tried to improve accuracy by changing the similarity parameter,
because if we left this number as it was in the case of scatter partitioning, we ended up with the
curse of dimensionality, but increasing it led to a drastic decrease in accuracy. Performing diagonal
partitioning, we obtained great speed, 16.43 minutes; however, the accuracy was the worst
among the competitors. The reason is that the clusters of data were scattered throughout the whole feature
space, leaving only part of it on the main diagonal, so the membership of the
observations located far from the diagonal was computed poorly.</p>
<p>Advanced systems such as convolutional neural networks show outstanding accuracy;
however, their processing speed is a sore spot. The time consumption of the proposed network,
LeNet-5, and ResNet-20 with RBF-SVM is shown in Table 2.</p>
      <p>The proposed system, in comparison to the CNNs, showed better speed on both datasets, even though
the accuracy was comparable on the subset; on the full dataset, however, the NFS yields in accuracy to the
advanced systems. This is conditioned not just by the architecture of the NFS but also by the combined
learning. It allows, without affecting performance, narrowing down the number of training
observations, speeding up the image processing, and avoiding the “curse of dimensionality”.
Additionally, the scatter partitioning and the probabilistic nature of the Gaussian membership function
help calculate the membership levels of new observations with decent accuracy.</p>
<p>LeNet-5, in comparison to ResNet-20 with RBF-SVM, showed drastically better speed on the
full dataset: the former took around 2.5 hours to process the images, while the latter took more than 7
hours. This can be explained by the fact that LeNet-5 has a “lighter” architecture than ResNet-20: the
feature extractor of LeNet-5 has fewer convolution-pooling layers, while the RBF-SVM classifier of
ResNet-20 provides better accuracy yet is time-consuming in comparison to the typical fully connected
layers of LeNet.</p>
<p>We also compared a variation of the proposed system with a different membership
function, the Epanechnikov one; however, we did not include it in the table due to its poor
performance. Its piecewise approximation gives relatively good speed, but the highest accuracy
achieved was 49%. The reason is that in the feature space the “tails” of the Gaussian function are
essential for computing memberships when solving a classification task under conditions of overlapping classes.
Thus the usage of compact-support membership functions, such as the triangular one, is inappropriate under these conditions.</p>
    </sec>
    <sec id="sec-7">
      <title>5. Conclusion</title>
<p>The architecture and the algorithm of combined learning of a neuro-fuzzy system for solving
classification problems under the conditions of limited training data are proposed. The system
implements “scatter partitioning” of the input space, provides a high rate of its parameters tuning, and is
characterized by simplicity of numerical implementation in comparison with known neuro-fuzzy
systems. The simulation results confirm the effectiveness of the proposed system.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Bengio</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cun</surname>
            <given-names>YL</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinton</surname>
            <given-names>G</given-names>
          </string-name>
          (
          <year>2015</year>
          )
          <article-title>Deep learning</article-title>
          .
          <source>Nature</source>
          <volume>521</volume>
          :
          <fpage>436</fpage>
          -
          <lpage>444</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Schmidhuber</surname>
            <given-names>J</given-names>
          </string-name>
          (
          <year>2015</year>
          )
<article-title>Deep learning in neural networks: An overview</article-title>
          .
          <source>Neural Networks</source>
          <volume>61</volume>
          :
          <fpage>85</fpage>
          -
          <lpage>117</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Goodfellow</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Courville</surname>
            <given-names>A</given-names>
          </string-name>
          (
          <year>2016</year>
          )
          <article-title>Deep Learning</article-title>
          . MIT Press
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Souza</surname>
            <given-names>PVC</given-names>
          </string-name>
          (
          <year>2020</year>
          )
          <article-title>Fuzzy neural networks and neuro-fuzzy networks: A review the main techniques and applications used in the literature</article-title>
          .
          <source>Applied Soft Computing</source>
          <volume>92</volume>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Kacprzyk</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pedrycz</surname>
            <given-names>W</given-names>
          </string-name>
          (
          <year>2015</year>
          ) Springer Handbook of Computational Intelligence. Springer-Verlag, Berlin Heidelberg
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Osowski</surname>
          </string-name>
          ,
          <article-title>Sieci neuronowe do przetwarzania informacji</article-title>
          , Warszawa: Oficyna Wydawnicza Politechniki Warszawskiej,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Moody</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Darken</surname>
            <given-names>CJ</given-names>
          </string-name>
          (
          <year>1989</year>
          )
          <article-title>Fast learning in networks of locally tuned processing units</article-title>
          .
          <source>Neural Computation</source>
          <volume>1</volume>
          :
          <fpage>281</fpage>
          -
          <lpage>294</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Poggio</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Girosi</surname>
            <given-names>F</given-names>
          </string-name>
          (
          <year>1990</year>
          )
<article-title>Networks for approximation and learning</article-title>
          .
          <source>Proceedings of the IEEE</source>
          <volume>78</volume>
          :
          <fpage>1481</fpage>
          -
          <lpage>1497</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Babuška</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verbruggen</surname>
            ,
            <given-names>H.B.</given-names>
          </string-name>
          (
          <year>1997</year>
          ).
          <article-title>Constructing Fuzzy Models by Product Space Clustering</article-title>
          . In: Hellendoorn,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Driankov</surname>
          </string-name>
          ,
          <string-name>
            <surname>D</surname>
          </string-name>
          . (eds) Fuzzy Model Identification. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-60767-7_2
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.R.</given-names>
            <surname>Zahirniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chapman</surname>
          </string-name>
          , Rogers,
          <article-title>Pattern recognition using radial basis function networks</article-title>
          ,
          <source>in: Sixth Annual Aerospace Applications of AI Conf, Dayton</source>
          ,
          <year>1990</year>
          , pp.
          <fpage>249</fpage>
          -
          <lpage>260</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O.</given-names>
            <surname>Nelles</surname>
          </string-name>
          , Nonlinear Systems Identification. Berlin: Springer,
          <year>2001</year>
          . doi: 10.1007/978-3-662-04323-3.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Bodyanskiy</surname>
            <given-names>YV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kokshenev</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolodyazhniy</surname>
            <given-names>V</given-names>
          </string-name>
          (
          <year>2003</year>
          )
          <article-title>An adaptive learning algorithm for a neo fuzzy neuron</article-title>
          . In:
          <string-name>
            <surname>Wagenknecht</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hampel</surname>
            <given-names>R</given-names>
          </string-name>
<source>(eds) Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and Technology, Zittau, Germany, September 10-12, 2003</source>. University of Applied Sciences at Zittau/Görlitz, pp 375-379
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Kaczmarz S (1993) Approximate solution of systems of linear equations. International Journal of Control 57:1269–1271. https://doi.org/10.1080/00207179308934446</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Goodwin G, Ramadge P, Caines P (1980) Discrete-time multivariable adaptive control. IEEE Transactions on Automatic Control 25:449–456. https://doi.org/10.1109/TAC.1980.1102363</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Lecun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86:2278–2324. https://doi.org/10.1109/5.726791</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] He K, Zhang X, Ren S, Sun J (2015) Deep Residual Learning for Image Recognition</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Apostolidis-Afentoulis V (2015) SVM Classification with Linear and RBF kernels. https://doi.org/10.13140/RG.2.1.3351.4083</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] Wang X, Peng Y, Lu L, et al (2017) ChestX-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Honolulu, HI, pp 3462–3471</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>