<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Generalized neo-fuzzy-neuron with membership functions of special type in medical diagnostics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevgeniy Bodyanskiy</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olha Chala</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial intelligence department, Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Control systems research laboratory, Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0001</lpage>
      <abstract>
<p>A neuro-fuzzy system for medical diagnostics under conditions of short training datasets and overlapping classes is proposed. The proposed system is based on the multidimensional generalized neo-fuzzy neuron, modified by introducing an additional softmax output layer and membership functions of a special form. For learning, a speed-optimal adaptive algorithm providing additional smoothing properties is used. In contrast to well-known neuro-fuzzy systems, the proposed one is simple in numerical implementation and has fewer tuning parameters, while the ability to process information specified in different scales is preserved. Experiments have confirmed the effectiveness of the proposed system, especially in situations where information is fed for processing in online mode.</p>
      </abstract>
      <kwd-group>
<kwd>Neo-fuzzy neuron</kwd>
        <kwd>online learning</kwd>
        <kwd>medical diagnostics</kwd>
        <kwd>membership functions</kwd>
        <kwd>overlapping classes</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Nowadays, artificial intelligence methods, and in particular computational intelligence [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ], have gained wide use in many tasks related to medical information processing, and specifically in the medical diagnostics task [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref4 ref5 ref6 ref7 ref8 ref9">4-12</xref>
        ]. Generally, this task can be considered as a pattern classification-recognition problem under conditions of a deficit of a priori information and overlapping classes of arbitrary form in a multidimensional space of features that can be defined in numerical, nominal, and ordinal scales. Artificial neural networks, both traditional shallow [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and more advanced deep [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ] ones, which provide high-quality classification due to their universal approximating properties, have proved to be the best here. As for situations with overlapping classes, the most effective are neuro-fuzzy systems [
        <xref ref-type="bibr" rid="ref16 ref17 ref3">3, 16, 17</xref>
        ], which are hybrid systems of computational intelligence that combine the properties of both artificial neural networks and fuzzy reasoning systems. These systems require significant amounts of a priori information in the form of training samples, which are not always available in medical diagnostic tasks.
      </p>
      <p>In situations with a limited training dataset, probabilistic neural networks [18, 19], which have shown themselves quite well in medical applications [20], can be used as an alternative to multilayer neural networks. Probabilistic neural networks implement their training on the basis of the "neurons in data points" principle [21], i.e. so-called lazy learning [22], where the centers of the kernel activation functions of the pattern layer coincide with the coordinates of observations from the training dataset. The disadvantage of probabilistic neural networks is the necessity of specifying the whole training dataset a priori. This means that if the training dataset receives one more classified observation, then in order to take it into account it is necessary to restructure the network architecture as a whole.</p>
      <p>In situations with a limited training dataset and sequential online data entry and processing in the learning mode, the so-called neo-fuzzy systems [23-26] can be most effective: they are characterized by good approximating properties and can be tuned online with the maximum possible learning speed [27].</p>
      <p>At the same time, a recognition system based on ordinary neo-fuzzy neurons is not designed to work with many classes [28]: it is redundant and contains too many membership functions and tuned parameters, which makes it cumbersome and difficult to implement.</p>
      <p>This problem can be overcome by using the idea of a generalized neo-fuzzy neuron [29], which in the general case is an approximating multidimensional system and is not meant to solve classification problems. Therefore, it is advisable to introduce a modification of it, designed specifically to solve pattern recognition problems with high accuracy and speed under conditions of overlapping classes.</p>
      <p>Hence, the subject of the research in this paper is the process of medical data analysis in online mode under conditions of overlapping diagnosis classes, the generalized modified neo-fuzzy neuron, and the process of its online learning under conditions of a priori information deficit.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Architecture of the modified generalized neo-fuzzy-neuron for classification-diagnostic task solving</title>
      <p>The generalized neo-fuzzy neuron (GNFN) in the general case is a multidimensional nonlinear system that is able to adjust its parameters (synaptic weights) online, implementing the mapping
o(τ) = F(x(τ)), o ∈ R^m, x ∈ R^n (1)</p>
      <p>through the approximation of an a priori unknown multidimensional function according to the observation vectors o(τ) = (o_1(τ), ..., o_j(τ), ..., o_m(τ))^T, x(τ) = (x_1(τ), ..., x_i(τ), ..., x_n(τ))^T, τ = 1, 2, ..., T, where τ is the number of the observation vector in the training dataset, or the index of the current discrete time in the case when the information for processing is fed in real time. In addition to the actual problem of approximation, GNFN can be used in solving problems of extrapolation-prediction of multidimensional sequences, nonlinear identification, and adaptive control of nonlinear objects [29].</p>
      <p>
        In order to enable the solution of pattern classification-recognition problems, an additional output layer formed by softmax activation functions [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] should be added to the GNFN. By means of this layer, firstly, the defuzzification of the output signals o_j(τ) is implemented, and secondly, it becomes possible to use cross-entropy as a learning criterion, which is the most common one in pattern recognition problems.
      </p>
      <p>
        Figure 1 shows the architecture of a generalized neo-fuzzy neuron with an additional defuzzification layer for solving the pattern recognition problem of medical diagnostics, the distinctive feature of which is that the diagnosis classes overlap arbitrarily, can have a rather complex shape, and the input signals can be set in any scale, not only the traditional numerical one. It is easy to see that this scheme is a kind of hybrid of the traditional GNFN and the Takagi-Sugeno-Kang neuro-fuzzy system, which is a multidimensional universal approximator whose training may encounter some computational difficulties [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. It should also be added that the system under consideration can in principle be implemented by connecting ordinary neo-fuzzy neurons in parallel with the softmax output function. However, such a scheme would contain n times more membership functions in the first layer, which complicates its implementation.
      </p>
      <p>The nonlinear transformation implemented by the system shown in Figure 1 can, in the general case, be written as
o_j(τ) = Σ_{i=1}^{n} Σ_{l=1}^{h_i} μ_li(x_i(τ)) w_jli(τ − 1),
ŷ_j(τ) = softmax(o_j(τ)) = e^{o_j(τ)} / Σ_{p=1}^{m} e^{o_p(τ)}, j = 1, 2, ..., m (2)
where w_jli(τ − 1) are the tuned synaptic weights obtained on the basis of τ − 1 previous observations, and μ_li(x_i) are one-dimensional membership functions, whose total number equals Σ_{i=1}^{n} h_i (here h_i is the number of these functions at the i-th input of the system). Using these functions, the fuzzification of the input variable space is realized, i.e. its dimension is increased, which allows solving linearly non-separable recognition problems.</p>
      <p>Figure 1: Architecture of the generalized neo-fuzzy neuron with inputs x_1(τ), ..., x_n(τ), membership functions μ_11(x_1), ..., μ_{h_n n}(x_n), synaptic weights w_jli, summation units, and softmax outputs ŷ_j(τ).</p>
      <p>In traditional neo-fuzzy neurons, triangular functions satisfying the Ruspini partition condition are usually used as such functions. The advantages of these functions are, firstly, that the system does not require an output defuzzification layer and, secondly, that for processing each observation x_i(τ) only two neighbouring functions on each input are fired. This, in turn, leads to the fact that at each moment of the current time only 2n synaptic weights w_jli(τ) are tuned, which simplifies the implementation of the learning procedure. In principle, in neo-fuzzy neurons any function that satisfies the Ruspini partitioning conditions can be used, such as B-splines, but in this case at each step all Σ_{i=1}^{n} h_i synaptic weights must be tuned.</p>
      <p>Since in the system under consideration the softmax layer performs the defuzzification procedure, there is no need to fulfill the unity partitioning conditions. Therefore, any functions used in traditional neuro-fuzzy systems can be used as membership ones. These are usually kernel functions (Gaussians, Cauchy functions, etc.), which are widely used in Parzen "windows" [30] or Nadaraya-Watson estimates [31]. However, since these functions have an infinite interval of determination, at each step it is necessary to tune all the synaptic weights of the system.</p>
      <p>Therefore, we propose to use Epanechnikov's kernel functions [32], shown in Figure 2, as membership functions; in analytical form they can be written as
μ_li(x_i) = (1 − ((x_i − c_li) / r_i)^2) ψ_li (3)
ψ_li = 1 if |x_i − c_li| ≤ r_i, and ψ_li = 0 otherwise, (4)
where c_li are the centers (extremum points) of the corresponding functions and r_i is the distance between two neighbouring centers (in the case of their uniform distribution on the interval of determination [x_i min, x_i max]).</p>
      <p>If h_i functions are evenly spaced on the i-th input, the distance between the centers is described by the expression
r_i = (x_i max − x_i min) / (h_i − 1). (5)</p>
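Expressions (3)-(5) combine into a short computation. The sketch below is illustrative only (function and variable names are assumptions); it builds h evenly spaced Epanechnikov membership functions on an interval and evaluates them at one input value:

```python
import numpy as np

def epanechnikov_memberships(xi, xi_min, xi_max, h):
    """Epanechnikov kernel membership functions per eqs. (3)-(5):
    h evenly spaced centers on [xi_min, xi_max], width r equal to the
    distance between neighbouring centers, zero outside |xi - c| <= r."""
    c = np.linspace(xi_min, xi_max, h)            # centers c_li
    r = (xi_max - xi_min) / (h - 1)               # eq. (5)
    u = (xi - c) / r
    # eq. (3) gated by the indicator of eq. (4)
    return np.where(np.abs(u) <= 1.0, 1.0 - u ** 2, 0.0)

mu = epanechnikov_memberships(0.4, 0.0, 1.0, 5)
print(mu)   # only the two functions whose support covers 0.4 are non-zero
```

Note that, unlike triangular functions, these values need not sum to one; the softmax output layer absorbs the missing normalization.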
      <p>It is important to notice that in the case of binary inputs, which are often used in medical diagnostic tasks, only two membership functions with centers c_1i = 0, c_2i = 1 and r_i = 1 are required.</p>
      <p>It should also be noted that, in comparison with triangular membership functions, which implement piecewise-linear approximation, approximation using Epanechnikov's kernel functions provides a higher accuracy of reconstruction of the separating hypersurfaces in the feature space.</p>
      <p>In a more general case, the centers of membership functions can be located unevenly, as shown in
Figure 3.</p>
      <p>In this case, the membership functions are not symmetrical and can be written as</p>
      <p>μ_li^L(x_i) = [1 − ((x_i − c_li) / (c_{l−1,i} − c_li))^2]_+,
μ_li^R(x_i) = [1 − ((x_i − c_li) / (c_{l+1,i} − c_li))^2]_+ (6)
where [•]_+ = max{0, •}, and μ_li^L, μ_li^R are the left and right branches of the l-th function on the i-th input.</p>
      <p>Thus, output signals are generated at the GNFN outputs:
o_j(τ) = Σ_{i=1}^{n} Σ_{l=1}^{h_i} μ_li(x_i(τ)) w_jli(τ − 1) (7)
(at the instant of refining the synaptic weights based on the observation x_i(τ)); these signals pass through the output layer of softmax activation functions, forming the levels of fuzzy membership ŷ_j(τ) of each observation x(τ) to each of the possible classes-diagnoses cl_j, j = 1, 2, ..., m.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Learning of the generalized neo-fuzzy-neuron with softmax output layer in classification-diagnostic task solving</title>
      <p>
        As a learning criterion, the cross-entropy criterion can be used, which is currently most often implemented in convolutional deep neural networks [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]:
      </p>
      <p>J(τ) = Σ_{j=1}^{m} J_j(τ) = −Σ_{j=1}^{m} y_j(τ) ln ŷ_j(τ) (8)
where y_j(τ) is an external reference signal that determines whether or not the observation x(τ) belongs to the j-th class and can take only the two values 0 or 1 (so-called "one-hot coding").</p>
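Criterion (8) reduces to a one-line computation for one-hot targets. A minimal sketch (the small epsilon guarding the logarithm is an implementation assumption, not part of the formula):

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    """Cross-entropy learning criterion, eq. (8), for a one-hot target y:
    J = -sum_j y_j * ln(y_hat_j)."""
    return -np.sum(y * np.log(y_hat + eps))

y = np.array([0.0, 1.0, 0.0])        # one-hot: the observation belongs to class 2
y_hat = np.array([0.2, 0.7, 0.1])    # softmax outputs of the system
print(cross_entropy(y, y_hat))       # equals -ln(0.7) for this one-hot target
```

Because y is one-hot, only the term for the true class survives, so minimizing (8) pushes the softmax membership of the correct class toward 1.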
      <p>Let us further introduce the (Σ_{i=1}^{n} h_i × 1)-vector of membership functions μ(x) = (μ_11(x_1), ..., μ_{h_1 1}(x_1), ..., μ_li(x_i), ..., μ_{h_n n}(x_n))^T and the vector of synaptic weights associated with the j-th output of the system w_j = (w_j11, ..., w_{j h_1 1}, ..., w_jl1, ..., w_{j h_n n})^T; then the output signals and the local learning criterion can be written in the form
o_j(τ) = w_j^T(τ − 1) μ(x(τ)),
ŷ_j(τ) = e^{o_j(τ)} / Σ_{p=1}^{m} e^{o_p(τ)} = e^{w_j^T(τ − 1) μ(x(τ))} / Σ_{p=1}^{m} e^{w_p^T(τ − 1) μ(x(τ))},
J_j(τ) = −y_j(τ) ln ŷ_j(τ) = −y_j(τ) ln (e^{w_j^T(τ − 1) μ(x(τ))} / Σ_{p=1}^{m} e^{w_p^T(τ − 1) μ(x(τ))}),</p>
      <p>where 0 ≤ α ≤ 1 is a forgetting factor that provides additional filtering properties of the learning process; the learning procedure coincides with the Kaczmarz-Widrow-Hoff algorithm [36] when α = 0, and with α = 1 it is transformed into a Goodwin-Ramage-Caines stochastic approximation algorithm [37].</p>
      <p>The quality of the learning process of the system under consideration can be improved by tuning not only the synaptic weights but also the corresponding membership functions of the first layer of the system [38]. This is quite simple to do using the ideas of lazy learning [21], which underlies probabilistic neural networks.</p>
      <p>Introducing some proximity threshold r_i min for two neighbouring centers on each input,
|c_{l+1,i} − c_li| ≥ r_i min, (16)
the process of setting the centers can be organized as follows: the first center coincides with the first observation from the training dataset, i.e. c_1 = x(1). Upon receipt of the second observation x(2), this distance is checked at all coordinates from c_1. If the distance is less than r_i min, a new center is not formed; if it is greater than r_i min, the center c(2) is set. This process continues until the entire system of centers, whose number is determined by the value (formula 17), is established.</p>
      <p>The weight adjustment procedure is essentially the optimal Kaczmarz-Widrow-Hoff algorithm [33, 34], which provides the maximum possible rate of convergence in the class of gradient procedures.</p>
      <p>A regularized version of this algorithm can also be used to protect against the possible effect of an "exploding" gradient:</p>
      <p>wj ( )  wj ( 1) 
( y j ( )  yˆ j ( )) (x( ))</p>
      <p>   (x( )) 2
(here   0 – momentum term) or exponentially weighted stochastic approximation [35]
wj ( )  wj ( 1)  r 1( )( y j ( )  wTj ( 1) (x( )) (x( ))),

r( )  r( 1)   (x( )) 2</p>
      <p>It is clear that this procedure can be implemented only for inputs that receive information in a
numerical scale.</p>
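The center-setting procedure described above can be sketched as follows. This is a minimal illustrative sketch (the per-coordinate distance test and the toy data are assumptions made for the example):

```python
import numpy as np

def set_centers(X, r_min):
    """Lazy center placement: the first observation becomes the first center;
    each subsequent observation becomes a new center only if it lies farther
    than r_min from every existing center; otherwise it is absorbed."""
    centers = [np.asarray(X[0], dtype=float)]
    for x in X[1:]:
        x = np.asarray(x, dtype=float)
        if all(np.any(np.abs(x - c) > r_min) for c in centers):
            centers.append(x)     # no existing center is close enough
    return centers

X = [[0.0, 0.0], [0.05, 0.02], [1.0, 1.0], [0.98, 1.01]]
centers = set_centers(X, r_min=0.1)
print(len(centers))   # two well separated centers remain
```

The threshold r_min thus controls the granularity of the first layer: a smaller threshold yields more membership functions and a finer partition of each numerical input.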
    </sec>
    <sec id="sec-4">
      <title>4. Experiment results</title>
      <p>The experiment was held using the "Urinary biomarkers for pancreatic cancer" dataset [39] from the Kaggle repository. This type of cancer is extremely aggressive and lethal, because at early stages it is extremely hard to detect since there are no symptoms; thus, the survival rate within five years after diagnosis is less than ten per cent. However, if it is caught early, the odds of surviving are quite promising. Hence, the detection of pancreatic cancer is a burning issue.</p>
      <p>The dataset itself contains fourteen features; the key ones among them are creatinine, LYVE1, REG1B, and TFF1. Creatinine is a protein serving as an indicator of kidney function; LYVE1 (lymphatic vessel endothelial hyaluronan receptor 1) is a protein that can take part in tumour metastasis; REG1B is a protein that can be related to pancreas regeneration; and TFF1 (trefoil factor 1) can be associated with regeneration and repair of the urinary tract. Besides that, the dataset contains age and sex, which can also be related to the fact of having pancreatic cancer; there are a few other biomarkers as well, but they were not measured for all patients.</p>
      <p>The goal function of this dataset can take three possible values: 3 (pancreatic cancer), 2 (non-cancerous pancreas condition), and 1 (healthy); the dataset length is 590 instances. The available data from the dataset are presented in Table 1.</p>
      <p>It can be seen that the raw data has different scales, such as nominal and numerical. As for preprocessing, one-hot coding was used for the nominal scale.</p>
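The one-hot preprocessing of a nominal feature such as sex can be sketched in a few lines (a minimal illustrative sketch; the helper name and category labels are assumptions):

```python
import numpy as np

def one_hot(values, categories):
    """One-hot coding of a nominal feature so it can be fed to the system
    together with numerical inputs: one binary column per category."""
    idx = [categories.index(v) for v in values]
    out = np.zeros((len(values), len(categories)))
    out[np.arange(len(values)), idx] = 1.0
    return out

print(one_hot(["F", "M", "F"], ["F", "M"]))   # one row per patient
```

Each resulting binary column then needs only the two membership functions with centers 0 and 1 described in Section 2.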
      <p>The idea underlying the following experiment is to test the efficiency of the chosen activation function. Thus, the proposed network was taken as a basis, but the activation function was changed from the Epanechnikov function to the above-mentioned first-order B-splines (an instance of which can be seen in Figure 5) and to the traditional Gaussian activation function, which by its nature can provide membership levels, i.e. it can work with overlapping classes.</p>
      <p>From Table 2 it can be seen that the Epanechnikov membership function is not inferior to the first-order B-splines in accuracy, while having comparable time, as was expected from the piecewise approximation. With the Gaussian as a membership activation function, the system requires an additional normalisation step so that the Ruspini partition condition is satisfied, and it is important to notice that the function itself requires cumbersome computations. This slows down the data processing and leads to unnecessary steps in the whole process; additionally, the normalisation leads to the loss of important information, so the accuracy is low.</p>
      <p>The next step was the comparison of the proposed approach with existing ones. Advanced systems such as deep neural networks could have provided high accuracy; however, considering the volume of the dataset, the importance of speed, and the hardware resource usage, the usage of DNNs is ineffective. Thus, the simple machine learning approach "Support vector machine" (SVM) and the "Multilayer perceptron" (MLP) were chosen for the further comparison.</p>
      <p>As can be seen, both the accuracy and time values obtained from the SVM and MLP algorithms are inferior to the proposed approach, since these machine learning algorithms are crisp and can work only under conditions where classes do not overlap. Besides that, the volume of the dataset may have influenced the speed of both approaches, leading to more significant time consumption in comparison to the proposed approach.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In order to solve medical diagnostics problems, considered from the point of view of the pattern recognition-fuzzy classification task, a hybrid system of computational intelligence is proposed. This system is essentially a combination of a generalized neo-fuzzy neuron, a neuro-fuzzy system, and a convolutional deep neural network. The system under consideration is designed to operate under conditions of overlapping classes of arbitrary shape and a limited training dataset, quite often found in real situations associated with a deficit of a priori information. A speed-optimal algorithm, which can be provided with additional filtering properties, is used to train the system. The results of the experiment prove that the proposed system is simple in numerical implementation and provides high-quality diagnosis, in comparison to traditional artificial neural networks, under conditions of ambiguity and lack of a priori information. The high speed of the proposed system allows processing information in online mode.</p>
    </sec>
    <sec id="sec-6">
      <title>6. References</title>
      <p>[18] S. Osowski, Sieci neuronowe do przetwarzania informacji, Warszawa: Oficijna Wydawnicza</p>
      <p>Politechniki Warszawskiej, 2006.
[19] P.V.C. Souza, “Fuzzy neural networks and neuro-fuzzy networks: A review of the main techniques
and applications used in the literature,” Applied Soft Computing, vol. 92 (2020).
[20] D. F. Specht, “Probabilistic neural networks and the polynomial Adaline as complementary
techniques to classification,” IEEE Trans. on Neural Networks, vol. 1, no. 1, pp. 111-121, 1990.
[21] D. F. Specht, “Probabilistic neural networks,” Neural Networks, vol. 3, no. 1, pp. 109-118, 1990.
[22] Y. Bodyanskiy, A. Deineko, I. Pliss, O. Chala, “Probabilistic Fuzzy Neural System in Medical
Diagnostic Task and its Lazy Learning-Selflearning,” Informatics &amp; Data-Driven Medicine
(IDDM) 2020, pp. 29-35.
[23] D.R. Zahirniak, R. Chapman, Rogers, Pattern recognition using radial basis function networks,
in: Sixth Annual Aerospace Applications of AI Conf, Dayton, 1990, pp. 249–260.
[24] O. Nelles, Nonlinear Systems Identification. Berlin: Springer, 2001.
doi:10.1007/978-3-66204323-3.
[25] T. Yamakawa, E. Uchino, J. Miki, H. Kusanagi, A neo-fuzzy neuron and its application to
system identification and prediction of the system behavior, in: Proceedings of the 2nd
International Conference on Fuzzy Logic &amp; Neural Networks, Iizuka, Japan, 1992.
[26] E. Uchino, T. Yamakawa, Soft Computing Based Signal Prediction, Restoration, and Filtering,
in: D. Ruan (Ed.), Intelligent Hybrid Systems, Springer US, Boston, MA, 1997, pp. 331–351.
https://doi.org/10.1007/978-1-4615-6191-0_14.
[27] Miki T, Yamakawa T, “Analog implementation of neo-fuzzy neuron and its on-board learning,”</p>
      <p>In Computational Intelligence and Applications, Piraeus: WSES Press, 1999, pp. 144-149.
[28] D. Zurita, M. Delgado, J.A. Carino, J.A. Ortega, G. Clerc, Industrial Time Series Modelling by
Means of the Neo-Fuzzy Neuron, volume 4 of IEEE Access, 2016 6151–6160.
https://doi.org/10.1109/ACCESS.2016.2611649.
[29] Y. Bodyanskiy, I. Kokshenev, V. Kolodyazhniy, An adaptive learning algorithm for a neo-fuzzy
neuron, in: Proceedings of the 3rd Conference of the European Society for Fuzzy Logic and
Technology, Zittau, Germany, 2003.
[30] Y. Bodyanskiy, N. Kulishova, O. Chala, The Extended Multidimensional Neo-Fuzzy System and
Its Fast Learning in Pattern Recognition Tasks, 3 volume of Data, 2018.
https://doi.org/10.3390/data3040063.
[31] R. P. Landim, B. Rodrigues, S. R. Silva, and W. M. Caminhas, “A neo-fuzzy-neuron with real
time training applied to flux observer for an induction motor,” in Proceedings 5th Brazilian
Symposium on Neural Networks (Cat. No.98EX209), Belo Horizonte, Brazil, 1998, pp. 67–72.
doi: 10.1109/SBRN.1998.730996.
[32] E. Parzen, “On Estimation of a Probability Density Function and Mode,” Ann. Math. Statist., v
33, no. 3, pp. 1065–1076, Sep. 1962, doi: 10.1214/aoms/1177704472.
[33] G. S. Watson, “Smooth Regression Analysis,” Sankhya: The Indian Journal of Statistics, Series</p>
      <p>A, Vol. 26, No. 4, 1964, pp. 359-372.
[34] V. A. Epanechnikov, “Non-Parametric Estimation of a Multivariate Probability Density,” Theory</p>
      <p>Probab. Appl., vol. 14, no. 1, pp. 153–158, Jan. 1969, doi: 10.1137/1114019.
[35] S. Kaczmarz, “Approximate solution of systems of linear equations,” International Journal of</p>
      <p>Control, vol. 57, no. 6, pp. 1269–1271, Jun. 1993, doi: 10.1080/00207179308934446.
[36] B. Widrow and M. E. Hoff, Adaptive Switching Circuits. IRE WESCON Convention Record,
1960.
[37] G. C. Goodwin, P. J. Ramadge, and P. E. Caines, “A globally convergent adaptive predictor</p>
      <p>Automatica, vol. 17, no. 1, pp. 135–140, Jan. 1981, doi: 10.1016/0005-1098(81)90089-3.
[38] Y. Bodyanskiy, V. Kolodyazhniy, and A. Stephan, “An Adaptive Learning Algorithm for a
Neuro-fuzzy Network,” Computational Intelligence. Theory and Applications. Fuzzy Days, vol.
2206, pp. 68–75, 2001.
[39] S. Debernardi et al., “A combination of urinary biomarker panel and PancRISK score for earlier
detection of pancreatic cancer: A case–control study,” PLoS Med, vol. 17, no. 12, p. e1003489,
Dec. 2020, doi: 10.1371/journal.pmed.1003489.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Mumford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jain</surname>
          </string-name>
          , Computational Intelligence, Collaboration, Fuzzy and Emergence, Berlin: Springer-Verlag,
          <year>2009</year>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>642</fpage>
          -01799-5.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kruse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Borgelt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Klawonn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Moewes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Steinbrecher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Held</surname>
          </string-name>
          , Computational Intelligence. A Methodological Introduction. Berlin: Springer-Verlag,
          <year>2013</year>
          . doi:
          <volume>10</volume>
          .1007/978-1-
          <fpage>4471</fpage>
          -5013-8.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kacprzyk</surname>
          </string-name>
          , W. Pedrycz, Springer Handbook of Computational Intelligence, Berlin Heidelberg: Springer, Verlag,
          <year>2015</year>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>662</fpage>
          -43505-2.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.-N.</given-names>
            <surname>Teodorescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <source>Computational Intelligence Processing in Medical Diagnosis</source>
          . Springer-Verlag Berlin Heidelberg ,
          <year>2002</year>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>7908</fpage>
          -1788-1.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Eu</surname>
          </string-name>
          .
          <article-title>Giannopoulou, Data Mining in Medical and Biological Research</article-title>
          . N.Y.: ITAC,
          <year>2008</year>
          . doi:
          <volume>10</volume>
          .5772/95.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Berka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rauch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zighed</surname>
          </string-name>
          ,
          <article-title>Data Mining and Medical Knowledge Management: Cases and Applications</article-title>
          . N.Y.:
          <string-name>
            <surname>Herskey</surname>
          </string-name>
          ,
          <year>2009</year>
          . doi:
          <volume>10</volume>
          .4018/978-1-
          <fpage>60566</fpage>
          -218-3.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kantchev</surname>
          </string-name>
          , R.:
          <article-title>Advances in Intelligent Analysis of Medical Data and Decision Support Systems</article-title>
          . Springer,
          <year>2013</year>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>319</fpage>
          -00029-9.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Ye.</given-names>
            <surname>Bodyanskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Perova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vynokurova</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Izonin</surname>
          </string-name>
          , “
          <article-title>Adaptive wavelet diagnostic neuro-fuzzy network for biomedical tasks</article-title>
          ,” in
          <source>14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>711</fpage>
          -
          <lpage>715</lpage>
          . doi: 10.1109/TCSET.2018.8336299.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Yu.</given-names>
            <surname>Syerov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Fedushko</surname>
          </string-name>
          , “
          <article-title>Method of the Data Adequacy Determination of Personal Medical Profiles</article-title>
          ,” in
          <source>Advances in Artificial Systems for Medicine and Education II</source>
          . Cham: Springer Nature Switzerland AG,
          <year>2018</year>
          , pp.
          <fpage>333</fpage>
          -
          <lpage>343</lpage>
          . doi: 10.1007/978-3-030-12082-5_31.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Izonin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          , “
          <article-title>Neuro-Fuzzy Diagnostics Systems Based on SGTM Neural-Like Structure and T-Controller</article-title>
          ,” in
          <source>Lecture Notes in Computational Intelligence and Decision Making</source>
          , vol.
          <volume>77</volume>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Babichev</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvynenko</surname>
          </string-name>
          , Eds. Cham: Springer International Publishing,
          <year>2022</year>
          , pp.
          <fpage>685</fpage>
          -
          <lpage>695</lpage>
          . doi: 10.1007/978-3-030-82014-5_47.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Veres</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hirnyak</surname>
          </string-name>
          , “
          <article-title>Generalized formal model of big data</article-title>
          ,” arXiv:1905.03061 [cs], May
          <year>2019</year>
          . Accessed: Nov. 02, 2021. [Online]. Available: http://arxiv.org/abs/1905.03061
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Izonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gregus ml.</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zub</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          , “
          <article-title>A GRNN-based Approach towards Prediction from Small Datasets in Medical Application</article-title>
          ,”
          <source>Procedia Computer Science</source>
          , vol.
          <volume>184</volume>
          , pp.
          <fpage>242</fpage>
          -
          <lpage>249</lpage>
          ,
          <year>2021</year>
          . doi: 10.1016/j.procs.2021.03.033.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Izonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vitynskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lotoshynska</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Pavlyuk</surname>
          </string-name>
          , “
          <article-title>Development of the Non-Iterative Supervised Learning Predictor Based on the Ito Decomposition and SGTM Neural-Like Structure for Managing Medical Insurance Costs</article-title>
          ,”
          <source>Data</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>4</issue>
          , p.
          <fpage>46</fpage>
          , Oct.
          <year>2018</year>
          . doi: 10.3390/data3040046.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>I.</given-names>
            <surname>Izonin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fedushko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Koziy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Zub</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Vovk</surname>
          </string-name>
          , “
          <article-title>RBF-Based Input Doubling Method for Small Medical Data Processing</article-title>
          ,” in
          <source>Advances in Artificial Systems for Logistics Engineering</source>
          , vol.
          <volume>82</volume>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Petoukhov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>He</surname>
          </string-name>
          , Eds. Cham: Springer International Publishing,
          <year>2021</year>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>31</lpage>
          . doi: 10.1007/978-3-030-80475-6_3.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>K.-L.</given-names>
            <surname>Du</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. N. S.</given-names>
            <surname>Swamy</surname>
          </string-name>
          ,
          <source>Neural Networks and Statistical Learning</source>
          . London: Springer London,
          <year>2014</year>
          . doi: 10.1007/978-1-4471-5571-3.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <source>Deep Learning</source>
          . MIT Press,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Graupe</surname>
          </string-name>
          ,
          <source>Deep Learning Neural Networks: Design and Case Studies</source>
          . N.Y.: World Scientific,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>