<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Seventh International Workshop on Computer Modeling and Intelligent Systems, May</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Online Stacking Credibilistic Fuzzy Clustering for Data Stream Mining</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevgeniy Bodyanskiy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alina Shafronenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Diana Rudenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Tanianskyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Nauky ave 14, Kharkiv, 61166</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>3</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>An important problem that arises when processing large amounts of observations is data compression, which highlights the most essential information and identifies certain latent factors that implicitly determine the nature of the phenomenon being studied. One of the most effective approaches to this problem is the apparatus of factor analysis, which has found wide application in processing empirical data in various fields. Although fuzzy clustering is a popular approach for soft data partitioning, its use encounters difficulties in processing high-dimensional real data with complex hidden distributions. This paper proposes a stack fuzzy clustering method in which the data are represented in a new feature space created by a stacking neural network. This approach aims to overcome the challenges associated with processing complex data and can bring significant improvements in clustering quality.</p>
      </abstract>
      <kwd-group>
        <kwd>Stack learning</kwd>
        <kwd>data compression</kwd>
        <kwd>fuzzy clustering</kwd>
        <kwd>credibilistic fuzzy clustering</kwd>
        <kwd>data stream mining</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Clustering is a technique in machine learning and data analysis that involves grouping a set of
data points into subsets, or clusters, based on the similarity between them. Fuzzy clustering is a
variation of traditional clustering methods that allows for more flexible and nuanced assignments
of data points to clusters [1-5]. In contrast to hard partitioning, fuzzy clustering allows data points to
belong to multiple clusters simultaneously with varying degrees of membership. This reflects the
inherent uncertainty or ambiguity present in real-world data.</p>
      <p>The Fuzzy C-Means (FCM) algorithm, proposed by James Bezdek in 1973, is a prominent
method in fuzzy clustering [6]. FCM assigns membership degrees to data points, indicating the
likelihood of each point belonging to different clusters. This flexibility makes fuzzy clustering
particularly useful in scenarios where data points may exhibit overlapping characteristics or
uncertainty in their categorization.</p>
      <p>Over the years, fuzzy clustering has found applications in diverse fields, including pattern
recognition, image processing, and Data Mining. Researchers have developed various extensions
and enhancements to the original FCM algorithm, addressing specific challenges and improving
its adaptability to different data patterns.</p>
      <p>The validity of fuzzy clustering solutions became a key focus, leading to the introduction of
indices to assess the quality of clustering results. These indices help researchers and practitioners
evaluate the effectiveness of fuzzy clustering algorithms in capturing meaningful patterns within
datasets.</p>
      <p>The evolution of fuzzy clustering has seen ongoing advancements, with researchers exploring
sophisticated membership functions and integrating fuzzy clustering with other machine
learning techniques. This integration has expanded the capabilities of fuzzy clustering, making it
applicable to complex problems in large-scale data analysis.</p>
      <p>The era of big data has significantly influenced the field of clustering, including traditional
clustering methods and the development of fuzzy clustering techniques. Deep learning in big data
represents a powerful combination that has transformed various fields by enabling more
sophisticated analysis, pattern recognition, and decision-making capabilities.</p>
      <p>Recently, there has been significant research into leveraging deep learning to uncover
meaningful data representations through neural networks. A notable area of exploration involves
the integration of unsupervised clustering algorithms with stack neural networks. This synergy
has become a vibrant field of research, aiming to jointly optimize the performance of deep
learning models and clustering algorithms.</p>
      <p>The goal of this work is to propose a stack neuro-fuzzy system for Data Stream Mining that uses
the credibilistic approach and is designed to work both in batch mode and in a recurrent online version.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Neural Network Data Compression</title>
      <p>An important problem that arises when processing large amounts of observations is data
compression to highlight the most essential information and identify certain latent factors that
implicitly determine the nature of the phenomenon being studied. One of the most effective
approaches to solving this problem is the apparatus of factor analysis [7], which has found wide
application in problems of processing empirical data in various fields: psychology, sociology,
technology, economics, medicine, criminology, etc.</p>
      <p>The basic idea of factor analysis, which allows for the presence of a priori unknown hidden
factors, leads to the following informal task: by observing a large number of measured
parameters (indicators), identify a small number of parameter-factors that mainly determine the
behavior of the measured parameters, or in other words: knowing the values of a large number
of measured functions parameters, set the appropriate values of the factor arguments common
to all functions and restore the form of these functions.</p>
      <p>
        The initial information for factor analysis is the (N × n) observation matrix
X(N) = (x_1(1) x_2(1) … x_n(1); x_1(2) x_2(2) … x_n(2); …; x_1(k) x_2(k) … x_n(k); …; x_1(N) x_2(N) … x_n(N)) = (x^T(1); x^T(2); …; x^T(k); …; x^T(N)),   (1)
formed by an array of N n-dimensional vectors x(k) = (x_1(k), x_2(k), …, x_n(k))^T, where
x̄(N) = (1/N) ∑_{k=1}^{N} x(k),   x̃(k) = x(k) − x̄(N)
are the vectors of measured indicators centered relative to the average of the data array.
      </p>
      <p>One of the most common and effective methods for finding factors is the principal component
method (component analysis), which is widely used in problems of data compression, pattern
recognition, coding, image processing, spectral analysis, etc., and is known in pattern recognition
theory as the Karhunen-Loeve transform.</p>
      <p>
        The task of component analysis is to project data vectors from the original n-dimensional
space into an m-dimensional (m &lt; n) space of principal components. It reduces to searching for a
system w_1, w_2, …, w_m of orthonormal eigenvectors of the matrix R(N) such that
w_1 = (w_11, w_12, …, w_1n)^T corresponds to the largest eigenvalue λ_1 of the matrix R(N), w_2 to the
second largest eigenvalue λ_2, etc. Here R(N) is the (n × n) autocorrelation matrix
R(N) = (1/N) ∑_{k=1}^{N} (x(k) − x̄(N))(x(k) − x̄(N))^T = (1/N) ∑_{k=1}^{N} x̃(k) x̃^T(k).   (2)
In other words, the problem comes down to finding solutions of the matrix equation
(R(N) − λ_j I_n) w_j = 0   (3)
such that λ_1 ≥ λ_2 ≥ … ≥ λ_m ≥ ε ≥ 0 and ‖w_j‖ = 1.
      </p>
      <p>The dimension of the space of principal components m is determined, as a rule, from
empirical considerations and the required degree of compression of the data array.</p>
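      <p>As a minimal numerical sketch of equations (2)-(3) (function and variable names are illustrative, not from the paper), the m leading eigenvectors of R(N) can be obtained with NumPy's symmetric eigensolver:</p>

```python
import numpy as np

def principal_components(X, m):
    """Karhunen-Loeve sketch: center the data, build R(N) as in (2),
    solve the eigenproblem (3) and keep the m leading eigenvectors."""
    Xc = X - X.mean(axis=0)             # centered vectors x~(k)
    R = Xc.T @ Xc / X.shape[0]          # autocorrelation matrix R(N)
    lam, W = np.linalg.eigh(R)          # eigenpairs, ascending order
    idx = np.argsort(lam)[::-1][:m]     # m largest eigenvalues first
    return Xc @ W[:, idx], lam[idx]     # projections and eigenvalues

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
Y, lam = principal_components(X, 2)
```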
      <p>Thus, in algebraic terms, solving a factor problem is closely related to the problem of
eigenvalues and finding the rank of the correlation matrix; in a geometric sense, this is the
problem of moving to a lower-dimensional space with minimal loss of information; in a statistical
sense, this is the problem of finding a set of orthonormal vectors in the input space that “accept”
the maximum possible variation of the data; and finally, in an algorithmic sense, this is the
problem of sequentially determining a set of eigenvectors w_1, w_2, …, w_m by optimizing a set of local
criteria that form a global objective function
E^k = (1/k) ∑_{j=1}^{m} E_j^k = (1/k) ∑_{j=1}^{m} ∑_{p=1}^{k} (w_j^T x(p))^2   (4)
with constraints
w_j^T w_l = 0 for j ≠ l,   w_j^T w_j = 1.   (5)</p>
      <p>The first principal component w_1 can be found by maximizing the criterion
E_1^k = (1/k) ∑_{p=1}^{k} (w_1^T x(p))^2
by solving a nonlinear programming problem using undetermined Lagrange multipliers.</p>
      <p>However, if data processing must be carried out in real time, neural network technologies
come to the fore, among which the self-learning rule and E. Oya’s neuron should be noted. It is
with the help of Oya’s rule in the form
w_1(k + 1) = w_1(k) + η(k) y_1(k) (x(k) − w_1(k) y_1(k)),   y_1(k) = x^T(k) w_1(k),   w_1(0) ≠ 0
that the first principal component can be isolated.</p>
      <p>Next, following the procedure of standard principal component analysis, from each vector
x(k), k = 1, 2, …, N, its projection onto the first principal component is subtracted and the first
principal component of the differences is calculated, which is the second principal component of
the original data, orthonormal to the first. The third principal component is calculated by
projecting each original vector x(k) onto the first two components, subtracting this projection
from x(k) and finding the first principal component of the differences, which is the third principal
component of the original data array. The remaining principal components are calculated
recursively according to the described strategy.</p>
      <p>It is this idea of recursive calculation of principal components that forms the basis of the
algorithm proposed by T. Sanger [8], written in a modified form as [9]
w_j(k + 1) = w_j(k) + η(k) e_j(k) y_j(k),
e_j(k) = e_{j−1}(k) − w_j(k) y_j(k),
y_j(k) = w_j^T(k) x(k),   w_j(0) ≠ 0,
e_0(k) = x(k),   j = 1, 2, …, m,
η(k) = r^{−1}(k),   r(k) = α r(k − 1) + ‖x(k)‖^2,   0 ≤ α ≤ 1.   (6)</p>
      <p>It is easy to see that the first principal component is calculated using the Oya algorithm, then
the projection of the input vectors onto w1(k) are subtracted from the inputs and the differences
are processed by the next neuron, etc.</p>
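      <p>The deflation scheme above can be sketched as one online update of algorithm (6); the sketch below uses a fixed learning rate instead of the adaptive η(k) = r^{−1}(k), and all names are illustrative:</p>

```python
import numpy as np

def sanger_step(W, x, eta=0.01):
    """One online update of Sanger's algorithm (6): neuron j is fed the
    residual e_j left after subtracting the previous projections."""
    W_new = W.copy()
    e = x.copy()                       # e_0(k) = x(k)
    for j in range(W.shape[0]):
        y = W[j] @ x                   # y_j(k) = w_j^T(k) x(k)
        e = e - W[j] * y               # e_j(k) = e_{j-1}(k) - w_j(k) y_j(k)
        W_new[j] = W[j] + eta * e * y  # w_j(k+1) = w_j(k) + eta e_j(k) y_j(k)
    return W_new

# toy stream with the dominant direction along the first axis
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 2))
for _ in range(5000):
    x = np.array([3.0, 0.3]) * rng.normal(size=2)
    W = sanger_step(W, x)
```

With such a stream the first row of W drifts toward the leading eigenvector of the data correlation matrix, illustrating the self-normalizing behavior of the rule.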
      <p>Fig. 1 shows a diagram of the modified T. Sanger’s artificial neural network, composed of
E. Oya’s neurons and implementing algorithm (6).</p>
      <p>The first layer of the network is formed by encoder neurons that pre-process signals by
centering and normalizing them. Further, the signals x_1(k), x_2(k), …, x_n(k) are processed in the
second hidden layer formed by E. Oya’s neurons, after which they are sent to the output layer
formed by elements with rectifier activation functions with a dead zone
ψ(u) = u if u ≥ θ, and ψ(u) = 0 otherwise,
which allows informative signals y_j(k) to be highlighted and the noise to be filtered out.</p>
      <p>The Sanger neural network is an effective means of compressing information with minimal
loss of accuracy, but its capabilities are limited by the fact that, implementing essentially the
standard technique of factor analysis, it solves a linear problem, while the main advantage of
neural network technologies is the ability to work in purely nonlinear situations.</p>
      <p>The problem of nonlinear factor analysis can be effectively solved using credibilistic theory
and clustering analysis.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Fuzzy credibilistic clustering</title>
      <p>As an alternative to probabilistic and possibilistic procedures [10], a credibilistic fuzzy
clustering approach was introduced, which uses credibility theory [11] as its basis and is largely
devoid of the drawbacks of the known methods.</p>
      <p>The most common approach within the framework of probabilistic fuzzy clustering is
associated with minimizing the goal function [12-14]
E(u_q(k), c_q) = ∑_{k=1}^{N} ∑_{q=1}^{m} u_q^β(k) d^2(x(k), c_q)   (8)
with constraints
∑_{q=1}^{m} u_q(k) = 1,   0 &lt; ∑_{k=1}^{N} u_q(k) &lt; N.   (9)</p>
      <p>
        Solution of this nonlinear programming problem using the method of Lagrange indefinite
multipliers leads to the well-known result [9, 11, 15]:
u_q(k) = (d^2(x(k), c_q))^{1/(1−β)} / ∑_{l=1}^{m} (d^2(x(k), c_l))^{1/(1−β)},
c_q = ∑_{k=1}^{N} (u_q(k))^β x(k) / ∑_{k=1}^{N} (u_q(k))^β,   (10)
coinciding for β = 2 with the popular Fuzzy C-Means (FCM) method of J. Bezdek [6].
      </p>
      <p>If the data are fed for processing sequentially, the solution of the nonlinear programming
problem (8), (9) using the Arrow-Hurwitz-Uzawa algorithm leads to an online procedure (11).</p>
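      <p>Since the explicit form of procedure (11) is not reproduced above, the step below is only an illustrative reconstruction of a gradient-type online FCM update consistent with the description (all names are hypothetical): the memberships (10) are recomputed for the new observation and each centroid is then nudged toward it in proportion to u^β.</p>

```python
import numpy as np

def online_fcm_step(x, C, eta=0.1, beta=2.0):
    """Illustrative online step: memberships (10) for the new sample x,
    then a gradient move of every centroid toward x weighted by u^beta."""
    d2 = ((C - x) ** 2).sum(axis=1) + 1e-12
    u = d2 ** (1.0 / (1.0 - beta))
    u = u / u.sum()                                # sum_q u_q = 1
    return u, C + eta * (u ** beta)[:, None] * (x - C)

x = np.array([1.0, 0.0])
C = np.array([[0.0, 0.0], [5.0, 5.0]])
u, C_new = online_fcm_step(x, C)
```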
      <p>
        The goal function of credibilistic fuzzy clustering has the form [6, 11], close to (8),
E(Cred_q(k), c_q) = ∑_{k=1}^{N} ∑_{q=1}^{m} Cred_q^β(k) d^2(x(k), c_q)   (12)
with “softer” than (9) constraints:
0 ≤ Cred_q(k) ≤ 1 for all q and k,
sup_q Cred_q(k) ≥ 0.5 for all k,
Cred_q(k) + sup_{l≠q} Cred_l(k) = 1 for any q and k for which Cred_q(k) ≥ 0.5.   (13)
      </p>
      <p>It should be noted that the goal functions (8) and (12) are similar and that in (13) there are
no rigid probabilistic constraints on the sum of the memberships as in (9).</p>
      <p>In the procedures of credibilistic clustering, there is also the concept of fuzzy membership,
which is calculated using a neighborhood function of the form
u_q(k) = φ_q(d(x(k), c_q)),   (14)
monotonically decreasing on the interval [0, ∞) so that φ_q(0) = 1, φ_q(∞) → 0.</p>
      <p>Such a function is essentially an empirical similarity measure [13, 15, 16] related to the
distance.</p>
      <p>
        Note also that it was shown earlier in [14] that the first relation of (10) for β = 2 can be
rewritten as
u_q(k) = (1 + d^2(x(k), c_q))^{−1}   (15)
and, introducing the variances
σ_q^2 = (∑_{l=1, l≠q}^{m} d^2(x(k), c_l))^{−1},   (16)
generalized as
u_q(k) = (1 + d^2(x(k), c_q) / σ_q^2)^{−1},   (17)
which is a generalization of function (15) (for σ_q^2 = 1, (17) coincides with (15)) and satisfies
all the conditions of (14).
      </p>
      <p>In batch form the algorithm of credibilistic fuzzy clustering in the accepted notation can be
written as
u_q(k) = (1 + d^2(x(k), c_q))^{−1},
u_q^*(k) = u_q(k) (sup_l u_l(k))^{−1},
Cred_q(k) = (1/2) (u_q^*(k) + 1 − sup_{l≠q} u_l^*(k)),
c_q = ∑_{k=1}^{N} (Cred_q(k))^β x(k) / ∑_{k=1}^{N} (Cred_q(k))^β,   (18)
and in the online mode, considering (16), (17):
u_q(k + 1) = (1 + d^2(x(k + 1), c_q(k)) / σ_q^2(k + 1))^{−1},
u_q^*(k + 1) = u_q(k + 1) / sup_l u_l(k + 1),
Cred_q(k + 1) = (1/2) (u_q^*(k + 1) + 1 − sup_{l≠q} u_l^*(k + 1)),
c_q(k + 1) = c_q(k) + η(k + 1) Cred_q^β(k + 1) (x(k + 1) − c_q(k)).   (19)</p>
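      <p>One online step (19) can be sketched as follows (Euclidean distance; function and variable names are illustrative):</p>

```python
import numpy as np

def credibilistic_step(x, C, eta=0.1, beta=2.0):
    """One pass of (19): memberships (17) with variances (16), credibility
    from the normalized memberships, then a gradient move of the centroids."""
    d2 = ((C - x) ** 2).sum(axis=1) + 1e-12
    sigma2 = 1.0 / (d2.sum() - d2)             # sigma_q^2, eq. (16)
    u = 1.0 / (1.0 + d2 / sigma2)              # eq. (17)
    u_star = u / u.max()                       # u*_q = u_q / sup_l u_l
    cred = np.empty_like(u)
    for q in range(len(u)):
        cred[q] = 0.5 * (u_star[q] + 1.0 - np.delete(u_star, q).max())
    C_new = C + eta * (cred ** beta)[:, None] * (x - C)
    return cred, C_new

x = np.array([1.0, 1.0])
C = np.array([[0.0, 0.0], [6.0, 6.0], [-4.0, 2.0]])
cred, C_new = credibilistic_step(x, C)
```

Note that the winner's credibility is always at least 0.5, which is exactly the second constraint in (13).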
      <p>From the point of view of computational implementation, algorithm (19) is no more
complicated than procedure (11) and, in the general case, is its generalization to the credibilistic
approach to fuzzy clustering.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental research</title>
      <p>Conducting experimental studies and a comparative analysis of the quality of data clustering
using various metrics allows one to objectively assess the effectiveness of the developed method in
comparison with its analogues. To estimate the quality of the method we used the following
cluster-partition quality criteria [3, 6]:
− Partition Coefficient (PC);
− Classification Entropy (CE);
− Partition Index (SC);
− Separation Index (S);
− Xie and Beni's Index (XB);
− Dunn's Index (DI).</p>
      <p>Partition Coefficient (PC): measures the amount of "overlapping" between clusters.</p>
      <p>Classification Entropy (CE): measures only the fuzziness of the cluster partition, similarly to
the Partition Coefficient.</p>
      <p>Partition Index (SC): is the ratio of the sum of compactness and separation of the clusters. It is
a sum of individual cluster validity measures normalized through division by the fuzzy cardinality
of each cluster. SC is useful when comparing different partitions having equal number of clusters.
A lower value of SC indicates a better partition.</p>
      <p>Separation Index (S): on the contrary of partition index (SC), the separation index uses a
minimum-distance separation for partition validity.</p>
      <p>Xie and Beni's Index (XB): it aims to quantify the ratio of the total variation within clusters and
the separation of clusters. The optimal number of clusters should minimize the value of the index.</p>
      <p>Dunn's Index (DI): this index is originally proposed to use at the identification of "compact and
well separated clusters".</p>
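      <p>The first two indices can be computed directly from the membership matrix; a minimal sketch (names illustrative):</p>

```python
import numpy as np

def partition_coefficient(U):
    """PC = (1/N) sum_k sum_q u_q(k)^2; equals 1 for a crisp partition
    and 1/m for a completely fuzzy (uniform) one."""
    return float((U ** 2).sum() / U.shape[0])

def classification_entropy(U, eps=1e-12):
    """CE = -(1/N) sum_k sum_q u_q(k) log u_q(k); 0 for a crisp partition."""
    return float(-(U * np.log(U + eps)).sum() / U.shape[0])

crisp = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])  # hard memberships
fuzzy = np.full((3, 2), 0.5)                            # uniform memberships
```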
      <p>So the result of the clustering has to be recalculated as if it were a hard partition method.
The specific information of the data sets is shown in Table 1. The compared approaches were the
proposed online stack fuzzy credibilistic clustering for Data Stream Mining, a SOM based on
possibilistic fuzzy clustering, and a SOM based on probabilistic fuzzy clustering.</p>
    </sec>
    <sec id="sec-discussion">
      <title>5. Discussions</title>
      <p>Upon analyzing the results acquired, it can be inferred that irrespective of the volume of the
initial data provided, the processing through the proposed method exhibits comparable speed and
clustering quality when contrasted with established clustering algorithms and methodologies.</p>
      <p>The obtained results confirm that the performance of the stack neuro-fuzzy system is better
than that of the other network structures, and it can be a viable structure for Data Stream Mining.</p>
      <p>The accuracy results presented in Table 4 confirm the superiority of the proposed online
stack fuzzy credibilistic clustering method for Data Stream Mining regardless of the number of
observations fed to the clustering process.</p>
      <p>Based on the experimental findings, it is advisable to endorse the proposed system for
practical application in addressing the challenges associated with automatic clustering of large
datasets.</p>
    </sec>
    <sec id="sec-5">
      <title>6. Conclusion</title>
      <p>The problem of fuzzy clustering of data streams by a stack neuro-fuzzy system is considered.
The paper proposed a stack neuro-fuzzy system for Data Stream Mining that uses the credibilistic
approach and is designed to work both in batch mode and in a recurrent online version.</p>
      <p>The results show that stack structures based on fuzzy models are applicable to data
clustering. The proposed stack neuro-fuzzy system is quite simple in numerical implementation
and can use well-known online fuzzy clustering methods intended for solving Data Stream
Mining problems.</p>
      <p>Future research endeavors could explore the potential of employing stack neuro-fuzzy
systems for fuzzy clustering of data streams, aiming to address the complexities inherent in
automatic clustering of big data.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The work is supported by the state budget scientific research project of Kharkiv National
University of Radio Electronics "Deep hybrid systems of computational intelligence for data
stream mining and their fast learning" (state registration number 0119U001403).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] GAs,
          <source>Genetic Algorithms. "On Genetic-Fuzzy Data-Mining Techniques."</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] Liu,
          <string-name>
            <surname>Hongfu</surname>
          </string-name>
          , et al.
          <article-title>"Infinite ensemble for image clustering</article-title>
          .
          <source>" Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</source>
          .
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <surname>Jindong</surname>
          </string-name>
          , et al.
          <article-title>"A fuzzy C-means clustering algorithm based on spatial context model for image segmentation."</article-title>
          <source>International Journal of Fuzzy Systems</source>
          <volume>23</volume>
          (
          <year>2021</year>
          ):
          <fpage>816</fpage>
          -
          <lpage>832</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <surname>Maoguo</surname>
          </string-name>
          , et al.
          <article-title>"Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images</article-title>
          .
          <source>" IEEE Transactions on Fuzzy Systems 22.1</source>
          (
          <year>2013</year>
          ):
          <fpage>98</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Rani</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          .
          <article-title>"Dissimilarity measure between intuitionistic Fuzzy sets and its applications in pattern recognition and clustering analysis</article-title>
          .
          <source>" Journal of Applied Mathematics, Statistics and Informatics 19.1</source>
          (
          <year>2023</year>
          ):
          <fpage>61</fpage>
          -
          <lpage>77</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Bezdek</surname>
            ,
            <given-names>James C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Robert</surname>
            <given-names>Ehrlich</given-names>
          </string-name>
          , and
          <string-name>
            <given-names>William</given-names>
            <surname>Full</surname>
          </string-name>
          .
          <article-title>"FCM: The fuzzy c-means clustering algorithm."</article-title>
          <source>Computers &amp; geosciences 10</source>
          .2-
          <fpage>3</fpage>
          (
          <year>1984</year>
          ):
          <fpage>191</fpage>
          -
          <lpage>203</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Cureton</surname>
            ,
            <given-names>Edward E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ralph B. D'Agostino</surname>
          </string-name>
          .
          <article-title>Factor analysis: An applied approach</article-title>
          . Psychology press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Sanger</surname>
            ,
            <given-names>Terence D.</given-names>
          </string-name>
          <article-title>"Optimal unsupervised learning in a single-layer linear feedforward neural network."</article-title>
          <source>Neural networks 2.6</source>
          (
          <year>1989</year>
          ):
          <fpage>459</fpage>
          -
          <lpage>473</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Specht</surname>
          </string-name>
          , Donald F.
          <article-title>"A general regression neural network."</article-title>
          <source>IEEE transactions on neural networks 2.6</source>
          (
          <year>1991</year>
          ):
          <fpage>568</fpage>
          -
          <lpage>576</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Höppner</surname>
          </string-name>
          ,
          <string-name>
            <surname>Frank</surname>
          </string-name>
          , et al.
          <article-title>Fuzzy cluster analysis: methods for classification, data analysis and image recognition</article-title>
          . John Wiley &amp; Sons,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] Zhou,
          <string-name>
            <surname>Jian</surname>
          </string-name>
          , et al.
          <article-title>"Credibilistic clustering: the model and algorithms."</article-title>
          <source>International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 23.04</source>
          (
          <year>2015</year>
          ):
          <fpage>545</fpage>
          -
          <lpage>564</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <surname>Yong</surname>
          </string-name>
          , et al.
          <article-title>"Possibilistic c-means clustering based on the nearest-neighbour isolation similarity</article-title>
          .
          <source>" Journal of Intelligent &amp; Fuzzy Systems 44.2</source>
          (
          <year>2023</year>
          ):
          <fpage>1781</fpage>
          -
          <lpage>1792</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <surname>Zhengbing</surname>
          </string-name>
          , et al.
          <article-title>"Fuzzy clustering of incomplete data by means of similarity measures." 2019 IEEE 2nd Ukraine Conference on Electrical and Computer Engineering (UKRCON)</article-title>
          . IEEE,
          <year>2019</year>
          . doi: 10.1109/UKRCON.2019.8879844
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Shafronenko</surname>
          </string-name>
          ,
          <string-name>
            <surname>Alina</surname>
          </string-name>
          , et al.
          <article-title>"Online credibilistic fuzzy clustering of data using membership functions of special type."</article-title>
          <source>CMIS</source>
          .
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Bodyanskiy</surname>
            , Yevgeniy,
            <given-names>Alina</given-names>
          </string-name>
          <string-name>
            <surname>Shafronenko</surname>
            , and
            <given-names>Sergii</given-names>
          </string-name>
          <string-name>
            <surname>Mashtalir</surname>
          </string-name>
          .
          <article-title>"Online robust fuzzy clustering of data with omissions using similarity measure of special type." Lecture Notes in Computational Intelligence and Decision Making: Proceedings of the XV International Scientific Conference “Intellectual Systems of Decision Making and Problems of Computational Intelligence” (ISDMCI'</article-title>
          <year>2019</year>
          ), Ukraine, May
          <volume>21</volume>
          -25,
          <year>2019</year>
          15. Springer International Publishing,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Bodyanskiy</surname>
            ,
            <given-names>Ye V.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>A. Yu</given-names>
            <surname>Shafronenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and I. N.</given-names>
            <surname>Klymova</surname>
          </string-name>
          .
          <article-title>"Online fuzzy clustering of incomplete data using credibilistic approach and similarity measure of special type." Radio Electronics</article-title>
          ,
          <source>Computer Science, Control</source>
          <volume>1</volume>
          (
          <year>2021</year>
          ):
          <fpage>97</fpage>
          -
          <lpage>104</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>