<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Tirana, Albania
∗Corresponding author.
erind.bedalli@uniel.edu.al(E. Bedalli); shhajrulla@epoka.edu.al (S. Hajrulla)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Adaptive Reliability-Based Fuzzy Clustering with Modified Objective Function</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Erind Bedalli</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shkelqim Hajrulla</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Engineering, Epoka University</institution>
          ,
          <addr-line>Tirana</addr-line>
          ,
          <country country="AL">Albania</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Informatics, University of Elbasan 'Aleksander Xhuvani'</institution>
          ,
          <addr-line>Elbasan</addr-line>
          ,
          <country country="AL">Albania</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Clustering is a form of unsupervised learning which aims at revealing intrinsic patterns in data based on the mutual distances or similarities of the instances. In real-world datasets, clustering procedures may be significantly affected by noise, outliers and low-confidence data points. This paper presents a fuzzy clustering approach based on a modified objective function employing adaptive reliability measures, striving to enhance the robustness of the clustering results. The modification of the objective function integrates the notion of a point's influence into the clustering procedure: the influence of each point is controlled by a reliability score which is evaluated adaptively. The root mean square propagation (RMSprop) algorithm is applied to dynamically assess the reliability scores of the data points. Finally, this framework is experimentally tested on several benchmark datasets to assess the quality of the generated clusters and compare it to classical fuzzy clustering algorithms.</p>
      </abstract>
      <kwd-group>
        <kwd>fuzzy clustering</kwd>
        <kwd>reliability score</kwd>
        <kwd>objective function modifications</kwd>
        <kwd>RMSprop</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In machine learning, clustering is a core unsupervised learning task which intends to group similar
data instances together, forming clusters (subsets) whose elements share strong similarities with one
another and clear dissimilarities from those in other clusters. The clustering process is guided solely
by the inter-point similarity or distance, without any available external information about the latent
structure of the dataset. Clustering has proven to be a valuable and flexible technique due to its
wide-ranging applications, including recommendation systems in e-commerce, customer profiling in
marketing, topic discovery in text mining, behavioral pattern analysis in psychology, and species
categorization in biological sciences [1]. For datasets of any size, clustering constitutes a powerful
tool for exploring and summarizing data, but especially for larger volumes of data it remains essential
for uncovering patterns, revealing hidden structures, and often guiding downstream machine
learning tasks [2].</p>
      <p>There are multiple approaches to the clustering problem, among which hard clustering and fuzzy
clustering can be distinguished. In the hard clustering approach, each data instance is assigned to
exactly one cluster; consequently the cluster boundaries are crisp and non-overlapping. In
contrast, fuzzy clustering allows instances to belong to multiple clusters simultaneously with varying
degrees of membership (values between 0 and 1), making it a more flexible and realistic approach in
circumstances where data points are not clearly separable. The traditional fuzzy clustering algorithm
assumes an equal contribution of all data points to the clustering procedure;
nonetheless, this makes the algorithm susceptible to the presence of noise and outliers [3, 4].</p>
      <p>This paper presents a modification of the classical fuzzy clustering algorithm, incorporating adaptive
reliability scores in the definition of the objective function of the algorithm. The central idea of this
modification is to regulate the influence of the data points on the clustering process, based on a
dynamically assessed reliability score. More specifically, the proposed method computes the
reliability score of each point using entropy-based confidence metrics, and the entire model is trained
using the RMSprop optimizer in order to adaptively scale the learning rates. This approach allows the
algorithm to down-weight the influence of uncertain or noisy data points during the clustering
procedure, avoiding the distortions that they may induce in the generated clusters. As a result, the model
strives to be more resilient towards the presence of outliers and to operate more effectively on
real-world, imperfect datasets. Together, the use of reliability scoring and adaptive optimization is
expected to contribute to more stable, accurate, and interpretable clustering results. This approach
will be tested extensively on several benchmark datasets and on slightly distorted variants of them,
to assess quantitatively the quality of the generated clusters.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>In recent decades, significant progress has been made in the field of unsupervised learning,
with various forms of clustering approaches and enhancements designed for specific scenarios.
Nevertheless, clustering remains a challenging problem; there is no algorithm capable of learning
the patterns of every dataset. The idea of modifying the objective function and the idea of employing
reliability measures have been presented in different flavors in various research works, often separately,
but also occasionally intertwined. In this section, the main approaches in these directions
will be summarized and the differences of our approach will be accentuated.</p>
      <p>F. Hoppner et al. have explored how the traditional fuzzy c-means objective function can be modified
to support different levels of fuzziness in cluster memberships. They have analyzed how these
modifications of the objective function allow for more flexible, partial membership values, enabling
the clustering algorithm to better capture different data distributions and overlapping between
clusters. These modifications were demonstrated to enhance the interpretability and robustness of
fuzzy clustering in practical applications [5].</p>
      <p>M. Menard et al. have proposed approaches to fuzzy clustering that devise objective functions based on
the principle of Extreme Physical Information. In their work, the authors have shown how this method
can systematically incorporate effectively minimal constraint terms into the objective functions.
Compared to the traditional algorithms, their work is very well explained semantically from the
physical perspective [6].</p>
      <p>H. Timm and R. Kruse have addressed a drawback in classical possibilistic fuzzy clustering, where
the objective function is truly minimized only if all cluster centers are equivalent, and have proposed
a modification to the objective function that triggers mutual repulsion among clusters, thus enhancing
the clustering process [7].</p>
      <p>J. Kang et al. have proposed a modified fuzzy c-means (FCM) algorithm that incorporates spatial
neighborhood information into the conventional objective function. This approach was
demonstrated to enhance the robustness of fuzzy clustering, especially when applied to image
segmentation [8].</p>
      <p>H. Wang et al. have presented an automated multiscale fuzzy c-means (MSFCM) method for magnetic
resonance images (MRI). The objective function of the conventional FCM method is modified to allow
multiscale classification processing, thus improving the robustness especially when operating on
low-contrast MR images [9].</p>
      <p>X. Xiong et al. have proposed a modified generalized objective function for prototype-based fuzzy
clustering incorporating a p-norm distance measure. Their approach induces cluster merging and
the key innovation is the integration of principal component analysis (PCA) into the objective
function. This methodology successfully captures the directional structure of the clusters utilizing
the principal components [10].</p>
      <p>K. Zhao et al. have introduced a generalized fuzzy c-means (FCM) clustering strategy that modifies
the objective function involving a mechanism to control the degree of fuzziness in clustering results.
Via this mechanism, the algorithm can tune between hard and fuzzy clustering, making it more
adaptable to various datasets [11].</p>
      <p>A. Bagherinia et al. have presented a reliability-driven cluster indicator to assess the reliability of
fuzzy clusters within an ensemble framework. This methodology assigns weights to multiple
clustering outcomes based on their reliability, which achieved an overall higher clustering quality
and robustness [12].</p>
      <p>The approach presented in this paper partially revives ideas by F. Hoppner et al. and K. Zhao et al.,
but the execution strategy is vastly different: instead of Lagrange multipliers, the numerical root
mean square propagation (RMSprop) method is employed.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The classical Fuzzy C-Means (FCM) algorithm</title>
      <p>
        The classical Fuzzy C-Means algorithm (FCM) is the most significant algorithm in the field of fuzzy
cluster analysis. It generalizes the well-known K-Means algorithm, involving the concept of partial
degree of membership, thus allowing the instances of dataset to belong to several clusters
simultaneously, with different degrees of membership. Therefore, the outcome of this algorithm,
instead of being a set of clusters with their exclusive member points (like in the case of K-Means),
will be the fuzzy membership matrix (U) containing the respective membership values of the
instances into the clusters. The FCM algorithm operates as an iterative procedure aiming to
approximate the nonlinear optimization of the objective function, formulated classically as:

J(X, U, V; m, C) = Σ_{i=1..n} Σ_{j=1..C} μ_ij^m · d²(x_i, c_j)
(
        <xref ref-type="bibr" rid="ref1">1</xref>
        )
The hyperparameter m is the fuzzy exponent, which controls the degree of fuzziness in the generated
clusters, i.e. the larger the value of m, the more distributed the instances may be among the clusters.
Parameter C represents the number of clusters and it may either be given externally as a
hyperparameter or tuned based on cluster validation measures. Additionally, n is the number of
instances in the dataset, x_i is the i-th instance for 1 ≤ i ≤ n, c_j is the centre of the j-th cluster for
1 ≤ j ≤ C (i.e. the entries of the vector V) and μ_ij is the entry of the membership matrix U corresponding
to the i-th element and the j-th cluster. Furthermore, two other hyperparameters are the tolerance
level Tol and the distance norm (typically the Euclidean distance) [13].
      </p>
      <p>The algorithm runs in iterations where it updates the membership degree of each point into each
cluster, then adjusts the cluster centers based on the memberships. Eventually the convergence is
achieved, settling into a configuration that best fits the dataset. The hyperparameters collectively
influence the clustering results, so good choices of them, typically assisted by tuning procedures, are
helpful. The following pseudocode describes the FCM algorithm [14]:
1. Randomly initialize the centers of the clusters.
2. Initialize the fuzzy membership matrix with zero values.
3. Let k = 1 (iteration counter).
4. Evaluate the distances of the data points from the cluster centers (the d_ij values).
5. Update the fuzzy membership matrix, according to: μ_ij = 1 / Σ_{k=1..C} (d_ij / d_ik)^(2/(φ−1))
6. Calculate the new centers of the clusters, according to: c_i = (Σ_{j=1..n} μ_ij^φ · x_j) / (Σ_{j=1..n} μ_ij^φ)
7. k = k + 1 (increment the iteration counter).
8. If ‖U^(k−1) − U^(k−2)‖ &gt; Tol, repeat from step 4.
9. END.</p>
      <p>
Despite the multiple successful applications of the FCM algorithm across a wide range of domains,
it struggles when operating on datasets characterized by complex structures such as overlapping
clusters, varying cluster sizes and shapes, and the presence of noise and/or outliers. On such datasets,
the FCM algorithm typically underperforms, resulting in poorly constructed clusters [15]. In this work,
a modification is proposed (detailed in the next section) which alters the objective function of FCM,
incorporating the notion of the influence of a data point in the cluster. Moreover, the optimization
approach in this work will be based on a numerical optimization method: the root mean square
propagation (RMSprop) method.</p>
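      <p>To make the iteration above concrete, the following is a minimal NumPy sketch of the classical FCM loop. The function name, the initialization of centers by sampling data points, and the default hyperparameter values are illustrative choices, not taken from the paper:</p>

```python
import numpy as np

def fcm(X, C, phi=2.0, tol=1e-5, max_iter=300, seed=0):
    """Classical Fuzzy C-Means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Step 1: randomly initialize centers by sampling C distinct data points.
    centers = X[rng.choice(n, size=C, replace=False)]
    U_prev = np.zeros((n, C))
    for _ in range(max_iter):
        # Step 4: distances of every point from every center, shape (n, C).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)               # avoid division by zero
        # Step 5: mu_ij = 1 / sum_k (d_ij / d_ik)^(2/(phi-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (phi - 1.0))
        U = 1.0 / ratio.sum(axis=2)
        # Step 6: centers as weighted means with weights mu_ij^phi.
        W = U ** phi
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Step 8: stop when the membership matrix has stabilized.
        if np.linalg.norm(U - U_prev) < tol:
            break
        U_prev = U
    return U, centers
```

      <p>Note that the membership update formula makes each row of U sum to 1 by construction, so no explicit normalization step is needed in the classical algorithm.</p>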
    </sec>
    <sec id="sec-4">
      <title>4. An entropy-based modified objective function FCM</title>
      <p>The key idea of the modification proposed in this work is to control the influence that each point
will have in the clustering process based on a reliability score that will be dynamically adapted. The
definition of the objective function is:</p>
      <p>J(X, U, V; m, C) = Σ_{i=1..n} Σ_{j=1..C} r_i · μ_ij^m · d²(x_i, c_j) + λ · R(r)
(
        <xref ref-type="bibr" rid="ref2">2</xref>
        )</p>
      <p>In the above definition, X, U, V, m, C, μ_ij, c_j are the same as in the classical FCM, while r_i is the
reliability score of each instance, which will be dynamically updated, R(r) is a regularization term
upon the vector of reliabilities, and λ is a coefficient controlling the weight of the regularization term.
The regularization term intends to avoid degenerate or extreme reliability assignments,
inducing stability in the clustering process. There are several possibilities for how the regularization
term can be designated; in our case the L2 regularization is used, which discourages extreme values
in the reliability scores vector:</p>
      <p>R(r) = Σ_{i=1..n} r_i²
(
        <xref ref-type="bibr" rid="ref3">3</xref>
        )</p>
      <p>On the other hand, in order to evaluate the reliability scores of the data points, an entropy-based
approach is employed. For each data point, an entropy value is evaluated which quantifies how
well distributed the point's memberships are across the clusters:</p>
      <p>H_i = − Σ_{j=1..C} μ_ij · log μ_ij
(
        <xref ref-type="bibr" rid="ref4">4</xref>
        )</p>
      <p>Obviously, the reliability scores of the points will vary from 0 (the lowest reliability) to 1 (the highest
reliability).</p>
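      <p>The entropy-based reliability score can be computed directly from the rows of the membership matrix; the following small helper is an illustrative sketch that applies the normalization by log C used later in this section:</p>

```python
import numpy as np

def reliability_scores(U):
    """Entropy-based reliability: H_i = -sum_j mu_ij log mu_ij, r_i = 1 - H_i / log C."""
    U = np.clip(U, 1e-12, 1.0)          # guard log(0) for crisp memberships
    C = U.shape[1]
    H = -(U * np.log(U)).sum(axis=1)    # per-point membership entropy
    return 1.0 - H / np.log(C)          # maps [0, log C] onto [1, 0]
```

      <p>A point with uniform memberships (maximal entropy) gets reliability near 0, while a point assigned crisply to a single cluster gets reliability near 1.</p>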
      <p>In order to approximate a solution to the non-linear optimization problem defined by the given
objective function, the root mean square propagation (RMSprop) method will be used. This method
is an improvement over the classical gradient descent optimization technique. The fundamental
principle of gradient descent is to track the direction of the steepest descent using a constant
learning rate, while RMSprop adapts the learning rate, adjusting it to the current landscape of the
objective function. So generally, for a parameter θ, the update is handled as [16]:</p>
      <p>θ^(k+1) = θ^(k) − (η / √(E[g²]^(k) + ε)) · g^(k)
(
        <xref ref-type="bibr" rid="ref5">5</xref>
        )</p>
      <p>Parameter η represents the learning rate, g^(k) denotes the gradient at the k-th iteration, E[g²]^(k) denotes
the running mean of the squared recent gradients and ε is a small constant to avoid division by zero.
Furthermore, the update of the running mean of the squared gradients is handled based on a decay
parameter β as [17, 18]:</p>
      <p>E[g²]^(k+1) = β · E[g²]^(k) + (1 − β) · (g^(k))²
(
        <xref ref-type="bibr" rid="ref6">6</xref>
        )</p>
      <p>Applied to the modified objective function, the gradients with respect to the memberships and the
cluster centers are (note that the L2 regularizer R(r) depends only on the reliability scores, so its
partial derivatives with respect to μ_ij and c_j vanish):</p>
      <p>g(μ_ij^(k)) = ∂J/∂μ_ij = r_i · m · μ_ij^(m−1) · d²(x_i, c_j) + λ · ∂R/∂μ_ij
(
        <xref ref-type="bibr" rid="ref7">7</xref>
        )</p>
      <p>g(c_j^(k)) = ∂J/∂c_j = −2 · Σ_{i=1..n} r_i · μ_ij^m · (x_i − c_j) + λ · ∂R/∂c_j
(
        <xref ref-type="bibr" rid="ref8">8</xref>
        )</p>
      <p>A high value of entropy indicates that a point is ambiguously participating in many clusters, while a
low value of entropy (ideally zero) indicates a strong, reliable association of a point with a certain
cluster. So, the value of the entropy can vary from 0 to log C, where the value 0 indicates the highest
reliability and the value log C indicates the lowest reliability. In the light of these facts, the
evaluation of the reliability scores of the data points is handled as:</p>
      <p>r_i = 1 − H_i / log C
(9)</p>
      <p>The general iterative scheme of the modified fuzzy clustering algorithm will remain the same as in
the classical FCM, with the primary distinction that the updates for the cluster centres and the fuzzy
membership values will be carried out numerically, applying the RMSprop scheme of equations (
        <xref ref-type="bibr" rid="ref5">5</xref>
        ) and (
        <xref ref-type="bibr" rid="ref6">6</xref>
        ) to the gradients (
        <xref ref-type="bibr" rid="ref7">7</xref>
        ) and (
        <xref ref-type="bibr" rid="ref8">8</xref>
        ). Finally, the entire adaptive reliability-based fuzzy clustering algorithm is described by the
following pseudocode:
1. Randomly initialize the centers of the clusters.
2. Initialize the fuzzy membership matrix with zero values.
3. Let k = 1 (iteration counter).
4. Update the gradients of the memberships, as: g(μ_ij^(k)) = r_i · m · μ_ij^(m−1) · d²(x_i, c_j) + λ · ∂R/∂μ_ij
5. Update the weighted root mean square, as: E[g²_μij]^(k+1) = β · E[g²_μij]^(k) + (1 − β) · (g(μ_ij^(k)))²
6. Update the fuzzy membership matrix: μ_ij^(k+1) = μ_ij^(k) − (η / √(E[g²_μij]^(k) + ε)) · g(μ_ij^(k))
7. Normalize the fuzzy membership matrix so that Σ_{j=1..C} μ_ij = 1 for each instance.
8. Update the gradients of the centers, as: g(c_j^(k)) = −2 · Σ_{i=1..n} r_i · μ_ij^m · (x_i − c_j) + λ · ∂R/∂c_j
9. Update the centers, as: c_j^(k+1) = c_j^(k) − (η / √(E[g²_cj]^(k) + ε)) · g(c_j^(k))
10. Update the reliability values: r_i = 1 − H_i / log C
11. k = k + 1 (increment the iteration counter).
12. If ‖U^(k) − U^(k−1)‖ &gt; Tol, jump to step 4.
13. END.</p>
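      <p>The iterative scheme above can be sketched compactly in NumPy. This is an illustrative implementation under assumed hyperparameter values (η, β, λ are not specified numerically in the text), and since the L2 regularizer R(r) does not depend on μ or c, its gradient terms are omitted:</p>

```python
import numpy as np

def reliability_fcm(X, C, m=2.0, eta=0.01, beta=0.9,
                    eps=1e-8, tol=1e-4, max_iter=500, seed=0):
    """Sketch of adaptive reliability-based FCM with RMSprop updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = X[rng.choice(n, size=C, replace=False)].astype(float)
    U = rng.random((n, C))
    U /= U.sum(axis=1, keepdims=True)       # valid fuzzy memberships
    r = np.ones(n)                          # reliability scores start at 1
    Eg_U = np.zeros((n, C))                 # running mean of squared gradients
    Eg_c = np.zeros_like(centers)
    for _ in range(max_iter):
        U_prev = U.copy()
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Step 4: gradient wrt memberships (eq. 7, regularizer term vanishes).
        g_U = r[:, None] * m * U ** (m - 1) * d2
        # Steps 5-6: RMSprop update of the memberships.
        Eg_U = beta * Eg_U + (1 - beta) * g_U ** 2
        U = U - eta / np.sqrt(Eg_U + eps) * g_U
        # Step 7: clip and renormalize so each row is a valid distribution.
        U = np.clip(U, 1e-12, None)
        U /= U.sum(axis=1, keepdims=True)
        # Steps 8-9: gradient wrt centers (eq. 8) and RMSprop update.
        W = r[:, None] * U ** m
        g_c = -2.0 * (W.T @ X - W.sum(axis=0)[:, None] * centers)
        Eg_c = beta * Eg_c + (1 - beta) * g_c ** 2
        centers = centers - eta / np.sqrt(Eg_c + eps) * g_c
        # Step 10: entropy-based reliability r_i = 1 - H_i / log C.
        H = -(U * np.log(U)).sum(axis=1)
        r = 1.0 - H / np.log(C)
        # Step 12: stop when the membership matrix has stabilized.
        if np.linalg.norm(U - U_prev) < tol:
            break
    return U, centers, r
```

      <p>In contrast with the closed-form updates of the classical FCM, every quantity here is moved by a small RMSprop step per iteration, so convergence is more gradual but each point's pull on the centers is scaled by its current reliability.</p>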
    </sec>
    <sec id="sec-5">
      <title>5. Experimental results</title>
      <p>In order to assess the robustness and the quality of the clusters generated by the proposed
modified version of the FCM algorithm, a series of experimental tests are conducted on several
benchmark datasets. In order to evaluate the stability of the algorithm, in addition to the original
versions of the benchmark datasets, two distorted versions are created for each benchmark dataset
by adding artificial noise at different levels. The employed benchmark datasets were: Breast Cancer,
Ionosphere, Vertebral Column, Dermatology, E. coli and Shuttle [19]. For each of the aforementioned
datasets, two distorted versions are also created, with an additional quantity of respectively 2% and
5% noise points being added. The noise points are randomly placed at a distance from the cluster
centres that is 8-10% larger than the average distance of the top-5 farthest genuine points from the
respective cluster centre. The details of the original datasets are displayed in Table 1 below.</p>
      <p>Although the class labels for the employed datasets are known, this information is not provided to the
clustering procedures; instead it is utilized as the ground truth for the assessment of the clustering
results. The quality assessment of the generated clusters is done via the V-measure, which is
evaluated as the harmonic mean of the homogeneity score H and the completeness score C, so:
V = 2 / (1/H + 1/C)
(10)
The homogeneity score measures the degree to which each cluster contains data points of only one
particular label, while the completeness score measures how well all data points with the same label
are grouped into the same cluster. In order to be compatible with the fuzzy scenario, firstly the
conditional fuzzy entropy between the generated clusters and the ground truth is evaluated, and
afterwards these results are utilized to calculate the fuzzy homogeneity and fuzzy completeness
scores.</p>
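      <p>For reference, the crisp (non-fuzzy) V-measure can be computed from a contingency table of class and cluster labels as below. This sketch covers only the crisp case, not the fuzzy homogeneity and completeness variant used in the paper:</p>

```python
import numpy as np

def v_measure(labels_true, labels_pred):
    """Crisp V-measure: harmonic mean of homogeneity and completeness."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = len(labels_true)
    _, ci = np.unique(labels_true, return_inverse=True)
    _, ki = np.unique(labels_pred, return_inverse=True)
    # Contingency table of joint (class, cluster) counts -> joint distribution p.
    cont = np.zeros((ci.max() + 1, ki.max() + 1))
    np.add.at(cont, (ci, ki), 1)
    p = cont / n
    pc = p.sum(axis=1)                      # class marginals
    pk = p.sum(axis=0)                      # cluster marginals

    def entropy(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    def cond_entropy(p, marg):
        # H(A|B) = -sum p(a,b) log( p(a,b) / p(b) ), over nonzero cells.
        m = np.broadcast_to(marg, p.shape)
        nz = p > 0
        return -(p[nz] * np.log(p[nz] / m[nz])).sum()

    H_C, H_K = entropy(pc), entropy(pk)
    h = 1.0 if H_C == 0 else 1.0 - cond_entropy(p, pk) / H_C          # homogeneity
    c = 1.0 if H_K == 0 else 1.0 - cond_entropy(p, pc[:, None]) / H_K  # completeness
    return 0.0 if h + c == 0 else 2 * h * c / (h + c)
```

      <p>A perfect one-to-one matching between clusters and labels yields V = 1, while collapsing everything into a single cluster is perfectly complete but not homogeneous, giving V = 0.</p>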
      <p>The results of the experimental procedures are summarized in Table 2.
As noticed from the table, the modified version of the FCM typically performs with a higher
V-measure, pointing out the effectiveness of this approach. Moreover, it can be noticed that the
difference in the V-measure values increases as the distortion level increases, which indicates a better
robustness of this methodology. However, a drawback of this method is the increased computational
complexity compared to the classical FCM.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>This paper presented a modified reliability-based fuzzy clustering algorithm devised by modifications
on the objective function of the classical FCM algorithm. The primary goal of this approach was to
construct a more robust clustering framework, less sensitive to the presence of noise, outliers and
uncertain instances. The proposed approach leverages entropy-based metrics to dynamically
evaluate a confidence value for each data point, in order to control their influence during the
clustering process. The optimization of the modified objective function is carried out by the RMSprop
optimization technique, in order to achieve a more flexible and adaptive clustering process.
The experimental procedures applied to several benchmark datasets and their slightly distorted
variants demonstrated that the proposed algorithm generally performs with better fuzzy V-measure
scores compared to the classical FCM algorithm. These findings indicate the method’s improved
resilience to noise and uncertain data and its capability to distinguish inherent clusters in challenging
scenarios. Despite the natural computational overhead introduced by the adaptive modification, the
overall gains in clustering quality and stability suggest that this reliability-based framework offers
promising directions for robust fuzzy clustering in real-world applications.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-8">
      <title>References</title>
      <p>[9] Wang, Hesheng, and Baowei Fei. "A modified fuzzy C-means classification method using a
multiscale diffusion filtering scheme." Medical Image Analysis 13, no. 2 (2009): 193-202.
[10] Xiong, Xuejian, Kap Chan, and Kian Lee Tan. "Similarity-driven cluster merging method for
unsupervised fuzzy clustering." arXiv preprint arXiv:1207.4155 (2012).
[11] Zhao, Kaixin, Yaping Dai, Zhiyang Jia, and Ye Ji. "General fuzzy c-means clustering strategy:
Using objective function to control fuzziness of clustering results." IEEE Transactions on Fuzzy
Systems 30, no. 9 (2021): 3601-3616.
[12] Bagherinia, Ali, Behrooz Minaei-Bidgoli, Mehdi Hosseinzadeh, and Hamid Parvin.
"Reliability-based fuzzy clustering ensemble." Fuzzy Sets and Systems 413 (2021): 1-28.
[13] Bedalli, Erind, and Ilia Ninka. "Exploring an educational system’s data through fuzzy cluster
analysis." In 11th Annual International Conference on Information Technology &amp; Computer
Science. 2014.
[14] Bedalli, Erind, Enea Mançellari, and Ozcan Asilkan. "A heterogeneous cluster ensemble model
for improving the stability of fuzzy cluster analysis." Procedia Computer Science 102 (2016):
129-136.
[15] Liu, Zhe, and Sukumar Letchmunan. "Enhanced fuzzy clustering for incomplete instance with
evidence combination." ACM Transactions on Knowledge Discovery from Data 18, no. 3 (2024):
120.
[16] Zou, Fangyu, Li Shen, Zequn Jie, Weizhong Zhang, and Wei Liu. "A sufficient condition for
convergences of Adam and RMSprop." In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pp. 11127-11135. 2019.
[17] Liu, Jinlan, Dongpo Xu, Huisheng Zhang, and Danilo Mandic. "On hyper-parameter selection
for guaranteed convergence of RMSProp." Cognitive Neurodynamics 18, no. 6 (2024): 3227-3237.
[18] Babu, D. Vijendra, C. Karthikeyan, and Abhishek Kumar. "Performance analysis of cost and
accuracy for whale swarm and RMSprop optimizer." In IOP Conference Series: Materials Science
and Engineering, vol. 993, no. 1, p. 012080. IOP Publishing, 2020.
[19] Amarnath, B., S. Balamurugan, and Appavu Alias. "Review on feature selection techniques and
its impact for effective data classification using UCI machine learning repository
dataset." Journal of Engineering Science and Technology 11, no. 11 (2016): 1639-1646.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Oyewole</surname>
            ,
            <given-names>Gbeminiyi</given-names>
          </string-name>
          <string-name>
            <surname>John</surname>
          </string-name>
          , and George Alex Thopil.
          <article-title>"Data clustering: application and trends</article-title>
          .
          <source>" Artificial intelligence review 56</source>
          , no.
          <issue>7</issue>
          (
          <year>2023</year>
          ):
          <fpage>6439</fpage>
          -
          <lpage>6475</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Ruspini</surname>
            ,
            <given-names>Enrique H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>James</surname>
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Bezdek</surname>
          </string-name>
          , and James M. Keller.
          <article-title>"Fuzzy clustering: A historical perspective</article-title>
          .
          <source>" IEEE Computational Intelligence Magazine</source>
           
          <volume>14</volume>
          , no.
          <issue>1</issue>
          (
          <year>2019</year>
          ):
          <fpage>45</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Lu</surname>
            , Jie, Guangzhi Ma, and
            <given-names>Guangquan</given-names>
          </string-name>
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          .
          <article-title>"Fuzzy machine learning: A comprehensive framework and systematic review." IEEE Transactions on Fuzzy Systems (</article-title>
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Rada</surname>
            , Rexhep, Erind Bedalli, Sokol Shurdhi, and
            <given-names>Betim</given-names>
          </string-name>
          <string-name>
            <surname>Çiço</surname>
          </string-name>
          .
          <article-title>"A comparative analysis on prototype-based clustering methods."</article-title>
          <source>In 2023 12th Mediterranean Conference on Embedded Computing (MECO)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . IEEE,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Höppner</surname>
          </string-name>
          , Frank, Frank Klawonn, Rudolf Kruse, and Thomas Runkler. 
          <article-title>Fuzzy cluster analysis: methods for classification, data analysis and image recognition</article-title>
          . John Wiley &amp; Sons (
          <year>1999</year>
          ):
          <fpage>71</fpage>
          -
          <lpage>78</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Menard</surname>
            ,
            <given-names>Michel</given-names>
            , and Michel
          </string-name>
          <string-name>
            <surname>Eboueya</surname>
          </string-name>
          .
          <article-title>"Extreme physical information and objective function in fuzzy clustering." Fuzzy Sets</article-title>
          and Systems 
          <volume>128</volume>
          , no.
          <issue>3</issue>
          (
          <year>2002</year>
          ):
          <fpage>285</fpage>
          -
          <lpage>303</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Timm</surname>
            , Heiko, and
            <given-names>Rudolf</given-names>
          </string-name>
          <string-name>
            <surname>Kruse</surname>
          </string-name>
          .
          <article-title>"A modification to improve possibilistic fuzzy cluster analysis."</article-title>
          <source>In 2002 IEEE World Congress on Computational Intelligence. IEEE International Conference on Fuzzy Systems. FUZZ-IEEE'02. Proceedings</source>
          , vol.
          <volume>2</volume>
          (
          <year>2002</year>
          ):
          <fpage>1460</fpage>
          -
          <lpage>1465</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Kang</surname>
            , Jiayin, Lequan Min, Qingxian Luan,
            <given-names>Xiao</given-names>
          </string-name>
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>and Jinzhu</given-names>
          </string-name>
          <string-name>
            <surname>Liu</surname>
          </string-name>
          .
          <article-title>"Novel modified fuzzy c-means algorithm with applications</article-title>
          .
          <source>" Digital signal processing 19, no. 2</source>
          (
          <year>2009</year>
          ):
          <fpage>309</fpage>
          -
          <lpage>319</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>