<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Online Image Segmentation using Credibilistic Fuzzy Clustering</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevgeniy Bodyanskiy</string-name>
          <email>yevgeniy.bodyanskiy@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alina Shafronenko</string-name>
          <email>alina.shafronenko@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Diana Rudenko</string-name>
          <email>diana.rudenko@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anton Polubiekhin</string-name>
          <email>anton.polubiekhin@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Frolov</string-name>
          <email>dmytro.frolov@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Nauky ave 14, Kharkiv, 61166</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Computational intelligence methods are widely used to solve many complex problems, including traditional Data Mining as well as such new directions as Dynamic Data Mining, Data Stream Mining, Big Data Mining, Web Mining, Text Mining, etc. The paper proposes new adaptive online methods of fuzzy robust clustering-segmentation of data streams based on the probabilistic, possibilistic, and credibilistic approaches. The proposed approach makes it possible to solve the clustering task in online mode, when data are fed for processing sequentially, possibly in real time.</p>
      </abstract>
      <kwd-group>
        <kwd>segmentation</kwd>
        <kwd>clustering</kwd>
        <kwd>data stream</kwd>
        <kwd>credibilistic approach</kwd>
        <kwd>fuzzy image segmentation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
The current state of technological development is inextricably linked with the development of
computerized tools, which, in turn, are dependent on the mathematical apparatus and practical
algorithms that use it. The development of computer tools, in particular hardware, acts as a
catalyst for the development of existing and the emergence of new scientific fields, such as Data
Science. Modern capabilities of computing environments allow the implementation of
algorithmically sufficiently complex methods that are the basis of intellectual analysis. And this
should become an impetus for the development of new hardware and software systems based on
the theoretical principles of artificial intelligence.</p>
      <p>Recently, in the tasks of analyzing and processing non-stationary signals of an arbitrary nature
under conditions of uncertainty, computational intelligence methods are increasingly being used,
among which hybrid neural networks can be distinguished.</p>
      <p>By the task of data segmentation, we will understand the division of the data sample into
homogeneous homomorphic segments based on the analysis of changes in the internal properties
of the data. Currently, several segmentation methods are known, namely using wavelet analysis
[1], fractal-wavelet technologies [2], neuro-fuzzy technologies [3-5], etc.</p>
      <p>Depending on the specifics of the problem being solved, two main types of forecasting and
segmentation methods can be applied: real-time and batch.</p>
      <p>Many neural network architectures, including hybrid structures, are used to solve this kind of problem, but these systems are either cumbersome in their architecture or not sufficiently adapted to real-time learning. In most cases, the activation functions of such networks are sigmoidal functions, splines, polynomials, and radial basis functions.</p>
      <p>2. Credibilistic fuzzy clustering</p>
      <p>Traditionally, the initial information for the clustering problem is a sample of observations consisting of N n-dimensional feature vectors:</p>
      <p>X = {x(1), x(2), ..., x(k), ..., x(N)}, x(k) = (x_1(k), ..., x_n(k))^T ∈ R^n, k = 1, 2, ..., N, (1)
and the result of the algorithm is the partition of the initial data set into m classes with a certain membership level w_j(k) of the k-th feature vector to the j-th cluster.</p>
      <p>At the same time, there is a wide class of problems when the initial information comes not in
vector, but in matrix form, i.e.</p>
      <p>x(k) = {x_{i1 i2}(k)}, (2)
where i1 = 1, 2, ..., n1, i2 = 1, 2, ..., n2, k = 1, 2, ..., N. This situation is characteristic, for example, of image processing [6], when the initial (N1 × N2) matrix is divided into N = N1 N2 (n1 n2)^{-1} fragment matrices of size (n1 × n2) that are subject to clustering, as a result of which segments of the image that are homogeneous in some sense are formed. Traditionally, this problem is solved by preliminary vectorization of the fragments and the use of already known procedures, the most popular of which is the fuzzy C-means (FCM) clustering method [6, 7].</p>
      <p>To process matrix data, it is necessary to introduce matrix methods of clustering-segmentation, for which it is advisable to introduce the matrix method of fuzzy C-means, which is a generalization of FCM. This method avoids unnecessary vectorization-devectorization operations when processing data given in the form of two-dimensional arrays and provides information processing in online mode.</p>
      <p>So, let the sample of observations be given</p>
      <p>x(k) = {x_{i1 i2}(k)} ∈ R^{n1×n2}, k = 1, 2, ..., N,
at the same time, for the convenience of further processing, these data are pre-centered relative to the average
x̄ = (1/N) ∑_{k=1}^{N} x(k)
and normalized to the spherical norm (Frobenius norm)
||x(k)|| = (Tr(x(k) x^T(k)))^{1/2}.</p>
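      <p>The centering and Frobenius-norm scaling described above can be sketched in NumPy; the function name and the small guard constant are illustrative choices, not part of the paper:</p>
      <p>
```python
import numpy as np

def preprocess(X):
    """Center matrix observations on their sample mean and scale each
    one to unit spherical (Frobenius) norm, as described above.

    X : array of shape (N, n1, n2). Returns the preprocessed array.
    """
    Xc = X - X.mean(axis=0)                       # subtract the average matrix
    # Frobenius norm of each observation: sqrt(Tr(x x^T))
    norms = np.sqrt(np.einsum('kab,kab->k', Xc, Xc)) + 1e-12
    return Xc / norms[:, None, None]
```
      </p>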
      <p>The matrix probabilistic criterion is used as the objective function of clustering:
E(w_j(k), c_j) = ∑_{k=1}^{N} ∑_{j=1}^{m} w_j^β(k) D^2(x(k), c_j) = ∑_{k=1}^{N} ∑_{j=1}^{m} w_j^β(k) Tr((x(k) − c_j)(x(k) − c_j)^T), (3)
in the presence of constraints
∑_{j=1}^{m} w_j(k) = 1 ∨ ∑_{j=1}^{m} w_j(k) − 1 = 0, k = 1, 2, ..., N; 0 &lt; ∑_{k=1}^{N} w_j(k) &lt; N, j = 1, 2, ..., m.</p>
      <p>By introducing the Lagrange function:</p>
      <p>L(w_j(k), c_j, λ(k)) = ∑_{k=1}^{N} (∑_{j=1}^{m} w_j^β(k) D^2(x(k), c_j) + λ(k)(∑_{j=1}^{m} w_j(k) − 1)), (4)
where λ(k) is an undetermined Lagrange multiplier, and solving the system of Kuhn-Tucker equations
∂L(w_j(k), c_j, λ(k))/∂w_j(k) = β w_j^{β−1}(k) D^2(x(k), c_j) + λ(k) = 0;
∂L(w_j(k), c_j, λ(k))/∂λ(k) = ∑_{j=1}^{m} w_j(k) − 1 = 0;
∂L(w_j(k), c_j, λ(k))/∂c_j = −2 ∑_{k=1}^{N} w_j^β(k)(x(k) − c_j) = O,
where ∂L(w_j(k), c_j, λ(k))/∂c_j is the (n1 × n2) matrix formed from the partial derivatives ∂L(w_j(k), c_j, λ(k))/∂c_{j i1 i2}, and O is the zero matrix of the same dimension, we arrive at the final form of the algorithm:
w_j(k) = (D^2(x(k), c_j))^{1/(1−β)} / ∑_{l=1}^{m} (D^2(x(k), c_l))^{1/(1−β)};
λ(k) = −β (∑_{l=1}^{m} (D^2(x(k), c_l))^{1/(1−β)})^{1−β};
c_j = ∑_{k=1}^{N} w_j^β(k) x(k) / ∑_{k=1}^{N} w_j^β(k). (5)</p>
      <p>The resulting system gives rise to a wide class of clustering procedures. Thus, if we set β = 2, we get a simple and effective matrix clustering algorithm [8], which is a generalization of the popular procedure of J. Bezdek [6]:
w_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{−1} / ∑_{l=1}^{m} (Tr((x(k) − c_l)(x(k) − c_l)^T))^{−1};
c_j = ∑_{k=1}^{N} w_j^2(k) x(k) / ∑_{k=1}^{N} w_j^2(k), (6)
where Tr is the matrix trace symbol.</p>
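      <p>The batch matrix fuzzy C-means procedure above can be sketched in Python with NumPy. This is a minimal illustrative sketch, not the authors' implementation: the random centroid initialization, the fixed iteration count, and the small guard constant are our assumptions.</p>
      <p>
```python
import numpy as np

def matrix_fcm(X, m, beta=2.0, n_iter=100, seed=0):
    """Batch matrix fuzzy C-means with the trace (Frobenius) distance.

    X : array of shape (N, n1, n2) -- sample of matrix observations.
    m : number of clusters. Returns (W, C): memberships of shape (N, m)
    and centroids of shape (m, n1, n2).
    """
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    C = X[rng.choice(N, m, replace=False)].copy()   # initial centroids
    for _ in range(n_iter):
        # D2[k, j] = Tr((x(k)-c_j)(x(k)-c_j)^T) = squared Frobenius norm
        diff = X[:, None, :, :] - C[None, :, :, :]
        D2 = np.einsum('kjab,kjab->kj', diff, diff) + 1e-12
        # membership update for a general fuzzifier beta
        U = D2 ** (1.0 / (1.0 - beta))
        W = U / U.sum(axis=1, keepdims=True)
        # centroid update: weighted mean with weights w_j^beta
        Wb = W ** beta
        C = np.einsum('kj,kab->jab', Wb, X) / Wb.sum(axis=0)[:, None, None]
    return W, C
```
      </p>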
      <p>The main difference between the probabilistic and possibilistic approaches is that probabilistic algorithms use relative similarities between objects and clusters, while possibilistic algorithms use absolute similarities.</p>
      <p>Instead of the fuzzy partition matrix of the fuzzy C-means algorithm, the possibilistic C-means algorithm uses an (N × m) matrix of possibilities, or typicality matrix, T = {t_j(k)}, where t_j(k) ∈ [0, 1] is the possibility that the object x(k) belongs to cluster j. The possibilistic matrix has only the following limitation:</p>
      <p>0 &lt; ∑_{j=1}^{m} t_j(k) ≤ m, k = 1, 2, ..., N.</p>
      <p>This means that an object can have a typicality vector that contains only values close to zero (such objects are usually considered noise) or only ones.</p>
      <p>Krishnapuram, Keller et al. proposed the possibilistic C-means (PCM) algorithm and two algorithms that combine the probabilistic and possibilistic approaches: the fuzzy-possibilistic C-means algorithm (FPCM) and the possibilistic-fuzzy C-means algorithm (PFCM) [9-11].</p>
      <p>In the PCM algorithm, formula (6) was replaced by the expressions:
t_j(k) = 1 / (1 + (Tr((x(k) − c_j)(x(k) − c_j)^T) / γ_j)^{1/(β−1)}); (7)
c_j = ∑_{k=1}^{N} t_j^β(k) x(k) / ∑_{k=1}^{N} t_j^β(k), (8)
where γ_j &gt; 0 is a constant determined empirically. It can be seen that the calculation of the cluster prototype in formulas (6) and (8) is identical, with the only difference that the fuzzy partition matrix is changed to the possibility matrix. The calculation of the possibility of an object belonging to a cluster in formula (7) can be justified as a bell-shaped function presented in Figure 1. Keller and Krishnapuram suggested choosing the parameter γ_j in the form:
γ_j = K ∑_{k=1}^{N} w_j^β(k) Tr((x(k) − c_j)(x(k) − c_j)^T) / ∑_{k=1}^{N} w_j^β(k), (9)
where K &gt; 0 (most often K = 1). But calculating γ_j by formula (9) requires memory to store the fuzzy partition matrix, as well as time for its use.</p>
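      <p>The PCM typicality update and the scale estimate described above admit a compact sketch in NumPy. The function names and array shapes are illustrative assumptions; only the two formulas themselves come from the text.</p>
      <p>
```python
import numpy as np

def pcm_typicality(D2, gamma, beta=2.0):
    """PCM typicality t_j(k) from squared trace distances.

    D2 : (N, m) array, D2[k, j] = Tr((x(k)-c_j)(x(k)-c_j)^T).
    gamma : length-m sequence of positive scale constants.
    """
    g = np.asarray(gamma, dtype=float)[None, :]
    return 1.0 / (1.0 + (D2 / g) ** (1.0 / (beta - 1.0)))

def pcm_gamma(W, D2, K=1.0, beta=2.0):
    """Keller-Krishnapuram scale: gamma_j is the w^beta-weighted mean
    of the squared distances to prototype j, multiplied by K."""
    Wb = W ** beta
    return K * (Wb * D2).sum(axis=0) / Wb.sum(axis=0)
```
      </p>
      <p>At D2 equal to γ_j the typicality drops to 0.5, which is the bell-shaped behaviour mentioned above.</p>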
      <p>The PCM algorithm does a good job of suppressing noise and can usually be applied when it is necessary to improve the results obtained with the help of other algorithms. Also, this algorithm can merge close clusters into one, which indicates that the initial number of clusters set in advance was too large (at the same time, the PCM algorithm can merge clusters that should be kept separate).</p>
      <p>The FPCM and PFCM algorithms use both a fuzzy partition matrix and a feature matrix, trying
to take advantage of both approaches.</p>
      <p>The FPCM algorithm uses the following formulas:
w_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(1−β)} / ∑_{l=1}^{m} (Tr((x(k) − c_l)(x(k) − c_l)^T))^{1/(1−β)};
t_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(1−η)} / ∑_{l=1}^{N} (Tr((x(l) − c_j)(x(l) − c_j)^T))^{1/(1−η)};
c_j = ∑_{k=1}^{N} (w_j^β(k) + t_j^η(k)) x(k) / ∑_{k=1}^{N} (w_j^β(k) + t_j^η(k)), (10)
where η &gt; 0 (in most cases η = 2).</p>
      <p>The FPCM algorithm uses the standard procedure for calculating the fuzzy partition matrix,
but the possibility matrix is calculated using a new formula. Cluster prototypes are calculated
using the sum of both matrices.</p>
      <p>The PFCM method uses the standard procedure for calculating the fuzzy partition matrix (as in formula (6)). The procedure for calculating the possibility matrix was taken from PCM (formula (7)) and slightly modified. Centroids are calculated as in the FPCM algorithm, but both matrices have their own weights:
w_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(1−β)} / ∑_{l=1}^{m} (Tr((x(k) − c_l)(x(k) − c_l)^T))^{1/(1−β)};
t_j(k) = 1 / (1 + ((b/γ_j) Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(β−1)});
c_j = ∑_{k=1}^{N} (a w_j^β(k) + b t_j^η(k)) x(k) / ∑_{k=1}^{N} (a w_j^β(k) + b t_j^η(k)), (11)
where a &gt; 0, b &gt; 0. The constants a and b determine the relative importance of the fuzzy partition matrix and the possibility matrix in the centroid calculation. By setting a = 0, algorithm (11) reduces to PCM, and by setting b = 0, algorithm (11) reduces to FCM.</p>
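      <p>The weighted PFCM centroid step can be sketched as follows; this is a sketch under assumed array shapes, not the authors' code. With a = 0 the step degenerates to PCM's centroid update, with b = 0 to FCM's.</p>
      <p>
```python
import numpy as np

def pfcm_centroids(X, W, T, a=1.0, b=1.0, beta=2.0, eta=2.0):
    """PFCM centroid step: prototypes weighted by a*w^beta + b*t^eta.

    X : (N, n1, n2) matrix observations; W, T : (N, m) fuzzy partition
    and typicality matrices. Returns centroids of shape (m, n1, n2).
    """
    Q = a * W ** beta + b * T ** eta              # combined weights
    return np.einsum('kj,kab->jab', Q, X) / Q.sum(axis=0)[:, None, None]
```
      </p>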
      <p>Analyzing all the presented methods, several conclusions can be drawn. First, the membership constraints of the FCM algorithm are too "strong", allowing outlier objects to be assigned to one or more clusters, which, in turn, can greatly distort the underlying structure of the data set. On the other hand, the PCM method's constraint on the typicalities is too weak: it allows an object to be assigned to a cluster independently of the rest of the data. Also, PCM is very sensitive to the initialization of the possibility matrix. The PFCM method is an efficient combination of the two approaches, and the clustering results depend on the settings of the parameters a, b, β, η.</p>
      <p>Algorithm (5) can be extended to the case when data for processing are received sequentially in online mode. To do this, by applying the Arrow-Hurwicz-Uzawa saddle-point search procedure to the Lagrangian (4), when the (k+1)-th observation is received, the estimates of the membership levels and centroids can be refined using the recurrence relations [12]
w_j(k+1) = (D^2(x(k+1), c_j(k)))^{1/(1−β)} / ∑_{l=1}^{m} (D^2(x(k+1), c_l(k)))^{1/(1−β)};
c_j(k+1) = c_j(k) − η(k) ∂L(w_j(k+1), c_j, λ(k+1))/∂c_j = c_j(k) + η(k) w_j^β(k+1)(x(k+1) − c_j(k)), (12)
for an arbitrary value of the fuzzifier β, and
w_j(k+1) = (Tr((x(k+1) − c_j(k))(x(k+1) − c_j(k))^T))^{−1} / ∑_{l=1}^{m} (Tr((x(k+1) − c_l(k))(x(k+1) − c_l(k))^T))^{−1};
c_j(k+1) = c_j(k) + η(k) w_j^2(k+1)(x(k+1) − c_j(k)), (13)
for β = 2.</p>
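      <p>One recurrent step of the online procedure above can be sketched as follows; a minimal sketch, assuming a harmonic learning-rate schedule η(k) = 1/(k+1), which is an illustrative choice rather than one prescribed by the paper.</p>
      <p>
```python
import numpy as np

def online_fcm_step(x_new, C, k, beta=2.0):
    """Refine memberships and centroids when observation x(k+1) arrives.

    x_new : (n1, n2) new matrix observation; C : (m, n1, n2) current
    centroids; k : number of observations processed so far.
    Returns (w, C_next).
    """
    diff = x_new[None] - C                          # (m, n1, n2)
    D2 = np.einsum('jab,jab->j', diff, diff) + 1e-12
    U = D2 ** (1.0 / (1.0 - beta))
    w = U / U.sum()                                 # memberships w_j(k+1)
    lr = 1.0 / (k + 1.0)                            # learning rate eta(k)
    # gradient step: move each centroid toward the new observation
    C_next = C + lr * (w ** beta)[:, None, None] * diff
    return w, C_next
```
      </p>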
      <p>It is easy to see that the expression (12) is an adaptive version of the procedure (5), and (13), accordingly, of (6).</p>
      <p>The matrix credibility criterion is used as the objective function of clustering:
E(Cred_j(k), c_j) = ∑_{k=1}^{N} ∑_{j=1}^{m} Cred_j^β(k) D^2(x(k), c_j) = ∑_{k=1}^{N} ∑_{j=1}^{m} Cred_j^β(k) Tr((x(k) − c_j)(x(k) − c_j)^T) (14)
in the presence of constraints
0 ≤ Cred_j(k) ≤ 1 ∀ j, k;
sup_j Cred_j(k) ≥ 0.5 ∀ k;
Cred_j(k) + sup_{l≠j} Cred_l(k) = 1,
where Cred_j(k) is the credibility level of the observation x(k).</p>
      <p>In the procedures of credibilistic fuzzy clustering, the level of membership is determined by membership functions [13]:
w_j(k) = φ_j(D(x(k), c_j)) = φ_j(Tr((x(k) − c_j)(x(k) − c_j)^T)). (15)</p>
      <p>It is easy to see that the membership level (15) based on the distance is in fact a similarity measure [14]. As such a measure, in [15] it was proposed to use the function:
w_j(k) = 1 / (1 + Tr((x(k) − c_j)(x(k) − c_j)^T)). (16)</p>
      <p>Thus, the credibilistic fuzzy clustering algorithm in batch form can be written as [16]:
w_j(k) = 1 / (1 + Tr((x(k) − c_j)(x(k) − c_j)^T));
w*_j(k) = w_j(k) / sup_l w_l(k); (17)
Cred_j(k) = (1/2)(w*_j(k) + 1 − sup_{l≠j} w*_l(k));
c_j = ∑_{k=1}^{N} Cred_j^β(k) x(k) / ∑_{k=1}^{N} Cred_j^β(k), (18)
and in the online mode this procedure (18) takes the form (19):</p>
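      <p>The credibilistic step above, which turns memberships into credibility levels, can be sketched in NumPy. The function name and shapes are illustrative assumptions; note that the winning cluster always receives a credibility of at least 0.5, in agreement with the constraint on sup Cred_j(k).</p>
      <p>
```python
import numpy as np

def credibility(W):
    """Credibility levels from a membership matrix W of shape (N, m):
    w* = w / sup_l w_l and Cred_j = 0.5*(w*_j + 1 - sup over l != j of w*_l).
    """
    Ws = W / W.max(axis=1, keepdims=True)           # normalized memberships
    N, m = Ws.shape
    Cred = np.empty_like(Ws)
    for j in range(m):
        # sup over l != j: drop column j before taking the row maximum
        others = np.delete(Ws, j, axis=1).max(axis=1)
        Cred[:, j] = 0.5 * (Ws[:, j] + 1.0 - others)
    return Cred
```
      </p>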
      <p>σ_j^2(k+1) = (∑_{l=1, l≠j}^{m} (Tr((x(k+1) − c_l)(x(k+1) − c_l)^T))^{1/(1−β)})^{−1};
w_j(k+1) = (1 + (Tr((x(k+1) − c_j)(x(k+1) − c_j)^T) / σ_j^2(k+1))^{1/(β−1)})^{−1};
w*_j(k+1) = w_j(k+1) / sup_l w_l(k+1);
Cred_j(k+1) = (1/2)(w*_j(k+1) + 1 − sup_{l≠j} w*_l(k+1));
c_j(k+1) = c_j(k) + η(k+1) Cred_j^β(k+1)(x(k+1) − c_j(k)), (19)
or, in the case when β = 2,
c_j(k+1) = c_j(k) + η(k+1) Cred_j^2(k+1)(x(k+1) − c_j(k)).</p>
      <p>It is easy to see that the recurrent credibilistic fuzzy clustering algorithm is no more complex than the online modifications of the probabilistic, possibilistic, and robust procedures [17, 18].</p>
      <p>3. Experimental research</p>
      <p>Digital images, including satellite images of the city of Kharkiv, were used to test the implemented matrix credibilistic modifications of the clustering algorithm. The samples have no missing attributes and are numeric.</p>
      <p>The result of the algorithm is the final fuzzy partition matrix for all sample objects and class
prototypes.</p>
      <p>When processing digital images, objects (matrices or vectors of the same dimensions) are
formed from fragments of this image, and each pixel from the RGB (Red-Green-Blue) color model
is converted to the Grayscale model, where the brightness of a pixel is expressed as a scalar value
from the interval [0,1]. The conversion from the RGB model to the Grayscale model is performed
according to the formula:</p>
      <p>Y = (0.299R + 0.587G + 0.114B) / 255,
where Y is the brightness of the pixel, and R, G, B are the brightnesses of the red, green, and blue components, respectively, whose values lie in the interval [0, 255].</p>
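      <p>The conversion formula above is straightforward to vectorize over a whole image; the function name is an illustrative choice:</p>
      <p>
```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an RGB image of shape (H, W, 3), channels in [0, 255],
    to grayscale brightness Y in [0, 1] using
    Y = (0.299 R + 0.587 G + 0.114 B) / 255.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
```
      </p>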
      <p>Observation sets formed from digital images are processed according to the same principle
as standard quantitative samples. After image processing, each cluster is assigned the colors of
the Grayscale model, and each object is colored in the color of the nearest cluster.</p>
      <p>To evaluate the quality of the algorithm, the following criteria were used: Partition
Coefficient (PC), Classification Entropy (CE), Partition Index (PI) with the same initialized fuzzy
partition matrix U0.</p>
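      <p>Two of the quality criteria mentioned, Partition Coefficient and Classification Entropy, can be sketched directly from the fuzzy partition matrix; the small epsilon guarding the logarithm is an implementation assumption.</p>
      <p>
```python
import numpy as np

def partition_coefficient(W):
    """PC = (1/N) * sum over k, j of w_j(k)^2; values closer to 1
    indicate a crisper partition. W : (N, m) fuzzy partition matrix."""
    return (W ** 2).sum() / W.shape[0]

def classification_entropy(W, eps=1e-12):
    """CE = -(1/N) * sum over k, j of w_j(k) * log(w_j(k));
    lower values indicate a crisper partition."""
    return -(W * np.log(W + eps)).sum() / W.shape[0]
```
      </p>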
      <p>Table 1 shows the results of the accuracy and speed of the clustering algorithms on the Iris
sample, and Table 2 shows the results of the satellite digital image of the city of Kharkiv. The time
given is an average for one iteration, considering the vectorization-devectorization operation.</p>
      <p>Figure 2 shows the initial image, the resampled sample (20% of the objects), the result of the cluster analysis, and the progress of the algorithm.</p>
      <p>Figure 3 shows the result of digital image clustering by the adaptive matrix method of fuzzy credibilistic clustering.</p>
      <p>The work is supported by the state budget scientific research project of Kharkiv National University of Radio Electronics "Deep hybrid systems of computational intelligence for data stream mining and their fast learning" (state registration number 0119U001403).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Chan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Efficient time series matching by wavelets</article-title>
          ,
          <source>in: Proceedings of 15th IEEE Int. Conf. on Data Engineering</source>
          ,
          <year>1999</year>
          , pp.
          <fpage>126</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Ruspini</surname>
            ,
            <given-names>Enrique H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>James</surname>
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Bezdek</surname>
          </string-name>
          , James M. Keller, Fuzzy clustering:
          <article-title>A historical perspective</article-title>
          ,
          <source>in: IEEE Computational Intelligence Magazine 14.1</source>
          ,
          <issue>2019</issue>
          , pp.
          <fpage>45</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.C.</given-names>
            <surname>Bezdek</surname>
          </string-name>
          ,
          <article-title>Pattern Recognition with Fuzzy Objective Function Algorithms</article-title>
          , N.Y.: Plenum Press,
          <year>1981</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Oyewole</surname>
            ,
            <given-names>Gbeminiyi</given-names>
          </string-name>
          <string-name>
            <surname>John</surname>
          </string-name>
          , and George Alex Thopil,
          <article-title>Data clustering: application and trends</article-title>
          ,
          <source>Artificial Intelligence Review 56.7</source>
          ,
          <issue>2023</issue>
          , pp.
          <fpage>6439</fpage>
          -
          <lpage>6475</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Agrawal</surname>
            ,
            <given-names>Anurag</given-names>
          </string-name>
          <string-name>
            <surname>Vijay</surname>
          </string-name>
          , et al.
          <article-title>"A probability-based fuzzy algorithm for multi-attribute decision-analysis with application to aviation disaster decision-making." Decision Analytics Journal 8 (</article-title>
          <year>2023</year>
          ):
          <fpage>100310</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Wongkhuenkaew</surname>
          </string-name>
          ,
          <string-name>
            <surname>Ritipong</surname>
          </string-name>
          , et al.
          <article-title>"Fuzzy K-nearest neighbor based dental fluorosis classification using multi-prototype unsupervised possibilistic fuzzy clustering via cuckoo search algorithm."</article-title>
          <source>International Journal of Environmental Research and Public Health 20.4</source>
          (
          <year>2023</year>
          ):
          <fpage>3394</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Bodyanskiy</surname>
            , Yevgeniy,
            <given-names>Valentyna</given-names>
          </string-name>
          <string-name>
            <surname>Volkova</surname>
            , and
            <given-names>Mark</given-names>
          </string-name>
          <string-name>
            <surname>Skuratov</surname>
          </string-name>
          .
          <article-title>"Matrix Neuro-Fuzzy SelfOrganizing Clustering Network."</article-title>
          <source>Computer Science</source>
          (
          <volume>1407</volume>
          -
          <fpage>7493</fpage>
          )
          <fpage>50</fpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] Zhang,
          <string-name>
            <surname>Yong</surname>
          </string-name>
          , et al.
          <article-title>"Possibilistic c-means clustering based on the nearest-neighbour isolation similarity</article-title>
          .
          <source>" Journal of Intelligent &amp; Fuzzy Systems 44.2</source>
          (
          <year>2023</year>
          ):
          <fpage>1781</fpage>
          -
          <lpage>1792</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Hashemi</surname>
            ,
            <given-names>Seyed</given-names>
          </string-name>
          <string-name>
            <surname>Emadedin</surname>
          </string-name>
          , Fatemeh Gholian-Jouybari, and
          <string-name>
            <surname>Mostafa</surname>
          </string-name>
          Hajiaghaei-Keshteli.
          <article-title>"A fuzzy C-means algorithm for optimizing data clustering</article-title>
          .
          <source>" Expert Systems with Applications</source>
          <volume>227</volume>
          (
          <year>2023</year>
          ):
          <fpage>120377</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <surname>Yong</surname>
          </string-name>
          , et al.
          <article-title>"Possibilistic c-means clustering based on the nearest-neighbour isolation similarity</article-title>
          .
          <source>" Journal of Intelligent &amp; Fuzzy Systems 44.2</source>
          (
          <year>2023</year>
          ):
          <fpage>1781</fpage>
          -
          <lpage>1792</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Hussain</surname>
            , Ishtiaq,
            <given-names>Kristina P.</given-names>
          </string-name>
          <string-name>
            <surname>Sinaga</surname>
          </string-name>
          , and
          <string-name>
            <surname>Miin-Shen Yang</surname>
          </string-name>
          .
          <article-title>"Unsupervised multiview fuzzy cmeans clustering algorithm</article-title>
          .
          <source>" Electronics</source>
          <volume>12</volume>
          .21 (
          <year>2023</year>
          ):
          <fpage>4467</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <surname>Miin-Shen</surname>
          </string-name>
          ,
          <article-title>and Josephine BM Benjamin. "Sparse possibilistic c-means clustering with Lasso."</article-title>
          <source>Pattern Recognition</source>
          <volume>138</volume>
          (
          <year>2023</year>
          ):
          <fpage>109348</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Bodyanskiy</surname>
            <given-names>Ye</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Shafronenko A.</given-names>
            ,
            <surname>Mashtalir</surname>
          </string-name>
          <string-name>
            <surname>S.</surname>
          </string-name>
          ,
          <article-title>Online robust fuzzy clustering of data with omissions using similarity measure of special type</article-title>
          ,
          <source>Lecture Notes in Computational Intelligence and Decision</source>
          Waking-Cham: Springer,
          <year>2020</year>
          , pp.
          <fpage>637</fpage>
          -
          <lpage>646</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Young</surname>
            <given-names>F.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hamer R.M.</surname>
          </string-name>
          <article-title>Theory and Applications of Multidimensional Scaling-</article-title>
          <string-name>
            <surname>Hillsdale</surname>
          </string-name>
          , N.J.: Erlbaum,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Dahiya</surname>
            , Sonika, and
            <given-names>Anjana</given-names>
          </string-name>
          <string-name>
            <surname>Gosain</surname>
          </string-name>
          .
          <article-title>"DOIFCM: An Outlier Efficient IFCM." Computational Intelligence in Analytics and Information Systems</article-title>
          . Apple Academic Press,
          <year>2023</year>
          .
          <fpage>135</fpage>
          -
          <lpage>149</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Kyrychenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ostapov</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Malyk</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <article-title>Cluster Analysis of Information in Complex Networks</article-title>
          , in:
          <source>International Journal of Computing</source>
          ,
          <volume>22</volume>
          (
          <issue>4</issue>
          ),
          <year>2023</year>
          , pp.
          <fpage>515</fpage>
          -
          <lpage>523</lpage>
          . https://doi.org/10.47839/ijc.22.4.
          <fpage>3360</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Linan</surname>
            ,
            <given-names>M. N.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gerardo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Medina</surname>
          </string-name>
          .
          <article-title>"Modified weight initialization in the self-organizing map using Nguyen-Widrow initialization algorithm</article-title>
          .
          <source>" Journal of Physics: Conference Series</source>
          . Vol.
          <volume>1235</volume>
          . No.
          <article-title>1</article-title>
          .
          <string-name>
            <given-names>IOP</given-names>
            <surname>Publishing</surname>
          </string-name>
          ,
          <year>2019</year>
          . https://doi.org/10.47839/ijc.19.1.
          <fpage>1694</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Sirola</surname>
            , Martti,
            <given-names>and John Einar Hulsund.</given-names>
          </string-name>
          <article-title>"Machine-learning methods in prognosis of ageing phenomena in nuclear power plant components."</article-title>
          <source>International Scientific Journal of Computing 20.1</source>
          (
          <year>2021</year>
          ):
          <fpage>11</fpage>
          -
          <lpage>21</lpage>
          .https://doi.org/10.47839/ijc.20.1.2086
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>