<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Online Image Segmentation using Credibilistic Fuzzy Clustering</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yevgeniy</forename><surname>Bodyanskiy</surname></persName>
							<email>yevgeniy.bodyanskiy@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alina</forename><surname>Shafronenko</surname></persName>
							<email>alina.shafronenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Diana</forename><surname>Rudenko</surname></persName>
							<email>diana.rudenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anton</forename><surname>Polubiekhin</surname></persName>
							<email>anton.polubiekhin@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dmytro</forename><surname>Frolov</surname></persName>
							<email>dmytro.frolov@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<address>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Online Image Segmentation using Credibilistic Fuzzy Clustering</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">6171DD68208ECC9158D8C758B3EB223F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:24+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Segmentation</term>
					<term>clustering</term>
					<term>data stream</term>
					<term>credibilistic approach</term>
					<term>fuzzy image segmentation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Computational intelligence methods are widely used to solve many complex problems, including, of course, traditional Data Mining and such newer directions as Dynamic Data Mining, Data Stream Mining, Big Data Mining, Web Mining, Text Mining, etc. This paper proposes new adaptive online methods of fuzzy robust clustering-segmentation of data streams based on the probabilistic, possibilistic, and credibilistic approaches. Using the proposed approach, it is possible to solve the clustering task in online mode, when data are fed for processing sequentially, possibly in real time.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The current state of technological development is inextricably linked with the development of computerized tools, which, in turn, depend on the mathematical apparatus and the practical algorithms that use it. The development of computer tools, in particular hardware, acts as a catalyst for the development of existing scientific fields and the emergence of new ones, such as Data Science. Modern computing environments allow the implementation of algorithmically complex methods that form the basis of intelligent analysis, and this should become an impetus for the development of new hardware and software systems based on the theoretical principles of artificial intelligence.</p><p>Recently, in the tasks of analyzing and processing non-stationary signals of an arbitrary nature under conditions of uncertainty, computational intelligence methods are increasingly being used, among which hybrid neural networks can be distinguished.</p><p>By the task of data segmentation, we understand the division of a data sample into homogeneous, homomorphic segments based on the analysis of changes in the internal properties of the data. Currently, several segmentation methods are known, namely those using wavelet analysis <ref type="bibr" target="#b0">[1]</ref>, fractal-wavelet technologies <ref type="bibr" target="#b1">[2]</ref>, neuro-fuzzy technologies <ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref>, etc.</p><p>Depending on the specifics of the problem being solved, two main types of forecasting and segmentation methods can be applied: real-time and batch.</p><p>Many neural network architectures, including hybrid structures, are used to solve this kind of problem, but these systems are either cumbersome in their architecture or not sufficiently adapted for real-time learning. In most cases, the activation functions of such networks are sigmoidal functions, splines, polynomials, and radial basis functions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Credibilistic fuzzy clustering</head><p>Traditionally, the initial information for the clustering problem is a sample of observations consisting of N n-dimensional feature vectors:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_0">X = {x(1), x(2), ..., x(k), ..., x(N)}, x(k) = (x_1(k), x_2(k), ..., x_n(k))^T ∈ R^n, k = 1, 2, ..., N,</formula><p>and the result of the algorithm is a partition of the initial data set into m classes, the k-th feature vector belonging to the j-th cluster with a certain membership level w_j(k). At the same time, there is a wide class of problems in which the initial information arrives not in vector but in matrix form, i.e.</p><formula xml:id="formula_1">x(k) = {x_{i_1 i_2}(k)}, where i_1 = 1, 2, ..., n_1; i_2 = 1, 2, ..., n_2; k = 1, 2, ..., N.</formula><p>Such a situation is characteristic, for example, of image processing <ref type="bibr" target="#b5">[6]</ref>, when the initial (N_1 × N_2)-matrix is divided into N = N_1 N_2 (n_1 n_2)^{-1} fragment matrices of size (n_1 × n_2) that are subject to clustering, which yields segments of the image that are homogeneous in some sense. Traditionally, this problem is solved by preliminary vectorization of the fragments and the use of already known procedures, the most popular of which is the fuzzy C-means clustering method <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>To process matrix data, it is necessary to introduce matrix methods of clustering-segmentation, for which it is advisable to consider the matrix method of fuzzy C-means, which is a generalization of FCM. This method avoids unnecessary vectorization-devectorization operations when processing data given in the form of two-dimensional arrays and provides information processing in online mode.</p><p>So, let the sample of observations be given as</p><formula xml:id="formula_2">x(k) = {x_{i_1 i_2}(k)} ∈ R^{n_1 × n_2}, k = 1, 2, ..., N;</formula><p>at the same time, for the convenience of further processing, these data are pre-centered relative to the average:</p><formula xml:id="formula_3">x̄ = (1/N) Σ_{k=1}^{N} x(k)<label>(1)</label></formula><p>and normalized to the spherical (Frobenius) norm:</p><formula xml:id="formula_4">‖x(k)‖ = (Tr(x(k) x^T(k)))^{1/2}.<label>(2)</label></formula><p>The matrix probabilistic criterion is used as the objective function of clustering:</p><formula xml:id="formula_5">E(w_j(k), c_j) = Σ_{k=1}^{N} Σ_{j=1}^{m} w_j^β(k) D^2(x(k), c_j) = Σ_{k=1}^{N} Σ_{j=1}^{m} w_j^β(k) Tr((x(k) − c_j)(x(k) − c_j)^T),<label>(3)</label></formula><p>in the presence of the constraints</p><formula xml:id="formula_6">Σ_{j=1}^{m} w_j(k) = 1, k = 1, 2, ..., N; 0 &lt; Σ_{k=1}^{N} w_j(k) &lt; N, j = 1, 2, ..., m.</formula><p>By introducing the Lagrange function</p><formula xml:id="formula_7">L(w_j(k), c_j, λ(k)) = Σ_{k=1}^{N} Σ_{j=1}^{m} w_j^β(k) D^2(x(k), c_j) + Σ_{k=1}^{N} λ(k) (Σ_{j=1}^{m} w_j(k) − 1),<label>(4)</label></formula><p>where λ(k) is the undetermined Lagrange multiplier, and solving the system of Kuhn-Tucker equations</p><formula xml:id="formula_8">∂L(w_j(k), c_j, λ(k))/∂w_j(k) = β w_j^{β−1}(k) D^2(x(k), c_j) + λ(k) = 0; ∂L(w_j(k), c_j, λ(k))/∂λ(k) = Σ_{j=1}^{m} w_j(k) − 1 = 0; ∂L(w_j(k), c_j, λ(k))/∂c_j = −2 Σ_{k=1}^{N} w_j^β(k)(x(k) − c_j) = O,</formula><p>where ∂L(w_j(k), c_j, λ(k))/∂c_j is the (n_1 × n_2)-matrix formed from the partial derivatives ∂L(w_j(k), c_j, λ(k))/∂c_{j i_1 i_2}, and O is a matrix of the same dimension formed by zeros; thus, we arrive at the final form of the algorithm:</p><formula xml:id="formula_12">w_j(k) = (D^2(x(k), c_j))^{1/(1−β)} / Σ_{l=1}^{m} (D^2(x(k), c_l))^{1/(1−β)}; λ(k) = −β (Σ_{l=1}^{m} (β D^2(x(k), c_l))^{1/(1−β)})^{1−β}; c_j = Σ_{k=1}^{N} w_j^β(k) x(k) / Σ_{k=1}^{N} w_j^β(k).<label>(5)</label></formula><p>The resulting system gives rise to a wide class of clustering procedures. Thus, if we set β = 2, we get a simple and effective matrix clustering algorithm <ref type="bibr" target="#b7">[8]</ref>, which is a generalization of the popular procedure of J. 
Bezdek <ref type="bibr" target="#b5">[6]</ref>:</p><formula xml:id="formula_13">w_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{−1} / Σ_{l=1}^{m} (Tr((x(k) − c_l)(x(k) − c_l)^T))^{−1}; c_j = Σ_{k=1}^{N} w_j^2(k) x(k) / Σ_{k=1}^{N} w_j^2(k),<label>(6)</label></formula><p>where Tr is the matrix trace operator. The main difference between the probabilistic and possibilistic approaches is that probabilistic algorithms use relative similarities between objects and clusters, while possibilistic algorithms use absolute similarities.</p><p>Instead of the fuzzy partition matrix of the fuzzy C-means algorithm, the possibilistic C-means algorithm uses an (N × m)-matrix of possibilities, or typicality matrix, T = {t_j(k)}, where t_j(k) ∈ [0, 1] is the possibility that the object x(k) belongs to cluster j.</p><p>The possibilistic matrix has only the following limitations:</p><formula xml:id="formula_14">0 &lt; Σ_{j=1}^{m} t_j(k) ≤ m, k = 1, 2, ..., N.<label>(7)</label></formula><p>This means that an object can have a typicality vector that contains only values close to zero (usually such objects are considered noise) or only ones.</p><p>Krishnapuram, Keller et al. proposed the possibilistic C-means (PCM) algorithm and two algorithms that combine the probabilistic and possibilistic approaches: the fuzzy-possibilistic C-means algorithm (FPCM) and the possibilistic-fuzzy C-means algorithm (PFCM) <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref>.</p><p>In the PCM algorithm, formula (6) was replaced by the expression:</p><formula xml:id="formula_15">t_j(k) = 1 / (1 + (Tr((x(k) − c_j)(x(k) − c_j)^T) / γ_j)^{1/(β−1)}); c_j = Σ_{k=1}^{N} t_j^β(k) x(k) / Σ_{k=1}^{N} t_j^β(k),<label>(8)</label></formula><p>where γ_j &gt; 0 is a constant determined empirically. 
It can be seen that the calculation of the cluster prototype in formulas (6) and (8) is identical, with the only difference that the fuzzy partition matrix is replaced by the possibility matrix. The calculation of the possibility of an object belonging to a cluster in formula (8) can be interpreted as a bell-shaped function of the distance, shown in Figure <ref type="figure" target="#fig_0">1</ref>. The parameter γ_j is usually chosen as:</p><formula xml:id="formula_17">γ_j = K Σ_{k=1}^{N} w_j^β(k) Tr((x(k) − c_j)(x(k) − c_j)^T) / Σ_{k=1}^{N} w_j^β(k),<label>(9)</label></formula><p>where K &gt; 0 (most often K = 1). However, calculating γ_j by formula (9) requires memory to store the fuzzy partition matrix, as well as time for its use.</p><p>The PCM algorithm does a good job of suppressing noise and can usually be applied when it is necessary to improve the results obtained with the help of other algorithms. This algorithm can also merge close clusters into one, which indicates that the initially specified number of clusters was too large (at the same time, the PCM algorithm can merge clusters that should remain separate).</p><p>The FPCM and PFCM algorithms use both a fuzzy partition matrix and a typicality matrix, trying to take advantage of both approaches.</p><p>The FPCM algorithm uses the following formulas:</p><formula xml:id="formula_19">w_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(1−β)} / Σ_{l=1}^{m} (Tr((x(k) − c_l)(x(k) − c_l)^T))^{1/(1−β)}; t_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(1−η)} / Σ_{l=1}^{N} (Tr((x(l) − c_j)(x(l) − c_j)^T))^{1/(1−η)}; c_j = Σ_{k=1}^{N} (w_j^β(k) + t_j^η(k)) x(k) / Σ_{k=1}^{N} (w_j^β(k) + t_j^η(k)),<label>(10)</label></formula><p>where η &gt; 0 (in most cases η = 2). 
The FPCM algorithm uses the standard procedure for calculating the fuzzy partition matrix, but the possibility matrix is calculated using a new formula. Cluster prototypes are calculated using the sum of both matrices.</p><p>The PFCM method uses the standard procedure for calculating the fuzzy partition matrix (as in formula (6)). The procedure for calculating the possibility matrix was taken from PCM (8) and slightly modified. Centroids are calculated as in the FPCM algorithm, but both matrices have their own weights:</p><formula xml:id="formula_22">w_j(k) = (Tr((x(k) − c_j)(x(k) − c_j)^T))^{1/(1−β)} / Σ_{l=1}^{m} (Tr((x(k) − c_l)(x(k) − c_l)^T))^{1/(1−β)}; t_j(k) = 1 / (1 + (b Tr((x(k) − c_j)(x(k) − c_j)^T) / γ_j)^{1/(η−1)}); c_j = Σ_{k=1}^{N} (a w_j^β(k) + b t_j^η(k)) x(k) / Σ_{k=1}^{N} (a w_j^β(k) + b t_j^η(k)),<label>(11)</label></formula><p>where a &gt; 0, b &gt; 0. The constants a and b determine the relative importance of the fuzzy partition matrix and the possibility matrix in the centroid calculation. Setting a = 0 reduces algorithm (11) to PCM, and setting b = 0 reduces it to FCM.</p><p>Analyzing all the presented methods, several conclusions can be drawn. First, the membership constraint of the FCM algorithm is too "strong", forcing outlier objects to be assigned to one or more clusters, which, in turn, can greatly distort the underlying structure of the data set. On the other hand, the PCM method's constraint on the typicalities is too weak: it allows an object to be assigned to a cluster independently of the rest of the data. Also, PCM is very sensitive to the initialization of the possibility matrix. 
The PFCM method is an efficient combination of the two approaches, and the clustering results depend on the settings of the parameters a, b, β, η.</p><p>Algorithm (6) can be extended to the case when data for processing arrive sequentially in online mode. To do this, applying the Arrow-Hurwicz-Uzawa saddle-point search procedure to the Lagrangian (4), when the (k+1)-th observation is received, the estimates of the membership levels and centroids can be refined using the recurrence relations <ref type="bibr" target="#b11">[12]</ref>:</p><formula xml:id="formula_23">w_j(k+1) = (D^2(x(k+1), c_j(k)))^{1/(1−β)} / Σ_{l=1}^{m} (D^2(x(k+1), c_l(k)))^{1/(1−β)}; c_j(k+1) = c_j(k) − η(k+1) ∂L(w_j(k+1), c_j, λ(k+1))/∂c_j = c_j(k) + η(k+1) w_j^β(k+1)(x(k+1) − c_j(k)),<label>(12)</label></formula><p>for an arbitrary value of the fuzzifier β, and</p><formula xml:id="formula_26">w_j(k+1) = (Tr((x(k+1) − c_j(k))(x(k+1) − c_j(k))^T))^{−1} / Σ_{l=1}^{m} (Tr((x(k+1) − c_l(k))(x(k+1) − c_l(k))^T))^{−1}; c_j(k+1) = c_j(k) + η(k+1) w_j^2(k+1)(x(k+1) − c_j(k)),<label>(13)</label></formula><p>for β = 2.</p><p>It is easy to see that expression (12) is an adaptive version of procedure (5), and (13), accordingly, of (6).</p><p>The matrix credibility criterion is used as the objective function of clustering:</p><formula xml:id="formula_28">E(Cred_j(k), c_j) = Σ_{k=1}^{N} Σ_{j=1}^{m} Cred_j^β(k) D^2(x(k), c_j) = Σ_{k=1}^{N} Σ_{j=1}^{m} Cred_j^β(k) Tr((x(k) − c_j)(x(k) − c_j)^T),<label>(14)</label></formula><p>in the presence of the 
constraints</p><formula xml:id="formula_30">0 ≤ Cred_j(k) ≤ 1 ∀ j, k; sup_j Cred_j(k) ≥ 0.5 ∀ k; Cred_j(k) + sup_{l≠j} Cred_l(k) = 1,<label>(15)</label></formula><p>where Cred_j(k) is the credibility level of the observation x(k). In the procedures of credibilistic fuzzy clustering, the membership level is determined by the membership functions <ref type="bibr" target="#b12">[13]</ref>:</p><formula xml:id="formula_33">w_j(k) = φ_j(D(x(k), c_j)) = φ_j(Tr((x(k) − c_j)(x(k) − c_j)^T)),<label>(16)</label></formula><p>where φ_j decreases monotonically on the interval [0, ∞) with φ_j(0) = 1 and φ_j(D) → 0 as D → ∞.</p><p>It is easy to see that membership level (16), being based on the distance, is a similarity measure <ref type="bibr" target="#b13">[14]</ref>. As such a measure, in <ref type="bibr" target="#b14">[15]</ref> it was proposed to use the function:</p><formula xml:id="formula_36">w_j(k) = 1 / (1 + Tr((x(k) − c_j)(x(k) − c_j)^T)).<label>(17)</label></formula><p>Thus, the credibilistic fuzzy clustering algorithm in batch form can be written as <ref type="bibr" target="#b15">[16]</ref>:</p><formula xml:id="formula_38">w_j(k) = 1 / (1 + Tr((x(k) − c_j)(x(k) − c_j)^T)); w_j^*(k) = w_j(k) / sup_l w_l(k); Cred_j(k) = (1/2)(w_j^*(k) + 1 − sup_{l≠j} w_l^*(k)); c_j = Σ_{k=1}^{N} Cred_j^β(k) x(k) / Σ_{k=1}^{N} Cred_j^β(k);<label>(18)</label></formula><p>in online mode, procedure (18) takes the form:</p><formula xml:id="formula_42">w_j(k+1) = 1 / (1 + Tr((x(k+1) − c_j(k))(x(k+1) − c_j(k))^T)); w_j^*(k+1) = w_j(k+1) / sup_l w_l(k+1); Cred_j(k+1) = (1/2)(w_j^*(k+1) + 1 − sup_{l≠j} w_l^*(k+1)); c_j(k+1) = c_j(k) + η(k+1) Cred_j^β(k+1)(x(k+1) − c_j(k)),<label>(19)</label></formula><p>or, in the case when β = 2,</p><formula xml:id="formula_46">w_j(k+1) = 1 / (1 + Tr((x(k+1) − c_j(k))(x(k+1) − c_j(k))^T)); w_j^*(k+1) = w_j(k+1) / sup_l w_l(k+1); Cred_j(k+1) = (1/2)(w_j^*(k+1) + 1 − sup_{l≠j} w_l^*(k+1)); c_j(k+1) = c_j(k) + η(k+1) Cred_j^2(k+1)(x(k+1) − c_j(k)).<label>(20)</label></formula><p>It is easy to see that the recurrent credibilistic fuzzy clustering algorithm is no more complex than the online modifications of probabilistic, possibilistic, and robust procedures <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b17">18]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experimental research</head><p>Digital images, including satellite images of the city of Kharkiv, were used to test the implemented matrix credibilistic modifications of the clustering algorithm. The samples have no missing attributes and are numeric.</p><p>The result of the algorithm is the final fuzzy partition matrix for all sample objects and the class prototypes.</p><p>When processing digital images, objects (matrices or vectors of the same dimensions) are formed from fragments of the image, and each pixel of the RGB (Red-Green-Blue) color model is converted to the Grayscale model, where the brightness of a pixel is expressed as a scalar value from the interval [0, 1]. The conversion from the RGB model to the Grayscale model is performed according to the formula:</p><formula xml:id="formula_47">Y = (0.299 R + 0.587 G + 0.114 B) / 255,</formula><p>where Y is the brightness of the pixel, and R, G, B are the brightnesses of the red, green, and blue tones, respectively, whose values lie in the interval [0, 255]. Observation sets formed from digital images are processed according to the same principle as standard quantitative samples. After image processing, each cluster is assigned a color of the Grayscale model, and each object is colored in the color of the nearest cluster.</p><p>To evaluate the quality of the algorithms, the following criteria were used: Partition Coefficient (PC), Classification Entropy (CE), and Partition Index (PI), with the same initialized fuzzy partition matrix U0.</p><p>Table <ref type="table" target="#tab_0">1</ref> shows the accuracy and speed of the clustering algorithms on the Iris sample, and Table <ref type="table" target="#tab_1">2</ref> shows the results on the satellite digital image of the city of Kharkiv. The time given is an average for one iteration, considering the vectorization-devectorization operation. Figure <ref type="figure" target="#fig_4">2</ref> shows the initial image, the reduced sample (20% of objects), and the result of the cluster analysis.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A bell-shaped function showing the dependence of membership on distance; Keller and Krishnapuram suggested choosing the parameter γj in the form (9)</figDesc><graphic coords="4,121.25,420.72,351.98,216.80" type="bitmap" /></figure>
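The RGB-to-Grayscale conversion described above is a one-line computation; a minimal NumPy sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def rgb_to_grayscale(rgb):
    """Convert an (H, W, 3) RGB image with channel values in [0, 255]
    to brightness values Y in [0, 1], per Y = (0.299R + 0.587G + 0.114B) / 255."""
    rgb = np.asarray(rgb, dtype=float)
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]) / 255.0
```

Since the ITU-R weights sum to 1.0, white (255, 255, 255) maps to Y = 1 and black to Y = 0, so the resulting brightness matrix can be fed directly to the matrix clustering procedures.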
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 2 :</head><label>2</label><figDesc>The result of clustering: (a) initial digital image for clustering; (b) reduced sample (20% of objects) for clustering; (c) output image of the cluster analysis. Figure 3 shows the result of digital image clustering by the adaptive matrix method of fuzzy credibilistic clustering.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Output image after adaptive cluster analysis</figDesc><graphic coords="9,152.30,425.13,304.60,330.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 Results of cluster analysis on the Iris sample</head><label>1</label><figDesc></figDesc><table><row><cell>Methods</cell><cell>PC</cell><cell>CE</cell><cell>PI</cell><cell>Time (s)</cell></row><row><cell>FCM</cell><cell>0.531</cell><cell>0.811</cell><cell>12.19</cell><cell>0.003</cell></row><row><cell>Matrix method of FCM</cell><cell>0.531</cell><cell>0.811</cell><cell>12.19</cell><cell>0.0025</cell></row><row><cell>Matrix method of credibilistic clustering</cell><cell>0.530</cell><cell>0.811</cell><cell>12.18</cell><cell>0.0022</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 The results of the cluster analysis on the digital image</head><label>2</label><figDesc></figDesc><table><row><cell>Methods</cell><cell>PC</cell><cell>CE</cell><cell>PI</cell><cell>Time (s)</cell></row><row><cell>FCM</cell><cell>0.697</cell><cell>0.419</cell><cell>8.23</cell><cell>1.9</cell></row><row><cell>Matrix method of FCM</cell><cell>0.697</cell><cell>0.419</cell><cell>8.23</cell><cell>1.8</cell></row><row><cell>Matrix method of credibilistic clustering</cell><cell>0.695</cell><cell>0.419</cell><cell>8.22</cell><cell>1.8</cell></row></table></figure>
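The PC and CE criteria reported in Tables 1 and 2 are the standard Bezdek validity indices; a minimal sketch of their computation (function names are ours; PI is omitted since its exact variant is not specified in the text):

```python
import numpy as np

def partition_coefficient(U):
    """PC = (1/N) * sum of squared memberships.

    U is an (N, m) fuzzy partition matrix with rows summing to 1;
    PC = 1 for a crisp partition, 1/m for a maximally fuzzy one."""
    return float(np.sum(U ** 2) / U.shape[0])

def classification_entropy(U, eps=1e-12):
    """CE = -(1/N) * sum of u * ln(u); eps guards against log(0).

    CE = 0 for a crisp partition, ln(m) for a maximally fuzzy one."""
    return float(-np.sum(U * np.log(U + eps)) / U.shape[0])
```

Lower CE and higher PC indicate a less ambiguous partition, which is why the nearly identical values across the three methods in the tables suggest comparable partition quality at different per-iteration costs.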
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The work is supported by the state budget scientific research project of Kharkiv National University of Radio Electronics "Deep hybrid systems of computational intelligence for data stream mining and their fast learning" (state registration number 0119U001403).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Efficient time series matching by wavelets</title>
		<author>
			<persName><forename type="first">K</forename><surname>Chan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 15th IEEE Int. Conf. on Data Engineering</title>
				<meeting>15th IEEE Int. Conf. on Data Engineering</meeting>
		<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="126" to="133" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Fuzzy clustering: A historical perspective</title>
		<author>
			<persName><forename type="first">Enrique</forename><forename type="middle">H</forename><surname>Ruspini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">C</forename><surname>Bezdek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">M</forename><surname>Keller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Computational Intelligence Magazine</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="45" to="55" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Pattern Recognition with Fuzzy Objective Function Algorithms</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Bezdek</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1981">1981</date>
			<publisher>Plenum Press</publisher>
			<pubPlace>N.Y.</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Data clustering: application and trends</title>
		<author>
			<persName><forename type="first">Gbeminiyi</forename><forename type="middle">John</forename><surname>Oyewole</surname></persName>
		</author>
		<author>
			<persName><forename type="first">George</forename><forename type="middle">Alex</forename><surname>Thopil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Review</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="6439" to="6475" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A probability-based fuzzy algorithm for multi-attribute decision-analysis with application to aviation disaster decision-making</title>
		<author>
			<persName><forename type="first">Anurag</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><surname>Vijay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Decision Analytics Journal</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page">100310</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Fuzzy K-nearest neighbor based dental fluorosis classification using multi-prototype unsupervised possibilistic fuzzy clustering via cuckoo search algorithm</title>
		<author>
			<persName><forename type="first">Ritipong</forename><surname>Wongkhuenkaew</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Environmental Research and Public Health</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">3394</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Matrix Neuro-Fuzzy Self-Organizing Clustering Network</title>
		<author>
			<persName><forename type="first">Yevgeniy</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Valentyna</forename><surname>Volkova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Skuratov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Science</title>
		<imprint>
			<biblScope unit="page">50</biblScope>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Possibilistic c-means clustering based on the nearest-neighbour isolation similarity</title>
		<author>
			<persName><forename type="first">Yong</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent &amp; Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1781" to="1792" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A fuzzy C-means algorithm for optimizing data clustering</title>
		<author>
			<persName><forename type="first">Seyed</forename><forename type="middle">Emadedin</forename><surname>Hashemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fatemeh</forename><surname>Gholian-Jouybari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mostafa</forename><surname>Hajiaghaei-Keshteli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">227</biblScope>
			<biblScope unit="page">120377</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Unsupervised multiview fuzzy c-means clustering algorithm</title>
		<author>
			<persName><forename type="first">Ishtiaq</forename><surname>Hussain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kristina</forename><forename type="middle">P</forename><surname>Sinaga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Miin-Shen</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">4467</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Sparse possibilistic c-means clustering with Lasso</title>
		<author>
			<persName><forename type="first">Miin-Shen</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Josephine</forename><forename type="middle">B M</forename><surname>Benjamin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">138</biblScope>
			<biblScope unit="page">109348</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Online robust fuzzy clustering of data with omissions using similarity measure of special type</title>
		<author>
			<persName><forename type="first">Ye</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shafronenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mashtalir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computational Intelligence and Decision Making</title>
		<imprint>
			<biblScope unit="page" from="637" to="646" />
			<date type="published" when="2020">2020</date>
			<publisher>Springer</publisher>
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Theory and Applications of Multidimensional Scaling</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">W</forename><surname>Young</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Hamer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1994">1994</date>
			<publisher>Erlbaum</publisher>
			<pubPlace>Hillsdale, N.J.</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">DOIFCM: An Outlier Efficient IFCM</title>
		<author>
			<persName><forename type="first">Sonika</forename><surname>Dahiya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anjana</forename><surname>Gosain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Intelligence in Analytics and Information Systems</title>
		<imprint>
			<publisher>Apple Academic Press</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="135" to="149" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Cluster Analysis of Information in Complex Networks</title>
		<author>
			<persName><forename type="first">O</forename><surname>Kyrychenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ostapov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Malyk</surname></persName>
		</author>
		<idno type="DOI">10.47839/ijc.22.4.3360</idno>
		<ptr target="https://doi.org/10.47839/ijc.22.4.3360" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Computing</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="515" to="523" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Modified weight initialization in the self-organizing map using Nguyen-Widrow initialization algorithm</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Linan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Gerardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Medina</surname></persName>
		</author>
		<idno type="DOI">10.47839/ijc.19.1.1694</idno>
		<ptr target="https://doi.org/10.47839/ijc.19.1.1694" />
	</analytic>
	<monogr>
		<title level="j">Journal of Physics: Conference Series</title>
		<imprint>
			<biblScope unit="volume">1235</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2019">2019</date>
			<publisher>IOP Publishing</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Machine-learning methods in prognosis of ageing phenomena in nuclear power plant components</title>
		<author>
			<persName><forename type="first">Martti</forename><surname>Sirola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><forename type="middle">Einar</forename><surname>Hulsund</surname></persName>
		</author>
		<idno type="DOI">10.47839/ijc.20.1.2086</idno>
		<ptr target="https://doi.org/10.47839/ijc.20.1.2086" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Computing</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="11" to="21" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
