<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Online Stacking Credibilistic Fuzzy Clustering for Data Stream Mining</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yevgeniy</forename><surname>Bodyanskiy</surname></persName>
							<email>yevgeniy.bodyanskiy@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alina</forename><surname>Shafronenko</surname></persName>
							<email>alina.shafronenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Diana</forename><surname>Rudenko</surname></persName>
							<email>diana.rudenko@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksii</forename><surname>Tanianskyi</surname></persName>
							<email>oleksii.tanianskyi@nure.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kharkiv National University of Radio Electronics</orgName>
								<address>
									<addrLine>Nauky ave 14</addrLine>
									<postCode>61166</postCode>
									<settlement>Kharkiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Online Stacking Credibilistic Fuzzy Clustering for Data Stream Mining</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">AB6733B5693BF07201EF5686D048D9B2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:22+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Stack learning, data compression, fuzzy clustering, credibilistic fuzzy clustering, data stream mining</term>
					<term>0000-0001-5418-2143 (Ye. Bodyanskiy)</term>
					<term>0000-0002-8040-0279 (A. Shafronenko)</term>
					<term>0000-0002-1792-5080 (D. Rudenko)</term>
					<term>0009-0005-3491-4470 (O. Tanianskyi)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>An important problem that arises when processing large amounts of observations is data compression, which highlights the most essential information and identifies latent factors that implicitly determine the nature of the phenomenon being studied. One of the most effective approaches to this problem is the apparatus of factor analysis, which has found wide application in the processing of empirical data in various fields. Although fuzzy clustering is a popular approach to soft data partitioning, its use encounters difficulties when processing high-dimensional real data with complex hidden distributions. This paper proposes a stacking fuzzy clustering method in which the data are represented in a new feature space created by a stacking neural network. This approach aims to overcome the challenges associated with processing complex data and can bring significant improvements in clustering quality.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Clustering is a technique in machine learning and data analysis that groups a set of data points into subsets, or clusters, based on the similarity between them. Fuzzy clustering is a variation of traditional clustering methods that allows for more flexible and nuanced assignments of data points to clusters <ref type="bibr" target="#b0">[1]</ref><ref type="bibr">[2]</ref><ref type="bibr" target="#b1">[3]</ref><ref type="bibr" target="#b2">[4]</ref><ref type="bibr" target="#b3">[5]</ref>. In contrast to hard clustering, which assigns each point to exactly one cluster, fuzzy clustering allows data points to belong to multiple clusters simultaneously with varying degrees of membership. This reflects the inherent uncertainty or ambiguity present in real-world data.</p><p>The Fuzzy C-Means (FCM) algorithm, introduced by Dunn in 1973 and generalized by James Bezdek, is a prominent method in fuzzy clustering <ref type="bibr" target="#b4">[6]</ref>. FCM assigns membership degrees to data points, indicating the likelihood of each point belonging to different clusters. This flexibility makes fuzzy clustering particularly useful in scenarios where data points exhibit overlapping characteristics or uncertainty in their categorization.</p><p>Over the years, fuzzy clustering has found applications in diverse fields, including pattern recognition, image processing, and Data Mining. Researchers have developed various extensions and enhancements of the original FCM algorithm, addressing specific challenges and improving its adaptability to different data patterns.</p><p>The validity of fuzzy clustering solutions became a key focus, leading to the introduction of indices that assess the quality of clustering results. 
These indices help researchers and practitioners evaluate how well fuzzy clustering algorithms capture meaningful patterns within datasets.</p><p>The evolution of fuzzy clustering has seen ongoing advances, with researchers exploring more sophisticated membership functions and integrating fuzzy clustering with other machine learning techniques. This integration has expanded the capabilities of fuzzy clustering, making it applicable to complex problems in large-scale data analysis.</p><p>The era of big data has significantly influenced the field of clustering, including both traditional clustering methods and the development of fuzzy clustering techniques. Deep learning on big data represents a powerful combination that has transformed various fields by enabling more sophisticated analysis, pattern recognition, and decision-making capabilities.</p><p>Recently, there has been significant research into leveraging deep learning to uncover meaningful data representations through neural networks. A notable direction is the integration of unsupervised clustering algorithms with stacking neural networks. This synergy has become a vibrant field of research, aiming to jointly optimize the performance of deep learning models and clustering algorithms.</p><p>The goal of this work is to propose a stacking neuro-fuzzy system for Data Stream Mining that uses the credibilistic approach and is designed to work both in a batch mode and in a recurrent online version.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Neural Network Data Compression</head><p>An important problem that arises when processing large amounts of observations is data compression, which highlights the most essential information and identifies latent factors that implicitly determine the nature of the phenomenon being studied. One of the most effective approaches to this problem is the apparatus of factor analysis <ref type="bibr" target="#b5">[7]</ref>, which has found wide application in the processing of empirical data in various fields: psychology, sociology, technology, economics, medicine, criminology, etc.</p><p>The basic idea of factor analysis, which allows for the presence of a priori unknown hidden factors, leads to the following informal task: observing a large number of measured parameters (indicators), identify a small number of factor parameters that mainly determine the behavior of the measured ones; in other words, knowing the values of a large number of measured functions of the parameters, find the corresponding values of the factor arguments common to all these functions and restore the form of the functions themselves.</p><p>The initial information for factor analysis is the (n × N) observation matrix</p><formula xml:id="formula_1">X(N) = \left(x(1), x(2), \ldots, x(N)\right), \quad x(k) = \left(x_1(k), x_2(k), \ldots, x_n(k)\right)^T <label>(1)</label></formula><p>formed by an array of N n-dimensional vectors x(k), and the (n × n) autocorrelation matrix</p><formula xml:id="formula_5">R(N) = \frac{1}{N}\sum_{k=1}^{N}\left(x(k)-\bar{x}(N)\right)\left(x(k)-\bar{x}(N)\right)^T = \frac{1}{N}\sum_{k=1}^{N}\tilde{x}(k)\,\tilde{x}^T(k), (2)</formula><p>where</p><formula xml:id="formula_6">\bar{x}(N) = \frac{1}{N}\sum_{k=1}^{N}x(k), \quad \tilde{x}(k) = x(k)-\bar{x}(N) <label>(3)</label></formula><p>are the vectors of measured indicators centered relative to the average of the data array. One of the most common and effective methods for finding factors is the principal component method (component analysis), which is widely used in problems of data compression, pattern recognition, coding, image processing, spectral analysis, etc., and is known in pattern recognition theory as the Karhunen-Loeve transform.</p><p>The task of component analysis is to project the data vectors from the original n-dimensional space into an m-dimensional (m &lt; n) space of principal components, and it reduces to finding the solutions of the matrix equation</p><formula xml:id="formula_7">\left(R(N) - \lambda_j I_n\right) w_j = 0 \quad \text{such that} \quad \lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_m \ge \varepsilon \ge 0, \quad \left\|w_j\right\| = 1.</formula><p>The dimension m of the space of principal components is determined, as a rule, from empirical considerations and from the required degree of compression of the data array.</p><p>Thus, in algebraic terms, solving the factor problem is closely related to the eigenvalue problem and to finding the rank of the correlation matrix; in a geometric sense, it is the problem of moving to a lower-dimensional space with minimal loss of information; in a statistical sense, it is the problem of finding a set of orthonormal vectors in the input space that "capture" the maximum possible variation of the data; and, finally, in an algorithmic sense, it is the problem of sequentially determining the eigenvectors w_1, w_2, ..., w_m by optimizing a set of local criteria that form the global objective function</p><formula xml:id="formula_8">E = \sum_{j=1}^{m} E_j^k = \sum_{j=1}^{m}\frac{1}{k}\sum_{p=1}^{k}\left(w_j^T \tilde{x}(p)\right)^2 \quad \text{with constraints} \quad w_j^T w_l = 0 \;(j \ne l), \quad w_j^T w_j = 1.</formula><p>The first principal component w_1 can be found by maximizing the criterion</p><formula xml:id="formula_9">E_1^k = \frac{1}{k}\sum_{p=1}^{k}\left(w_1^T \tilde{x}(p)\right)^2 <label>(4)</label></formula><p>by solving a nonlinear programming problem with the help of undetermined Lagrange multipliers.</p><p>However, if the data must be processed in real time, neural network technologies come to the fore, among which E. Oja's neuron and its self-learning rule should be noted. It is Oja's rule in the form</p><formula xml:id="formula_10">w_1(k+1) = w_1(k) + \eta(k)\, y_1(k)\left(\tilde{x}(k) - y_1(k)\, w_1(k)\right), \quad y_1(k) = w_1^T(k)\,\tilde{x}(k), \quad w_1(0) \ne 0 <label>(5)</label></formula><p>that allows the first principal component to be isolated.</p><p>Next, following the procedure of standard principal component analysis, the projection onto the first principal component is subtracted from each vector x̃(k), k = 1, 2, ..., N, and the first principal component of the differences is computed; it is the second principal component of the original data and is orthonormal to the first. The third principal component is computed by projecting each original vector x̃(k) onto the first two components, subtracting this projection from x̃(k), and finding the first principal component of the differences, which is the third principal component of the original data array. The remaining principal components are computed recursively according to the described strategy.</p><p>It is this idea of recursive computation of principal components that forms the basis of the algorithm proposed by T. Sanger <ref type="bibr" target="#b6">[8]</ref>, which in a modified form <ref type="bibr" target="#b7">[9]</ref> can be written as</p><formula xml:id="formula_12">\begin{cases} w_j(k+1) = w_j(k) + \eta(k)\, y_j(k)\left(\tilde{x}_j(k) - y_j(k)\, w_j(k)\right), \\ y_j(k) = w_j^T(k)\,\tilde{x}_j(k), \quad w_j(0) \ne 0, \\ \tilde{x}_1(k) = \tilde{x}(k), \quad \tilde{x}_{j+1}(k) = \tilde{x}_j(k) - y_j(k)\, w_j(k), \quad 1 \le j \le m. \end{cases} <label>(6)</label></formula><p>It is easy to see that the first principal component is computed by the Oja algorithm; then the projections of the input vectors onto w_1(k) are subtracted from the inputs and the differences are processed by the next neuron, and so on. Fig. <ref type="figure" target="#fig_1">1</ref> shows a diagram of the modified artificial neural network of T. Sanger, composed of E. Oja's neurons and implementing algorithm (6).</p><p>The first layer of the network is formed by encoder neurons that pre-process the signals by centering and normalizing them. The signals x̃_1(k), x̃_2(k), ..., x̃_n(k) are then processed in the second (hidden) layer formed by E. Oja's neurons, after which they are sent to the output layer formed by elements with rectifier activation functions with a dead zone</p><formula xml:id="formula_14">\psi(u) = \begin{cases} u, &amp; \text{if } u \ge \theta, \\ 0, &amp; \text{otherwise}, \end{cases} <label>(7)</label></formula><p>which makes it possible to select the informative signals y_j(k) and to filter out the noise.</p><p>The Sanger neural network is an effective means of compressing information with minimal loss of accuracy, but its capabilities are limited by the fact that, essentially implementing the standard technique of factor analysis, it solves a linear problem, while the main advantage of neural network technologies is the ability to work in essentially nonlinear situations.</p><p>The problem of nonlinear factor analysis can be effectively solved using credibility theory and cluster analysis.</p></div>
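To make the deflation idea concrete, here is a minimal Python/NumPy sketch of Oja's rule (5) and the recursive scheme of algorithm (6); the function names, learning rate, and epoch count are illustrative assumptions, not part of the original algorithm specification.

```python
import numpy as np

def oja_first_component(X, eta=0.001, epochs=30, seed=0):
    """Estimate the first principal component of the (already centered)
    data X (N x n) with Oja's self-learning rule (5)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)                  # w(0) != 0
    for _ in range(epochs):
        for x in X:
            y = w @ x                       # neuron output y(k) = w^T(k) x(k)
            w += eta * y * (x - y * w)      # Oja's self-normalizing Hebbian step
    return w / np.linalg.norm(w)

def sanger_components(X, m, **kw):
    """Recursive deflation as in algorithm (6): extract w_1, subtract the
    projection onto it from every vector, and repeat on the differences."""
    Xc = X - X.mean(axis=0)                 # centering, as in (3)
    W = []
    for _ in range(m):
        w = oja_first_component(Xc, **kw)
        W.append(w)
        Xc = Xc - np.outer(Xc @ w, w)       # deflation: remove projection on w
    return np.array(W)
```

The deflation loop mirrors the text: each neuron runs Oja's rule on the residuals left by the previous ones, so the components come out one at a time in decreasing order of explained variance.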
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Fuzzy credibilistic clustering</head><p>As an alternative to probabilistic and possibilistic procedures <ref type="bibr" target="#b8">[10]</ref>, the credibilistic fuzzy clustering approach was introduced; it is based on credibility theory <ref type="bibr">[11]</ref> and is largely devoid of the drawbacks of the known methods.</p><p>The most common approach within the framework of probabilistic fuzzy clustering is associated with minimizing the goal function <ref type="bibr" target="#b9">[12]</ref><ref type="bibr" target="#b10">[13]</ref><ref type="bibr" target="#b11">[14]</ref></p><formula xml:id="formula_16">E\left(u_q(k), c_q\right) = \sum_{k=1}^{N}\sum_{q=1}^{m} u_q^{\beta}(k)\, d^2\left(x(k), c_q\right) <label>(8)</label></formula><p>with constraints</p><formula xml:id="formula_17">\sum_{q=1}^{m} u_q(k) = 1, \quad 0 &lt; \sum_{k=1}^{N} u_q(k) &lt; N. (9)</formula><p>Solving this nonlinear programming problem by the method of undetermined Lagrange multipliers leads to the well-known result <ref type="bibr" target="#b7">[9,</ref><ref type="bibr">11,</ref><ref type="bibr" target="#b12">15]</ref>:</p><formula xml:id="formula_18">u_q(k) = \frac{\left(d^2(x(k), c_q)\right)^{\frac{1}{1-\beta}}}{\sum_{l=1}^{m}\left(d^2(x(k), c_l)\right)^{\frac{1}{1-\beta}}}, \quad c_q = \frac{\sum_{k=1}^{N} u_q^{\beta}(k)\, x(k)}{\sum_{k=1}^{N} u_q^{\beta}(k)}, (10)</formula><p>coinciding for β = 2 with the popular Fuzzy C-Means (FCM) method of J. Bezdek <ref type="bibr" target="#b4">[6]</ref>. If the data are fed for processing sequentially, the solution of the nonlinear programming problem (<ref type="formula" target="#formula_16">8</ref>), (<ref type="formula">9</ref>) using the Arrow-Hurwicz-Uzawa algorithm leads to the online procedure</p><formula xml:id="formula_20">u_q(k+1) = \frac{\left(d^2(x(k+1), c_q(k))\right)^{\frac{1}{1-\beta}}}{\sum_{l=1}^{m}\left(d^2(x(k+1), c_l(k))\right)^{\frac{1}{1-\beta}}}, \quad c_q(k+1) = c_q(k) + \eta(k+1)\, u_q^{\beta}(k+1)\left(x(k+1) - c_q(k)\right). <label>(11)</label></formula><p>The goal function of credibilistic fuzzy clustering has a form <ref type="bibr" target="#b4">[6,</ref><ref type="bibr">11]</ref> close to (8):</p><formula xml:id="formula_22">E\left(Cred_q(k), c_q\right) = \sum_{k=1}^{N}\sum_{q=1}^{m} Cred_q^{\beta}(k)\, d^2\left(x(k), c_q\right) <label>(12)</label></formula><p>with constraints "softer" than (<ref type="formula">9</ref>):</p><formula xml:id="formula_23">\begin{cases} 0 \le Cred_q(k) \le 1 \;\text{ for all } q \text{ and } k, \\ \sup_q Cred_q(k) \ge 0.5 \;\text{ for all } k, \\ Cred_l(k) + \sup_{q \ne l} Cred_q(k) = 1 \;\text{ for any } k \text{ and } l \text{ for which } Cred_l(k) \ge 0.5. \end{cases} (13)</formula><p>It should be noted that the goal functions (<ref type="formula" target="#formula_16">8</ref>) and (<ref type="formula" target="#formula_22">12</ref>) are similar, and that (13) contains no rigid probabilistic constraint on the sum of memberships such as that in (9).</p><p>The procedures of credibilistic clustering also use the concept of fuzzy membership, which is computed with a neighborhood function of the form</p><formula xml:id="formula_24">u_q(k) = \varphi_q\left(d\left(x(k), c_q\right)\right), <label>(14)</label></formula><p>monotonically decreasing on the interval [0, ∞) so that φ_q(0) = 1, φ_q(∞) → 0. Such a function is essentially an empirical similarity measure <ref type="bibr" target="#b10">[13,</ref><ref type="bibr" target="#b12">15,</ref><ref type="bibr" target="#b13">16]</ref> related to the distance by the relation</p><formula xml:id="formula_27">u_q(k) = \frac{1}{1 + d^2\left(x(k), c_q\right)}. <label>(15)</label></formula><p>Note also that it was shown earlier in <ref type="bibr" target="#b11">[14]</ref> that for β = 2 the first relation in (10) can be rewritten as</p><formula xml:id="formula_28">u_q(k) = \left(1 + \frac{d^2\left(x(k), c_q\right)}{\sigma_q^2}\right)^{-1}, <label>(16)</label></formula><p>where</p><formula xml:id="formula_29">\sigma_q^2 = \left(\sum_{l=1,\, l \ne q}^{m} d^{-2}\left(x(k), c_l\right)\right)^{-1}, (17)</formula><p>which is a generalization of the function (15) (for σ_q² = 1, (16) coincides with (15)) and satisfies all the conditions of (14).</p><p>In batch form, the algorithm of credibilistic fuzzy clustering in the accepted notation can be written as</p><formula xml:id="formula_30">\begin{cases} u_q(k) = \left(1 + d^2\left(x(k), c_q\right)\right)^{-1}, \\ u_q^*(k) = u_q(k) \big/ \sup_l u_l(k), \\ Cred_q(k) = \frac{1}{2}\left(u_q^*(k) + 1 - \sup_{l \ne q} u_l^*(k)\right), \\ c_q = \sum_{k=1}^{N} Cred_q^{\beta}(k)\, x(k) \Big/ \sum_{k=1}^{N} Cred_q^{\beta}(k), \end{cases} (18)</formula><p>and, taking (<ref type="formula" target="#formula_28">16</ref>), (<ref type="formula">17</ref>) into account, in the online mode as</p><formula xml:id="formula_31">\begin{cases} \sigma_q^2(k+1) = \left(\sum_{l=1,\, l \ne q}^{m} d^{-2}\left(x(k+1), c_l(k)\right)\right)^{-1}, \\ u_q(k+1) = \left(1 + \frac{d^2\left(x(k+1), c_q(k)\right)}{\sigma_q^2(k+1)}\right)^{-1}, \\ u_q^*(k+1) = u_q(k+1) \big/ \sup_l u_l(k+1), \\ Cred_q(k+1) = \frac{1}{2}\left(u_q^*(k+1) + 1 - \sup_{l \ne q} u_l^*(k+1)\right), \\ c_q(k+1) = c_q(k) + \eta(k+1)\, Cred_q^{\beta}(k+1)\left(x(k+1) - c_q(k)\right). \end{cases} <label>(19)</label></formula><p>From the point of view of computational implementation, algorithm (19) is no more complicated than procedure (11) and, in the general case, generalizes it to the credibilistic approach to fuzzy clustering.</p></div>
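A minimal NumPy sketch of a single recurrent update of the online credibilistic procedure (19) may help fix the ideas; the step size, fuzzifier value, and function name are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def credibilistic_online_step(x, C, eta=0.05, beta=2.0, eps=1e-12):
    """One online credibilistic fuzzy clustering update in the spirit of (19).
    x: new observation (n,); C: current prototypes (m, n)."""
    d2 = ((C - x) ** 2).sum(axis=1) + eps        # squared distances to prototypes
    inv = 1.0 / d2
    sigma2 = 1.0 / (inv.sum() - inv)             # sigma_q^2 = (sum_{l != q} d^-2)^-1, cf. (17)
    u = 1.0 / (1.0 + d2 / sigma2)                # membership, cf. (16)
    u_star = u / u.max()                         # normalized membership
    m = len(C)
    cred = np.empty(m)
    for q in range(m):
        others = np.delete(u_star, q)
        cred[q] = 0.5 * (u_star[q] + 1.0 - others.max())   # credibility degree
    C_new = C + eta * (cred ** beta)[:, None] * (x - C)    # prototype update
    return cred, C_new
```

Note that the credibility values produced this way satisfy the "soft" constraints (13): each lies in [0, 1] and the largest one is never below 0.5, so the winning prototype always receives a non-negligible correction.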
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental research</head><p>Conducting experimental studies and a comparative analysis of the quality of data clustering using various metrics allows one to objectively assess the effectiveness of the developed method in comparison with its analogues. To estimate the quality of the method, we used the following cluster validity criteria <ref type="bibr" target="#b1">[3,</ref><ref type="bibr" target="#b4">6]</ref>.</p><p>Separation Index (S): in contrast to the partition index (SC), the separation index uses minimum-distance separation for partition validity.</p><p>Xie and Beni's Index (XB): quantifies the ratio of the total variation within clusters to the separation of the clusters. The optimal number of clusters should minimize the value of the index.</p><p>Dunn's Index (DI): originally proposed for the identification of "compact and well separated clusters"; to compute it, the result of the clustering has to be recalculated as if it were a hard partition.</p><p>The specific information on the data sets is shown in Table <ref type="table" target="#tab_2">1</ref>.</p></div>
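Three of the validity criteria used in the comparison, the Partition Coefficient (PC), Classification Entropy (CE), and Xie-Beni index (XB), have simple closed-form definitions and can be sketched as follows; this is a hedged illustration of the standard textbook formulas, and the function name and array shapes are assumptions.

```python
import numpy as np

def validity_indices(X, U, C):
    """Compute PC, CE and XB for a fuzzy partition.
    X: data (N, n); U: membership matrix (m, N); C: prototypes (m, n)."""
    m, N = U.shape
    pc = (U ** 2).sum() / N                       # PC: closer to 1 = crisper partition
    ce = -(U * np.log(U + 1e-12)).sum() / N       # CE: lower = less fuzzy partition
    # squared distances from every prototype to every point, shape (m, N)
    d2 = ((X[None, :, :] - C[:, None, :]) ** 2).sum(axis=2)
    compactness = (U ** 2 * d2).sum()             # within-cluster variation
    sep = min(((C[i] - C[j]) ** 2).sum()          # minimal inter-prototype distance
              for i in range(m) for j in range(m) if i != j)
    xb = compactness / (N * sep)                  # XB: lower = better separated
    return pc, ce, xb
```

For a nearly crisp partition of well-separated groups, PC approaches 1 while CE and XB approach 0, which is the direction of "better" for each of these indices.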
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Discussions</head><p>Analysis of the acquired results shows that, irrespective of the volume of the initial data, processing with the proposed method exhibits comparable speed and clustering quality when contrasted with established clustering algorithms and methodologies.</p><p>The obtained results confirm that the performance of the stacking neuro-fuzzy system is better than that of other network structures, and that it can be a viable structure for Data Stream Mining.</p><p>The accuracy results presented in Table <ref type="table">4</ref> confirm that the proposed online stacking fuzzy credibilistic clustering method for Data Stream Mining retains its superiority regardless of the number of observations fed to the clustering process.</p><p>Based on the experimental findings, it is advisable to endorse the proposed system for practical application in addressing the challenges associated with automatic clustering of large datasets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>The problem of fuzzy clustering of data streams by a stacking neuro-fuzzy system is considered. The paper proposed a stacking neuro-fuzzy system for Data Stream Mining that uses the credibilistic approach and is designed to work both in a batch mode and in a recurrent online version.</p><p>The experiments show that stack structures based on fuzzy models are applicable to data clustering. The proposed stacking neuro-fuzzy system is quite simple in numerical implementation and can use the well-known online fuzzy clustering methods intended for solving Data Stream Mining problems.</p><p>Future research could further explore the potential of stacking neuro-fuzzy systems for fuzzy clustering of data streams, aiming to address the complexities inherent in automatic clustering of big data.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: T. Sanger neural network</figDesc><graphic coords="4,85.25,220.99,424.08,544.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>: − Partition Coefficient (PC); − Classification Entropy (CE); − Partition Index (SC); − Separation Index (S); − Xie and Beni's Index (XB); − Dunn's Index (DI). Partition Coefficient (PC): measures the amount of "overlapping" between clusters. Classification Entropy (CE): measures only the fuzziness of the cluster partition, and is similar to the Partition Coefficient. Partition Index (SC): the ratio of the sum of compactness and separation of the clusters; a sum of individual cluster validity measures normalized through division by the fuzzy cardinality of each cluster. SC is useful when comparing different partitions having an equal number of clusters; a lower value of SC indicates a better partition.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1 Information of the data sets</head><label>1</label><figDesc></figDesc><table><row><cell>Data set</cell><cell cols="2">Observations</cell><cell></cell><cell></cell><cell></cell><cell cols="2">Clusters</cell></row><row><cell>Parkinson's telemonitoring</cell><cell></cell><cell>5875</cell><cell></cell><cell></cell><cell></cell><cell>21</cell><cell></cell></row><row><cell>Superconductor temperature prediction</cell><cell></cell><cell>10000</cell><cell></cell><cell></cell><cell></cell><cell>81</cell><cell></cell></row><row><cell>Table 2</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="3">Results of experiments for Parkinson's telemonitoring data set</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Methods</cell><cell></cell><cell cols="3">PC CE SC</cell><cell>S</cell><cell cols="3">XB DI PC</cell></row><row><cell cols="2">Online Stack Fuzzy Credibilistic Clustering for Data Stream Mining</cell><cell>9.1249е-07</cell><cell>-2.3308e-04</cell><cell>0.3733</cell><cell>48.7067</cell><cell>24.4028</cell><cell>0.2005</cell><cell>0.582e-13</cell></row><row><cell>SOM based on possibilistic fuzzy clustering</cell><cell></cell><cell>0.3808</cell><cell>0.21415</cell><cell>0.00715</cell><cell>0.7473e-04</cell><cell>1.92845</cell><cell>0.01375</cell><cell>0.3954</cell></row><row><cell>SOM based on probabilistic fuzzy clustering</cell><cell></cell><cell>0.4731</cell><cell>0.05725</cell><cell>0.2395</cell><cell>0.0032</cell><cell>1.7309</cell><cell>0.1699</cell><cell>0.27535</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 Results of experiments for Superconductor data set</head><label>3</label><figDesc></figDesc><table><row><cell>Methods</cell><cell cols="2">CE SC</cell><cell>S</cell><cell cols="4">XB DI PC CE</cell></row><row><cell>Online Stack Fuzzy Credibilistic Clustering for Data Stream Mining</cell><cell>-0.00022</cell><cell>369360</cell><cell>271 150 000</cell><cell>135 900 000</cell><cell>0.0109</cell><cell>4.56245е-07</cell><cell>-0.000228</cell></row><row><cell>SOM based on possibilistic fuzzy clustering</cell><cell>0.1903</cell><cell>0.000366</cell><cell>0.0000034</cell><cell>2.8555</cell><cell>0.00585</cell><cell>0.38085</cell><cell>0.1903</cell></row><row><cell>SOM based on probabilistic fuzzy clustering</cell><cell>0.31965</cell><cell>4.29665</cell><cell>0.02415</cell><cell>0.5375</cell><cell>0.05075</cell><cell>0.4731</cell><cell>0.31965</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The work is supported by the state budget scientific research project of Kharkiv National University of Radio Electronics "Deep hybrid systems of computational intelligence for data stream mining and their fast learning" (state registration number 0119U001403).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">On Genetic-Fuzzy Data-Mining Techniques</title>
		<author>
			<persName><surname>Gas ; Hongfu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</title>
				<meeting>the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2016">2023. 2016</date>
		</imprint>
	</monogr>
	<note>Infinite ensemble for image clustering</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A fuzzy C-means clustering algorithm based on spatial context model for image segmentation</title>
		<author>
			<persName><forename type="first">Jindong</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="816" to="832" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images</title>
		<author>
			<persName><forename type="first">Maoguo</forename><surname>Gong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="98" to="109" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Dissimilarity measure between intuitionistic Fuzzy sets and its applications in pattern recognition and clustering analysis</title>
		<author>
			<persName><forename type="first">V</forename><surname>Rani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Mathematics, Statistics and Informatics</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="61" to="77" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">FCM: The fuzzy c-means clustering algorithm</title>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">C</forename><surname>Bezdek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><surname>Ehrlich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">William</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers &amp; geosciences</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">2-3</biblScope>
			<biblScope unit="page" from="191" to="203" />
			<date type="published" when="1984">1984</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Factor analysis: An applied approach</title>
		<author>
			<persName><forename type="first">Edward</forename><forename type="middle">E</forename><surname>Cureton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralph</forename><forename type="middle">B</forename><surname>D'agostino</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Psychology press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Optimal unsupervised learning in a single-layer linear feedforward neural network</title>
		<author>
			<persName><forename type="first">Terence</forename><forename type="middle">D</forename><surname>Sanger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="459" to="473" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A general regression neural network</title>
		<author>
			<persName><forename type="first">Donald</forename><forename type="middle">F</forename><surname>Specht</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="568" to="576" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Fuzzy cluster analysis: methods for classification, data analysis and image recognition</title>
		<author>
			<persName><forename type="first">Frank</forename><surname>Höppner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="545" to="564" />
			<date type="published" when="2015">2015</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
	<note>Credibilistic clustering: the model and algorithms</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Possibilistic c-means clustering based on the nearest-neighbour isolation similarity</title>
		<author>
			<persName><forename type="first">Yong</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent &amp; Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1781" to="1792" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Fuzzy clustering of incomplete data by means of similarity measures</title>
		<author>
			<persName><forename type="first">Zhengbing</forename><surname>Hu</surname></persName>
		</author>
		<idno type="DOI">10.1109/UKRCON.2019.8879844</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 2nd Ukraine Conference on Electrical and Computer Engineering (UKRCON)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Online credibilistic fuzzy clustering of data using membership functions of special type</title>
		<author>
			<persName><forename type="first">Alina</forename><surname>Shafronenko</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">CMIS</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Online robust fuzzy clustering of data with omissions using similarity measure of special type</title>
		<author>
			<persName><forename type="first">Yevgeniy</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alina</forename><surname>Shafronenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sergii</forename><surname>Mashtalir</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Lecture Notes in Computational Intelligence and Decision Making: Proceedings of the XV International Scientific Conference &quot;Intellectual Systems of Decision Making and Problems of Computational Intelligence&quot;</title>
		<meeting><date type="conference">May 21-25, 2019</date><address><addrLine>Ukraine</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>(ISDMCI&apos;2019)</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Online fuzzy clustering of incomplete data using credibilistic approach and similarity measure of special type</title>
		<author>
			<persName><forename type="first">Ye</forename><forename type="middle">V</forename><surname>Bodyanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Yu</forename><surname>Shafronenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">N</forename><surname>Klymova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Radio Electronics, Computer Science, Control</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="97" to="104" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
