<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Processing measure uncertainty into fuzzy classifier</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Thomas</forename><surname>Monrousseau</surname></persName>
							<email>thomas.monrousseau@laas.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">CNRS</orgName>
								<orgName type="institution" key="instit2">LAAS</orgName>
								<address>
									<addrLine>7 avenue du colonel Roche</addrLine>
									<postCode>F-31400</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Louise</forename><surname>Travé-Massuyès</surname></persName>
							<email>louise@laas.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">CNRS</orgName>
								<orgName type="institution" key="instit2">LAAS</orgName>
								<address>
									<addrLine>7 avenue du colonel Roche</addrLine>
									<postCode>F-31400</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marie-Véronique</forename><surname>Le Lann</surname></persName>
							<email>mvlelann@laas.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">CNRS</orgName>
								<orgName type="institution" key="instit2">LAAS</orgName>
								<address>
									<addrLine>7 avenue du colonel Roche</addrLine>
									<postCode>F-31400</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution" key="instit1">Univ de Toulouse</orgName>
								<orgName type="institution" key="instit2">INSA</orgName>
								<orgName type="institution" key="instit3">LAAS</orgName>
								<address>
									<postCode>F-31400</postCode>
									<settlement>Toulouse</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Processing measure uncertainty into fuzzy classifier</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">863E49F43A025C02FDAFE2D8FB46B3F7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T15:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Machine learning techniques such as data-based classification are a useful diagnosis solution for monitoring complex systems when designing a model is a long and expensive process. When classification is used for process monitoring, the processed data are provided by sensors. In many situations, however, it is hard to obtain an exact measurement from these sensors: measurements are corrupted by noise that can be caused by the environment, improper use of the sensor, or even analog-to-digital conversion. In this paper we propose a framework based on a fuzzy logic classifier that models the uncertainty on the data by means of crisp (non-fuzzy) or fuzzy intervals. Our objective is to increase the rate of good classification results in the presence of noisy data. The classifier, named LAMDA (Learning Algorithm for Multivariate Data Analysis), can perform machine learning and clustering on different kinds of data such as numerical values, symbols or interval values.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Data classification is the process of dividing pattern space into a number of regions using hard, fuzzy or probabilistic partitions <ref type="bibr" target="#b0">[1]</ref>. Classification algorithms are increasingly used in a world where it is not always simple to obtain a model of a complex process, whereas it is easier to collect and store data on systems by monitoring them. Different types of classifiers can be used depending on the situation. The principal ones described in the literature are artificial neural networks, k-nearest neighbors, support vector machines, decision trees, fuzzy classifiers and statistical methods.</p><p>Most of the time, data come from sensor measurements and are corrupted by noise. This noise can have different origins, for example environmental disturbances, improper use of the sensor, hysteresis effects, or the numerical conversion and representation of the data. Many application domains have to deal with noise problems, such as medical diagnosis <ref type="bibr" target="#b1">[2]</ref>, biological identification <ref type="bibr" target="#b2">[3]</ref> or image recognition <ref type="bibr" target="#b3">[4]</ref>. Uncertainty can be understood in two ways: the first is the uncertainty directly present in the data, such as noise; the second can be assimilated to the reliability of a feature inside a class. In this paper we consider only the first case. To avoid noise problems in classification, several solutions have been proposed previously, for example the transformation of the data <ref type="bibr" target="#b4">[5]</ref> <ref type="bibr" target="#b5">[6]</ref> <ref type="bibr" target="#b6">[7]</ref>, the use of type-1 or type-2 fuzzy logic <ref type="bibr" target="#b2">[3]</ref>, or statistical models.</p><p>Fuzzy logic is a multi-valued logic framework introduced by Zadeh <ref type="bibr" target="#b7">[8]</ref> that is known to be more efficient than binary logic for representing uncertainty and imprecision. In previous work, a fuzzy classifier named Learning Algorithm for Multivariate Data Analysis (LAMDA) was proposed by Aguilar <ref type="bibr" target="#b8">[9]</ref>. This classifier can natively process two different types of data simultaneously: quantitative data and qualitative data. A real number carries an infinite amount of precision whereas human knowledge is finite and discrete; LAMDA is therefore interesting because no solution in the literature processes heterogeneous data in a uniform way, and handling quantitative and qualitative data in the same problem is often complex. A new type of data, the interval, was introduced by Hedjazi <ref type="bibr" target="#b9">[10]</ref> to model uncertainties by means of crisp intervals. In this paper we propose an extension to fuzzy intervals in order to improve the processing of noisy data measurements, while keeping the capacity to handle other feature types such as "clean" data or qualitative features. Moreover, the algorithm should remain low cost in terms of memory and computation time so that the method can be embedded on small systems.</p><p>In the first part of the paper the LAMDA algorithm is briefly presented; then a method to use the algorithm to classify noisy data is introduced. This method has two parts: the first presents a general solution to model data uncertainty with crisp intervals based on confidence intervals, and the second shows an improvement to model Gaussian noise with fuzzy intervals. In both cases, application examples are introduced to show the improvement of the method compared to using the data without transformation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">LAMDA algorithm (Learning Algorithm for Multivariate Data Analysis)</head><p>This section presents the principle of the LAMDA algorithm.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">General principle</head><p>LAMDA is a classification algorithm based on fuzzy logic, created from an original idea of Aguilar <ref type="bibr" target="#b8">[9]</ref>, that can achieve machine learning and clustering on large data sets. The algorithm takes as input a sample x made up of N features. The first step is to compute, for each feature of x, an adequacy degree to each class C j , j = 1..J, where J is the total number of classes. This is obtained by the use of a fuzzy adequacy function. Thus J vectors of N adequacy degrees are computed; these vectors are called Marginal Adequacy Degree (MAD) vectors. At this point, all the features are in a common space. The second step is to take all the MADs and aggregate them into one Global Adequacy Degree (GAD) by means of a fuzzy aggregation function. Thus the J MAD vectors (each composed of N MADs) become J scalar GADs; the higher the GAD, the better the adequacy to the class. The simplest way to assign the sample x to a class is to keep as result the class with the highest GAD.</p><p>The whole process is summarized in Fig. <ref type="figure">1</ref> (summarized scheme of the LAMDA algorithm).</p></div>
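The two-step MAD/GAD pipeline described above can be sketched in a few lines. This is an illustrative reconstruction, not the reference LAMDA implementation: the Gaussian membership and the prototype layout (one (mean, std) pair per feature and class) are assumptions made for the example.

```python
import math

def gaussian_mad(x, mean, std):
    """Marginal adequacy degree of feature value x to a class prototype (mean, std)."""
    return math.exp(-((x - mean) ** 2) / (2.0 * std ** 2))

def classify(sample, prototypes, aggregate):
    """Assign `sample` (list of N feature values) to the class with the highest GAD.

    `prototypes[j]` is a list of (mean, std) pairs, one per feature of class j.
    `aggregate` maps the N MADs of a class to one scalar GAD (e.g. min).
    Returns (index of best class, list of GADs).
    """
    gads = []
    for proto in prototypes:
        mads = [gaussian_mad(x, m, s) for x, (m, s) in zip(sample, proto)]
        gads.append(aggregate(mads))
    return max(range(len(gads)), key=gads.__getitem__), gads
```

For instance, a sample near the prototype of class 0 receives a GAD close to 1 for that class and a near-zero GAD for a distant class.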
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Fuzzy membership computation</head><p>During the learning step, the algorithm creates prototype data for each class and for each feature. These data are called class descriptors or prototypes; they can be, for example, means or variances. We define C j,n as the class prototype of the n-th feature for class j.</p><p>As previously mentioned, the first step of the algorithm is a comparison between the sample vector x and all the C j,n . This operation is performed with membership functions and gives as result a marginal adequacy degree. Thus M AD j,n is the MAD for the j-th class and the n-th feature. As the framework is based on fuzzy logic, all memberships are numbers in the [0,1] interval. The general membership function is:</p><formula xml:id="formula_0">M AD j,n = f (C j,n , x n ) (1)</formula><p>The class prototype C j,n depends on two things: the type of data and the function used. Some functions may require only one value in C j,n whereas others need a list of parameters.</p><p>In the following, some examples of membership functions are presented.</p><p>• Quantitative data: Many functions are available for this kind of data, for example the Gaussian:</p><formula xml:id="formula_1">f (x n ) = e −(x n − ρ j,n ) 2 / (2σ 2 j,n )<label>(2)</label></formula><p>or the binomial function:</p><formula xml:id="formula_2">f (x n ) = (ρ j,n ) x n · (1 − ρ j,n ) 1−x n (3)</formula><p>where x n is the n-th feature of the sample x, ρ j,n is the mean of the n-th feature for class j, and σ j,n is the standard deviation of the n-th feature for class j.</p><p>• Qualitative data:</p><p>Qualitative data can take values in a set of modalities. The membership function of qualitative data returns the frequency, observed during the learning phase, of the modality taken by the feature within the class. We introduce a qualitative variable with K modalities {Q 1 , ..., Q K } and the frequency Φ k j,n of modality Q k for class j. The membership is described by:</p><formula xml:id="formula_3">f (x n ) = (Φ 1 j,n ) q 1 · ... · (Φ K j,n ) q K (4) with q k = 1 if x n = Q k and q k = 0 otherwise</formula><p>• Intervals:</p><p>The membership function for interval data is a function that tests the similarity between two intervals.</p><p>In this case similarity is defined by two components: the distance between the intervals and the surface they have in common. Indeed, the class prototype for crisp interval data is a mean interval. The similarity function is:</p><formula xml:id="formula_4">S(A, B) = (1/2) ( ∫ V µ A∩B (ξ)dξ / ∫ V µ A∪B (ξ)dξ + 1 − ∂[A, B]/[V ] )<label>(5)</label></formula><p>where µ X (x) is the membership value of x in the fuzzy set X, ∂[A, B] is the distance between intervals A = [a − , a + ] and B = [b − , b + ], and [X] is the size of a fuzzy set in a universe V. This is described by:</p><formula xml:id="formula_5">[X] = ∫ V µ X (ξ)dξ<label>(6)</label></formula><p>In the case of crisp intervals in a universe between 0 and 1:</p><formula xml:id="formula_6">S(A, B) = (1/2) ( [A ∩ B]/[A ∪ B] + 1 − ∂[A, B] ) (7)</formula><p>where [X] can in this case be replaced by the length of the interval:</p><formula xml:id="formula_7">[X] = upperbound(X) − lowerbound(X)<label>(8)</label></formula><p>and the distance ∂[A, B] is defined as:</p><formula xml:id="formula_8">∂[A, B] = max[0, max(a − , b − ) − min(a + , b + )]<label>(9)</label></formula><p>When an interval feature is used, the prototype for class j is given by [ρ n− j , ρ n+ j ], where ρ n− j (respectively ρ n+ j ) represents the mean value of the lower bounds (respectively upper bounds) of all the elements belonging to class j for this feature. Once the MADs are computed, whatever the feature type, it is possible to perform any type of processing, as described in Fig. <ref type="figure">2</ref>, which illustrates the projection principle for heterogeneous feature types.</p></div>
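The crisp-interval similarity of Eqs. (7)-(9) can be written out directly for intervals represented as (lower, upper) pairs in a [0, 1] universe. This is a hedged sketch of those formulas, not code from the paper; the function names are ours.

```python
def interval_length(X):
    """Eq. (8): [X] as the length of the crisp interval X = (lower, upper)."""
    lo, hi = X
    return hi - lo

def interval_distance(A, B):
    """Eq. (9): gap between two crisp intervals, 0 when they overlap."""
    return max(0.0, max(A[0], B[0]) - min(A[1], B[1]))

def crisp_similarity(A, B):
    """Eq. (7), assuming a [0, 1] universe so the distance is already normalized."""
    inter = max(0.0, min(A[1], B[1]) - max(A[0], B[0]))
    union = interval_length(A) + interval_length(B) - inter
    overlap = inter / union if union > 0 else 1.0
    return 0.5 * (overlap + 1.0 - interval_distance(A, B))
```

Identical intervals give a similarity of 1, while distant disjoint intervals approach 0, as the two terms of Eq. (7) intend.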
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Marginal adequacy degree merging</head><p>Once all the features are projected into the membership space, the next step of the algorithm is to transform the MAD vectors into a set of single values that depict the global membership of the sample to each class. These values were introduced in section 2.1 and are called GADs. To perform this transformation, a fuzzy aggregation function Ψ is used.</p><p>The aggregation function is the following:</p><formula xml:id="formula_9">Ψ(M AD) = α.γ(M AD) + (1 − α).β(M AD) (<label>10</label></formula><formula xml:id="formula_10">)</formula><p>where γ is a fuzzy T-norm and β is a fuzzy T-conorm. The parameter α is called the exigency indicator: it gives more or less significance to the intersection operation relative to the union operation. Two pairs of fuzzy T-norm and T-conorm are currently implemented in the algorithm: min-max and probabilistic. For example, if min-max is used, (10) becomes:</p><formula xml:id="formula_11">Ψ(M AD) = α.min(M AD) + (1 − α).max(M AD) (11)</formula><p>When all the GADs are computed, they give the membership of the sample x to each class. The final result depends on the application, but the simplest way to produce a result is to assign the sample to the class with the highest GAD. A membership threshold can also be set: if no GAD exceeds the threshold, the sample is declared unclassifiable.</p><p>3 Uncertainty modeled with crisp intervals</p></div>
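Eqs. (10)-(11) and the threshold-based assignment rule can be sketched as follows. The default values (α = 0.8, min/max) mirror the experimental settings reported later in the paper; everything else is an illustrative assumption.

```python
def gad(mads, alpha=0.8, tnorm=min, tconorm=max):
    """Eq. (10): mixed aggregation of one class's MAD vector into a scalar GAD.

    With tnorm=min and tconorm=max this reduces to Eq. (11)."""
    return alpha * tnorm(mads) + (1.0 - alpha) * tconorm(mads)

def assign(gads, threshold=None):
    """Pick the class with the highest GAD; return None (unclassifiable)
    if no GAD reaches the optional membership threshold."""
    best = max(range(len(gads)), key=gads.__getitem__)
    if threshold is not None and gads[best] < threshold:
        return None
    return best
```

For a MAD vector [0.2, 0.8], Eq. (11) with α = 0.8 gives 0.8·0.2 + 0.2·0.8 = 0.32.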
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Method presentation</head><p>Every data measurement is performed with noise. In some cases the noise is strong enough to increase the classification error. The point is therefore to model the imprecision of the data in order to decrease the number of bad classifications.</p><p>A technique used in several fields of application is the use of intervals to represent data uncertainty <ref type="bibr" target="#b10">[11]</ref>  <ref type="bibr" target="#b11">[12]</ref>. We therefore suggest a framework in which numerical data are transformed into intervals to model imprecision.</p><p>In a situation where the probability law followed by the noise on a variable is unknown, it may still be possible to obtain a confidence interval: an interval in which the real value of the measure lies with a certain level of confidence (for example, a 95% confidence interval is an interval that contains the exact value of the measure with a probability of 95%). Introducing x, the measured value, and l, the length of a zero-centered confidence interval based on the measurement error, the interval used by the algorithm is computed as X = [x − l/2 ; x + l/2]. The main aim of the transformation is to improve the classification in the transition zones, where the data are really sensitive to noise and a small change can modify the output of the classifier. The use of intervals to model uncertainty is effective only if the "clean" data are relevant for the classification problem. If that is not the case, a better solution is to remove the irrelevant feature, which will in most cases provide better results. In other words, if the "clean" data are already difficult to classify, confidence intervals will not improve the situation.</p></div>
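The transformation X = [x − l/2 ; x + l/2] described above is a one-liner; this small helper just makes the convention explicit (l is the full width of the zero-centered confidence interval, obtained for instance from sensor calibration data).

```python
def to_confidence_interval(x, l):
    """Turn a measured value x into the crisp interval [x - l/2, x + l/2],
    where l is the width of a zero-centered confidence interval
    for the measurement error."""
    return (x - l / 2.0, x + l / 2.0)
```

For a measurement x = 1.0 with a confidence-interval width l = 0.4, the classifier would receive the interval [0.8, 1.2], centered on the measured value.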
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Experiments</head><p>A data set has been created for an application test; it can be interpreted as the time evolution of sensor readings of a continuous process. This data set is composed of three quantitative (numerical) features of 101 samples, shown in Fig. <ref type="figure" target="#fig_0">3</ref>. Three classes are specified and used as targets for the classifier. These classes are chosen arbitrarily to represent different behaviors of a system, which could be healthy or failure modes. Nevertheless, the classes are built to make all the data relevant for system monitoring, which means the three features do not have a globally negative impact on the classification results.</p><p>The three features x, y and z are defined by the following time functions:</p><formula xml:id="formula_12">• x = e^(−t²) • y = (1/2) · e^(t/4) − 1 • z = tanh(t − 5)</formula><p>The experiment has been performed under these conditions: the α parameter of ( <ref type="formula" target="#formula_9">10</ref>) is set to 0.8 with the [min, max] functions to compute the fuzzy aggregation, and the membership function used for quantitative data is the binomial. The [min, max] aggregation is chosen because experiments on the algorithm showed that this kind of aggregation provides better results on noisy data than the probabilistic one. A first classification without any noise gives a result of 91% of good classifications. Then the experiment is repeated a great many times to avoid statistical mistakes: in this case, the experiment has been run fifty thousand times, and the noisy samples are recomputed at each new run. Results are given in Table <ref type="table" target="#tab_0">1</ref>. As can be seen, this method improves the results in the first two cases, where noise deteriorates the classification with the quantitative method but the data are still globally consistent. In these cases, the interval method gives better results than the binomial method 82% of the time. But when the noise amplitude is much higher than the data, as in the [−2; +2] error interval, the interval method generally does worse than the binomial function.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Fuzzy interval method presentation</head><p>Most of the time, noise on a physical measure follows a Gaussian distribution centered on the real value, so it is interesting to model this specific kind of uncertainty. Nevertheless, it is difficult to handle fuzzy intervals with an exact Gaussian shape. That is why we suggest approximating the Gaussian by a triangular fuzzy interval. This interval is described by a lower boundary x − and an upper boundary x + : X = [x − ; x + ], which leads to a description similar to that of crisp intervals. So:</p><formula xml:id="formula_13">µ X (x − ) = 0 and µ X (x + ) = 0 and µ X ( (x + + x − )/2 ) = 1</formula><p>with µ X (x) the membership value of x in the fuzzy set X. As a Gaussian of mean ρ is centered on the true measure value, the point of maximum membership of the triangle, (x + + x − )/2, is equal to ρ. To compute x − and x + we propose to use the full width at half maximum (FWHM), which can be calculated as:</p><formula xml:id="formula_14">F W HM = 2 √(2 ln(2)) · σ<label>(12)</label></formula><p>where σ is the standard deviation of the measure. Thus, for a Gaussian with mean ρ and standard deviation σ, the approximating interval X is defined by</p><formula xml:id="formula_15">X = [ρ − 2 √(2 ln(2)) · σ; ρ + 2 √(2 ln(2)) · σ].</formula><p>An example of this approximation is given in Fig. <ref type="figure" target="#fig_2">5</ref>.</p><p>Until now, all implementations of the LAMDA algorithm used only crisp intervals, although the general method had been introduced. The class prototype is now a triangular interval computed with the means of the upper and lower boundaries of the data used to train the algorithm. The membership function is thus still a similarity measure between two fuzzy intervals, as in (5), but it is necessary to redefine the distance function between the intervals. A solution has been proposed to measure the distance between the centers of gravity of triangular fuzzy intervals <ref type="bibr" target="#b12">[13]</ref>. In the present situation:</p><formula xml:id="formula_16">∂[A, B] = | (a + + a − )/2 − (b + + b − )/2 | (<label>13</label></formula><formula xml:id="formula_17">)</formula><p>with A = [a − ; a + ] and B = [b − ; b + ], A and B being triangular fuzzy intervals as described in this section.</p><p>The intersection A ∩ B needed in ( <ref type="formula" target="#formula_4">5</ref>) is calculated with an analytical solution based on geometry and trigonometry. This avoids numerical integration, which could be less precise and longer to compute.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Experiments</head><p>As previously with the crisp method, a test is performed with Gaussian noise on the same data set (Fig. <ref type="figure" target="#fig_0">3</ref>) and under the same conditions as in the previous section. The difference lies in the construction of the noisy data: the added noise Y is now a random variable that follows a normal distribution centered on 0 with standard deviation σ. Results of the simulation are given in Table <ref type="table" target="#tab_1">2</ref>. Similarly to the previous test, the interval method increases the rate of good classifications until the standard deviation σ becomes too high and the binomial function provides better results. This point is reached here for σ = 0.7, which corresponds to a signal-to-noise ratio (SNR) of 6 dB for the signal with the smallest amplitude. It is also important to note that in all cases the fuzzy interval method provides better results than the crisp interval method.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Experiments on iris dataset</head><p>As a second example we use the classical iris dataset <ref type="bibr" target="#b13">[14]</ref>. This dataset contains four features: sepal length, sepal width, petal length and petal width, all in cm. All these features are measured for three types of flower, iris Setosa, iris Versicolour and iris Virginica, which constitute three classes. It is easy to classify the iris dataset without any error by using only the petal information, which is in general more relevant than the sepal information. Thus only the sepal sizes are kept in this test to simulate the noise. Figure <ref type="figure" target="#fig_3">6</ref> shows the distribution of the data in the 2D space of the sepal features.</p><p>We assume that the data follow a normal distribution centered on a mean µ j,n and with a standard deviation σ j,n . This hypothesis can be verified with a statistical test. The Kolmogorov-Smirnov test has been applied to each class with a 5% significance level; it shows that the hypothesis holds for iris Setosa and iris Versicolour but not for iris Virginica. Nevertheless, all the data are processed as if they followed a normal distribution. The classifications are performed using the cross-validation method. The percentages of well-classified data for the two methods are:</p><p>• using the binomial function (scalar): 81.3%</p><p>• using fuzzy triangular intervals: 94.0%</p><p>Once again, the classification rate is increased by the use of the fuzzy interval method instead of the binomial one.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>We presented in this article two methods to model uncertainty for classification applications. An example showed that these methods can improve classification results even when the signal-to-noise ratio is high. The second method, based on fuzzy intervals, demonstrated that trying to model the probability law of the noise more precisely can provide better results than using confidence intervals modeled by crisp intervals. However, this way of modeling uncertainty reveals its limits when the SNR reaches a low level. An important future work is to bound the classification error of the interval method at the level of the numerical method.</p><p>These methods will now be tested on data coming from a real industrial process.</p><p>Another way to manage uncertainty in classifiers like LAMDA could be to use type-2 fuzzy functions <ref type="bibr" target="#b14">[15]</ref>. This is an extension of classical fuzzy logic in which the membership functions output a fuzzy interval, which can be used to model the variance of the data.</p><p>To provide a better solution to manage uncertainty in the LAMDA classifier, it could be useful to extend the problem to qualitative features. It is often difficult to determine whether a qualitative element is close to another; for example, the color "orange" is closer to "red" than to "blue". On small training datasets, considering this kind of information can improve final classification results. This could be done by using similarity matrices, which are already used in some artificial intelligence problems.</p><p>The LAMDA algorithm can work with a feature selection algorithm named MEMBAS (Membership Margin Based Feature Selection) <ref type="bibr" target="#b15">[16]</ref>. This algorithm uses the LAMDA class definitions and membership functions to provide an analytical solution for feature selection. A future work will be to measure the impact of the interval representation on the MEMBAS algorithm in order to perform selection on noisy data.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Data used to test the intervals method. This example is used to measure the improvement in the classification results in the case where all data are noisy. Artificial noise is added as follows: x is the ideal variable without noise, and the noisy variable is x + Y, where Y is the added noise.</figDesc><graphic coords="3,311.62,463.50,237.40,207.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: An example of data corrupted with a noise in the interval [-0.5 ; 0.5]</figDesc><graphic coords="4,52.96,49.28,237.41,207.68" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Example of approximation of a Gaussian fuzzy interval by a triangular fuzzy interval 4 Modeling Gaussian noise with fuzzy intervals 4.1 Fuzzy interval method presentation</figDesc><graphic coords="4,311.62,49.28,237.41,189.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Representation of iris data by class</figDesc><graphic coords="5,311.62,49.28,237.40,158.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Table of results for the crisp intervals method</figDesc><table><row><cell>Interval for random data</cell><cell>[-0.3 ; 0.3]</cell><cell>[-0.5 ; 0.5]</cell><cell>[-2 ; 2]</cell></row><row><cell>Mean success percentage with binomial function</cell><cell>89.9%</cell><cell>84.7%</cell><cell>79.6%</cell></row><row><cell>Mean success percentage with interval function</cell><cell>91.9%</cell><cell>89.8%</cell><cell>70.3%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Table of results for the fuzzy intervals method</figDesc><table><row><cell>σ</cell><cell>0.2</cell><cell>0.5</cell><cell>0.7</cell><cell>1</cell></row><row><cell>Mean success percentage with binomial function</cell><cell>83.2%</cell><cell>79.8%</cell><cell>79.8%</cell><cell>79.6%</cell></row><row><cell>Mean success percentage with crisp interval function</cell><cell>86.8%</cell><cell>82.5%</cell><cell>77.2%</cell><cell>71.3%</cell></row><row><cell>Mean success percentage with fuzzy interval function</cell><cell>93.1%</cell><cell>84.5%</cell><cell>79.3%</cell><cell>74.8%</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">Proceedings of the 26 th International Workshop on Principles of Diagnosis</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A review of probabilistic, fuzzy, and neural models for pattern recognition</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Bezdek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent and Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Gene selection in cancer classification using pso/svm and ga/svm hybrid algorithms</title>
		<author>
			<persName><forename type="first">E</forename><surname>Alba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Garcia-Nieto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jourdan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Talbi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Evolutionary Computation, CEC 2007. IEEE Congress on</title>
				<imprint>
			<date type="published" when="2007-09">Sept. 2007</date>
			<biblScope unit="page" from="284" to="290" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Using fuzzy intervals to represent measurement error and scientific uncertainty in endangered species classification</title>
		<author>
			<persName><forename type="first">Scott</forename><surname>Ferson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">Resit</forename><surname>Akçakaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amy</forename><surname>Dunham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">NAFIPS. 18th International Conference of the North American on</title>
				<imprint>
			<date type="published" when="1999-07">1999. Jul 1999</date>
			<biblScope unit="page" from="690" to="694" />
		</imprint>
	</monogr>
	<note>Fuzzy Information Processing Society</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Power svm: Generalization with exemplar classification uncertainty</title>
		<author>
			<persName><forename type="first">Weiyu</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stella</forename><forename type="middle">X</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shang-Hua</forename><surname>Teng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</title>
		<imprint>
			<date type="published" when="2012-06">June 2012</date>
			<biblScope unit="page" from="2144" to="2151" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Classification of coronary artery disease stress ECGs using uncertainty modeling</title>
		<author>
			<persName><forename type="first">Samer</forename><surname>Arafat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mary</forename><surname>Dohrmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marjorie</forename><surname>Skubic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Intelligence Methods and Applications</title>
				<imprint>
			<publisher>ICSC Congress</publisher>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Uncertainty estimation using fuzzy measures for multiclass classification</title>
		<author>
			<persName><forename type="first">E</forename><surname>Graves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Romesh</forename><surname>Nagarajah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="128" to="140" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Fuzzy c-means clustering based uncertainty measure for sample weighting boosts pattern classification efficiency</title>
		<author>
			<persName><forename type="first">Prabha</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">D S</forename><surname>Yadava</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2012 2nd National Conference on Computational Intelligence and Signal Processing (CISP)</title>
				<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="31" to="35" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Fuzzy sets</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Zadeh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information and Control</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="338" to="353" />
			<date type="published" when="1965-06">June 1965</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Controlling selectivity in nonstandard pattern recognition algorithms</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">P</forename><surname>Carrete</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Aguilar-Martin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="71" to="82" />
			<date type="published" when="1991-02">Jan/Feb 1991</date>
			<publisher>IEEE</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Towards a unified principle for reasoning about heterogeneous data: a fuzzy logic framework</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hedjazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Aguilar-Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Le Lann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kempowsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="281" to="302" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kuipers</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1994">1994</date>
			<publisher>The MIT Press</publisher>
			<pubPlace>Cambridge, Massachusetts</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Some analyses of interval data</title>
		<author>
			<persName><forename type="first">Lynne</forename><surname>Billard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computing and Information Technology (CIT)</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="225" to="233" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Similarity of generalized fuzzy numbers with graded mean integration representation</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Hsieh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighth International Fuzzy Systems Association World Congress</title>
				<meeting>the Eighth International Fuzzy Systems Association World Congress<address><addrLine>Taipei, Taiwan, Republic of China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="551" to="555" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">UCI Machine Learning Repository</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Fisher</surname></persName>
		</author>
		<ptr target="http://archive.ics.uci.edu/ml" />
		<imprint>
			<date type="published" when="1936">1936</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Interval type-2 fuzzy logic systems made simple</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Mendel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">I</forename><surname>John</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="808" to="821" />
			<date type="published" when="2006-12">Dec. 2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Similarity-margin based feature selection for symbolic interval data</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hedjazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Aguilar-Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Le Lann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="578" to="585" />
			<date type="published" when="2012-03">March 2012</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
