<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Processing measure uncertainty into fuzzy classifier</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Thomas Monrousseau</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Louise Travé-Massuyès</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marie-Véronique Le Lann</string-name>
        </contrib>
        <aff>avenue du colonel Roche, Toulouse, France; Univ de Toulouse, Toulouse, France. E-mails: thomas.monrousseau@laas.fr, louise@laas.fr, mvlelann@laas.fr</aff>
      </contrib-group>
      <fpage>269</fpage>
      <lpage>274</lpage>
      <abstract>
        <p>Machine learning, such as data-based classification, is a diagnosis solution useful for monitoring complex systems when designing a model is a long and expensive process. When used for process monitoring, the processed data are available thanks to sensors. But in many situations it is hard to get an exact measure from these sensors. Indeed, measurement is done with a lot of noise that can be caused by the environment, a bad use of the sensor or even the conversion from an analog to a digital measure. In this paper we propose a framework based on a fuzzy logic classifier to model the uncertainty on the data by the use of crisp (non-fuzzy) or fuzzy intervals. Our objective is to increase the number of good classification results in the presence of noisy data. The classifier is named LAMDA (Learning Algorithm for Multivariate Data Analysis) and can perform machine learning and clustering on different kinds of data, such as numerical values, symbols or interval values.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Data classification is the process of dividing pattern space
using hard, fuzzy or probabilistic partitions into a number
of regions [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Classification algorithms are increasingly
used nowadays, in a world where it is not always simple to
obtain a model of a complex process. Conversely, it is easier
to collect data on systems by monitoring them and storing the results.
Different types of classifiers can be used depending on the
situation. The principal ones described in the literature are
artificial neural networks, k-nearest neighbors, support
vector machines, decision trees, fuzzy classifiers and statistical
methods.
      </p>
      <p>
        Most of the time, data are issued from sensor
measurements and are corrupted by noise. This noise can have
different origins, for example environmental disturbances, bad
use of the sensor, hysteresis effects or the numerical conversion
and representation of the data. Many domains of
application have to deal with noise problems, such as medical
diagnosis [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], biological identification [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] or image recognition [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Uncertainty can be understood in two ways: the first is the
uncertainty directly present in the data, such as noise, and the
second can be understood as the reliability of a feature
inside a class. In this paper we consider only the first case. To
avoid noise problems in classification some solutions have
been proposed previously, for example the transformation of
data [5] [6] [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the use of type-1 or type-2 fuzzy logic [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
or statistical models.
      </p>
      <p>
        Fuzzy logic is a multi-valued logic framework
introduced by Zadeh [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] that is known to be more efficient than binary
logic for representing uncertainty and imprecision. In previous work, a fuzzy classifier named Learning
Algorithm for Multivariate Data Analysis (LAMDA)
was proposed by Aguilar [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This classifier can
originally process two different types
of data simultaneously: quantitative data and qualitative data. A real
number contains an infinite amount of precision whereas
human knowledge is finite and discrete; handling quantitative
and qualitative data in the same problem is therefore often
complex, and LAMDA is interesting because no other solution
proposed in the literature processes heterogeneous data in a
uniform way. A new type
of data, the interval, was introduced by Hedjazi [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
to model uncertainties by means of crisp intervals. In this
paper we propose an extension to fuzzy intervals in order to
improve its application to noisy data measurements,
while keeping the capacity to handle other feature types such as
“clean” data or qualitative features. Moreover the algorithm
should stay low cost in terms of memory and computation
time, so that the method can be embedded on small systems.
      </p>
      <p>In the first part of the paper the LAMDA algorithm is
briefly presented; then a method to use the
algorithm to classify noisy data is introduced. This method
has two parts: the first presents a general solution to model
uncertainty on data with crisp intervals based on confidence
intervals, and the second shows an improvement to model
Gaussian noise with fuzzy intervals. In both cases application
examples are introduced to show the improvement of
the method compared to the use of the data without
transformation.</p>
    </sec>
    <sec id="sec-2">
      <title>LAMDA algorithm (Learning Algorithm for Multivariate Data Analysis)</title>
      <p>This section presents the principle of the LAMDA
algorithm.</p>
      <sec id="sec-2-1">
        <title>General principle</title>
        <p>
          LAMDA is a classification algorithm based on fuzzy logic,
created from an original idea of Aguilar [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], that can achieve
machine learning and clustering on large data sets.
        </p>
        <p>The algorithm takes as input a sample x made up of N
features. The first step is to compute, for each feature of x, an
adequacy degree to each class Cj, j = 1..J, where J is the
total number of classes. This is obtained by the use of a fuzzy
adequacy function. Thus J vectors of N adequacy degrees
are computed; these vectors are called Marginal Adequacy
Degree (MAD) vectors. At this point, all the features are in
a common space. The second step is to take all the
MADs and aggregate them into one Global Adequacy Degree
(GAD) by means of a fuzzy aggregation function. Thus the
J MAD vectors (composed of N MADs) become J scalar
GADs; the higher the GAD, the better the adequacy to the
class. The simplest way to assign the sample x to a class is
to keep as result the class with the highest GAD.</p>
        <p>All the process is summarized in Fig. 1.</p>
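        <p>As a rough illustration, the MAD/GAD pipeline above can be sketched as follows. This is only a minimal sketch, not the authors' implementation: the class prototypes are hypothetical, the membership function is the Gaussian one presented in section 2.2, and the aggregation uses the min-max form with α = 0.8.</p>

```python
import math

def marginal_adequacy(x_n, rho, sigma):
    """Gaussian membership of one feature to one class (eq. (2))."""
    return math.exp(-((x_n - rho) ** 2) / (2 * sigma ** 2))

def global_adequacy(mads, alpha=0.8):
    """Min-max aggregation of a MAD vector into a scalar GAD (eq. (11))."""
    return alpha * min(mads) + (1 - alpha) * max(mads)

def classify(x, prototypes, alpha=0.8):
    """Assign x to the class with the highest GAD."""
    gads = {}
    for cls, protos in prototypes.items():
        mads = [marginal_adequacy(x_n, rho, sigma)
                for x_n, (rho, sigma) in zip(x, protos)]
        gads[cls] = global_adequacy(mads, alpha)
    return max(gads, key=gads.get), gads

# Hypothetical prototypes: (mean, standard deviation) per feature and class
prototypes = {"C1": [(0.2, 0.1), (0.8, 0.1)],
              "C2": [(0.7, 0.1), (0.3, 0.1)]}
label, gads = classify([0.25, 0.75], prototypes)  # label is "C1"
```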
      </sec>
      <sec id="sec-2-2">
        <title>Fuzzy membership computation</title>
        <p>During the learning step, the algorithm creates prototype
data for each class and for each feature. These data are
called class descriptors or prototypes; they can be, for
example, means or variances. We define as Cj,n the class
prototype of the n-th feature for the class j.</p>
        <p>
          As previously mentioned, the first step of the algorithm
is a comparison between the sample vector x and all the
Cj,n. This operation is performed with membership
functions and gives as result a marginal adequacy degree.
Thus MADj,n is the MAD for the j-th class and the
n-th feature. As the framework is based on fuzzy logic, all
memberships are numbers in the [0, 1] interval. The general
membership function is:
        </p>
        <p>MADj,n = f(Cj,n, xn)   (1)</p>
        <p>The class prototype Cj,n depends on two things: the type
of data and the function used. Some functions may require
only one data into Cj,n whereas others need a list of
parameters.</p>
        <p>In the following section, some examples of membership
functions are presented.</p>
        <p>• Quantitative data:</p>
        <p>Many functions are available for this kind of data, for
example the Gaussian function:
f(xn) = exp(−(xn − ρj,n)² / (2σj,n²))   (2)
or the binomial function:
f(xn) = (ρj,n)^xn · (1 − ρj,n)^(1−xn)   (3)</p>
        <p>where xn is the n-th feature of the sample x, ρj,n is
the mean of the n-th feature for the class j and σj,n is
the standard deviation of the n-th feature for the class j.</p>
        <p>• Qualitative data:</p>
        <p>Qualitative data can take values in a set of modalities. The
membership function for qualitative data returns the
frequency of the modality taken by the feature inside the class
during the learning phase. We introduce a qualitative
variable with K modalities {Q1, ..., QK} and the
frequency Φj,k,n of the modality Qk for the class j. The
membership is described by:
f(xn) = (Φj,1,n)^q1 · ... · (Φj,K,n)^qK   (4)
with qk = 1 if xn = Qk and qk = 0 otherwise.</p>
        <p>• Intervals:</p>
        <p>The membership function for interval data is a function
which tests the similarity between two fuzzy intervals.
In this case similarity is defined by two components:
the distance between the intervals and the surface that
these intervals have in common. The class
prototype for crisp interval data is a mean interval. The
similarity function is:</p>
        <p>S(A, B) = (1/2) · ( ∫V μA∩B(ξ)dξ / ∫V μA∪B(ξ)dξ + 1 − ∂[A, B]/ϖ[V] )   (5)
where μX(x) is the value of x in the fuzzy set X,
∂[A, B] is the distance between intervals A = [a−, a+]
and B = [b−, b+], and ϖ[X] is the size of a fuzzy set
in a universe V. This size is described by:</p>
        <p>ϖ[X] = ∫V μX(ξ)dξ   (6)</p>
        <p>In the case of crisp intervals and in a universe between
0 and 1:
S(A, B) = (1/2) · ( ϖ[A ∩ B]/ϖ[A ∪ B] + 1 − ∂[A, B] )   (7)
where ϖ[X] in this case can be replaced by the length
of the interval:
ϖ[X] = upperbound(X) − lowerbound(X)   (8)
and the distance ∂[A, B] is defined as:
∂[A, B] = max[0, max(a−, b−) − min(a+, b+)]   (9)</p>
        <p>In the case where an interval feature is used, the
prototype for a class j is given by [ρj,n−, ρj,n+], where ρj,n−
(respectively ρj,n+) represents the mean value of the lower
bounds (respectively upper bounds) of all the elements
belonging to class j for this feature.</p>
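        <p>The crisp-interval case (7)-(9) is simple enough to sketch directly; the snippet below assumes a universe normalized to [0, 1], as in the text, and represents an interval as a (lower, upper) pair.</p>

```python
def length(lo, hi):
    """Size of a crisp interval, eq. (8); empty intersections get length 0."""
    return max(0.0, hi - lo)

def distance(a, b):
    """Gap between two crisp intervals, eq. (9)."""
    return max(0.0, max(a[0], b[0]) - min(a[1], b[1]))

def similarity(a, b):
    """Crisp-interval similarity S(A, B), eq. (7)."""
    inter = length(max(a[0], b[0]), min(a[1], b[1]))
    union = length(*a) + length(*b) - inter
    ratio = inter / union if union > 0 else 1.0
    return 0.5 * (ratio + 1.0 - distance(a, b))

s_same = similarity((0.2, 0.4), (0.2, 0.4))   # identical intervals: 1.0
s_far = similarity((0.1, 0.2), (0.6, 0.9))    # disjoint, distant: lower score
```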
        <p>Once the MADs are computed, whatever the feature
type, it is possible to perform any type of processing,
as described in Fig. 2.
Once all the features are grouped in the membership space,
the next step of the algorithm is to transform the MAD
vectors into a set of single values which depict the global
membership of the sample to a class. These values were
introduced in section 2.1 and are called GADs. To perform this
transformation a fuzzy aggregation function Ψ is used.</p>
        <p>The aggregation function is the following:
Ψ(MAD) = α·γ(MAD) + (1 − α)·β(MAD)   (10)
where γ is a fuzzy T-norm and β is a fuzzy T-conorm.
The α parameter is called the exigency indicator. It enables
giving more or less significance to the union operation and the
intersection operation. Two fuzzy T-norm/T-conorm pairs are
currently implemented in the algorithm: the min-max and
the probabilistic. For example, if min-max is used, (10)
becomes:
Ψ(MAD) = α·min(MAD) + (1 − α)·max(MAD)   (11)
When all GADs are computed they give the membership
of the data x to each class. The final result depends on the
application, but the simplest way to give a result is to assign
the sample to the class which has the highest GAD. A membership
threshold can also be fixed: if no GAD is higher than the
threshold, the sample is defined as unclassifiable.</p>
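        <p>Both T-norm/T-conorm pairs mentioned above fit the same template (10); a minimal sketch:</p>

```python
from functools import reduce

def aggregate(mads, alpha=0.8, mode="minmax"):
    """Fuzzy aggregation (10): alpha weights the T-norm against the T-conorm."""
    if mode == "minmax":
        t_norm, t_conorm = min(mads), max(mads)            # eq. (11)
    else:  # probabilistic pair
        t_norm = reduce(lambda a, b: a * b, mads)
        t_conorm = reduce(lambda a, b: a + b - a * b, mads)
    return alpha * t_norm + (1 - alpha) * t_conorm

mads = [0.9, 0.7, 0.8]
gad = aggregate(mads)          # 0.8 * 0.7 + 0.2 * 0.9, i.e. about 0.74
```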
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Uncertainty modeled with crisp intervals</title>
      <sec id="sec-3-1">
        <title>Method presentation</title>
        <p>Every data measurement is performed with noise. In some
cases the noise is severe enough to increase the
classification error. Thus the point is to model the imprecision of
the data to decrease the number of bad classifications.</p>
        <p>
          A technique used in several fields of application is the
use of intervals to symbolize data uncertainty [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. We therefore suggest a framework where numerical data are
transformed into intervals to model imprecision.
        </p>
        <p>In a situation where the probability law followed by the
noise on a variable is unknown, it may be possible to
obtain a confidence interval. It is an interval in which the
real value of the measure is present with a certain amount
of confidence (for example, a 95% confidence interval is
an interval in which the exact value of the measure can be
found with a probability of 95%). Introducing x̂, the
measured value, and l, the length of a confidence interval
centered on zero based on the measurement error, the interval
used by the algorithm is calculated as X = [x̂ − l/2 ; x̂ + l/2].</p>
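        <p>The transform itself is a one-liner; a sketch, with an arbitrary example reading:</p>

```python
def to_interval(x_hat, l):
    """Turn a measured value into the crisp interval X = [x_hat - l/2, x_hat + l/2]."""
    return (x_hat - l / 2.0, x_hat + l / 2.0)

# e.g. a reading of 0.63 whose 95% confidence interval has length 0.1
lo, hi = to_interval(0.63, 0.1)   # approximately (0.58, 0.68)
```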
        <p>The main aim of the transformation is to improve the
classification in the transition zones, where data is really
sensitive to noise and a small change can modify the output of the
classifier. The use of intervals to model uncertainty is
effective only if the “clean” data is relevant for the classification
problem. If that is not the case, a better solution is to remove
the irrelevant feature, which will in most cases provide better
output results. This expresses the fact that if the “clean”
data is difficult to classify, it is not improved by using
confidence intervals.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Experiments</title>
        <p>A set of data has been created for an application test, which
can be interpreted as the time evolution of sensors on a continuous
process. This set of data is composed of three quantitative
(numerical) features of 101 samples that are shown in
Fig. 3. Three classes are specified and used as targets for
the classifier. These classes are chosen arbitrarily to
represent different behaviors of a system that could be healthy
or failure modes. Nevertheless the classes are built to make
all the data relevant for system monitoring, which means
the three features do not have a global negative impact on
the classification results.</p>
        <p>The three features x, y and z are defined by the following
time functions:</p>
        <p>• x = e^(−t/2)
• y = (1/2)·e^(t/4) − 1
• z = tanh(t − 5)</p>
        <p>This example is used to measure the improvement in the
classification results in the case where all the data are noisy.
Artificial noise is added as follows: x is the ideal variable
without noise and x̂ the noisy variable, with x̂ = x + Y where
Y is a random variable following a uniform distribution on an
interval I.</p>
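        <p>The set-up above is easy to reproduce. The sketch below uses the feature definitions given earlier and a uniform noise interval I = [−0.5, 0.5]; the sampling of t over the 101 points is an assumption, since the paper does not state the time grid.</p>

```python
import math
import random

random.seed(0)
T = [i * 0.1 for i in range(101)]   # assumed time grid, 101 samples

def features(t):
    """The three ideal (noise-free) features of the test data set."""
    return (math.exp(-t / 2.0),
            0.5 * math.exp(t / 4.0) - 1.0,
            math.tanh(t - 5.0))

# x_hat = x + Y with Y uniform on I = [-0.5, 0.5]
noisy = [tuple(v + random.uniform(-0.5, 0.5) for v in features(t)) for t in T]
```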
        <p>The experiment has been performed under these
conditions: the α parameter of (10) is set at 0.8 with the [min, max]
functions to compute the fuzzy aggregation, and the
membership function used for quantitative data is the
binomial. The [min, max] aggregation is chosen because experiments
on the algorithm showed that this kind of aggregation
provides better results on noisy data than the probabilistic one.
A first classification without any noise gives a result of 91%
good classification. Then the experiment is repeated a
great many times to avoid statistical artifacts. In this case,
the experiment has been run fifty thousand times, and x̂ is
recomputed at each new run. Results are given in Table 1.</p>
        <p>Table 1 reports, for each interval used to draw the random
noise, the mean success percentage obtained with the binomial
function and with the interval function.</p>
        <p>As can be seen, this method provides an improvement
in the first two cases, where noise deteriorates
the classification with the quantitative method but the
data is still globally consistent. In these cases, the interval
method gives better results than the binomial method 82% of
the time. But when the noise amplitude is much higher than the
data, as in the [−2; +2] error interval, the interval method
generally does worse than the binomial function.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Fuzzy interval method presentation</title>
        <p>Most of the time, noise on a physical measure follows a
Gaussian distribution centered on the real value. Thus it is
interesting to model this specific kind of uncertainty.
Nevertheless, it is difficult to handle fuzzy intervals with an exact
Gaussian shape. That is why we suggest approximating the
Gaussian with a triangular fuzzy interval. This interval is
described with a lower boundary x− and an upper boundary
x+: X = [x−; x+], which leads to a description similar to
crisp intervals. So:
μX(x−) = 0, μX(x+) = 0 and μX((x+ + x−)/2) = 1
with μX(x) the fuzzy value of x in the fuzzy set X. As
a Gaussian of mean ρ is centered on the true measure value,
the abscissa of the maximum fuzzy value of the triangle, (x+ + x−)/2, is equal to
ρ. To compute x− and x+ we propose to use the full width
at half maximum (FWHM), which can be calculated this way:</p>
        <p>FWHM = 2·√(2·ln(2)) · σ   (12)
with σ the standard deviation of the measure.
Thus for a Gaussian function that has a mean value ρ and a
standard deviation σ, the approximated interval X is defined
by X = [ρ − 2·√(2·ln(2))·σ; ρ + 2·√(2·ln(2))·σ]. An example
of this approximation is given in Fig. 5.</p>
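        <p>A sketch of the triangular approximation, using the bounds stated above:</p>

```python
import math

def gaussian_to_triangle(rho, sigma):
    """Bounds of the triangular fuzzy interval approximating N(rho, sigma)."""
    half_width = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma  # FWHM, eq. (12)
    return (rho - half_width, rho + half_width)  # peak (membership 1) at rho

lo, hi = gaussian_to_triangle(0.0, 1.0)
# 2 * sqrt(2 ln 2) is about 2.355, so (lo, hi) is about (-2.355, 2.355)
```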
        <p>
          Until now all the implementations of the LAMDA
algorithm were using only crisp intervals despite the fact that
the general method was introduced. The class prototype is
now a triangle interval computed with the means of upper
and lower boundaries of the data used to train the algorithm.
Thus the membership function is still a similarity measure
between two fuzzy intervals like in (5) but it is necessary to
redefine the distance function between the intervals. A
solution has been proposed to measure a distance with the center
of gravity of triangular fuzzy intervals [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. In the present
situation:
∂[A, B] = |(a+ + a−)/2 − (b+ + b−)/2|   (13)
with A = [a−; a+] and B = [b−; b+], A and B being
triangular fuzzy intervals as described in this section.
        </p>
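        <p>The center-of-gravity distance (13) in code form (for a symmetric triangle the centroid abscissa is simply the midpoint of the support):</p>

```python
def centroid_distance(a, b):
    """Distance (13) between triangular fuzzy intervals A = [a-, a+], B = [b-, b+]."""
    return abs((a[0] + a[1]) / 2.0 - (b[0] + b[1]) / 2.0)

d = centroid_distance((0.0, 1.0), (0.5, 1.5))   # centers at 0.5 and 1.0, so d = 0.5
```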
        <p>The intersection A ∩ B needed in (5) is calculated with an
analytical solution based on geometry and trigonometry. This
avoids numerical integration, which could be less precise and
slower to compute.</p>
        <p>As we did previously with the crisp method, a test is
performed with Gaussian noise on the same data set (Fig. 3),
under the same conditions as in the previous
section. The difference lies in the construction of the noisy
data x̂ = x + Y: Y is now a random variable following a
normal distribution centered on 0 with standard deviation σ.
Results of the simulation are given in Table 2.</p>
        <p>Table 2. Mean success percentage for each method:
σ = 0.2: binomial 83.2%, crisp interval 86.8%, fuzzy interval 93.1%
σ = 0.5: binomial 79.8%, crisp interval 82.5%, fuzzy interval 84.5%
σ = 0.7: binomial 79.8%, crisp interval 77.2%, fuzzy interval 79.3%
σ = 1:   binomial 79.6%, crisp interval 71.3%, fuzzy interval 74.8%</p>
        <p>Similarly to the previous test, the interval method
increases the rate of good classifications until the standard
deviation σ becomes too high and the binomial function
provides better results. This point is reached here for σ = 0.7,
which corresponds to a signal-to-noise ratio (SNR) of 6 dB
for the signal with the smallest amplitude. It is also
important to note that in all cases the fuzzy interval method provides
better results than the crisp interval method.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Experiments on iris dataset</title>
        <p>
          As a second example we use the classical iris dataset [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
This dataset contains four features: sepal length in cm,
sepal width in cm, petal length in cm and petal width
in cm. All these features are measured for three types of
flower: iris Setosa, iris Versicolour and iris Virginica, which
constitute three classes. It is easy to classify the iris dataset
without any error by using only the petal information,
which is in general more relevant than the sepal information. Thus
only the sepal sizes are kept in this test to simulate
noise. Figure 6 shows the repartition of the data in the
2D space of the sepal features.
        </p>
        <p>We assume that the data follow a normal distribution
centered on a mean μj,n and with a standard deviation σj,n.
This hypothesis can be verified by using a statistical test.
The Kolmogorov-Smirnov test has been used for each class
with a 5% significance level; it shows that the hypothesis
holds for the iris Setosa and the iris Versicolour but not for
the iris Virginica. Nevertheless all the data are processed as
if they followed a normal distribution.</p>
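        <p>For reference, the one-sample Kolmogorov-Smirnov test is simple enough to sketch with the standard library. The data below are synthetic stand-ins, not the actual iris measurements, and the 1.36/√n critical value is the usual large-sample approximation at the 5% level.</p>

```python
import math
import random

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, mu, sigma):
    """Largest gap between the empirical CDF and the hypothesized normal CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - normal_cdf(x, mu, sigma)),
                   abs(i / n - normal_cdf(x, mu, sigma)))
               for i, x in enumerate(xs))

random.seed(1)
sample = [random.gauss(5.0, 0.4) for _ in range(50)]   # synthetic "sepal lengths"
d = ks_statistic(sample, 5.0, 0.4)
reject = d > 1.36 / math.sqrt(len(sample))   # True would reject normality at 5%
```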
        <p>The classifications are performed using the
cross-validation method. The percentages of well-classified data
for the two methods are:
• using the binomial function (scalar): 81.3%
• using fuzzy triangular intervals: 94.0%</p>
        <p>Once again the classification rate is increased by the use
of the fuzzy interval method instead of the binomial one.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>We presented in this article two methods to model
uncertainty for classification applications. An example showed
that these methods can improve classification results even
when the signal-to-noise ratio is high. The second method,
based on fuzzy intervals, demonstrated that trying to model
the probability law of the noise more precisely can
provide better results than using confidence intervals modelled
by crisp intervals. However this process to model
uncertainty reveals its limits when the SNR reaches a low level. An
important future work is to bound the classification error of
the interval method at the level of the numerical method.</p>
      <p>These methods will now be tested on data coming
from a real industrial process.</p>
      <p>
        Another way to manage uncertainty on classifiers like
LAMDA could be to use type-2 fuzzy functions [
        <xref ref-type="bibr" rid="ref15">15</xref>
          ]. This
is an extension of classical fuzzy logic in which the
membership functions output a fuzzy interval, which can be
used to model the variance of the data.
      </p>
      <p>To provide a better solution to manage uncertainty in the
LAMDA classifier, it can be useful to extend the problem to
qualitative features. It is often difficult to determine whether one
qualitative element is close to another; for example the color
"orange" is closer to "red" than to "blue". But on small training
datasets, considering this kind of information can improve final
classification results. This could be done by using similarity
matrices, which are already used in some artificial intelligence
problems.</p>
      <p>
        LAMDA algorithm can work with a feature selection
algorithm named MEMBAS (Membership Margin Based
Feature Selection) [
        <xref ref-type="bibr" rid="ref16">16</xref>
          ]. This algorithm uses the LAMDA class
definitions and its membership functions to provide an
analytical solution for feature selection. A future work will
be to measure the impact of the use of intervals on the MEMBAS
algorithm, in order to perform selection on noisy data.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Bezdek</surname>
          </string-name>
          .
          <article-title>A review of probabilistic, fuzzy, and neural models for pattern recognition</article-title>
          .
          <source>Journal of Intelligent and Fuzzy Systems</source>
          , Vol.
          <volume>1</volume>
          , No. 1:pp
          <fpage>1</fpage>
          -
          <lpage>25</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Alba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Garcia-Nieto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jourdan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Talbi</surname>
          </string-name>
          .
          <article-title>Gene selection in cancer classification using pso/svm and ga/svm hybrid algorithms</article-title>
          . In
          <source>Evolutionary Computation, CEC 2007. IEEE Congress on</source>
          , pages
          <fpage>284</fpage>
          -
          <lpage>290</lpage>
          , Sept.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Scott</given-names>
            <surname>Ferson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. Resit</given-names>
            <surname>Akçakaya</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Amy</given-names>
            <surname>Dunham</surname>
          </string-name>
          .
          <article-title>Using fuzzy intervals to represent measurement error and scientific uncertainty in endangered species classification</article-title>
          .
          <source>In Fuzzy Information Processing Society</source>
          ,
          <year>1999</year>
          . NAFIPS. 18th International Conference of the North American on, pages pp
          <fpage>690</fpage>
          -
          <lpage>694</lpage>
          ,
          <year>Jul 1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Zhang</given-names>
            <surname>Weiyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.X.</given-names>
            <surname>Yu</surname>
          </string-name>
          , and
          <string-name>
            <surname>Shang-Hua Teng</surname>
          </string-name>
          .
          <article-title>Power svm: Generalization with exemplar classification uncertainty</article-title>
          .
          <source>In Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <source>2012 IEEE Conference on</source>
          , pages pp
          <fpage>2144</fpage>
          -
          <lpage>2151</lpage>
          ,
          <year>June 2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Arafat</given-names>
            <surname>Samer</surname>
          </string-name>
          , Dohrmann Mary, and
          <string-name>
            <given-names>Skubic</given-names>
            <surname>Marjorie</surname>
          </string-name>
          .
          <article-title>Classification of coronary artery disease stress ecgs using uncertainty modeling</article-title>
          .
          <source>In Computational Intelligence Methods and Applications</source>
          ,
          <source>2005 ICSC Congress</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          18:pp.
          <fpage>128</fpage>
          -
          <lpage>140</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Prabha</given-names>
            <surname>Verma</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.D.S.</given-names>
            <surname>Yadava</surname>
          </string-name>
          .
          <article-title>Fuzzy c-means clustering based uncertainty measure for sample weighting boosts pattern classification efficiency</article-title>
          .
          <source>In Computational Intelligence and Signal Processing (CISP)</source>
          ,
          <year>2012</year>
          2nd National Conference on, pages
          <fpage>31</fpage>
          -
          <lpage>35</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.A.</given-names>
            <surname>Zadeh</surname>
          </string-name>
          .
          <article-title>Fuzzy sets</article-title>
          .
          <source>Information and Control</source>
          , vol.
          <volume>8</volume>
          :pp.
          <fpage>338</fpage>
          -
          <lpage>353</lpage>
          ,
          <year>June 1965</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Carrete</surname>
            <given-names>N.P.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Aguilar-Martin</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Controlling selectivity in nonstandard pattern recognition algorithms</article-title>
          .
          <source>In IEEE Transactions on Systems, Man and Cybernetics</source>
          , volume
          <volume>21</volume>
          , pages
          <fpage>71</fpage>
          -
          <lpage>82</lpage>
          . IEEE, Jan/Feb
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Hedjazi</surname>
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aguilar-Martin</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Le Lann</surname>
            <given-names>M.V.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Kempowsky</surname>
            <given-names>T.</given-names>
          </string-name>
          .
          <article-title>Towards a unified principle for reasoning about heterogeneous data: a fuzzy logic framework</article-title>
          .
          <source>International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems</source>
          , Vol.
          <volume>20</volume>
          , No. 2:pp.
          <fpage>281</fpage>
          -
          <lpage>302</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kuipers</surname>
          </string-name>
          .
          <article-title>Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge</article-title>
          . The MIT Press, Cambridge, Massachusetts, London edition,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Lynne</given-names>
            <surname>Billard</surname>
          </string-name>
          .
          <article-title>Some analyses of interval data</article-title>
          .
          <source>Journal of Computing and Information Technology, CIT</source>
          , vol.
          <volume>16</volume>
          :pp.
          <fpage>225</fpage>
          -
          <lpage>233</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Hsieh</surname>
            <given-names>C. H.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Chen</surname>
            <given-names>S. H.</given-names>
          </string-name>
          .
          <article-title>Similarity of generalized fuzzy numbers with graded mean integration representation</article-title>
          .
          <source>In Proceedings of the Eighth International Fuzzy Systems Association World Congress</source>
          , vol.
          <volume>2</volume>
          , pages
          <fpage>551</fpage>
          -
          <lpage>555</lpage>
          , Taipei, Taiwan, Republic of China,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Fisher</surname>
            <given-names>R.A.</given-names>
          </string-name>
          .
          <article-title>UCI machine learning repository</article-title>
          ,
          <year>1936</year>
          . http://archive.ics.uci.edu/ml.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.M.</given-names>
            <surname>Mendel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.I.</given-names>
            <surname>John</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Liu</surname>
          </string-name>
          .
          <article-title>Interval type-2 fuzzy logic systems made simple</article-title>
          .
          <source>IEEE Transactions on Fuzzy Systems</source>
          , Vol.
          <volume>14</volume>
          , No. 6:pp.
          <fpage>808</fpage>
          -
          <lpage>821</lpage>
          , Dec.
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Hedjazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Aguilar-Martin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.V.</given-names>
            <surname>Le Lann</surname>
          </string-name>
          .
          <article-title>Similarity-margin based feature selection for symbolic interval data</article-title>
          .
          <source>Pattern Recognition Letters</source>
          , Vol.
          <volume>32</volume>
          ,
          <issue>No. 4</issue>
          :pp.
          <fpage>578</fpage>
          -
          <lpage>585</lpage>
          ,
          <year>March 2012</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>