<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Continuous Monitoring of Environment under Uncertainty: A Fuzzy Granular Decision Tree Approach</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Preetham</forename><forename type="middle">N</forename><surname>Reddy</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of EEE BITS Pilani</orgName>
								<address>
									<addrLine>Campus</addrLine>
									<postCode>403726</postCode>
									<settlement>Goa, Goa</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sahith</forename><forename type="middle">N</forename><surname>Dambekodi</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Department of EEE BITS Pilani</orgName>
								<address>
									<addrLine>Campus</addrLine>
									<postCode>403726</postCode>
									<settlement>Goa, Goa</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Tirtharaj</forename><surname>Dash</surname></persName>
							<email>tirtharaj@goa.bits-pilani.ac.in</email>
							<affiliation key="aff2">
								<orgName type="department">Data Science Research Group BITS Pilani</orgName>
								<address>
									<addrLine>Goa Campus</addrLine>
									<postCode>403726</postCode>
									<settlement>Goa</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Continuous Monitoring of Environment under Uncertainty: A Fuzzy Granular Decision Tree Approach</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9BDCCAC9CCE0D4918C9EF2CEF816A16E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T10:04+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Environment Monitoring</term>
					<term>Decision Tree</term>
					<term>Fuzzy Logic</term>
					<term>Gas Sensors</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Smart monitoring of the environment has been an essential area of research in which a decision-making process is inevitable. The reliability of the whole system depends on the stability and consistency of its decision-making unit. Real-time decision making is another challenge in the field, and the research community has been focusing on improving the performance of the underlying models.</p><p>The underlying models are usually learning models that act as a smart engine after being sufficiently trained for the process. In this paper, we propose a decision tree model that can handle uncertainty in the data acquired from the environment. The resulting model is called the Fuzzy Granular Decision Tree (FGDT). A series of evaluations of the FGDT shows that the model is stable and powerful for the problem considered here.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Smart monitoring of the environment is becoming increasingly popular because very little human intervention is required for such systems to perform. Moreover, it has been a very challenging and yet interesting area of research over the last several years <ref type="bibr" target="#b0">[1]</ref>. This field has motivated the research community to design automated and intelligent models (or systems) for the continuous monitoring of an environment in industrial plants, medical settings, or biological processes. However, designing an almost-accurate system has been a challenge, given the background complexity of the environment and the availability of the equipment required for the process. In the past, this has led theoretical and computational scientists to develop mathematical models that can closely simulate the actual target environment.</p><p>To advance the field, research was carried out recently by Huerta et al. <ref type="bibr" target="#b1">[2]</ref> that focused on online decorrelation of humidity and temperature in chemical sensors for continuous monitoring. The work focused on automated processing of data from simultaneous and continuous readings of the variations in humidity and temperature in the targeted environment. Their setup included eight metal-oxide (MOX) sensors that continuously sensed the environment for 537 days with a sampling rate of 1 sample per second <ref type="bibr" target="#b1">[2]</ref>. 
They estimated the effects of variations in air humidity and temperature on the chemical sensor signals using a standard energy-band model, under the assumption that variations in sensor conductivity can be expressed as a nonlinear function of changes in the energy bands in the presence of external humidity and temperature.</p><p>Their study showed that the major factors affecting the environment were changes in humidity and the correlated changes in temperature and humidity. To visualize the process, they used a gas discrimination system that could discriminate among banana, wine, and baseline responses. They used a variant of the Support Vector Machine (SVM) algorithm to build the discriminatory model for the process.</p><p>In continuous environment monitoring, there is a certain degree of uncertainty in the acquired data. Uncertainty in the acquired data can lead to failure or adverse functioning of the system. It can arise from situations such as sensor failures or a sudden change in the environment due to additional, unknown factors. In such cases, the discriminatory model that was built may not be trustworthy, and its results would in turn be imprecise. This impreciseness, or degree of uncertainty, in the acquired data or desired outcome is called 'fuzziness'. An adaptive system meant for continuous environment monitoring should not only adapt to environmental changes over time but also be robust to the uncertainties read through its sensors. To handle the uncertainty, or fuzziness, present in the acquired data, and to make robust decisions based on that data, we propose a granular method - one that specifically captures uncertain inputs - to be induced within an adaptive system. 
The research carried out in <ref type="bibr" target="#b1">[2]</ref> was experimentally sound, and our present work mainly serves to embed the uncertainty aspect into it. In our present work, we redesign a discriminatory model that also accounts for possible uncertainty in the acquired data. It should be noted that our work is validated with the experimental data from <ref type="bibr" target="#b1">[2]</ref>; further information is presented in the following sections.</p><p>The rest of the paper is organized as follows. The proposed and implemented model is discussed in Section 2; the results obtained for the present problem are presented in Section 3; the results are discussed in Section 4; and the paper is concluded in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">METHODOLOGY</head><p>Our proposed work has the following major phases: a data acquisition phase, an uncertainty handling phase, and a discrimination phase.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Data acquisition phase</head><p>In the data acquisition phase, the sensors acquire data from the environment for further processing in the later phases. For more information, one may refer to <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Handling uncertainty in acquired data</head><p>In this work, uncertainty in the acquired data is handled by feature transformation using a special fuzzy membership function <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>The membership function transforms a single sensor reading (datum) into granules containing the information of different degrees of belongingness. For example, a temperature reading of 38° may be regarded as a low, medium or high temperature; hence it can have some degree of belongingness to each of low, medium and high. In this work, we chose the Gaussian fuzzy membership function given in equation (<ref type="formula" target="#formula_0">1</ref>), where a is the maximum value of the membership function and σ is the standard deviation of the readings obtained by a sensor. The centre x̄ differs for each of the low, medium and high memberships: for the low degree of belongingness, x̄ = min{x}; for the medium degree, x̄ = mean{x}; and for the high degree, x̄ = max{x}. Please note that x is a vector containing all the sensor readings for a particular feature; for the ith feature (alternatively, the ith sensor), x is written x_i.</p><formula xml:id="formula_0">µ(x) = a exp(−(x − x̄)² / (2σ²))<label>(1)</label></formula><p>The full transformation can be visualized in Figure <ref type="figure" target="#fig_0">1</ref>, which shows the curves of the low, medium and high membership functions (from left to right). The belongingness has a maximum value of 1 (a = 1) and a minimum value of 0. So, for a single datum, the transformation function generates three different granules. At a particular time t, if there are n sensors, there are n sensor readings, called features. 
After the transformation, the feature vector is lifted to a higher dimension, 3n.</p><p>A major intuition behind such a high-dimensional transformation is that a discriminatory model is expected to perform better in a high dimension than in a low one. This follows from Cover's theorem on the separability of patterns <ref type="bibr" target="#b4">[5]</ref>. Since this work focuses on discriminating the sensor readings into different classes, the transformation should make the patterns separable in the higher-dimensional space if they are not easily separable in the lower dimension. In a later section, we shall see that this is indeed the case for this specific problem.</p></div>
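As an illustration, the transformation of equation (1) can be sketched in Python as below. This is a minimal sketch under our own naming; the function name, the zero-spread guard, and the toy data are illustrative and are not taken from the paper's implementation.

```python
import numpy as np

def fuzzy_granulate(X):
    """Map an (m, n) matrix of sensor readings to an (m, 3n) fuzzy-granular
    matrix using Gaussian memberships (a = 1) centred at the min, mean and
    max of each feature, i.e. the low/medium/high granules of Eq. (1)."""
    X = np.asarray(X, dtype=float)
    granules = []
    for i in range(X.shape[1]):
        col = X[:, i]
        sigma = col.std() or 1.0  # guard against a zero-spread feature
        for centre in (col.min(), col.mean(), col.max()):
            granules.append(np.exp(-(col - centre) ** 2 / (2 * sigma ** 2)))
    return np.column_stack(granules)

# One temperature-like feature: each of the 3 readings becomes 3 granules.
print(fuzzy_granulate([[36.0], [38.0], [40.0]]).shape)  # (3, 3)
```

A reading equal to the feature's minimum gets full 'low' membership (value 1), and memberships decay towards 0 away from each centre, matching Figure 1.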
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Discrimination phase</head><p>Discrimination is the process of assigning a class label to a set of inputs, i.e. a set of sensor readings at a particular time. Let a set of sensor readings at any particular time be represented as {x_1, x_2, . . . , x_n}, whose class is unknown. Note that there are three classes in the present work: banana, wine, and baseline response. After the fuzzy transformation, the vector can be represented as</p><formula xml:id="formula_1">x = {x_1^low, x_1^medium, x_1^high, x_2^low, x_2^medium, x_2^high, . . . , x_n^low, x_n^medium, x_n^high}<label>(2)</label></formula><p>The discrimination phase works with a model built from prior knowledge, called training data, to discriminate the sensor readings in real time, called test data. In this work, we implement a decision tree (DT) classifier that works with the transformed fuzzy granular feature space obtained from the second phase; the whole model is therefore called the Fuzzy Granular Decision Tree (FGDT). The principal reason for choosing a DT over other machine learning models is that it is nonparametric and learns from the training data in a supervised manner. More specifically, we implement CART, which is very similar to the C4.5 decision tree <ref type="bibr" target="#b5">[6]</ref>. The FGDT predicts the class of a set of sensor readings by learning simple decision rules inferred from the features. For completeness and clarity, we present the working principle of the FGDT as follows.</p><p>We are given a training dataset in the fuzzy granular input feature space, where each instance x_i ∈ R^3n is a vector in 3n dimensions, represented as in equation <ref type="formula" target="#formula_1">(2)</ref>. Let the class labels be represented as d. 
The FGDT recursively partitions the training pattern space such that patterns with the same class labels are grouped together. Algorithm 1 gives the generic steps of the FGDT, which takes the fuzzy granular training instances (D_train) and generates a decision tree (T) from them: if a base case applies, return a leaf labelled with the most frequent class (or the disjunction of all the classes); otherwise, until a stopping condition is met, find the feature j ∈ D_train with the highest information gain (or lowest impurity), split the present node of T on feature j to form sub-trees T_left and T_right, and recurse on T_left and T_right. (The first three if-statements are called base cases. An impurity function can also be used in place of information gain: the lower the impurity, the higher the information gain, and vice versa.) The stopping condition while generating T from D_train could be one of the following: a fully grown, most generalized tree has been obtained; the training pattern space has been well partitioned into multiple subspaces based on classes; or the generated tree returns the lowest error on validation data². For measuring the impurity, an 'entropy' estimate is used, given in equation <ref type="formula" target="#formula_2">(3)</ref>.</p><formula xml:id="formula_2">H(X, T) = −Σ_{i=1}^{m} P(X_i) log₂(P(X_i))<label>(3)</label></formula><p>Here, P(·) is the probability estimate. Based on the estimate in equation (<ref type="formula" target="#formula_2">3</ref>), the entropy before a split and the entropy after splitting a node are computed for each feature. The feature that yields the lowest entropy after the split is used to split the node, based on some condition, usually a threshold. 
Entropy after the split is computed from the entropy estimates of the new sub-trees as</p><formula xml:id="formula_3">H_afterSplit(X, T) = Σ_{i=1}^{k} P(X_i) H(X_i, T_i),<label>(4)</label></formula><p>where k is the number of splits at a node of the tree, H(X_i, T_i) is the entropy measure of the sub-tree that would be produced after the split, and P(X_i) weights it by the fraction of instances reaching that sub-tree. Using these entropy estimates, the entropy before the split and the entropy after the split, the information gain, denoted G, can be computed as</p><formula xml:id="formula_4">G = H_beforeSplit(X, T) − H_afterSplit(X, T)<label>(5)</label></formula><p>The feature that would produce the highest information gain should be used to split at a particular node of the tree T. It should be noted that the discussion and the algorithm implemented in this work split a node of the tree into two sub-trees (a binary split); it is also possible to split a node into multiple sub-trees (multiple splits) based on more than one threshold at the node <ref type="bibr" target="#b5">[6]</ref>. (Footnote 2: the validation data are a set of instances that were not used during the training process but are used to check the performance of the generated model after training; these are not test data.)</p><p>Effect on computational complexity? It is quite obvious that the size of the feature set is increased by the transformation into the fuzzy feature space under the three membership functions low, medium and high. If there are n_input features in the original data, then after the transformation into the fuzzy pattern space the size becomes 3 × n_input. If the time complexity of classical decision tree learning is T(n_input), the FGDT has a time complexity of T(3 × n_input). However, a fuzzy granular decision tree learner has to learn only once with the available training data; the learned model is then simply deployed for testing, so this cost is incurred just once. 
Moreover, with the availability of high-performance computing architectures, this complexity could be lowered and the scalability of the proposed model improved. Although a decision tree learner learns from the data by partitioning the space into multiple subspaces with conditions, it has already been shown that decision trees scale well to higher-dimensional data <ref type="bibr" target="#b6">[7]</ref>.</p></div>
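The entropy and information-gain computations of equations (3)-(5) can be sketched for a binary split as follows. This is illustrative code written for this exposition, not the paper's implementation; the function names are our own.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    # Eq. (3): H = -sum_i p_i * log2(p_i) over the class proportions.
    if len(labels) == 0:
        return 0.0
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(labels, left_mask):
    # Eq. (5): G = H_before - H_after, where H_after (Eq. 4) weights each
    # sub-tree's entropy by the fraction of instances it receives.
    labels = np.asarray(labels)
    mask = np.asarray(left_mask)
    left, right = labels[mask], labels[~mask]
    h_after = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - h_after

y = ['banana'] * 4 + ['wine'] * 4
perfect_split = np.array([True] * 4 + [False] * 4)
print(entropy(y), information_gain(y, perfect_split))  # 1.0 1.0
```

A split that perfectly separates the two classes recovers the full 1 bit of entropy as gain; a split that leaves both sides equally mixed gains nothing.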
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">RESULTS</head><p>An adaptive system should not only adapt to environmental changes but also be robust to the uncertainties read through its sensors. This can be achieved via robust modeling of the environment or via robust decision making within the system. Our present work demonstrates the former category, where the implemented model adapts to changes in the environment while also incorporating possible uncertainties by transforming the problem into an imprecise one. To support our claim about the improvement, we used the experimental data available from the work of Huerta et al. <ref type="bibr" target="#b1">[2]</ref> to study the performance of our proposed fuzzy granular model, FGDT. To elaborate: to evaluate the impact of decorrelating the signals from the temperature and humidity sensors on the discrimination performance of the proposed FGDT, four different feature sets were used. The first is the set of raw sensor time series (RS); the second is the raw sensor data with humidity and temperature (RS, T, H); the third is the data filtered (FS) by decorrelating the sensors; and the fourth is the raw sensor data together with the filtered sensor data (RS, FS). We used these same feature sets for a proper evaluation of our proposed model. Additionally, we also use a new dataset that contains 't0' and 'dt' along with the filtered sensor data (FS).</p><p>To properly estimate the generalization ability of the proposed FGDT model, we used standard machine learning procedures to evaluate the performance of FGDT when discriminating samples not used for training the classifier, so as to remove evaluation bias during the testing phase. There are 919,438 sensor-reading instances in the dataset. The available data was randomly partitioned into 80% and 20%. 
The 20% partition was used for the independent test, and the 80% partition for 5-fold cross validation (5CV). For a fair comparison, we used accuracy as the performance measure, as the same was used in <ref type="bibr" target="#b1">[2]</ref>, although accuracy should not always be considered reliable <ref type="bibr" target="#b7">[8]</ref>. However, as the number of instances in the dataset is very large, of the order of 9 × 10^5, this would not have a large effect on the accuracy measure. We considered the fold-wise performance in 5CV along with the independent test performance for evaluation. The results are reported in Table <ref type="table" target="#tab_1">1</ref>. For a better evaluation of the model, the 20% partition was not fixed from the beginning; rather, a fresh test partition was chosen for each fold evaluation.</p><p>A comparison of the results obtained by our proposed FGDT with those obtained by Huerta et al. <ref type="bibr" target="#b1">[2]</ref>, shown in Table <ref type="table" target="#tab_2">2</ref>, suggests that the proposed FGDT model is superior to the ISVM model for this problem on most of the datasets with the selected features. However, for the dataset with only FS as the feature set, the performance of FGDT is far below that of the ISVM model. This is probably because the FS dataset is quite difficult to separate, as can be seen from a sample plot of F1 vs F2 of the FS feature set (see Figure <ref type="figure" target="#fig_1">2</ref>). </p></div>
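The evaluation protocol above (an 80/20 partition plus 5-fold cross-validation scored by accuracy) can be sketched with Scikit-learn as below. This is a simplified sketch with a single fixed 80/20 split (the paper re-draws the 20% test partition for each fold), and the synthetic data merely stands in for the ~9.2 × 10^5 real sensor-reading instances.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in for the sensor dataset (names and sizes are ours).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 80/20 partition: 20% held out as the independent test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# 5-fold cross-validation (accuracy) on the 80% partition.
cv_acc = cross_val_score(DecisionTreeClassifier(), X_tr, y_tr,
                         cv=5, scoring="accuracy")

# Independent test accuracy of a model trained on the full 80% partition.
test_acc = DecisionTreeClassifier().fit(X_tr, y_tr).score(X_te, y_te)
print(cv_acc.mean(), test_acc)
```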
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">DISCUSSION</head><p>The FGDT algorithm handles uncertainty in the acquired data by transforming it to a feature space of higher dimension than that of the original data. This can help the decision tree classifier capture the underlying relationship between inputs and outputs in a better way. The higher dimension of the transformed data makes the data sparser, and hence drawing a clear boundary between two different classes becomes easier <ref type="bibr" target="#b4">[5]</ref>, which would not have been possible with the SVM classifier used in <ref type="bibr" target="#b1">[2]</ref>. More specifically, when a lower-dimensional input space is projected onto a higher-dimensional space, the sparsity in the projected space increases, thereby increasing the chance of learning a possibly optimal hyperplane serving as the boundary among the various resulting groups or classes of patterns. Technically, the distance between any pattern in the projected space and the learned hyperplane improves, which makes a strong impact on the training of the decision tree learner. This interpretation is plausible because a more generalized model, built after training and validation, possesses a higher capability of generating accurate results during independent real-time tests. Moreover, the number of instances in a training set and the number of features also play crucial roles in the process: a large number of instances with a small set of features might not yield an adequate function mapping inputs to outputs. After the transformation to the fuzzy space, and hence the increase in the dimension of the input space, such a function becomes possible. This can lead to proper generalization of the FGDT and hence good independent test performance. 
One should note that the present work uses the CART algorithm available in Scikit-learn <ref type="bibr" target="#b8">[9]</ref>. However, one could try many other machine learning approaches for the same task after incorporating our proposed fuzzy granular transformation as a feature engineering step before training the learning model.</p></div>
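Putting the phases together, a minimal end-to-end sketch of the pipeline, fuzzy granulation followed by Scikit-learn's CART classifier, might look as follows. The helper name and the synthetic three-class data are illustrative assumptions, not the paper's code.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def granulate(X):
    # Gaussian fuzzy granulation (Eq. 1, a = 1): low/medium/high per feature.
    X = np.asarray(X, dtype=float)
    out = []
    for col in X.T:
        s = col.std() or 1.0  # guard against a zero-spread feature
        out += [np.exp(-(col - c) ** 2 / (2 * s ** 2))
                for c in (col.min(), col.mean(), col.max())]
    return np.column_stack(out)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))        # 4 raw sensor features -> 12 granules
y = rng.integers(0, 3, size=300)     # 3 classes: banana / wine / baseline

# CART with the entropy criterion, trained on the granulated features.
clf = DecisionTreeClassifier(criterion="entropy").fit(granulate(X), y)
print(clf.n_features_in_)  # 12
```

Swapping `DecisionTreeClassifier` for another estimator here is exactly the substitution the paragraph above suggests.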
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">CONCLUSIONS</head><p>The uncertainty arising in the data acquired from the environment during continuous environment monitoring has been handled by developing a new fuzzy granular approach, combined with a decision tree for decision making. The proposed model, FGDT, has been evaluated against experimentally validated data from a continuous monitoring environment. The performance of our FGDT model is found to be superior to that of a recently published work on the same problem. Moreover, the statistical results also show that the FGDT not only has better discrimination capability but is also quite stable and consistent.</p><p>Although the present work discusses the applicability of fuzzy modeling to uncertainty handling in environmental monitoring systems, there are numerous possible applications of such techniques to real-world problems, such as medical diagnosis [10], robotic navigation control mechanisms <ref type="bibr">[11,</ref><ref type="bibr">12]</ref>, and handling uncertainty in software testing <ref type="bibr">[13]</ref>. All these real-world problems may not always be amenable to precise computation, and hence uncertainty handling methods such as the fuzzy granular decision tree (FGDT) approach would help the intended adaptive systems achieve greater performance. </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A sample Gaussian fuzzy membership transformation function (X-axis: value of a sample sensor reading, Y-axis: degree of belongingness, µ)</figDesc><graphic coords="2,337.20,82.69,201.76,158.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Scatter plot between F1 and F2</figDesc><graphic coords="4,73.04,284.71,201.76,154.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>1</head><label></label><figDesc>Data: a transformed training set, D_train. Result: a decision tree, T. If all the instances belong to the same class, then return a T with a leaf node labelled with that class. If D_train = {ϕ}, then return T = ϕ with a warning. If there is no feature in D_train, then return a leaf labelled with the most frequent class (or the disjunction of all the classes).</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Performance (accuracy) of FGDT with di erent feature set for 5CV and independent test</figDesc><table><row><cell>Features→</cell><cell>RS</cell><cell></cell><cell cols="2">RS, T, H</cell><cell>FS</cell><cell></cell><cell cols="2">FS, RS</cell><cell cols="2">FS, t0, dt</cell></row><row><cell>Fold</cell><cell>CV</cell><cell>Test</cell><cell>CV</cell><cell>Test</cell><cell>CV</cell><cell>Test</cell><cell>CV</cell><cell>Test</cell><cell>CV</cell><cell>Test</cell></row><row><cell>1</cell><cell cols="10">96.99 97.01 97.03 97.03 37.30 37.32 96.96 96.94 99.10 99.09</cell></row><row><cell>2</cell><cell cols="10">96.95 96.99 96.96 97.03 37.20 37.32 96.95 96.97 99.06 99.09</cell></row><row><cell>3</cell><cell cols="10">96.91 97.01 96.96 97.04 37.27 37.31 96.99 96.96 99.06 99.09</cell></row><row><cell>4</cell><cell cols="10">96.97 97.00 96.99 97.04 37.36 37.32 97.01 96.98 99.11 99.09</cell></row><row><cell>5</cell><cell cols="10">96.93 97.01 96.96 97.04 37.38 37.32 96.90 96.98 99.11 99.09</cell></row><row><cell>average</cell><cell>95.95</cell><cell>97.01</cell><cell>96.98</cell><cell>97.04</cell><cell>37.30</cell><cell>37.33</cell><cell>96.96</cell><cell>96.97</cell><cell>99.10</cell><cell>99.09</cell></row><row><cell>±std.dev.</cell><cell>±0.03</cell><cell>±0.01</cell><cell>±0.03</cell><cell>±0.01</cell><cell>±0.01</cell><cell>±0.01</cell><cell>±0.04</cell><cell>±0.01</cell><cell>±0.02</cell><cell>±0.01</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Comparison of FGDT with results obtained by Huerta et al. [2]</figDesc><table><row><cell>Feature Set</cell><cell>ISVM [2] CV</cell><cell>ISVM [2] Test</cell><cell>FGDT CV</cell><cell>FGDT Test</cell></row><row><cell>RS</cell><cell>78.5</cell><cell>76.5</cell><cell>95.95 ± 0.03</cell><cell>97.01 ± 0.01</cell></row><row><cell>RS, T, H</cell><cell>73.3</cell><cell>71.1</cell><cell>96.98 ± 0.03</cell><cell>96.98 ± 0.03</cell></row><row><cell>FS</cell><cell>72.4</cell><cell>71.2</cell><cell>37.30 ± 0.01</cell><cell>37.33 ± 0.01</cell></row><row><cell>RS, FS</cell><cell>82.6</cell><cell>80.9</cell><cell>96.96 ± 0.04</cell><cell>96.97 ± 0.01</cell></row><row><cell>FS, t0, dt</cell><cell>-</cell><cell>-</cell><cell>96.96 ± 0.04</cell><cell>96.97 ± 0.01</cell></row></table></figure>
<note xmlns="http://www.tei-c.org/ns/1.0" type="bibliography">[9] Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830, 2011. [10] Tanistha Nayak, Tirtharaj Dash, D Chandrasekhar Rao, and Prabhat K Sahu. Evolutionary neural networks versus adaptive resonance theory net for breast cancer diagnosis. In Proceedings of the International Conference on Informatics and Analytics, page 97. ACM, 2016. [11] Tirtharaj Dash, Tanistha Nayak, and Rakesh Ranjan Swain. Controlling wall following robot navigation based on gravitational search and feed forward neural network. In Proceedings of the 2nd International Conference on Perception and Machine Intelligence, pages 196-200. ACM, 2015. [12] Tirtharaj Dash. Automatic navigation of wall following mobile robot using adaptive resonance theory of type-1. Biologically Inspired Cognitive Architectures, 12:1-8, 2015. [13] Salman Abdul Moiz. Uncertainty in software testing. In Trends in Software Testing, pages 67-87. Springer, 2017.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">The • symbol is used to represent uncertainty in the input data, and the tree is generated from this fuzzy granular data.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Environment monitoring system</title>
		<author>
			<persName><forename type="first">Joey</forename><forename type="middle">F</forename><surname>Boatman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bryan</forename><forename type="middle">S</forename><surname>Reichel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">US Patent</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page">690</biblScope>
			<date type="published" when="1999-04-06">April 6 1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Online decorrelation of humidity and temperature in chemical sensors for continuous monitoring</title>
		<author>
			<persName><forename type="first">Ramon</forename><surname>Huerta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Iago</forename><surname>Mosqueiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jordi</forename><surname>Fonollosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nikolai</forename><forename type="middle">F</forename><surname>Rulkov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Irene</forename><surname>Rodriguez-Lujan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Chemometrics and Intelligent Laboratory Systems</title>
		<imprint>
			<biblScope unit="volume">157</biblScope>
			<biblScope unit="page" from="169" to="176" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Knowledge-based fuzzy MLP for classification and rule generation</title>
		<author>
			<persName><forename type="first">Sushmita</forename><surname>Mitra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rajat</forename><forename type="middle">K</forename><surname>De</surname></persName>
		</author>
		<author>
			<persName><surname>Sankar</surname></persName>
		</author>
		<author>
			<persName><surname>Pal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1338" to="1350" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Hybrid gravitational search and particle swarm based fuzzy MLP for medical data classification</title>
		<author>
			<persName><forename type="first">Tirtharaj</forename><surname>Dash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sanjib</forename><forename type="middle">Kumar</forename><surname>Nayak</surname></persName>
		</author>
		<author>
			<persName><surname>Behera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence in Data Mining</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="35" to="43" />
			<date type="published" when="2015">2015</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition</title>
		<author>
			<persName><forename type="first">Thomas</forename><forename type="middle">M</forename><surname>Cover</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Electronic Computers</title>
		<imprint>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="326" to="334" />
			<date type="published" when="1965">1965</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Classification and regression trees</title>
		<author>
			<persName><forename type="first">Leo</forename><surname>Breiman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jerome</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Charles</forename><forename type="middle">J</forename><surname>Stone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><forename type="middle">A</forename><surname>Olshen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1984">1984</date>
			<publisher>CRC press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Fast supervised hashing with decision trees for high-dimensional data</title>
		<author>
			<persName><forename type="first">Guosheng</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chunhua</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Qinfeng</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anton</forename><surname>van den Hengel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Suter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1963" to="1970" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Reliable evaluation of neural network for multiclass classification of real-world data</title>
		<author>
			<persName><forename type="first">Siddharth</forename><surname>Dinesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tirtharaj</forename><surname>Dash</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1612.00671</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Scikit-learn: Machine learning in Python</title>
		<author>
			<persName><forename type="first">Fabian</forename><surname>Pedregosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gaël</forename><surname>Varoquaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexandre</forename><surname>Gramfort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vincent</forename><surname>Michel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bertrand</forename><surname>Thirion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Olivier</forename><surname>Grisel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mathieu</forename><surname>Blondel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Prettenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ron</forename><surname>Weiss</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2825" to="2830" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
