<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The research of fuzzy decision trees building based on entropy and the theory of fuzzy sets</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Begenova</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Novosibirsk State Technical University</orgName>
								<address>
									<addrLine>Karla Marks ave 20</addrLine>
									<postCode>630073</postCode>
									<settlement>Novosibirsk</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">T</forename><forename type="middle">V</forename><surname>Avdeenko</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Novosibirsk State Technical University</orgName>
								<address>
									<addrLine>Karla Marks ave 20</addrLine>
									<postCode>630073</postCode>
									<settlement>Novosibirsk</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The research of fuzzy decision trees building based on entropy and the theory of fuzzy sets</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">667727E7E6E16B16EB22EE8D23B65FC1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:25+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Decision trees are widely used in the field of machine learning and artificial intelligence. Such popularity is due to the fact that, with the help of decision trees, graphic models and text rules can be built that are easily understood by the end user. Because of the inaccuracy of observations and uncertainties, data collected from the environment often take an imprecise form. Therefore, fuzzy decision trees are becoming popular in the field of machine learning. This article presents a method that combines the features of the two above-mentioned approaches: a graphical representation of the rule system in the form of a tree and a fuzzy representation of the data. The approach exploits such advantages as the high comprehensibility of decision trees and the ability of a fuzzy representation to cope with inaccurate and uncertain information. The resulting learning method is suitable for classification problems with both numerical and symbolic features. In the article, solution illustrations and numerical results are given. A comparison of fuzzy logic approaches for building fuzzy rules and classification trees is also given.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Nowadays, in the era of big data, the extraction of knowledge is a bottleneck in the field of knowledge engineering. Computer programs that extract knowledge from data attempt to solve this problem. Among these programs, systems that build decision trees for decision-making and classification tasks are very popular. Knowledge acquired in the form of decision trees and inference procedures is highly valued for its clarity and visibility. This appreciation, in its time, aroused the interest of scientists and led to a number of methodological and empirical achievements. Decision trees were initially popularized by Quinlan and his ID3 algorithm <ref type="bibr">[1]</ref>.</p><p>One of the extensions of the classical construction of decision trees is an approach based on fuzzy logic. The fuzzy approach is becoming increasingly popular for problems with uncertainty, noise and inaccurate data, and it has been successfully applied in many industrial spheres. Most studies on applying this representational framework to existing methodologies focus mainly on new areas, such as neural networks and genetic algorithms. Nowadays, the fuzzy approach that integrates the concepts of fuzzy sets and entropy is gaining popularity.</p><p>This article presents a method that combines the features of the two above-mentioned approaches: a graphical representation of the rule system in the form of a tree and a fuzzy representation of the data. Section 2 describes the principle of decision trees, their advantages and disadvantages, and algorithms for their construction. Section 3 presents the principle of constructing fuzzy decision trees and introduces the concepts of fuzzy logic. Section 4 describes the results of the study, and the last section gives a conclusion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Decision trees</head><p>A decision tree (DT) is a common formalization for mapping attribute values to classes. It consists of attribute nodes (so-called tests) that can have two or more subtrees, and leaves (decision nodes) that are labeled with a class indicating the solution. The main advantage of this approach is the visualization of the solution. One of the most commonly used algorithms for constructing decision trees is the ID3 method, formalized by Quinlan in 1986 <ref type="bibr">[1]</ref>.</p><p>Decision trees create efficient models for machine learning <ref type="bibr" target="#b6">[11,</ref><ref type="bibr" target="#b7">12]</ref>. Let us note the following characteristics of decision trees:</p><list><item>they are easily interpretable and visible;</item><item>the model can be expressed both graphically and with text rules;</item><item>they are competitive in comparison with more expensive approaches;</item><item>decision trees are scalable;</item><item>they can process discrete and continuous data;</item><item>decision trees can be applied to data sets of different sizes, including large samples.</item></list><p>In the process of tree construction, a pattern is represented by a set of features expressed in some descriptive language. Samples whose characteristics are known are called examples. The purpose of constructing a tree is to solve a classification or regression problem.</p><p>ID3 and CART are the two most important discriminative learning algorithms that work by recursive partitioning. Their basic ideas are approximately the same: split the incoming sample into subsets and represent the partitions as a tree. An important property of these algorithms is that they try to minimize the size of the tree while simultaneously optimizing some quality measure. Subsequently, both use the same logical inference. A minimal sketch of the split-selection step is given below.</p></div>
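<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the split-selection step concrete, the following minimal Python sketch computes entropy and information gain for a discrete attribute in the spirit of ID3. It is an illustration only, not the implementation used in this work; the function names and the representation of examples as dictionaries are our own assumptions.</p><code lang="python"><![CDATA[
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a collection of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Reduction in entropy achieved by splitting on a discrete attribute."""
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute], []).append(label)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# ID3 greedily picks the attribute with the highest gain at each node.
rows = [{"outlook": "sunny"}, {"outlook": "rain"}, {"outlook": "sunny"}]
labels = ["no", "yes", "no"]
print(information_gain(rows, labels, "outlook"))  # about 0.918 bits
]]></code></div>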
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Fuzzy decision trees</head><p>To construct a fuzzy decision tree, the following procedure is proposed [4]: 1. Define the fuzzy data base, i.e., the fuzzy granulation for the domains of the continuous features. 2. Replace the continuous attributes of the training set using the linguistic labels of the fuzzy sets with the highest compatibility with the input values [5, 6]. 3. Calculate the entropy and information gain of each feature to split the training set and define the test nodes of the tree until all features are used or all training examples are classified.</p><p>Figure <ref type="figure" target="#fig_1">1</ref> shows an example of the fuzzification of continuous data. The first block of Figure 1 illustrates a dataset with $n$ examples, three attributes (At 1, At 2, At 3) and a class attribute. The fuzzified version of this dataset is presented in the second block. This fuzzified set of examples is used to induce the final DT, illustrated in the last block of Figure 1.</p><p>The entropy and information gain formulas remain the same as for the classical version of the ID3 algorithm <ref type="bibr" target="#b5">[10]</ref>. Let us introduce the following notation: $U$ is the set of data samples, $C$ is the set of attributes, and $D$ is a singleton set containing the solution (class) attribute. Let this attribute have $m$ different values $d_1, \dots, d_m$; then $s_i$ is the number of samples of set $U$ in class $d_i$.</p><p>The information $I$ of subset $S_j$ is equal to $I(S_j) = -\sum_{i=1}^{m} \frac{s_{ij}}{|S_j|} \log_2 \frac{s_{ij}}{|S_j|}$, where $s_{ij}$ is the number of samples of class $d_i$ in $S_j$ and $|S_j|$ is the number of samples in subset $S_j$. The entropy $E(c_i)$ of attribute $c_i$, which splits $U$ into subsets $S_1, \dots, S_k$, is $E(c_i) = \sum_{j=1}^{k} \frac{|S_j|}{|U|} I(S_j)$; accordingly, the criterion for selecting an attribute is the information gain $G(c_i) = I(U) - E(c_i)$.</p><p>The difference between the conventional ID3 algorithm and the fuzzy version of ID3 is that the attributes of objects have degrees of membership in a particular node, and it is quite possible that an object belongs, with certain degrees, to several nodes; a sketch of the membership-weighted entropy computation is given below.</p><p>Figure <ref type="figure" target="#fig_2">2</ref> shows two decision trees that were built using the above-mentioned algorithms. As an example, a classic data set was taken, Fisher's iris <ref type="bibr" target="#b4">[9]</ref>, which has 4 attributes (the length and width of the sepal and the length and width of the petal) and three resulting classes: setosa, versicolor and virginica.</p></div>
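<div xmlns="http://www.tei-c.org/ns/1.0"><p>The fuzzy variant can be illustrated by replacing the crisp counts $s_i$ with sums of membership degrees, as in the following sketch. This is our own illustrative code under that assumption, not the authors' implementation.</p><code lang="python"><![CDATA[
import math

def fuzzy_entropy(memberships, labels, classes):
    """Entropy of a fuzzy subset of examples: the crisp count s_i is
    replaced by the sum of membership degrees of examples in class d_i."""
    total = sum(memberships)
    if total == 0.0:
        return 0.0
    h = 0.0
    for c in classes:
        s_c = sum(m for m, lab in zip(memberships, labels) if lab == c)
        p = s_c / total
        if p > 0.0:
            h -= p * math.log2(p)
    return h

# An example may reach a node with a partial membership degree, so the
# same example can contribute to several branches at once.
mu = [1.0, 0.4, 0.6]
print(fuzzy_entropy(mu, ["setosa", "setosa", "virginica"], ["setosa", "virginica"]))
]]></code></div>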
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">The research results</head><p>As a research object, just as in the previous example, the Fisher's iris data set was used. To construct a fuzzy decision tree, at the first stage it is necessary to perform a fuzzification procedure.</p><p>During the fuzzification procedure, the definition set (domain) of each fuzzy attribute is divided into fuzzy subsets. The value of the fuzzy attribute is put in correspondence with a term, and this correspondence is established by the membership function. The division of the domain into fuzzy subsets can be made evenly, that is, the domain is divided into equal intervals. However, for most real data sets collected from the environment, it is preferable to perform the partitioning taking into account the features of the original sample; a sketch of such a data-aware partitioning is given below. For example, it may happen that most of the sample objects lie in the first third of the domain, and in this case uniform partitioning will not give the desired effect. The results of fuzzification for the attributes SepalLength, SepalWidth, PetalLength and PetalWidth are shown in Figures <ref type="figure" target="#fig_5">3, 4</ref>, 5 and 6, respectively.</p><p>To study the hypothesis that, as the sample size decreases, the classification accuracy of fuzzy decision trees is better than that of classical ones, the dependence of the classification accuracy on the number of instances in the data set was constructed.</p><p>In this study, the trees were constructed for 3 randomly selected samples of size N, and the table shows the averaged values obtained (the sum of the three values divided by 3).</p><p>According to the data presented in Table <ref type="table">1</ref>, it can be seen that when the sample is reduced from 150 to 90 instances, the classification accuracy of fuzzy decision trees is three percent higher than that of classical decision trees, and when the sample is reduced to 60 instances, the accuracy is higher by 0.82 percent.</p></div>
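<div xmlns="http://www.tei-c.org/ns/1.0"><p>The following sketch illustrates one possible sample-aware partitioning: triangular term peaks are placed at quantiles of the observed attribute values, so dense regions of the sample receive finer granulation. The quantile-based placement is our own assumption; the paper does not specify its exact partitioning scheme.</p><code lang="python"><![CDATA[
import numpy as np

def quantile_triangular_partition(values, n_terms):
    """Return (a, b, c) parameters of n_terms triangular fuzzy sets whose
    peaks sit at quantiles of the data instead of being evenly spaced."""
    peaks = np.quantile(values, np.linspace(0.0, 1.0, n_terms))
    terms = []
    for i in range(n_terms):
        a = peaks[i - 1] if i > 0 else peaks[0]             # left foot
        b = peaks[i]                                        # peak
        c = peaks[i + 1] if i < n_terms - 1 else peaks[-1]  # right foot
        terms.append((a, b, c))
    return terms

# Five terms for one Iris attribute, e.g. sepal length in centimetres.
sepal_length = np.array([4.3, 4.9, 5.0, 5.1, 5.8, 6.1, 6.3, 6.7, 7.0, 7.9])
print(quantile_triangular_partition(sepal_length, 5))
]]></code></div>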
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1. Comparison of classification results obtained using fuzzy decision trees and classical decision trees.</head><p>Column headings: the number of instances in the data set (N); fuzzy decision trees.</p><p>Table <ref type="table" target="#tab_0">2</ref> shows the dependence of the accuracy of data classification on the number of terms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>According to the data in the table, it is clear that the optimal number of terms for the test data set is 5. This quantity gave a higher percentage of correctly classified data compared to 3 terms.</p><p>Table <ref type="table" target="#tab_1">3</ref> shows the dependence of the classification accuracy on the value of the information gain. In this method the information gain serves as the stopping criterion of the algorithm, that is, when the specified value is reached, further building of the tree stops. According to the data, it is clear that the lower the information gain threshold, the more accurate and "deeper" the tree that is built.</p><p>As a part of the fuzzy decision trees research, we compare two classification methods based on fuzzy logic. The first is the algorithm of direct generation of fuzzy linguistic rules proposed in <ref type="bibr">[3]</ref>. The second is the method of fuzzy decision trees proposed in this article. Figure <ref type="figure" target="#fig_10">7</ref> gives a visual illustration of the comparison between the two methods for a sequentially growing number of terms.</p><p>Here we can observe that the classification accuracy for a sequentially growing number of terms (from 3 to 7) remains quite high. The method of fuzzy decision trees is better for medium and large numbers of terms, while the method of direct generation of fuzzy rules is better for a small number of terms.</p><p>In Figure <ref type="figure" target="#fig_11">8</ref> we can observe that the classification accuracy for a sequentially reduced training sample (from 105 cases to 45) remains quite high. The method of fuzzy decision trees is better for a medium-sized training sample, while the method of direct generation of fuzzy rules is better for a small training sample. Further research will focus on developing an algorithm based on a combination of both approaches.</p><p>Figures <ref type="figure" target="#fig_12">9</ref> and <ref type="figure" target="#fig_13">10</ref> illustrate the comparison between the two methods with a sequentially reduced training sample using T-class and S-class membership functions, respectively. The T-class membership function, also known as triangular, is specified by three parameters {a, b, c} as follows: $\mu_T(x) = \begin{cases} 0, &amp; x \le a \\ (x-a)/(b-a), &amp; a &lt; x \le b \\ (c-x)/(c-b), &amp; b &lt; x &lt; c \\ 0, &amp; x \ge c \end{cases}$. For T-class membership functions, we observe that the fuzzy decision trees method is better for a large training sample, while the method of direct generation of fuzzy rules is better for small and medium training samples. In the case of S-class membership functions, we observe the same situation. Sketches of both membership functions are given below.</p></div>
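<div xmlns="http://www.tei-c.org/ns/1.0"><p>For completeness, here is a sketch of the two membership function classes compared above, assuming the standard definitions: the triangular function for the T-class, and Zadeh's classical S-function for the S-class. The paper's exact parametrization may differ.</p><code lang="python"><![CDATA[
def t_class(x, a, b, c):
    """Triangular (T-class) membership: feet at a and c, peak at b.
    Assumes a < b < c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def s_class(x, a, b):
    """Zadeh's S-function: rises smoothly from 0 at a to 1 at b.
    Assumes a < b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2.0
    if x <= m:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

print(t_class(5.5, 4.0, 5.0, 7.0))  # 0.75
print(s_class(5.5, 4.0, 7.0))       # 0.5
]]></code></div>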
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Decision trees are successfully used to solve regression and classification problems. They are popular in the field of machine learning because decision trees build graphic models, along with text rules, that are easily interpreted by users. On the other hand, fuzzy systems can solve classification problems with inaccurate and noisy input data. The combination of decision trees and fuzzy logic makes it possible to construct intuitive graphic models for qualitative and quantitative data [2, <ref type="bibr" target="#b2">7,</ref><ref type="bibr" target="#b3">8]</ref>. This type of decision tree gives us several solutions with different degrees of belonging to a particular class.</p><p>In addition, in the course of the conducted studies, the advantage of classification using fuzzy decision trees over classical ones was revealed by comparing the percentage of correctly classified objects. Also, a direct correlation between the classification accuracy and the value of the information gain was revealed (the gain serves as a criterion for stopping further construction of the tree).</p><p>The comparison between the algorithm of direct generation of fuzzy linguistic rules and the method of fuzzy decision trees did not reveal a single best method: both show high classification accuracy under certain conditions. The proposed approach can be applied to build fuzzy neural networks <ref type="bibr" target="#b8">[13]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">References</head></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. Algorithm for constructing a fuzzy decision tree.</figDesc><graphic coords="2,104.40,508.10,386.40,113.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2.</head><label>2</label><figDesc>Figure 2. Classical (left) and fuzzy (right) decision trees.</figDesc><graphic coords="3,117.60,303.10,359.75,135.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3.</head><label>3</label><figDesc>Figure 3. Sepal Length Attribute fuzzification.</figDesc><graphic coords="3,312.95,615.35,217.87,117.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4.</head><label>4</label><figDesc>Figure 4. Sepal Width Attribute fuzzification.</figDesc><graphic coords="3,70.80,615.35,217.20,120.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 5.</head><label>5</label><figDesc>Figure 5. Petal Length Attribute fuzzification.</figDesc><graphic coords="4,70.80,99.35,212.40,115.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 6.</head><label>6</label><figDesc>Figure 6. Petal Width Attribute fuzzification.</figDesc><graphic coords="4,323.75,99.35,202.54,114.03" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 7 .</head><label>7</label><figDesc>Figure 7. Classification accuracy for sequentially growing number of terms.</figDesc><graphic coords="6,164.40,276.00,266.15,168.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 8 .</head><label>8</label><figDesc>Figure 8. Classification accuracy for sequentially reducing size of training sample.</figDesc><graphic coords="6,166.44,456.90,263.24,170.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 9.</head><label>9</label><figDesc>Figure 9. Classification accuracy for sequentially reducing size of Iris dataset training sample (T-class).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Figure 10.</head><label>10</label><figDesc>Figure 10. Classification accuracy for sequentially reducing size of Iris dataset training sample (S-class).</figDesc><graphic coords="7,166.80,218.90,261.35,172.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2.</head><label>2</label><figDesc>Dependence of the accuracy of classification of data on the number of terms.</figDesc><table><row><cell>The number of terms</cell><cell>Classification results</cell></row><row><cell>3</cell><cell>Correct = 142</cell></row><row><cell></cell><cell>Incorrect = 8</cell></row><row><cell></cell><cell>WithoutClass = 0</cell></row><row><cell></cell><cell>Percent Correct = 94.36</cell></row><row><cell>5</cell><cell>Correct = 143</cell></row><row><cell></cell><cell>Incorrect = 7</cell></row><row><cell></cell><cell>WithoutClass = 0</cell></row><row><cell></cell><cell>Percent Correct = 95.33</cell></row><row><cell>7</cell><cell>Correct = 143</cell></row><row><cell></cell><cell>Incorrect = 7</cell></row><row><cell></cell><cell>WithoutClass = 0</cell></row><row><cell></cell><cell>Percent Correct = 95.33</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3.</head><label>3</label><figDesc>Dependence of the accuracy of classification of data on the information gain.</figDesc><table><row><cell>Information gain</cell><cell>Classification results</cell></row><row><cell>0.02</cell><cell>14 leaves</cell></row><row><cell></cell><cell>Correct = 142</cell></row><row><cell></cell><cell>Incorrect = 8</cell></row><row><cell></cell><cell>WithoutClass = 0</cell></row><row><cell></cell><cell>Percent Correct = 94.67</cell></row><row><cell>0.2</cell><cell>5 leaves</cell></row><row><cell></cell><cell>Correct = 139</cell></row><row><cell></cell><cell>Incorrect = 11</cell></row><row><cell></cell><cell>WithoutClass = 0</cell></row><row><cell></cell><cell>Percent Correct = 92.67</cell></row><row><cell>0.4</cell><cell>3 leaves</cell></row><row><cell></cell><cell>Correct = 119</cell></row><row><cell></cell><cell>Incorrect = 31</cell></row><row><cell></cell><cell>WithoutClass = 0</cell></row><row><cell></cell><cell>Percent Correct = 79.33</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The work is supported by a grant from the Ministry of Education and Science of the Russian Federation within the framework of the project part of the state task, project No. 2.2327.2017 / 4.6 "Integration of knowledge representation models based on intellectual analysis of large data to support decision making in the field of software engineering."</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Fuzzy Decision Trees</title>
		<author>
			<persName><forename type="first">C</forename><surname>Janikow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Issues and Methods IEEE Transactions of Man, Systems, Cybernetics</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Bottom-up Partitioning in Fuzzy Decision Trees</title>
		<author>
			<persName><forename type="first">M</forename><surname>Faifer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C Z</forename><surname>Janikow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th International Conference of the North American Fuzzy Information Society</title>
				<meeting>the 19th International Conference of the North American Fuzzy Information Society</meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="326" to="330" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Impression analysis using fuzzy c4.5 decision tree</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tokumaru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Muranaka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. con. on Kansei engineering and emotion research</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">1: an overview Proc</title>
		<author>
			<persName><forename type="first">C</forename><surname>Janikow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">of the North American Fuzzy Information Processing Society</title>
				<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="877" to="881" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="http://archive.ics.uci.edu/ml/datasets/Iris" />
		<title level="m">UCI Machine Learning Repository: Iris Data Set</title>
				<imprint>
			<date type="published" when="2018-05-30">30.05.2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Building of fuzzy decision trees using ID3 algorithm</title>
		<author>
			<persName><forename type="first">S B</forename><surname>Begenova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T V</forename><surname>Avdeenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Physics: Conference Series</title>
		<imprint>
			<date type="published" when="1015">2018. 1015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The use of fuzzy decision trees for coffee rust warning in Brazilian crop</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Cintra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A A</forename><surname>Meira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Monard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Camargo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">H</forename><surname>Rodrigues</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. Conf. Int. Sys. Design &amp; Applications</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1347" to="1352" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A complete fuzzy decision tree technique</title>
		<author>
			<persName><forename type="first">C</forename><surname>Olaru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wehenkel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fuzzy Sets and Systems</title>
		<imprint>
			<biblScope unit="volume">138</biblScope>
			<biblScope unit="page" from="221" to="254" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Application of fuzzy neural networks to determine the type of crystal lattices observed on nanoscale images</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">P</forename><surname>Soldatova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Lezin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Lezina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kupriyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D V</forename><surname>Kirsh</surname></persName>
		</author>
		<idno type="DOI">10.18287/0134-2452-2015-39-5-787-794</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="787" to="795" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
