<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">ANN for prognosis of abdominal pain in childhood: use of fuzzy modelling for convergence estimation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">George</forename><forename type="middle">C</forename><surname>Anastassopoulos</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Lazaros</forename><forename type="middle">S</forename><surname>Iliadis</surname></persName>
						</author>
						<title level="a" type="main">ANN for prognosis of abdominal pain in childhood: use of fuzzy modelling for convergence estimation</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">5FC80624BDCE9D7EF8CFE4E488754012</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:06+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper focuses on two parallel objectives. First, it presents a series of Artificial Neural Network models capable of performing prognosis of abdominal pain in childhood; clinical medical data records have been gathered and used for this purpose. Second, it presents and applies an innovative fuzzy algebraic model capable of evaluating Artificial Neural Networks' performance <ref type="bibr" target="#b0">[1]</ref>. This model offers a flexible approach that uses fuzzy numbers, fuzzy sets and various fuzzy intensification and dilution techniques to assess neural models under different perspectives. It also produces partial and overall evaluation indices. The produced ANN models have proven to perform the classification with significant success in the testing phase on previously unseen data.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>The wide range of problems in which Artificial Neural Networks can be applied with promising results is the reason for their growth <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>. Some of the fields in which ANNs are used are: medical systems <ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref>, robotics <ref type="bibr" target="#b6">[7]</ref>, industry <ref type="bibr">[8 -11]</ref>, image processing <ref type="bibr" target="#b11">[12]</ref>, applied mathematics <ref type="bibr" target="#b12">[13]</ref>, financial analysis <ref type="bibr" target="#b13">[14]</ref>, environmental risk modelling <ref type="bibr" target="#b14">[15]</ref> and others.</p><p>Prognosis is a medical term denoting a physician's attempt to accurately estimate how a patient's disease will progress, and whether there is a chance of recovery, based on an objective set of factors that represent that situation. Drawing an inference about a patient's prognosis from complex clinical and prognostic information is a common problem in clinical medicine. The diagnosis of a disease is the outcome of a combination of clinical and laboratory examinations through medical techniques.</p><p>In this paper various ANN architectures using different learning rules, transfer functions and optimization algorithms have been tried. This research effort was motivated by the fact that reliable and timely detection of abdominal pain is essential for effective treatment of the disease and the avoidance of relapses. That is why the development of an intelligent model that can collaborate with doctors would be very useful towards the successful treatment of potential patients.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">DIAGNOSTIC FACTORS OF ABDOMINAL PAIN</head><p>Several reports have described clinical scoring systems incorporating specific elements of the history, physical examination, and laboratory studies designed to improve the diagnostic accuracy of abdominal pain <ref type="bibr" target="#b15">[16]</ref>. Nothing is guaranteed, but decision rules can predict which children are at risk for appendicitis (appendicitis is the most common surgical condition of the abdomen). One such numerically based system relies on a 6-part score: nausea (6 points), history of local RLQ pain (2 points), migration of pain (1 point), difficulty walking (1 point), rebound tenderness / pain with percussion (2 points), and absolute neutrophil count of &gt;6.75 x 10³/μL (6 points). A score &lt;5 had a sensitivity of 96.3% with a negative predictive value of 95.6% for AA.</p><p>To date, all efforts to find clinical features or laboratory tests, either alone or in combination, that are able to diagnose appendicitis with 100% sensitivity or specificity have proven futile. Also, there is only one research work <ref type="bibr" target="#b3">[4]</ref> in the literature based on ANNs that deals with abdominal pain prognosis in childhood.</p><p>The incidence of Acute Appendicitis (AA) is 4 cases per 1000 children. However, despite pediatric surgeons' best efforts, appendicitis remains the most commonly misdiagnosed surgical condition. Although diagnosis and treatment have improved, appendicitis continues to cause significant morbidity and still remains, although rarely, a cause of death. Appendicitis has a male-to-female ratio of 3:2 with a peak incidence between ages 12 and 18 years. The mean age in the pediatric population is 6-10 years. 
The lifetime risk is 8.6% for boys and 6.7% for girls.</p><p>The 15 factors that are used in routine clinical practice for the assessment of AA in childhood are: Sex, Age, Religion, Demographic data, Duration of Pain, Vomitus, Diarrhea, Anorexia, Tenderness, Rebound, Leucocytosis, Neutrophilia, Urinalysis, Temperature, Constipation. Sex (males), age (the peak of appearance of AA is in children aged 9 to 13 years), and religion (hygiene conditions, feeding attitudes, genetic predisposition) were related to a higher frequency of AA. Anorexia, vomitus, diarrhea or constipation and a slight elevation of temperature (37°C-38°C) were common manifestations of AA. Additionally, abdominal tenderness, principally in the RLQ of the abdomen, and the existence of the rebound sign are strongly related to AA. Leucocytosis (&gt;10.800 K/μl) with neutrophilia (neutrophil count &gt; 75%) is considered a significant clue for AA. Urinalysis is useful for detecting urinary tract disease, but normal findings on urinalysis are of limited diagnostic value for appendicitis.</p><p>The roles of race, ethnicity, health insurance, education, access to healthcare, and economic status in the development and treatment of appendicitis are widely debated. Cogent arguments have been made both for and against the significance of each socioeconomic or racial condition. A genetic predisposition appears operative in some cases, particularly in children in whom appendicitis develops before age 6 years. Although the disorder is uncommon in infants and the elderly, these groups have a disproportionate number of complications because of delays in diagnosis and the presence of comorbid conditions.</p><p>There are four stages of appendicitis: acute focal appendicitis, acute suppurative appendicitis, gangrenous appendicitis and perforated appendicitis. 
These distinctions are vague, and only the clinically relevant distinction of perforated (gangrenous appendicitis is included in this entity, as dead intestine functionally acts as a perforation) versus non-perforated appendicitis (acute focal and suppurative appendicitis) should be made.</p><p>The present study is based on a data set obtained from the Pediatric Surgery Clinical Information System of the University Hospital of Alexandroupolis, Greece. It consisted of 516 children's medical records. Some of these children had different stages of appendicitis and, therefore, underwent operative treatment. This data set was divided into a set of 422 records and another set of 94 records. The former was used for training of the ANN, while the latter was used for testing. A small number of data records were used as a validation set during training to avoid overfitting. Table <ref type="table" target="#tab_0">1</ref> presents the stages of appendicitis as well as the corresponding cases for each one. The 3rd column of Table <ref type="table" target="#tab_0">1</ref> depicts the coding of the possible diagnoses, as used in the ANN training and testing stages. </p></div>
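The 6-part scoring rule quoted above can be sketched as a small function. This is only an illustration of the arithmetic, not code from the paper; the function name and the boolean-argument interface are assumptions, with the point weights taken directly from the text.

```python
def appendicitis_score(nausea, rlq_pain_history, pain_migration,
                       difficulty_walking, rebound_tenderness,
                       anc_over_6_75):
    """Sum the 6-part clinical score described in the text.

    Each argument is a boolean finding; the point weights follow the
    scoring system quoted above (nausea 6, history of local RLQ pain 2,
    migration of pain 1, difficulty walking 1, rebound tenderness /
    pain with percussion 2, absolute neutrophil count > 6.75 x 10^3/uL 6).
    """
    score = 0
    score += 6 if nausea else 0
    score += 2 if rlq_pain_history else 0
    score += 1 if pain_migration else 0
    score += 1 if difficulty_walking else 0
    score += 2 if rebound_tenderness else 0
    score += 6 if anc_over_6_75 else 0
    return score
```

Per the text, a total score below 5 carried a negative predictive value of 95.6% for AA.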
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">NEURAL NETWORK DESIGN</head><p>Data were divided into two groups, the training cases (TRAC) and the testing cases (TESC). The TRAC consisted of 417 concrete medical data records and the TESC consisted of 101. Each input record was organised in a format of fifteen fields, namely sex, age, religion, area of residence, pain time period, vomit symptoms, diarrhoea, anorexia, located sensitivity, rebound, wbc, poly, general analysis of urine, body temperature, constipation. The output record contained a single field which corresponded to the potential outcome of each case.</p><p>The determination of the TRAC and TESC data sets was performed in a rather random manner. The training and testing sample size sufficient for good generalization was determined by using Widrow's rule of thumb for the LMS algorithm, which is a distribution-free, worst-case formula <ref type="bibr" target="#b1">[2]</ref>, shown in equation 1 below. W is the total number of free parameters in the network (synaptic weights and biases) and ε denotes the fraction of classification errors permitted during testing. The O notation denotes the order of the quantity enclosed within <ref type="bibr" target="#b1">[2]</ref>.</p><formula xml:id="formula_0">N = O(W/ε)<label>(1)</label></formula><p>In the case examined here, with 417 training examples used, the classification error that could be tolerated would be about 4%.</p></div>
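Widrow's rule of thumb in equation 1 can be applied in either direction. A minimal sketch with hypothetical function names, treating the O(·) as an approximate equality as the paper's 4% figure implies:

```python
def widrow_sample_size(num_free_params, error_fraction):
    # Widrow's rule of thumb (eq. 1): N = O(W / eps) training examples
    # are needed for good generalization; here treated as N ~= W / eps.
    return num_free_params / error_fraction

def tolerated_error(num_free_params, num_examples):
    # Rearranged form: the classification error fraction that can be
    # tolerated for a fixed number of training examples, eps ~= W / N.
    return num_free_params / num_examples
```

With 417 training examples, a tolerated error of about 4% corresponds to roughly 17 free parameters (17 / 417 ≈ 0.041).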
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Description of the experiments performed</head><p>During experimentation, numerous ANN architectures, learning algorithms and transfer functions were combined in an effort to obtain the optimal network. For the Tangent Hyperbolic (TanH) transfer function, the input data were normalized (divided properly) so as to fall within the acceptable range of [-3, 3], in order to avoid problems such as saturation, where an element's summation value (the sum of the inputs times the weights) exceeds the acceptable network range <ref type="bibr" target="#b16">[17]</ref>. Standard back-propagation optimization algorithms using TanH, Sigmoid or Digital Neural Network Architecture (DNNA) transfer functions, combined with the Extended Delta Bar Delta (ExtDBD) or the Quick Prop learning rules <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>, were employed. The ExtDBD is a heuristic technique reinforcing good general trends and damping oscillations <ref type="bibr" target="#b19">[20]</ref>.</p><p>Modular and radial basis function (RBF) ANNs applying the ExtDBD learning rule and the TanH transfer function were also used in an effort to determine the optimal networks. RBFs have an internal representation of hidden neurons which are radially symmetric, and the hidden layer consists of pattern units fully connected to a linear output layer <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref>.</p></div>
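The normalization into [-3, 3] mentioned above can be sketched as a simple linear rescaling. This is a hedged illustration only, since the paper says merely that the inputs were "divided properly"; the exact scheme used is not specified.

```python
def scale_to_range(values, lo=-3.0, hi=3.0):
    # Linearly rescale a list of raw input values into [lo, hi],
    # keeping TanH units away from the saturation region.
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        # Constant feature: map everything to the midpoint of the range.
        return [0.5 * (lo + hi)] * len(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
```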
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">ANN evaluation metrics applied</head><p>Traditional ANN evaluation measures such as the Root Mean Square error (RMS error), R 2 and the confusion matrix were used to validate the ensuing neural network models. The RMS error adds up the squares of the errors for each neuron in the output layer, divides by the number of neurons in the output layer to obtain an average, and then takes the square root of that average. The confusion matrix is a graphical way of measuring the network's performance during the "training" and "testing" phases. It also facilitates the correlation of the network output to the actual observed values belonging to the testing set in a visual display <ref type="bibr" target="#b16">[17]</ref>, and therefore provides a visual indication of the network's performance. A network with the optimal configuration should have the "bins" (the cells in each matrix) on the diagonal from the lower left to the upper right of the output. An important aspect of the matrix is that the value of the vertical axis in the generated histogram is the Common Mean Correlation (CMC) coefficient of the desired (d) and the actual (predicted) output (y) across the epoch.</p><p>Finally, the FUSETRESYS (Fuzzy Set Transformer Evaluation System), which constitutes an innovative ANN evaluation system, was applied, offering a more flexible approach <ref type="bibr" target="#b0">[1]</ref>.</p></div>
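The RMS error computation described above (sum the squared per-neuron errors, average over the output layer, take the square root) translates directly into code; a minimal sketch with an assumed function name:

```python
import math

def rms_error(desired, actual):
    # Sum the squared errors over the output-layer neurons, divide by
    # the number of neurons to get an average, then take the square root.
    squared = sum((d - a) ** 2 for d, a in zip(desired, actual))
    return math.sqrt(squared / len(desired))
```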
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Technical description of the FUSETRESYS ANN evaluation model</head><p>Fuzzy logic enables the performance of calculations with mathematically defined words called "linguistics" <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref><ref type="bibr" target="#b24">[25]</ref>. FUSETRESYS treats each training/testing example as a fuzzy set. It applies triangular or trapezoidal membership functions in order to determine the partial degree of convergence (PADECOV) of the ANN for each training/testing example separately. Equations 2 and 3 below represent a triangular and a trapezoidal membership function, respectively <ref type="bibr" target="#b0">[1]</ref>.</p><formula xml:id="formula_1">μ s (x;a,b,c) = max{min{(x−a)/(b−a), (c−x)/(c−b)}, 0}, a&lt;b&lt;c (2) μ s (x;a,b,c,d) = max{min{(x−a)/(b−a), 1, (d−x)/(d−c)}, 0}, a&lt;b&lt;c&lt;d<label>(3)</label></formula><p>The model can produce various overall degrees of convergence (OVDECOV) for all of the training examples by applying either fuzzy T-Norm or fuzzy S-Norm conjunction operations, depending on the optimistic or pessimistic point of view of the developer. T-Norms tend to produce lower aggregation indices, so in the case of ANN evaluation they can be considered a pessimistic approach, whereas the opposite holds for S-Norms <ref type="bibr" target="#b25">[26]</ref>. In fact, each distinct Norm evaluates the performance of an ANN under a different perspective. For example, the drastic product assigns the ANN a high OVDECOV only if it has no extreme deviations between the desired and the produced classifications during the training/testing process <ref type="bibr" target="#b0">[1]</ref>, whereas the Einstein T-Norm acts in a more averaging mode. Equations 4 and 5 below present the drastic product and the Einstein product T-Norms. 
More details on fuzzy conjunction operators can be found in <ref type="bibr" target="#b25">[26]</ref><ref type="bibr" target="#b26">[27]</ref><ref type="bibr" target="#b27">[28]</ref>.</p><formula xml:id="formula_2">μ A∩B (X) = min{μ A (X), μ B (X)} if max{μ A (X), μ B (X)} = 1, else 0 (4) μ A∩B (X) = μ A (X)μ B (X) / (2 − [μ A (X) + μ B (X) − μ A (X)μ B (X)])<label>(5)</label></formula><p>The fact that FUSETRESYS evaluates each training/testing example separately offers a clearer view of the ANN's performance. In this way the developers know whether the network performs extremely badly or well in specific cases.</p><p>Also, when there are several neurons in the output layer, the traditional approaches produce separate evaluation results for each one, whereas FUSETRESYS can produce an additive performance index (ADPERI) of the ANN. This can be done under different perspectives and under different degrees of optimism <ref type="bibr" target="#b0">[1]</ref>.</p><p>Finally, the application of fuzzy set hedges offers the "dilution" and the "intensification" options. By using the dilution approach the developer softens the membership function over the fuzzy set and weakens the membership constraints, so that a point of the universe of discourse is "truer" than it would be before <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b26">27]</ref>. On the contrary, intensification hardens the MF over the FS and strengthens the membership constraints, so that a point on the domain is "less true" than it used to be <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b26">27]</ref>. Equations 6 and 7 below correspond to the intensification and dilution functions, respectively.</p></div>
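The membership functions and T-Norms of equations 2-5 can be written out directly. A minimal sketch; the function names are assumptions, but each body follows the corresponding equation term by term.

```python
def tri_mf(x, a, b, c):
    # Triangular membership (eq. 2), assuming a < b < c:
    # max{min{(x-a)/(b-a), (c-x)/(c-b)}, 0}
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trap_mf(x, a, b, c, d):
    # Trapezoidal membership (eq. 3), assuming a < b < c < d:
    # max{min{(x-a)/(b-a), 1, (d-x)/(d-c)}, 0}
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)

def drastic_product(mu_a, mu_b):
    # Drastic product T-Norm (eq. 4): min of the memberships if either
    # one equals 1, otherwise 0 -- the pessimistic aggregation.
    return min(mu_a, mu_b) if max(mu_a, mu_b) == 1.0 else 0.0

def einstein_product(mu_a, mu_b):
    # Einstein product T-Norm (eq. 5): ab / (2 - (a + b - ab)).
    return (mu_a * mu_b) / (2.0 - (mu_a + mu_b - mu_a * mu_b))
```

An OVDECOV index is then obtained by folding one of these T-Norms over the per-example PADECOV values.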
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_3">μ A intensify (X i ) = (μ A (X i )) n (6) μ A dilute (X i ) = (μ A (X i )) 1/n (7)</formula><p>In this way the ANN can be evaluated strictly by using a "very well fit" evaluation option, or in a more relaxed way by using the "somewhat fit" option. Of course it is in the developer's hands to decide the type of the ANN's evaluation and the degree of dilution or intensification. For a more detailed description of FUSETRESYS please see <ref type="bibr" target="#b0">[1]</ref>.</p></div>
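The hedges of equations 6 and 7 are simple power transforms of a membership value; a minimal sketch (function names assumed, default n = 2 chosen for illustration):

```python
def intensify(mu, n=2):
    # "Very well fit" hedge (eq. 6): raising mu to the n-th power
    # hardens the membership function, so points become "less true".
    return mu ** n

def dilute(mu, n=2):
    # "Somewhat/partly fit" hedge (eq. 7): the n-th root softens the
    # membership function, so points become "truer".
    return mu ** (1.0 / n)
```

For any membership value strictly between 0 and 1, dilute(mu) > mu > intensify(mu), which is exactly the spread between the "Partly fit" and "Very well fit" OVDECOV columns reported later in Table 7.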
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">RESULTS AND DISCUSSION</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">ANN analysis</head><p>Several experiments were performed. The following Table <ref type="table" target="#tab_1">2</ref> presents the structure of the four most effective Back Propagation (BP) multilayer (ML) neural networks. In all of the ANN models, the classical approach for overcoming the overfitting problem was followed. More specifically, a set of validation data was provided to the algorithm in addition to the training data. The algorithm monitored the error with respect to this validation set, while using the training set to drive the gradient descent search. The number of weight-tuning iterations performed by the system was determined in each case based on the criterion of the lowest error over the validation set. Two copies of the weights are kept: one used for the ongoing training and one holding the best-performing weights found thus far.  </p></div>
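The validation-based stopping scheme described above can be sketched as follows. This is only an illustration of the idea: the callback interface (`train_step`, `val_error`) and the patience criterion are assumptions, since the paper does not state its exact stopping rule.

```python
def train_with_early_stopping(train_step, val_error, max_iters=1000,
                              patience=50):
    """Gradient-descent loop with validation-based early stopping.

    train_step(): hypothetical callback performing one weight update
                  and returning the current weights.
    val_error(w): hypothetical callback returning the error of weights
                  w on the held-out validation set.
    Keeps one copy of the current weights (inside train_step) and one
    copy of the best-performing weights seen so far.
    """
    best_weights, best_err, since_best = None, float("inf"), 0
    for _ in range(max_iters):
        weights = train_step()           # one weight-tuning iteration
        err = val_error(weights)         # monitor the validation error
        if err < best_err:
            best_weights, best_err, since_best = weights, err, 0
        else:
            since_best += 1
            if since_best >= patience:   # validation error stopped improving
                break
    return best_weights, best_err
```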
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The following Table <ref type="table" target="#tab_4">4</ref> presents the training and testing results for the most successful ML and RBF ANN using R 2 . Also, for the three most successful networks, namely 2, 3 and 4, the FUSETRESYS model was applied to determine the average degree of convergence. According to the results, the most suitable network was ML#3. The structure and the architecture of this successful network have been described in Table <ref type="table" target="#tab_1">2</ref> above. The following figure <ref type="figure" target="#fig_0">1</ref> is a graphical representation of all the PADECOV of ML#2, for each testing example. The absolute degree of convergence has the value of one. A serious effort was made towards the development of a modular ANN (MODANN) for the solution of the classification problem. The term MODANN refers to the "adaptive" mixtures of local experts (LOCEXP) proposed in <ref type="bibr" target="#b28">[29]</ref>.</p><p>They consist of a group of BP ANNs, referred to as local experts, competing to learn different aspects of a problem. A "gating ANN" controls the competition and learns to assign different parts of the data space to different networks. The LOCEXP have the same architecture but they can apply distinct learning rules or transfer functions. Also, the number of output processing elements of the gating network is equal to the number of LOCEXP used. The number of neurons in the hidden layer of the gating network should be larger than the number of output processing elements <ref type="bibr" target="#b16">[17]</ref>. Table 5 above presents the structure and the architecture of the optimal MODANN that was developed for the medical classification problem examined here. 
The performance of the developed modular network is very satisfactory, having an R 2 value of 0.9434 and a FUSETRESYS-produced average PADECOV equal to 0.9733 (using the triangular membership function) in the testing process with the first-time-seen testing data set.</p><p>The following figure <ref type="figure" target="#fig_1">2</ref> depicts the gating probabilities for the optimal MODANN. The above Table <ref type="table" target="#tab_6">6</ref> presents a small sample of the 101 distinct PADECOV values produced by FUSETRESYS. Also, the Einstein T-Norm was applied for the determination of the overall degree of convergence of the ANN. The ML#2 ANN had a very high OVDECOV index with a value of 0.98299, whereas the ML#3 ANN and the MODANN #1REF had OVDECOV indices as high as 0.97. The Drastic Product T-Norm was not applied in this research effort because it was proven unnecessary by the data in Table <ref type="table" target="#tab_5">5</ref>, where there were no serious indications of extremely bad ANN performance in any of the testing examples. </p></div>
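The gating mechanism described above (a gating network with a SoftMax output assigning a probability to each local expert, as in Table 5) can be sketched as a weighted mixture. This is a simplified single-output illustration under assumed names, not the paper's implementation:

```python
import math

def softmax(scores):
    # Convert the gating network's raw output scores into probabilities
    # that sum to 1 (numerically stabilised by subtracting the maximum).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_output(expert_outputs, gating_scores):
    # MODANN output: each local expert's prediction weighted by the
    # gating probability assigned to that expert.
    probs = softmax(gating_scores)
    return sum(p * y for p, y in zip(probs, expert_outputs))
```

During training, the same gating probabilities determine how much of the error signal each local expert receives, which is what lets the experts specialise on different parts of the data space.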
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">CONCLUSIONS</head><p>The above research has obtained six ANNs with a good level of convergence, and it has shown that there exist at least four ANNs with high performance indices for the abdominal pain classification case. Namely, the best ANNs are two ML BP ANNs, an RBF ANN and a MODANN using a referee gating network and two local experts. All of them have been described in the previous sections.</p><p>A very interesting part of the whole research effort is the application of an innovative ANN evaluation model called FUSETRESYS that uses fuzzy logic and fuzzy algebra, proposed in <ref type="bibr" target="#b0">[1]</ref>.</p><p>The new evaluation scheme has produced individual convergence indices, namely PADECOV, for the output of each single data record used in the testing phase. The worst PADECOV value equals 0.6666, which is the degree of membership of the corresponding data record to the FS "Actual output value equal to the desired value". This worst case appears exactly three times, in the same data records, for the ML#2, ML#3 and #1REF ANNs, and it shows that the classification capacity of the developed networks is not bad even in the worst cases. This conclusion becomes stronger considering the fact that the second worst PADECOV index has a value of 0.833.</p><p>If an overall ANN validation is performed, the traditional evaluation instruments agree with FUSETRESYS that the most suitable ANN is the ML BP with code #4, whereas all of the other developed ANNs have an almost equally good performance. The Einstein T-Norm produces a higher "good performance index" for the MODANN than the traditional methods.</p><p>As can be seen in Table <ref type="table" target="#tab_7">7</ref>, the OVDECOV indices have very high values for the ML#2, #1REF and ML#3 networks when a "Partly fit" validation is performed. 
There is a significant differentiation when a very strict evaluation is performed under the linguistic "Very well fit". The OVDECOV indices fall from 0.99 to 0.75 for ML#2, from 0.99 to 0.65 for #1REF and from 0.99 to 0.71 for ML#3, respectively. This is a very useful approach and it shows the actual power of FUSETRESYS, since it reveals the differentiation of the average convergence degree of the three ANNs when stricter validation methods are applied. Thus, ANNs fed with the same data records in testing, and appearing to have more or less the same performance, are very seriously differentiated when stricter convergence validation methods are performed.</p><p>The proposed ANN architecture handles the appendicitis prediction quite satisfactorily, based both on the results presented above and on the opinion of the pediatric surgeons who used these ANNs in their everyday routine clinical practice.</p><p>The innovative ANN evaluation model that was applied successfully in this research effort will be used extensively in the future, in an integrated effort to check its validity under various perspectives.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Representation of all the PADECOV of the network ML# 2</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Gating Probabilities of the MODANN with code #1Ref.</figDesc><graphic coords="4,356.46,72.00,159.72,51.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Possible diagnosis and corresponding cases.</figDesc><table><row><cell></cell><cell></cell><cell>Diagnosis</cell><cell>Coding</cell><cell>Cases</cell></row><row><cell cols="2">Normal</cell><cell>Discharge Observation</cell><cell>-2 -1</cell><cell>236 186</cell></row><row><cell></cell><cell></cell><cell>No findings</cell><cell>0</cell><cell>15</cell></row><row><cell>Operative</cell><cell>treatment</cell><cell>Focal appendicitis Phlegmonous Supurative appendicitis Gangrenous appendicitis or</cell><cell>1 2 3</cell><cell>34 29 8</cell></row><row><cell></cell><cell></cell><cell>Peritonitis</cell><cell>4</cell><cell>8</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Structure of the four most successful ML ANN</figDesc><table><row><cell>Optimization algorithm</cell><cell>Input Layer</cell><cell>Hidden sub-layer neurons</cell><cell>Second Hidden sub-layer</cell><cell>Learning Rule/Transfer Function</cell></row><row><cell>ANN ML#1. Reinforcement ANN using BackPropagation</cell><cell>15</cell><cell>7</cell><cell>7</cell><cell>Genetic Algorithm /TanH</cell></row><row><cell>ANN ML#2.</cell><cell></cell><cell></cell><cell></cell><cell>Norm-</cell></row><row><cell>Multilayer</cell><cell>15</cell><cell>9</cell><cell>0</cell><cell>Cum_Delta/</cell></row><row><cell>Backpropagation</cell><cell></cell><cell></cell><cell></cell><cell>TanH</cell></row><row><cell>ANN ML#3.</cell><cell></cell><cell></cell><cell></cell><cell>Norm-</cell></row><row><cell>Multilayer</cell><cell>15</cell><cell>9</cell><cell>9</cell><cell>Cum_Delta/</cell></row><row><cell>Backpropagation</cell><cell></cell><cell></cell><cell></cell><cell>TanH</cell></row><row><cell>ANN ML#4. Multilayer Backpropagation</cell><cell>15</cell><cell>7</cell><cell>0</cell><cell>ExtDBD/ TanH</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc></figDesc><table /><note>shows the architecture and structure of the two most successful radial basis function (RBF) ANN.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 .</head><label>3</label><figDesc>Structure of the two most successful RBF ANN</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell>Number</cell><cell></cell><cell></cell></row><row><cell>Optimization algorithm</cell><cell>Input Layer</cell><cell>Proto</cell><cell>of neurons Hidden</cell><cell>Output Layer</cell><cell>Learning Rule/Transfer function</cell></row><row><cell></cell><cell></cell><cell></cell><cell>layer</cell><cell></cell><cell></cell></row><row><cell>ANN 5.</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell>Norm</cell></row><row><cell>ANN6 Radial Basis</cell><cell>15</cell><cell>30</cell><cell>2</cell><cell>1</cell><cell>Cum_Delta /</cell></row><row><cell>Function</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4 .</head><label>4</label><figDesc>Evaluation of the most successful ML and RBF ANN</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell>Average Degree</cell></row><row><cell></cell><cell></cell><cell></cell><cell>of success in</cell></row><row><cell>ANN code</cell><cell>R 2 in Training</cell><cell>R 2 in Testing</cell><cell>Testing using FUSETRESYS</cell></row><row><cell></cell><cell></cell><cell></cell><cell>for the three</cell></row><row><cell></cell><cell></cell><cell></cell><cell>best ANN</cell></row><row><cell>1</cell><cell>0.8258</cell><cell>0.8247</cell><cell></cell></row><row><cell>2</cell><cell>0.9615</cell><cell>0.9471</cell><cell>0.9699</cell></row><row><cell>3</cell><cell>0.9721</cell><cell>0.9489</cell><cell>0.9716</cell></row><row><cell>4</cell><cell>0.9352</cell><cell>0.9588</cell><cell>0.9799</cell></row><row><cell>5</cell><cell>0.9114</cell><cell>0.9000</cell><cell></cell></row><row><cell>6</cell><cell>0.9346</cell><cell>0.9400</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5 .</head><label>5</label><figDesc>Refereed Back Propagation using Gating Networks with two Competing local experts</figDesc><table><row><cell cols="5">Refereed #1 REF ANN using Gating ANN with 2 Local Experts</cell></row><row><cell>Learning</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>rule</cell><cell>Transfer</cell><cell>Error</cell><cell>Output</cell><cell>Noise</cell></row><row><cell></cell><cell cols="3">Local Expert's #1 functions</cell><cell></cell></row><row><cell>ExtDBD</cell><cell>TanH</cell><cell>standard</cell><cell>Direct</cell><cell>Uniform</cell></row><row><cell></cell><cell cols="3">Local Expert's #2 functions</cell><cell></cell></row><row><cell>Norm-</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Cum</cell><cell>TanH</cell><cell>standard</cell><cell>Direct</cell><cell>Uniform</cell></row><row><cell>Delta</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell cols="3">Local Expert's Architecture</cell><cell></cell></row><row><cell>Input</cell><cell></cell><cell>Hidden</cell><cell></cell><cell>Output</cell></row><row><cell>neurons</cell><cell></cell><cell>neurons</cell><cell></cell><cell>neurons</cell></row><row><cell>15</cell><cell></cell><cell>6</cell><cell></cell><cell>1</cell></row><row><cell></cell><cell cols="2">Gating ANN functions</cell><cell></cell><cell></cell></row><row><cell>Learning</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>rule</cell><cell>Transfer</cell><cell>Error</cell><cell>Output</cell><cell>Noise</cell></row><row><cell>ExtDBD</cell><cell>Linear</cell><cell>Standard</cell><cell>SoftMax</cell><cell>Uniform</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 6 .</head><label>6</label><figDesc>Small sample of the PADECOV indices</figDesc><table><row><cell cols="3">PADECOV by FUSETRESYS</cell></row><row><cell>ML#2</cell><cell>ML#3</cell><cell>#1REF</cell></row><row><cell>0.83333</cell><cell>0.83333</cell><cell>1</cell></row><row><cell>0.83333</cell><cell>0.83333</cell><cell>0.833</cell></row><row><cell>1</cell><cell>1</cell><cell>1</cell></row><row><cell>0.83333</cell><cell>0.83333</cell><cell>1</cell></row><row><cell>1</cell><cell>1</cell><cell>1</cell></row><row><cell>0.83333</cell><cell>0.83333</cell><cell>1</cell></row><row><cell>0.833333</cell><cell>0.833333</cell><cell>1</cell></row><row><cell cols="2">OVDECOV Einstein</cell><cell></cell></row><row><cell>0.98299</cell><cell>0.97784</cell><cell>0.971</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>Table 7 .</head><label>7</label><figDesc>OVDECOV values when intensification and dilution of first order was applied, using the Einstein product and a triangular membership function</figDesc><table><row><cell></cell><cell>OVDECOV of ML #2</cell><cell>OVDECOV of #1REF</cell><cell>OVDECOV of ML #3</cell></row><row><cell>Dilution "Partly fit"</cell><cell>0.99972</cell><cell>0.99934</cell><cell>0.99957</cell></row><row><cell>Intensification "Very well fit"</cell><cell>0.75932</cell><cell>0.64887</cell><cell>0.71033</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGEMENTS</head><p>We would like to thank the pediatric surgeons of the Pediatric Surgery Department of the Medical School of Democritus University of Thrace for their contribution in providing access to the medical records.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">An intelligent Artificial Neural Network evaluation system using Fuzzy Set Hedges: Application in wood industry</title>
		<author>
			<persName><forename type="first">L</forename><surname>Iliadis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Volume II</title>
				<meeting>the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI)<address><addrLine>Los Alamitos, California</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="366" to="370" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Neural Networks: A comprehensive foundation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Haykin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Macmillan College Publishing Company</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Picton</surname></persName>
		</author>
		<title level="m">Neural Networks</title>
				<meeting><address><addrLine>New York, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Palgrave</publisher>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
	<note>2nd edition</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Abdominal Pain Estimation in Childhood based on Artificial Neural Network Classification</title>
		<author>
			<persName><forename type="first">D</forename><surname>Mantzaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Anastassopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Adamopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Stephanakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kambouri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gardikis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</title>
				<meeting>the 10th International Conference on Engineering Applications of Neural Networks (EANN)</meeting>
		<imprint>
			<date type="published" when="2007-08">August 2007</date>
			<biblScope unit="page" from="129" to="134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A new concept toward computer-aided medical diagnosis -A prototype implementation addressing pulmonary diseases</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">P K</forename><surname>Economou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lymberopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Karavatselou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chassomeris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Technology in Biomedicine</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="55" to="66" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The intelligent model of a patient using artificial neural networks for inhalational anaesthesia</title>
		<author>
			<persName><forename type="first">J</forename><surname>Shieh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Shi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Chin. Inst. Chem. Engrs</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="609" to="620" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Control of industrial Robot using neural network compensator</title>
		<author>
			<persName><forename type="first">V</forename><surname>Rankovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Nikolic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theoretical and Applied Mechanics</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="147" to="163" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Wood-water sorption isotherm prediction with artificial neural networks: a preliminary study</title>
		<author>
			<persName><forename type="first">S</forename><surname>Avramidis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Iliadis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Holzforschung</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="336" to="341" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Neural Network Prediction of Bending Strength and Stiffness in Western Hemlock</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mansfield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Iliadis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Avramidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Holzforschung</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="707" to="716" />
			<date type="published" when="2007">2007</date>
			<publisher>Walter De Gruyter &amp; Co Berlin</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Dynamic Neural Networks for Prediction of Disruptions in Tokamaks</title>
		<author>
			<persName><forename type="first">B</forename><surname>Cannas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fanni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montisci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Murgia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sonato</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</title>
				<meeting>the 10th International Conference on Engineering Applications of Neural Networks (EANN)<address><addrLine>Thessaloniki, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A Fuzzy intelligent Artificial Neural Network evaluation System: Application in Industry</title>
		<author>
			<persName><forename type="first">L</forename><surname>Iliadis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Spartalis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tachos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</title>
				<meeting>the 10th International Conference on Engineering Applications of Neural Networks (EANN)<address><addrLine>Thessaloniki, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="320" to="326" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Colour class identification of tracers using artificial neural networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kuhn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bordas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wunderlich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Michaelis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Thevenin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</title>
				<meeting>the 10th International Conference on Engineering Applications of Neural Networks (EANN)<address><addrLine>Thessaloniki, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Artificial Neural Networks equivalent to Fuzzy Algebra T_Norm conjunction operators</title>
		<author>
			<persName><forename type="first">L</forename><surname>Iliadis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Spartalis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings (book of extended abstracts) of ICCMSE 2007</title>
				<meeting>ICCMSE 2007<address><addrLine>USA</addrLine></address></meeting>
		<imprint>
			<publisher>American Institute of Physics</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Municipal creditworthiness modelling by clustering methods</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hajek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Olej</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</title>
				<meeting>the 10th International Conference on Engineering Applications of Neural Networks (EANN)<address><addrLine>Thessaloniki, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A decision support system applying an integrated fuzzy model for long-term forest fire risk estimation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Iliadis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Environmental Modelling and Software</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="613" to="621" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Deep Assessment of Machine Learning Techniques Using Patient Treatment in Acute Abdominal Pain in Children</title>
		<author>
			<persName><forename type="first">M</forename><surname>Blazadonakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Moustakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Charissis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence in Medicine</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="527" to="542" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Getting started</title>
		<author>
			<persName><surname>Neuralware</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">A tutorial for NeuralWorks Professional II/PLUS</title>
				<meeting><address><addrLine>Carnegie, PA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Increased rates of convergence through learning rate adaptation</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Jacobs</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="295" to="307" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Acceleration of back-propagation through learning rate and momentum adaptation</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Minai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">D</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks</title>
				<imprint>
			<date type="published" when="1990">1990</date>
			<biblScope unit="page" from="676" to="679" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Learning internal representations by error propagation</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Rumelhart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Williams</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1985">1985</date>
			<pubPlace>San Diego</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Institute for Cognitive Science ; University of California</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Report 8506</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A resource allocating network for function interpolation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Platt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="213" to="225" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Fast learning in networks of locally tuned processing units</title>
		<author>
			<persName><forename type="first">J</forename><surname>Moody</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Darken</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="281" to="294" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">The Essence of Neural Networks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Callan</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Prentice Hall</publisher>
			<pubPlace>UK</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Structural interpolation and approximation with fuzzy relations: A study in knowledge reuse</title>
		<author>
			<persName><forename type="first">W</forename><surname>Pedrycz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Studies in Fuzziness and Soft Computing</title>
		<imprint>
			<biblScope unit="volume">215</biblScope>
			<biblScope unit="page" from="65" to="77" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Fuzzy polynomial neural network: hybrid architectures of fuzzy modelling</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">J</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Pedrycz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Oh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="607" to="621" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Learning and Soft Computing</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kecman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2001">2001</date>
			<publisher>MIT Press</publisher>
			<pubPlace>London, England</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Fuzzy Modeling and Genetic Algorithms for Data Mining and Exploration</title>
		<author>
			<persName><forename type="first">E</forename><surname>Cox</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>Elsevier Science</publisher>
			<pubPlace>USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Aggregation Operators: New Trends and Applications</title>
		<author>
			<persName><forename type="first">T</forename><surname>Calvo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mayor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mesiar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Studies in Fuzziness and Soft Computing</title>
				<meeting><address><addrLine>Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Physica-Verlag</publisher>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Adaptive mixtures of local experts</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Jacobs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Jordan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Nowlan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural computation</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="79" to="87" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
