<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ANN for prognosis of abdominal pain in childhood: use of fuzzy modelling for convergence estimation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>George C. Anastassopoulos</string-name>
          <email>anasta@med.duth.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lazaros S. Iliadis</string-name>
          <email>liliadis@fmenr.duth.gr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Democritus University of Thrace, Hellenic Open University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper focuses on two parallel objectives. First, it aims at presenting a series of Artificial Neural Network (ANN) models capable of performing prognosis of abdominal pain in childhood. Clinical medical data records have been gathered and used towards this direction. Its second target is the presentation and application of an innovative fuzzy algebraic model capable of evaluating Artificial Neural Networks' performance [1]. This model offers a flexible approach that uses fuzzy numbers, fuzzy sets and various fuzzy intensification and dilution techniques to assess neural models under different perspectives. It also produces partial and overall evaluation indices. The produced ANN models have proven to perform the classification with significant success in the testing phase on first-time-seen data.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 INTRODUCTION</title>
      <p>
        The wide range of problems in which Artificial Neural
Networks (ANNs) can be used with promising results is the reason for their
growth [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. Some of the fields in which ANNs are used are medical
systems [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
        ], robotics [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], industry [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8-11</xref>
        ], image processing [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], applied mathematics [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], financial analysis [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], environmental risk modelling [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and others.
      </p>
      <p>Prognosis is a medical term denoting a physician's attempt to
accurately estimate how a patient's disease will progress, and
whether there is a chance of recovery, based on an objective set of
factors that represent the situation. Inference about the prognosis
of a patient presented with complex clinical and prognostic
information is a common problem in clinical medicine. The
diagnosis of a disease is the outcome of a combination of clinical
and laboratory examinations through medical techniques.</p>
      <p>In this paper various ANN architectures using different learning
rules, transfer functions and optimization algorithms have been
tried. This research effort was motivated by the fact that reliable
and timely detection of abdominal pain contributes to the
effective treatment of the disease and the avoidance of relapses. That is
why the development of such an intelligent model that can
collaborate with the doctors will be very useful towards the successful
treatment of potential patients.</p>
    </sec>
    <sec id="sec-3">
      <title>2 DIAGNOSTIC FACTORS OF ABDOMINAL PAIN</title>
      <p>Several reports have described clinical scoring systems
incorporating specific elements of the history, physical examination,
and laboratory studies designed to improve the diagnostic accuracy of
abdominal pain [16]. Such decision rules can predict which children are at risk for
appendicitis (appendicitis is the most common surgical condition
of the abdomen). One such numerically based system uses
a 6-part scoring system: nausea (6 points), history of local RLQ
pain (2 points), migration of pain (1 point), difficulty walking (1
point), rebound tenderness / pain with percussion (2 points), and
absolute neutrophil count of &gt;6.75 × 10³/μL (6 points). A score &lt;5
had a sensitivity of 96.3% with a negative predictive value of
95.6% for AA.</p>
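      <p>As a rough illustration only (the field names and the dictionary encoding below are hypothetical, not taken from the study's data set), the 6-part scoring system described above can be computed as:</p>

```python
# Hypothetical encoding of the 6-part appendicitis scoring system;
# point values follow the text above, field names are invented.
POINTS = {
    "nausea": 6,
    "history_rlq_pain": 2,
    "pain_migration": 1,
    "difficulty_walking": 1,
    "rebound_tenderness": 2,
    "anc_over_6_75": 6,
}

def appendicitis_score(findings):
    """Sum the points for every finding marked present in the record."""
    return sum(points for name, points in POINTS.items() if findings.get(name))

# A child with nausea, RLQ pain history and rebound tenderness scores 6+2+2:
print(appendicitis_score({"nausea": True, "history_rlq_pain": True,
                          "rebound_tenderness": True}))  # 10
```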
      <p>
        To date, all efforts to find clinical features or laboratory tests,
either alone or in combination, that are able to diagnose
appendicitis with 100% sensitivity or specificity have proven
futile. Also, there is only one research work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in bibliography
based on ANN that deals with the abdominal pain prognosis in
childhood.
      </p>
      <p>The incidence of Acute Appendicitis (AA) is 4 cases per 1000
children. However, despite pediatric surgeons' best efforts,
appendicitis remains the most commonly misdiagnosed surgical
condition. Although diagnosis and treatment have improved,
appendicitis continues to cause significant morbidity and still
remains, although rarely, a cause of death. Appendicitis has a
male-to-female ratio of 3:2, with a peak incidence between ages 12
and 18 years. The mean age in the pediatric population is 6-10
years. The lifetime risk is 8.6% for boys and 6.7% for girls.</p>
      <p>The 15 factors used in routine clinical practice for
the assessment of AA in childhood are: sex, age, religion,
demographic data, duration of pain, vomitus, diarrhea, anorexia,
tenderness, rebound, leucocytosis, neutrophilia, urinalysis,
temperature, and constipation. Sex (males), age (peak
appearance of AA in children aged 9 to 13 years), and religion
(hygiene conditions, feeding attitudes, genetic predisposition) were
associated with a higher frequency of AA. Anorexia, vomitus,
diarrhea or constipation and a slight elevation of temperature
(37°C - 38°C) were common manifestations of AA. Additionally,
abdominal tenderness, principally in the RLQ of the abdomen, and
the existence of the rebound sign are strongly related to AA.
Leucocytosis (&gt;10.800 K/μl) with neutrophilia (neutrophil count &gt;
75%) is considered a significant clue for AA. Urinalysis is
useful for detecting urinary tract disease, but normal findings on
urinalysis are of limited diagnostic value for appendicitis.</p>
      <p>The roles of race, ethnicity, health insurance, education, access to
healthcare, and economic status in the development and treatment
of appendicitis are widely debated. Cogent arguments have been
made both for and against the significance of each
socioeconomic or racial condition. A genetic predisposition
appears operative in some cases, particularly in children in whom
appendicitis develops before age 6 years. Although the disorder is
uncommon in infants and the elderly, these groups have a
disproportionate number of complications because of delays in
diagnosis and the presence of comorbid conditions.</p>
      <p>As to diagnosis, there are four stages of appendicitis:
acute focal appendicitis, acute suppurative appendicitis, gangrenous
appendicitis and perforated appendicitis. These distinctions are
vague, and only the clinically relevant distinction of perforated
(gangrenous appendicitis is included in this entity, as dead intestine
functionally acts as a perforation) versus non-perforated
appendicitis (acute focal and suppurative appendicitis) should be
made.</p>
      <p>The present study is based on a data set obtained from the
Pediatric Surgery Clinical Information System of the University
Hospital of Alexandroupolis, Greece. It consists of 516 children's
medical records. Some of these children had different stages of
appendicitis and, therefore, underwent operative treatment. This
data set was divided into a set of 422 records and another set of 94
records. The former was used for training the ANN, the
latter for testing. A small number of data records were used as a
validation set during training to avoid overfitting. Table 1
presents the stages of appendicitis as well as the corresponding
cases for each one. The 3rd column of Table 1 depicts the coding
of the possible diagnoses, as used for the ANN training and testing
stages.</p>
    </sec>
    <sec id="sec-4">
      <title>3 NEURAL NETWORK DESIGN</title>
      <p>Data were divided into two groups, the training cases (TRAC) and
the testing cases (TESC). The TRAC consisted of 417 concrete
medical data records and the TESC consisted of 101. Each input
record was organised in a format of fifteen fields, namely sex, age,
religion, area of residence, pain time period, vomit symptoms,
diarrhoea, anorexia, located sensitivity, rebound, wbc, poly,
general analysis of urine, body temperature, constipation. The
output record contained a single field which corresponded to the
potential outcome of each case.</p>
      <p>
        The TRAC and TESC data sets were determined in a rather random
manner. The training and testing sample size that would be sufficient
for good generalization was determined by using Widrow's rule of
thumb for the LMS algorithm, which is a distribution-free, worst-case
formula [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], shown in equation 1 below. W is the total number of free
parameters in the network (synaptic weights and biases) and ε
denotes the fraction of the classification errors permitted during
testing. The O notation shows the order of the quantity enclosed within
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. N = O(W / ε) (1)
      </p>
      <p>In the case examined here with 417 training examples used, the
classification error that could be tolerated would be about 4%.</p>
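      <p>Equation 1 can be sketched in code as follows (a toy illustration, assuming a hypothetical count of free parameters; not part of the original study):</p>

```python
def widrow_sample_size(free_params, error_fraction):
    """Widrow's rule of thumb for the LMS algorithm: the number of
    training examples N is of the order of W / epsilon, where W is the
    number of free parameters (weights and biases) and epsilon is the
    tolerated fraction of classification errors."""
    if not (error_fraction > 0.0 and 1.0 > error_fraction):
        raise ValueError("error_fraction must lie strictly between 0 and 1")
    return int(round(free_params / error_fraction))

# E.g. a network with 100 free parameters and a 10% tolerated error
# needs on the order of 1000 training examples:
print(widrow_sample_size(100, 0.10))  # 1000
```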
    </sec>
    <sec id="sec-5">
      <title>3.1 Description of the experiments performed</title>
      <p>
        During experimentation, numerous ANN architectures,
learning algorithms and transfer functions were combined in an
effort to obtain the optimal network. For the Tangent Hyperbolic
(TanH) transfer function the input data were normalized (divided
properly) in order to fall within the acceptable range of [-3, 3],
to avoid problems such as saturation, where an element's
summation value (the sum of the inputs times the weights) exceeds
the acceptable network range [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Standard back-propagation
optimization algorithms using TanH, or Sigmoid or Digital Neural
Network Architecture (DNNA) transfer functions, combined with
the Extended Delta Bar Delta (ExtDBD) or with the Quick Prop
learning rules [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ] were employed. The ExtDBD is a heuristic
technique reinforcing good general trends and damping oscillations
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
      <p>
        Modular and radial basis function (RBF) ANN applying the
ExtDBD learning rule and the TanH transfer function were also
used in an effort to determine the optimal networks. RBFs have an
internal representation of hidden neurons which are radially
symmetric, and the hidden layer consists of pattern units fully
connected to a linear output layer [
        <xref ref-type="bibr" rid="ref21 ref22">21, 22</xref>
        ].
      </p>
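      <p>The following sketch (illustrative only; the paper does not state its RBF parameters) shows the two-stage structure described above: radially symmetric pattern units feeding a linear output layer.</p>

```python
import math

def gaussian_rbf(x, center, width):
    """Radially symmetric pattern unit: its response depends only on
    the distance between the input vector and the unit's center."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2.0 * width ** 2))

def rbf_network(x, centers, width, weights, bias):
    """Linear output layer fully connected to the pattern units."""
    activations = [gaussian_rbf(x, c, width) for c in centers]
    return bias + sum(w * a for w, a in zip(weights, activations))

# The response of a pattern unit is maximal (1.0) at its own center:
print(gaussian_rbf([0.0, 0.0], [0.0, 0.0], 1.0))  # 1.0
```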
    </sec>
    <sec id="sec-6">
      <title>3.2 ANN evaluation metrics applied</title>
      <p>
        Traditional ANN evaluation measures like the Root Mean Square
Error (RMS error), R2 and the confusion matrix were used to
validate the ensuing neural network models. It is well known that
the RMS error adds up the squares of the errors for each neuron in
the output layer, divides by the number of neurons in the output
layer to obtain an average, and then takes the square root of that
average. The confusion matrix is a graphical way of measuring the
network’s performance during the “training” and “testing” phases.
It also facilitates the correlation of the network output to the actual
observed values that belong to the testing set in a visual display
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], and therefore provides a visual indication of the network’s
performance. A network with the optimal configuration should
have the “bins” (the cells in each matrix) on the diagonal from the
lower left to the upper right of the output. An important aspect of
the matrix is that the value of the vertical axis in the generated
histogram is the Common Mean Correlation (CMC) coefficient of
the desired (d), and the actual (predicted) output (y) across the
Epoch.
      </p>
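      <p>The two traditional measures described above can be sketched as follows (a minimal illustration, not the toolkit actually used in the study):</p>

```python
import math
from collections import Counter

def rms_error(desired, actual):
    """Sum the squared errors over the output neurons, divide by the
    number of outputs to get an average, then take the square root."""
    n = len(desired)
    return math.sqrt(sum((d - a) ** 2 for d, a in zip(desired, actual)) / n)

def confusion_counts(desired_classes, predicted_classes):
    """Count (desired, predicted) class pairs; cells on the diagonal
    correspond to correct classifications."""
    return Counter(zip(desired_classes, predicted_classes))

print(rms_error([1.0, 0.0], [0.8, 0.2]))       # ≈ 0.2
print(confusion_counts([0, 1, 1], [0, 1, 0]))  # one off-diagonal (1, 0) cell
```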
      <p>
        Finally, the FUSETRESYS (Fuzzy Set Transformer Evaluation
System) that constitutes an innovative ANN evaluation system has
been applied offering a more flexible approach [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
    </sec>
    <sec id="sec-7">
      <title>3.3 Technical description of the FUSETRESYS</title>
    </sec>
    <sec id="sec-8">
      <title>ANN evaluation model</title>
      <p>
        Fuzzy logic enables the performance of calculations with
mathematically defined words called “Linguistics” [
        <xref ref-type="bibr" rid="ref1 ref23 ref24 ref25">1, 23-25</xref>
        ].
FUSETRESYS faces each training/testing example as a Fuzzy Set.
It applies triangular or trapezoidal membership functions in order
to determine the partial degree of convergence (PADECOV) of the
ANN for each training/testing example separately. The following
equations 2 and 3 represent a triangular and a trapezoidal
membership function respectively [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>μs(x; a, b, c) = max{min{(x − a)/(b − a), (c − x)/(c − b)}, 0}, a &lt; b &lt; c (2)</p>
      <p>μs(x; a, b, c, d) = max{min{(x − a)/(b − a), 1, (d − x)/(d − c)}, 0}, a &lt; b &lt; c &lt; d (3)</p>
      <p>The model can produce various overall degrees of convergence
(OVDECOV) for all of the training examples by applying either
fuzzy T-Norm or fuzzy S-Norm conjunction operations, depending
on the optimistic or pessimistic point of view of the developer.</p>
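      <p>Equations 2 and 3 translate directly into code (an illustrative sketch; the parameter values below are arbitrary, not taken from the study):</p>

```python
def triangular_mu(x, a, b, c):
    """Equation 2, with a, b, c in increasing order: membership rises
    linearly from a to the peak at b, then falls back to 0 at c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal_mu(x, a, b, c, d):
    """Equation 3, with a, b, c, d in increasing order: a flat plateau
    of full membership between b and c."""
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)

# A PADECOV-style check: membership is 1 when the actual output hits
# the desired value (the peak/plateau) and decays linearly around it.
print(triangular_mu(0.5, 0.0, 0.5, 1.0))         # 1.0
print(trapezoidal_mu(0.25, 0.0, 0.2, 0.8, 1.0))  # 1.0
```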
      <sec id="sec-8-3">
        <p>[Residue of Table 2 (learning rule / transfer function combinations): Genetic Algorithm/TanH, NormCum_Delta/TanH, NormCum_Delta/TanH, ExtDBD/TanH.]</p>
        <sec id="sec-8-3-4">
          <p>
            T-Norms tend to produce lower aggregation indices, so in the case of
ANN evaluation they can be considered a pessimistic approach,
whereas the opposite happens with S-Norms [
            <xref ref-type="bibr" rid="ref26">26</xref>
            ]. In fact, each
distinct Norm evaluates the performance of an ANN under a
different perspective. For example, the drastic product assigns the
ANN a high OVDECOV only if it does not have extreme
deviations between the desired and the produced classifications
during the training/testing process [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ], whereas the Einstein
T-Norm acts in a more average mode. The following equations 4 and
5 present the drastic product and the Einstein product T-Norms.
More details on fuzzy conjunction operators can be found in
[26-28].
          </p>
          <p>μ(A∩B)(X) = min{μA(X), μB(X)} if max{μA(X), μB(X)} = 1, else μ(A∩B)(X) = 0 (4)</p>
          <p>μ(A∩B)(X) = [μA(X)·μB(X)] / (2 − [μA(X) + μB(X) − μA(X)·μB(X)]) (5)</p>
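          <p>For illustration, equations 4 and 5 can be implemented as follows (a sketch; aggregation over many examples would fold these operations pairwise):</p>

```python
def drastic_product(mu_a, mu_b):
    """Equation 4 (drastic product T-Norm): the minimum of the two
    memberships when one of them equals 1, otherwise 0."""
    if max(mu_a, mu_b) == 1.0:
        return min(mu_a, mu_b)
    return 0.0

def einstein_product(mu_a, mu_b):
    """Equation 5 (Einstein product T-Norm): a smoother, more
    'average' conjunction than the drastic product."""
    return (mu_a * mu_b) / (2.0 - (mu_a + mu_b - mu_a * mu_b))

# Two PADECOV values of 0.9: the drastic product is maximally
# pessimistic, the Einstein product stays close to the inputs.
print(drastic_product(0.9, 0.9))   # 0.0
print(einstein_product(0.9, 0.9))  # ≈ 0.802
```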
          <p>The fact that the FUSETRESYS evaluates each training/testing
example separately offers a clearer view of the ANN's
performance. In this way the developers know if the network
operates extremely badly or well in specific cases.</p>
          <p>
            Also when there are several neurons in the output layer, the
traditional approaches produce separate evaluation results for each
one whereas the FUSETRESYS can produce an additive
performance index (ADPERI) of the ANN. This could be done
under different perspectives and under different degrees of
optimism [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ].
          </p>
          <p>
            Finally the application of fuzzy set hedges offers the “dilution”
and the “intensification” options. In this way by using the dilution
approach the developer softens the membership function over the
fuzzy set and weakens the membership constraints so that a point
of the Universe of discourse is “truer” than it would be before [
            <xref ref-type="bibr" rid="ref1 ref27">1,
27</xref>
            ]. On the contrary the intensification hardens the MF over the FS
and strengthens the membership constraints so that a point on the
domain is “less true” than it used to be [
            <xref ref-type="bibr" rid="ref1 ref27">1, 27</xref>
            ]. The following
equations 6 and 7 correspond to the intensification and dilution
functions respectively.
μ intensify(A)(Xi) = (μA(Xi))^n (6)    μ dilute(A)(Xi) = (μA(Xi))^(1/n) (7)
          </p>
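          <p>Equations 6 and 7 can be sketched as follows (illustrative only; n = 2 is an assumed hedge exponent, not a value stated in the paper):</p>

```python
def intensify(mu, n=2.0):
    """Equation 6: raising the membership to the power n hardens the
    membership function, so points become 'less true'."""
    return mu ** n

def dilute(mu, n=2.0):
    """Equation 7: taking the n-th root softens the membership
    function, so points become 'truer' than before."""
    return mu ** (1.0 / n)

# A convergence degree of 0.81 under a strict vs. a relaxed hedge:
print(intensify(0.81))  # ≈ 0.6561 ('very well fit' style)
print(dilute(0.81))     # ≈ 0.9 ('somewhat fit' style)
```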
          <p>
            In this way the ANN can be evaluated strictly by using a “very
well fit” evaluation option, or in a more relaxed way by using the
“somewhat fit” option. Of course it is in the developer’s hands to
decide the potential type of the ANN’s evaluation and the degree
of dilution or intensification. For a more detailed description of
FUSETRESYS please see [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ].
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>4 RESULTS AND DISCUSSION</title>
    </sec>
    <sec id="sec-10">
      <title>4.1 ANN analysis</title>
      <p>
        Several experiments were performed. The following table 2
presents the structure of the four most effective Back Propagation
(BP) multilayer (ML) neural networks. In all cases of ANN
models, the classical approach for overcoming the overfitting
problem has been followed. More specifically, a set of validation
data has been provided to the algorithm in addition to the training
data. The algorithm monitored the error with respect to this
validation set, while using the training set to drive the gradient
descent search. The number of weight tuning iterations performed
by the system was determined in each case based on the criterion
of lowest error over the validation set. Two copies of the network
weights are kept: one used for ongoing training and one holding
the best performing weights found thus far. A further effort was
made towards the development of modular ANN (MODANN) for
the classification problem solution. The term MODANN refers to
the “adaptive” mixtures of local experts (LOCEXP) as proposed
in [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
      <p>They consist of a group of BP ANN referred to as local experts
competing to learn different aspects of a problem. A “gating ANN”
controls the competition and learns to assign different parts of the
data space to different networks.</p>
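      <p>The competition described above can be sketched as a softmax gating combination (an illustrative toy, not the study's actual MODANN; the scores and expert outputs below are invented):</p>

```python
import math

def softmax(scores):
    """Turn the gating network's raw scores into probabilities of
    assigning a record to each local expert."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mixture_output(gating_scores, expert_outputs):
    """Weight each local expert's output by its gating probability."""
    probs = softmax(gating_scores)
    return sum(p * y for p, y in zip(probs, expert_outputs))

# The gating net strongly prefers the first of two local experts:
print(mixture_output([2.0, 0.0], [1.0, 3.0]))  # ≈ 1.24
```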
      <p>[Figure residue: horizontal axis “Code number for each evaluated record”, tick values 1-97.]</p>
      <p>
        The LOCEXP have the same architecture but they can apply
distinct learning rules or transfer functions. Also the number of the
output processing elements of the gating network is equal to the
number of LOCEXP used. The number of the neurons in the
hidden layer of the gating network should be larger than the
number of the output processing elements [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
The above table 5 presents the structure and the architecture of the
optimal MODANN that was developed for the medical
classification problem examined here. The performance of the
developed modular network is very satisfactory, having an R2 value
of 0.9434 and a FUSETRESYS produced average PADECOV
equal to 0.9733 (using the Triangular membership function) in the
testing process using the first time seen testing data set.
      </p>
      <p>The following figure 2 depicts the gating probabilities for the
optimal MODANN.
The above Table 6 presents a small sample of the 101 distinct
PADECOV values produced by the FUSETRESYS.</p>
      <p>Also, the Einstein T-Norm was applied for the determination of
the overall degree of convergence of the ANN. The ML#2 ANN
had a very high OVDECOV index with a value of 0.98299,
whereas the ML#3 ANN and the MODANN #REF1 had
OVDECOV indices as high as 0.97. The Drastic Product T-Norm
was not applied in this research effort because it was proven
unnecessary by the data in table 5, where there were no serious
indications of extremely bad ANN performance in any of the testing
examples.</p>
    </sec>
    <sec id="sec-11">
      <title>5 CONCLUSIONS</title>
      <p>The above research has obtained six ANNs with a good level of
convergence and it has proven that there exist at least four ANNs
with high performance indices in the case of abdominal pain
classification. Namely, the best ANNs are two ML BP ANNs, an RBF
ANN and a MODANN using a referee gating network and two
local experts. All of them have been described in the previous
sections.</p>
      <p>
        A very interesting part of the whole research effort is the
application of an innovative ANN evaluation model called
FUSETRESYS that uses fuzzy logic and fuzzy algebra proposed in
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>The new evaluation scheme has produced individual
convergence indices, namely PADECOV, for the output of each
single data record used in the testing phase. The worst PADECOV
value equals 0.6666, which is the degree of membership
of each data record in the FS “Actual output value equal to the
desired value”. This worst case appears three times, in exactly the
same data records, for the ML#2, ML#3 and #1REF ANNs, and
it shows that the classification capacity of the developed networks
is not bad even in the worst cases. This conclusion becomes
stronger considering the fact that the second worst PADECOV
index has a value of 0.833.</p>
      <p>If an overall ANN validation is performed, the traditional
evaluation instruments agree with the FUSETRESYS that the most
suitable ANN is the ML BP with code# 4, whereas all of the other
developed ANNs have almost equally good performance. The
Einstein T-Norm produces a higher “good performance index” for
the MODANN than the traditional methods.</p>
      <p>As can be seen in table 7, the OVDECOV indices have very
high values for the ML#2, REF#1 and ML#3 networks when a
“Partly fit” validation is performed. There is significant
differentiation when a very strict evaluation is done under the
linguistic “Very well fit”. The OVDECOV indices fall from 0.99
to 0.75 for ML#2, from 0.99 to 0.65 for #REF and from 0.99 to
0.71 for ML#3 respectively. This is a very useful approach and it
shows the actual power of FUSETRESYS, because it reveals the
differentiation of the average convergence degree of the
three ANNs when stricter validation methods are applied. Thus,
ANNs fed with the same data records in testing, and appearing to
have more or less the same performance, are very seriously
differentiated when stricter convergence validation methods are
applied.</p>
      <p>The proposed ANN architecture addresses appendicitis prediction
quite satisfactorily, based both on the results presented above and
on the opinion of the pediatric surgeons who used these ANNs in their
everyday routine clinical practice.</p>
      <p>The innovative ANN evaluation model that was applied
successfully in this research effort will be used extensively in the
future, in an integrated effort to check its validity under various
perspectives.</p>
    </sec>
    <sec id="sec-12">
      <title>ACKNOWLEDGEMENTS</title>
      <p>We would like to thank the pediatric surgeons of the Pediatric
Surgery Department of the Medical School of Democritus University
of Thrace for their contribution in providing the medical
records.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          , '
          <article-title>An intelligent Artificial Neural Network evaluation system using Fuzzy Set Hedges: Application in wood industry'</article-title>
          ,
          <source>Proceedings of the 19th IEEE ICTAI The Annual IEEE International Conference on Tools with Artificial Intelligence. IEEE</source>
          Volume II,
          <fpage>366</fpage>
          -
          <lpage>370</lpage>
          , Los Alamitos California,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Haykin</surname>
          </string-name>
          ,
          <source>Neural Networks: A comprehensive foundation</source>
          , McMillan College Publishing Company, New York,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Picton</surname>
          </string-name>
          ,
          <source>Neural Networks (2nd edition), Palgrave</source>
          , New York, USA,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mantzaris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Anastassopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Adamopoulos</surname>
          </string-name>
          , I. Stephanakis,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kambouri</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Gardikis</surname>
          </string-name>
          , '
          <article-title>Abdominal Pain Estimation in Childhood based on Artificial Neural Network Classification'</article-title>
          ,
          <source>Proc. of the 10th International Conference on Engineering Applications of Neural Networks (EANN</source>
          <year>2007</year>
          ),
          <fpage>129</fpage>
          -
          <lpage>134</lpage>
          ,
          <year>August</year>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.P.K.</given-names>
            <surname>Economou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lymberopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Karavatselou</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Chassomeris</surname>
          </string-name>
          , '
          <article-title>A new concept toward computer-aided medical diagnosis - A prototype implementation addressing pulmonary diseases'</article-title>
          ,
          <source>IEEE Transactions on Information Technology in Biomedicine</source>
          ,
          <volume>5</volume>
          (
          <issue>1</issue>
          ):
          <fpage>55</fpage>
          -
          <lpage>66</lpage>
          , (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Shieh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fan</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Shi</surname>
          </string-name>
          , '
          <article-title>The intelligent model of a patient using artificial neural networks for inhalational anaesthesia'</article-title>
          ,
          <source>J. Chin. Inst. Chem</source>
          . Engrs.,
          <volume>33</volume>
          , No.
          <volume>6</volume>
          ,
          <fpage>609</fpage>
          -
          <lpage>620</lpage>
          , (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Rankovic</surname>
          </string-name>
          and
          <string-name>
            <surname>I. Nikolic</surname>
          </string-name>
          , '
          <article-title>Control of industrial Robot using neural network compensator', Theoretical Applications of Mech</article-title>
          .,
          <volume>32</volume>
          , No.
          <volume>2</volume>
          ,
          <fpage>147</fpage>
          -
          <lpage>163</lpage>
          , (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Avramidis</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          , '
          <article-title>Wood-water sorption isotherm prediction with artificial neural networks: a preliminary study'</article-title>
          ,
          <source>Holzforschung</source>
          ,
          <volume>59</volume>
          ,
          <fpage>336</fpage>
          -
          <lpage>341</lpage>
          , (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mansfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Avramidis</surname>
          </string-name>
          , '
          <article-title>Neural Network Prediction of Bending Strength and Stiffness in Western Hemlock'</article-title>
          ,
          <source>Holzforschung</source>
          ,
          <volume>61</volume>
          , Issue
          <issue>6</issue>
          ,
          <fpage>707</fpage>
          -
          <lpage>716</lpage>
          , Walter de Gruyter, Berlin/New York, (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.</given-names>
            <surname>Cannas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fanni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Montisci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Murgia</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Sonato</surname>
          </string-name>
          , '
          <article-title>Dynamic Neural Networks for Prediction of Disruptions in Tokamaks'</article-title>
          ,
          <source>Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</source>
          , Thessaloniki, Greece,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Spartalis</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tachos</surname>
          </string-name>
          , '
          <article-title>A Fuzzy intelligent Artificial Neural Network evaluation System: Application in Industry'</article-title>
          ,
          <source>Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</source>
          ,
          <fpage>320</fpage>
          -
          <lpage>326</lpage>
          , Thessaloniki, Greece,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kuhn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bordas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wunderlich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Michaelis</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Thevenin</surname>
          </string-name>
          , '
          <article-title>Colour class identification of tracers using artificial neural networks'</article-title>
          ,
          <source>Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</source>
          , Thessaloniki, Greece,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Spartalis</surname>
          </string-name>
          , '
          <article-title>Artificial Neural Networks equivalent to Fuzzy Algebra T-Norm conjunction operators'</article-title>
          ,
          <source>Proceedings (Book of extended abstracts) of ICCMSE 2007, American Institute of Physics (AIP)</source>
          , USA,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hajek</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Olej</surname>
          </string-name>
          , '
          <article-title>Municipal creditworthiness Modelling by clustering methods'</article-title>
          ,
          <source>Proceedings of the 10th International Conference on Engineering Applications of Neural Networks (EANN)</source>
          , Thessaloniki, Greece,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Iliadis</surname>
          </string-name>
          , '
          <article-title>A decision support system applying an integrated Fuzzy model for long - term forest fire risk estimation'</article-title>
          ,
          <source>Environmental Modelling and Software</source>
          ,
          <volume>20</volume>
          , Issue
          <issue>5</issue>
          ,
          <fpage>613</fpage>
          -
          <lpage>621</lpage>
          , (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Blazadonakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Moustakis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Charissis</surname>
          </string-name>
          , '
          <article-title>Deep Assessment of Machine Learning Techniques Using Patient Treatment in Acute Abdominal Pain in Children'</article-title>
          ,
          <source>Artificial Intelligence in Medicine</source>
          ,
          <volume>8</volume>
          ,
          <fpage>527</fpage>
          -
          <lpage>542</lpage>
          , (
          <year>1996</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Neuralware</surname>
          </string-name>
          ,
          <article-title>Getting started. A tutorial for Neuralworks Professional II/PLUS</article-title>
          , Carnegie, PA, USA,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.A.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          , '
          <article-title>Increased rates of convergence through learning rate adaptation'</article-title>
          ,
          <source>Neural Networks</source>
          ,
          <volume>1</volume>
          ,
          <fpage>295</fpage>
          -
          <lpage>307</lpage>
          , (
          <year>1988</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.A.</given-names>
            <surname>Minai</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.D.</given-names>
            <surname>Williams</surname>
          </string-name>
          , '
          <article-title>Acceleration of back-propagation through learning rate and momentum adaptation'</article-title>
          ,
          <source>International Joint Conference on Neural Networks, I</source>
          ,
          <fpage>676</fpage>
          -
          <lpage>679</lpage>
          , (
          <year>1990</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>D.E.</given-names>
            <surname>Rumelhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.E.</given-names>
            <surname>Hinton</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.J.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <article-title>Learning internal representations by error propagation</article-title>
          .
          <source>Institute for Cognitive Science Report 8506</source>
          . San Diego, University of California (
          <year>1985</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Platt</surname>
          </string-name>
          , '
          <article-title>A resource allocating network for function interpolation'</article-title>
          ,
          <source>Neural Computation</source>
          ,
          <volume>3</volume>
          ,
          <fpage>213</fpage>
          -
          <lpage>225</lpage>
          , (
          <year>1991</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.</given-names>
            <surname>Moody</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.J.</given-names>
            <surname>Darken</surname>
          </string-name>
          , '
          <article-title>Fast learning in networks of locally tuned processing units'</article-title>
          ,
          <source>Neural Computation</source>
          ,
          <volume>1</volume>
          ,
          <fpage>281</fpage>
          -
          <lpage>294</lpage>
          , (
          <year>1989</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>R.</given-names>
            <surname>Callan</surname>
          </string-name>
          ,
          <source>The Essence of Neural Networks</source>
          , Prentice Hall, UK,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>W.</given-names>
            <surname>Pedrycz</surname>
          </string-name>
          , '
          <article-title>Structural interpolation and approximation with fuzzy relations: A study in knowledge reuse'</article-title>
          ,
          <source>Journal Studies in Fuzziness and Soft Computing</source>
          ,
          <volume>215</volume>
          ,
          <fpage>65</fpage>
          -
          <lpage>77</lpage>
          , (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Pedrycz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.K.</given-names>
            <surname>Oh</surname>
          </string-name>
          , '
          <article-title>Fuzzy polynomial neural network: hybrid architectures of fuzzy modelling'</article-title>
          ,
          <source>IEEE Trans. Fuzzy Systems</source>
          .
          <volume>10</volume>
          (
          <issue>5</issue>
          ),
          <fpage>607</fpage>
          -
          <lpage>621</lpage>
          , (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kecman</surname>
          </string-name>
          ,
          <source>Learning and Soft Computing</source>
          , MIT Press, London, England,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cox</surname>
          </string-name>
          ,
          <source>Fuzzy Modeling and Genetic Algorithms for Data Mining and Exploration</source>
          , Elsevier Science, USA,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>T.</given-names>
            <surname>Calvo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mayor</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Mesiar</surname>
          </string-name>
          ,
          <source>Aggregation Operators: New Trends and Applications (Studies in Fuzziness and Soft Computing)</source>
          , Physica-Verlag, Heidelberg,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>R.A.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.I.</given-names>
            <surname>Jordan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.J.</given-names>
            <surname>Nowlan</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.E.</given-names>
            <surname>Hinton</surname>
          </string-name>
          , '
          <article-title>Adaptive mixtures of local experts'</article-title>
          ,
          <source>Neural Computation</source>
          ,
          <volume>3</volume>
          ,
          <fpage>79</fpage>
          -
          <lpage>87</lpage>
          , (
          <year>1991</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>