=Paper=
{{Paper
|id=Vol-375/paper-10
|storemode=property
|title=Improving the Accuracy of Neuro-Symbolic Rules with Case-Based Reasoning
|pdfUrl=https://ceur-ws.org/Vol-375/paper9.pdf
|volume=Vol-375
}}
==Improving the Accuracy of Neuro-Symbolic Rules with Case-Based Reasoning==
Jim Prentzas1, Ioannis Hatzilygeroudis2 and Othon Michail2
Abstract. In this paper, we present an improved approach integrating rules, neural networks and cases, compared to a previous one. The main approach integrates neurules and cases. Neurules are a kind of integrated rules that combine a symbolic (production rules) and a connectionist (adaline unit) representation. Each neurule is represented as an adaline unit. The main characteristics of neurules are that they improve the performance of symbolic rules and, in contrast to other hybrid neuro-symbolic approaches, retain the modularity of production rules and their naturalness to a large degree. In the improved approach, various types of indices are assigned to cases according to the different roles they play in neurule-based reasoning, instead of one. Thus, an enhanced knowledge representation scheme is derived, resulting in accuracy improvement. Experimental results demonstrate its effectiveness.

1 INTRODUCTION

In contrast to rule-based systems that solve problems from scratch, case-based systems use pre-stored situations (i.e., cases) to deal with similar new situations. Case-based reasoning offers some advantages compared to symbolic rules and other knowledge representation formalisms. Cases represent specific knowledge of the domain and are natural and usually easy to obtain [11], [12]. Incremental learning comes naturally to case-based reasoning: new cases can be inserted into a knowledge base without making changes to the preexisting knowledge. The more cases are available, the better the domain knowledge is represented. Therefore, the accuracy of a case-based system can be enhanced throughout its operation, as new cases become available. A negative aspect of cases compared to symbolic rules is that they do not provide concise representations of the incorporated knowledge. Also, it is not possible to represent heuristic knowledge. Furthermore, the time performance of the retrieval operations is not always the desired one.

Approaches integrating rule-based and case-based reasoning have given interesting and effective knowledge representation schemes and are becoming more and more popular in various fields [3], [13], [14], [15], [17], [18], [19]. The objective of these efforts is to derive hybrid representations that augment the positive aspects of the integrated formalisms and simultaneously minimize their negative aspects. The complementary advantages and disadvantages of rule-based and case-based reasoning are a good justification for their possible combination. The bulk of the approaches combining rule-based and case-based reasoning follow the coupling models [17]. In these models, the problem-solving (or reasoning) process is decomposed into tasks (or stages) for which different formalisms (i.e., rules or cases) are applied.

However, a more interesting approach is one integrating more than two reasoning methods towards the same objective. In [16] and [10], such an approach, integrating three reasoning schemes (namely rules, neurocomputing and case-based reasoning) in an effective way, is introduced. To this end, neurules and cases are combined. Neurules are a type of hybrid rules integrating symbolic rules with neurocomputing in a seamless way. Their main characteristic is that they retain the modularity of production rules and also their naturalness to a large degree. In that approach, on the one hand, cases are used as exceptions to neurules, filling their gaps in representing domain knowledge and, on the other hand, neurules perform indexing of the cases, facilitating their retrieval. Finally, it results in accuracy improvement.

In this paper, we enhance the above approach by employing different types of indices for the cases according to the different roles they play in neurule-based reasoning. In this way, an improved knowledge representation scheme is derived, as various types of neurules' gaps in representing domain knowledge are filled in by indexed cases. Experimental results demonstrate the effectiveness of the presented approach compared to our previous one.

The rest of the paper is organized as follows. Section 2 presents neurules, whereas Section 3 presents methods for constructing the indexing scheme of the case library. Section 4 describes the hybrid inference mechanism. Section 5 presents experimental results regarding the accuracy of the inference process. Section 6 discusses related work. Finally, Section 7 concludes.

2 NEURULES

Neurules are a type of hybrid rules integrating symbolic rules with neurocomputing, giving pre-eminence to the symbolic component. Neurocomputing is used within the symbolic framework to improve the performance of symbolic rules [7], [10]. In contrast to other hybrid approaches (e.g. [4], [5]), the constructed knowledge base retains the modularity of production rules, since it consists of autonomous units (neurules), and also retains their naturalness to a large degree, since neurules look much like symbolic rules [7], [8].
1 Technological Educational Institute of Lamia, Department of Informatics and Computer Technology, 35100 Lamia, Greece, email: dprentzas@teilam.gr.
2 University of Patras, Dept of Computer Engineering & Informatics, 26500 Patras, Greece, email: {ihatz, michailo}@ceid.upatras.gr.
Also, the inference mechanism is a tightly integrated process, which results in more efficient inferences than those of symbolic rules [7], [10]. Explanations in the form of if-then rules can be produced [9], [10].

2.1 Syntax and Semantics

The form of a neurule is depicted in Fig. 1a. Each condition Ci is assigned a number sfi, called its significance factor. Moreover, each rule itself is assigned a number sf0, called its bias factor. Internally, each neurule is considered as an adaline unit (Fig. 1b). The inputs Ci (i = 1, ..., n) of the unit are the conditions of the rule. The weights of the unit are the significance factors of the neurule and its bias is the bias factor of the neurule. Each input takes a value from the following set of discrete values: [1 (true), 0 (false), 0.5 (unknown)]. This gives the opportunity to easily distinguish between the falsity and the absence of a condition, in contrast to symbolic rules. The output D, which represents the conclusion (decision) of the rule, is calculated via the standard formulas:

    D = f(a),   a = sf0 + Σ_{i=1}^{n} sfi Ci

    f(a) = 1 if a ≥ 0, -1 otherwise

where a is the activation value and f(x) the activation function, a threshold function. Hence, the output can take one of two values ('-1', '1') representing failure and success of the rule respectively.

Fig. 1. (a) Form of a neurule (b) a neurule as an adaline unit

The general syntax of a condition Ci and the conclusion D is:

    <condition> ::= <variable> <l-predicate> <value>
    <conclusion> ::= <variable> <r-predicate> <value>

where <variable> denotes a variable, that is, a symbol representing a concept in the domain, e.g. 'sex', 'pain' etc in a medical domain. <l-predicate> denotes a symbolic or a numeric predicate. The symbolic predicates are {is, isnot}, whereas the numeric predicates are {<, >, =}. <r-predicate> can only be a symbolic predicate. <value> denotes a value. It can be a symbol or a number. The significance factor of a condition represents the significance (weight) of the condition in drawing the conclusion(s). Table 1 (Section 3) presents two example neurules, from a medical diagnosis domain.

Neurules can be constructed either from symbolic rules, thus exploiting existing symbolic rule bases, or from empirical data (i.e., training examples) (see [7] and [8] respectively). An adaline unit is initially assigned to each possible conclusion. Each unit is individually trained via the Least Mean Square (LMS) algorithm. When the training set is inseparable, special techniques are used. In that case, more than one neurule having the same conclusion are produced.

Table 1. Example neurules

NR1: (-23.9)
if patient-class is human0-20 (10.6),
   pain is continuous (10.5),
   fever is high (8.8),
   fever is medium (8.4),
   patient-class is human21-35 (6.2),
   fever is no-fever (2.7),
   ant-reaction is medium (1.1)
then disease-type is inflammation

NR2: (-13.4)
if patient-class is human21-35 (6.9),
   pain is continuous (3.2),
   joints-pain is yes (3.1),
   fever is low (1.5),
   fever is no-fever (1.5)
then disease-type is chronic-inflammation

2.2 The Neurule-Based Inference Engine

The neurule-based inference engine performs a task of classification: based on the values of the condition variables and the weighted sums of the conditions, conclusions are reached. It gives pre-eminence to symbolic reasoning, based on a backward chaining strategy [7], [10]. As soon as the initial input data is given and put in the working memory, the output neurules are considered for evaluation. One of them is selected for evaluation. Selection is based on textual order. A neurule fires if the output of the corresponding adaline unit is computed to be '1' after evaluation of its conditions. A neurule is said to be 'blocked' if the output of the corresponding adaline unit is computed to be '-1' after evaluation of its conditions.

A condition evaluates to 'true' ('1') if it matches a fact in the working memory, that is, there is a fact with the same variable, predicate and value. A condition evaluates to 'unknown' if there is a fact with the same variable, predicate and 'unknown' as its value. A condition cannot be evaluated if there is no fact in the working memory with the same variable. In this case, either a question is made to the user to provide data for the variable, in case of an input variable, or an intermediate neurule with a conclusion containing the variable is examined, in case of an intermediate variable. A condition with an input variable evaluates to 'false' ('0') if there is a fact in the working memory with the same variable, predicate and different value. A condition with an intermediate variable evaluates to 'false' if, additionally to the latter, there is no unevaluated intermediate neurule that has a conclusion with the same variable. Inference stops either when one or more output neurules are fired (success) or there is no further action (failure).

During inference, a conclusion is rejected (or not drawn) when none of the neurules containing it fires. This happens when: (i) all neurules containing the conclusion have been examined and are blocked or/and (ii) a neurule containing an alternative conclusion for the specific variable fires instead.
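To make the adaline formulas and the firing test concrete, the following is a minimal Python sketch (our own illustration, not the authors' implementation), using NR1's weights from Table 1:

```python
# Minimal sketch of evaluating a neurule as an adaline unit.
# A condition's truth value is 1 (true), 0 (false) or 0.5 (unknown).

def evaluate_neurule(bias_factor, weighted_conditions):
    """weighted_conditions: list of (significance factor, truth value) pairs.
    Returns 1 (the neurule fires) if a >= 0, else -1 (the neurule is blocked)."""
    a = bias_factor + sum(sf * c for sf, c in weighted_conditions)
    return 1 if a >= 0 else -1

# NR1 from Table 1 evaluated for case C2 (patient-class human0-20,
# pain continuous, fever high): the first three conditions are true,
# the remaining four false, so a = -23.9 + 10.6 + 10.5 + 8.8 = 6.0.
nr1_for_c2 = [(10.6, 1), (10.5, 1), (8.8, 1),
              (8.4, 0), (6.2, 0), (2.7, 0), (1.1, 0)]
print(evaluate_neurule(-23.9, nr1_for_c2))   # -> 1, i.e. NR1 fires
```

With an unknown condition contributing 0.5 of its significance factor, the same function also covers partially evaluated neurules.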
For instance, if all neurules containing the conclusion 'disease-type is inflammation' have been examined and are blocked, then this conclusion is rejected (or not drawn). If a neurule containing, e.g., the alternative conclusion 'disease-type is primary-malignant' fires, then conclusion 'disease-type is inflammation' is rejected (or not drawn), no matter whether all neurules containing as conclusion 'disease-type is inflammation' have been examined (and are blocked) or not.

3 INDEXING

Indexing concerns the organization of the available cases so that combined neurule-based and case-based reasoning can be performed. Indexed cases fill in gaps in the domain knowledge representation by neurules and during inference may assist in reaching the right conclusion. To be more specific, cases may enhance neurule-based reasoning to avoid reasoning errors by handling the following situations:
(a) Examining whether a neurule misfires. If sufficient conditions of the neurule are satisfied so that it can fire, it should be examined whether the neurule misfires for the specific facts, thus producing an incorrect conclusion.
(b) Examining whether a specific conclusion was erroneously rejected (or not drawn).

In the approach in [10], the neurules contained in the neurule base were used to index cases representing their exceptions. A case constitutes an exception to a neurule if its attribute values satisfy sufficient conditions of the neurule (so that it can fire) but the neurule's conclusion contradicts the corresponding attribute value of the case. In this approach, various types of indices are assigned to cases. More specifically, indices are assigned to cases according to the different roles they play in neurule-based reasoning and assist in filling in different types of gaps in the knowledge representation by neurules. Assigning different types of indices to cases can produce an effective approach combining symbolic rule-based with case-based reasoning [1].

In this new approach, a case may be indexed by neurules and by neurule base conclusions as well. In particular, a case may be indexed as:
(a) False positive (FP), by a neurule whose conclusion is contradicting. Such cases, as in our previous approach, represent exceptions to neurules and may assist in avoiding neurule misfirings.
(b) True positive (TP), by a neurule whose conclusion is endorsing. The attribute values of such a case satisfy sufficient conditions of the neurule (so that it can fire) and the neurule's conclusion agrees with the corresponding attribute value of the case. Such cases may assist in endorsing correct neurule firings.
(c) False negative (FN), by a conclusion erroneously rejected (or not drawn) by neurules. Such cases may assist in reaching conclusions that ought to have been drawn by neurules (and were not drawn). If neurules with alternative conclusions containing this variable were fired instead, it may also assist in avoiding neurule misfirings. 'False negative' indices are associated with conclusions and not with specific neurules, because there may be more than one neurule with the same conclusion in the neurule base.

The indexing process may take as input the following types of knowledge:
(a) Available neurules and non-indexed cases.
(b) Available symbolic rules and indexed cases. This type of knowledge concerns an available formalism of symbolic rules and indexed exception cases such as the one presented in [6].

The availability of data determines which type of knowledge is provided as input to the indexing module. If an available formalism of symbolic rules and indexed cases is presented as input, the symbolic rules are converted to neurules using the 'rules to neurules' module. The produced neurules are associated with the exception cases of the corresponding symbolic rules [10]. Exception cases are indexed as 'false positives' by neurules. Furthermore, for each case, 'true positive' and 'false negative' indices may be acquired using the same process as in type (a).

When available neurules and non-indexed cases are given as input to the indexing process, cases must be associated with neurules and neurule base conclusions. For each case, this information can be easily acquired as follows, until all intermediate and output attribute values of the case have been considered:
1. Perform neurule-based reasoning for the neurules based on the attribute values of the case.
2. If a neurule fires, check whether the value of its conclusion variable matches the corresponding attribute value of the case. If it does (doesn't), associate the case as a 'true positive' ('false positive') with this neurule.
3. Check all intermediate and final conclusions. Associate the case as a 'false negative' with each rejected (or not drawn) conclusion that ought to have been drawn based on the attribute values of the case.

To illustrate how the indexing process works, we present the following example. Suppose that we have a neurule base containing the two neurules in Table 1 and the example cases shown in Table 2 (only the most important attributes of the cases are shown; the cases also possess other attributes, not shown in Table 2). 'disease-type' is the output attribute that corresponds to the neurules' conclusion variable. Table 3 shows the types of indices associated with each case in Table 2 at the end of the indexing process.

To acquire indexing information, the input values corresponding to the attribute values of the cases are presented to the example neurules. Recall that when a neurule condition evaluates to 'true' it gets the value '1', whereas when it is false it gets '0'. For example, given the input case C2, the final weighted sum of neurule NR1 is: -23.9 + 10.6 + 10.5 + 8.8 = 6 > 0. Note that the first three conditions of NR1 evaluate to 'true', whereas the remaining four (i.e., 'fever is medium', 'fever is no-fever', 'patient-class is human21-35' and 'ant-reaction is medium') evaluate to 'false' (not contributing to the weighted sum).
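The three indexing steps can be sketched in Python as follows. This is our own minimal illustration, not the authors' code: it assumes every attribute value of a case is known (so a condition is simply true or false), and the data layout and names are ours.

```python
# Sketch of the indexing loop for input type (a): available neurules and
# non-indexed cases. Conditions are strings such as 'fever is high'.

def weighted_sum(neurule, facts):
    # Adaline weighted sum: bias factor plus the significance factors
    # of the conditions that evaluate to true (true = 1, false = 0).
    return neurule["bias"] + sum(sf for cond, sf in neurule["conditions"]
                                 if cond in facts)

def build_indices(neurules, cases):
    indices = []  # (case id, index type, indexed by)
    for case in cases:
        drawn = set()
        for nr in neurules:
            if weighted_sum(nr, case["facts"]) >= 0:   # step 1: neurule fires
                drawn.add(nr["conclusion"])
                kind = ("true positive" if nr["conclusion"] == case["outcome"]
                        else "false positive")         # step 2
                indices.append((case["id"], kind, nr["id"]))
        if case["outcome"] not in drawn:               # step 3: not drawn
            indices.append((case["id"], "false negative", case["outcome"]))
    return indices

# NR1 from Table 1 and cases C2, C3 from Table 2:
nr1 = {"id": "NR1", "bias": -23.9,
       "conclusion": "disease-type is inflammation",
       "conditions": [("patient-class is human0-20", 10.6),
                      ("pain is continuous", 10.5),
                      ("fever is high", 8.8),
                      ("fever is medium", 8.4),
                      ("patient-class is human21-35", 6.2),
                      ("fever is no-fever", 2.7),
                      ("ant-reaction is medium", 1.1)]}
c2 = {"id": "C2", "outcome": "disease-type is inflammation",
      "facts": {"patient-class is human0-20", "pain is continuous",
                "fever is high"}}
c3 = {"id": "C3", "outcome": "disease-type is inflammation",
      "facts": {"patient-class is human0-20", "pain is night",
                "fever is high"}}
print(build_indices([nr1], [c2, c3]))
```

For C2, NR1's weighted sum is 6.0 and the conclusion matches, so C2 becomes a 'true positive' of NR1; for C3, NR1 is blocked (-4.5) and its conclusion becomes a 'false negative' index, matching the corresponding rows of Table 3.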
Table 2. Example cases

Case ID | patient-class | pain       | fever    | ant-reaction | joints-pain | disease-type
C1      | human21-35    | continuous | low      | none         | yes         | chronic-inflammation
C2      | human0-20     | continuous | high     | none         | no          | inflammation
C3      | human0-20     | night      | high     | none         | no          | inflammation
C4      | human0-20     | continuous | medium   | none         | no          | inflammation
C5      | human21-35    | continuous | no-fever | medium       | yes         | chronic-inflammation
C6      | human0-20     | continuous | low      | none         | no          | chronic-inflammation

The fact that the final weighted sum is positive means that sufficient conditions of NR1 are satisfied so that it can fire. Furthermore, the corresponding output attribute value of the case matches the conclusion of NR1 and therefore C2 is associated as a 'true positive' with NR1.

Table 3. Indices assigned to the example cases in Table 2

Case ID | Type of index    | Indexed by
C1      | 'True positive'  | Neurule NR2
C2      | 'True positive'  | Neurule NR1
C3      | 'False negative' | Conclusion 'disease-type is inflammation'
C4      | 'True positive'  | Neurule NR1
C5      | 'False positive' | Neurule NR1
C5      | 'True positive'  | Neurule NR2
C6      | 'False negative' | Conclusion 'disease-type is chronic-inflammation'

Similarly, when the input values corresponding to the attribute values of cases C1 and C4 are given as input to the neurule base, sufficient conditions of neurules NR2 and NR1 respectively are satisfied so that they can fire, and the corresponding output attribute case values match their conclusions. Furthermore, when the input values corresponding to the attribute values of case C5 are given as input to the neurule base, sufficient conditions of both neurules NR1 and NR2 are satisfied so that they can fire. However, the corresponding output attribute case values match the conclusion of NR2 and contradict the conclusion of NR1. In addition, conclusion 'disease-type is inflammation' cannot be drawn when the input values corresponding to the attribute values of case C3 are given as input, because the only neurule with the corresponding conclusion (i.e., NR1) is blocked. A similar situation happens for case C6.

4 THE HYBRID INFERENCE MECHANISM

The inference mechanism combines neurule-based with case-based reasoning. The combined inference process mainly focuses on the neurules. The indexed cases are considered when: (a) sufficient conditions of a neurule are fulfilled so that it can fire, (b) all output or intermediate neurules with a specific conclusion variable are blocked and thus no final or intermediate conclusion containing this variable is drawn.

In case (a), firing of the neurule is suspended and case-based reasoning is performed for cases indexed as 'false positives' and 'true positives' by the neurule and cases indexed as 'false negatives' by alternative conclusions containing the neurule's conclusion variable. Cases indexed as 'true positives' by the neurule endorse its firing, whereas the other two sets of cases considered (i.e., 'false positives' and 'false negatives') prevent its firing. The results produced by case-based reasoning are evaluated in order to assess whether the neurule will fire or whether an alternative conclusion proposed by the retrieved case will be considered valid instead.

In case (b), the case-based module will focus on cases indexed as 'false negatives' by conclusions containing the specific (intermediate or output) variable.

The basic steps of the inference process are the following:
1. Perform neurule-based reasoning for the neurules.
2. If sufficient conditions of a neurule are fulfilled so that it can fire, then
   2.1. Perform case-based reasoning for the 'false positive' and 'true positive' cases indexed by the neurule and the 'false negative' cases associated with alternative conclusions containing the neurule's conclusion variable.
   2.2. If no case is retrieved or the best matching case is indexed as 'true positive', the neurule fires and its conclusion is inserted into the working memory.
   2.3. If the best matching case is indexed as 'false positive' or 'false negative', insert the conclusion supported by the case into the working memory and mark the neurule as 'blocked'.
3. If all intermediate neurules with a specific conclusion variable are blocked, then
   3.1. Examine all cases indexed as 'false negatives' by the corresponding intermediate conclusions, retrieve the best matching one and insert the conclusion supported by the retrieved case into the working memory.
4. If all output neurules with a specific conclusion variable are blocked, then
   4.1. Examine all cases indexed as 'false negatives' by the corresponding final conclusions, retrieve the best matching one and insert the conclusion supported by the retrieved case into the working memory.

The similarity measure between two cases ck and cl is calculated via a distance metric [1]. The best-matching case to the problem at hand is the one having the maximum similarity with (minimum distance from) the input case.
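Step 2 of the inference process can be sketched as follows. All names and the overlap similarity below are our own stand-ins for illustration; the paper computes similarity via the distance metric of [1].

```python
# Sketch of step 2 of the hybrid inference process: a neurule's sufficient
# conditions hold, so case-based reasoning decides whether it actually fires.

def similarity(case_a, case_b):
    # Fraction of shared attributes with equal values (illustrative only).
    shared = set(case_a) & set(case_b)
    return sum(case_a[k] == case_b[k] for k in shared) / max(len(shared), 1)

def resolve_firing(neurule, input_case, indexed_cases):
    """indexed_cases: (attributes, index type, supported conclusion) triples,
    i.e. step 2.1's union of the 'true positive', 'false positive' and
    'false negative' sets. Returns the conclusion to insert into memory."""
    if not indexed_cases:                      # step 2.2: no case retrieved
        return neurule["conclusion"]
    attrs, kind, conclusion = max(
        indexed_cases, key=lambda entry: similarity(entry[0], input_case))
    if kind == "true positive":                # step 2.2: firing endorsed
        return neurule["conclusion"]
    return conclusion                          # step 2.3: neurule is blocked

nr = {"conclusion": "disease-type is inflammation"}
tp_entry = ({"pain": "continuous", "fever": "high"},
            "true positive", "disease-type is inflammation")
fp_entry = ({"pain": "night", "fever": "high"},
            "false positive", "disease-type is chronic-inflammation")
print(resolve_firing(nr, {"pain": "continuous", "fever": "high"},
                     [tp_entry, fp_entry]))
```

When the best match is the 'true positive' entry, the neurule's own conclusion is returned; when a 'false positive' or 'false negative' entry is closest, the case's conclusion overrides the neurule, which is marked as blocked.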
If multiple cases have a similarity equal to the maximum one, a simple heuristic is used.

Let us now present two simple inference examples concerning the combined neurule base (Table 1) and the indexed example cases (Tables 2 and 3). Suppose that during inference sufficient conditions of neurule NR1 are satisfied so that it can fire. Firing of NR1 is suspended and the case-based reasoning process focuses on the cases contained in the union of the following sets of indexed cases:
• the set of cases indexed as 'true positives' by NR1: {C2, C4},
• the set of cases indexed as 'false positives' by NR1: {C5} and
• the set of cases indexed as 'false negatives' by alternative conclusions containing variable 'disease-type' (i.e., 'disease-type is chronic-inflammation'): {C6}.

So, in this example the case-based reasoning process focuses on the following set of indexed cases: {C2, C4} ∪ {C5} ∪ {C6} = {C2, C4, C5, C6}.

Suppose now that during inference both output neurules in the example neurule base are blocked. The case-based reasoning process will focus on the cases contained in the union of the following sets of indexed cases:
• the set of cases indexed as 'false negatives' by conclusion 'disease-type is inflammation': {C3},
• the set of cases indexed as 'false negatives' by conclusion 'disease-type is chronic-inflammation': {C6}.

Therefore, in this example the case-based reasoning process focuses on the following set of indexed cases: {C3} ∪ {C6} = {C3, C6}.

5 EXPERIMENTAL RESULTS

In this section, we present experimental results using datasets acquired from [2]. Note that there are no intermediate conclusions in these datasets. The experimental results involve evaluation of the presented approach combining neurule-based and case-based reasoning and comparison with our previous approach [10]. 75% and 25% of each dataset were used as training and testing sets respectively. Each initial training set was used to create a combined neurule base and indexed case library. For this purpose, each initial training set was randomly split into two disjoint subsets, one used to create neurules and one used to create an indexed case library. More specifically, 2/3 of each initial training set was used to create neurules by employing the 'patterns to neurules' module [8], whereas the remaining 1/3 of each initial training set constituted non-indexed cases. Both types of knowledge (i.e., neurules and non-indexed cases) were given as input to the indexing construction module presented in this paper, producing a combined neurule base and an indexed case library, which will be referred to as NBRCBR. Neurules and non-indexed cases were also used to produce a combined neurule base and an indexed case library stored according to [10], which will be referred to as NBRCBR_PREV.

Inferences were run for both NBRCBR and NBRCBR_PREV using the testing sets. Inferences from NBRCBR_PREV were performed using the inference mechanism combining neurule-based and case-based reasoning as described in [10]. Inferences from NBRCBR were performed according to the inference mechanism described in this paper. No test case was stored in the case libraries.

Table 4 presents experimental results regarding inferences from NBRCBR and NBRCBR_PREV. It presents results regarding classification accuracy of the integrated approaches and the percentage of test cases resulting in neurule-based reasoning errors that were successfully handled by case-based reasoning. Column '% FPs handled' refers to the percentage of test cases resulting in neurule misfirings (i.e., 'false positives') that were successfully handled by case-based reasoning. Column '% FNs handled' refers to the percentage of test cases resulting in having all output neurules blocked (i.e., 'false negatives') that were successfully handled by case-based reasoning. 'False negative' test cases are handled in NBRCBR_PREV by retrieving the best-matching case from the whole library of indexed cases.

Table 4. Experimental results

                          NBRCBR                                    NBRCBR_PREV
Dataset                   Accuracy  % FPs handled  % FNs handled    Accuracy  % FPs handled  % FNs handled
Car (1728 patterns)       96.04%    52.81%         64.07%           92.49%    15.51%         20.36%
Nursery (12960 patterns)  98.92%    58.68%         52.94%           97.68%    6.60%          18.82%

As can be seen from the table, the presented approach results in improved classification accuracy. Furthermore, in inferences from NBRCBR the percentages of both 'false positive' and 'false negative' test cases successfully handled are greater than the corresponding percentages in inferences from NBRCBR_PREV. Results also show that there is still room for improvement.

We also tested a nearest neighbor approach working alone on these two datasets (75% of the dataset used as case library and 25% of the dataset used as testing set), with the similarity measure presented in Section 4. The approach classified the input case to the conclusion supported by the best-matching case retrieved from the case library. Classification accuracy for the car and nursery datasets is 90.45% and 96.67% respectively. So, both integrated approaches perform better. This is due to the fact that the indexing schemes assist in focusing on specific parts of the case library.
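The stand-alone nearest-neighbour baseline can be sketched as follows. This is a toy illustration with our own names and a simple attribute-overlap count standing in for the distance metric of [1]; the actual experiments used the car and nursery datasets.

```python
# Sketch of the stand-alone nearest-neighbour baseline: classify a test
# case by the class of its best-matching stored case.

def classify_1nn(case_library, query):
    # case_library: list of (attributes dict, class label) pairs.
    def matches(case):
        # Number of query attributes with equal values in the stored case.
        return sum(case.get(k) == v for k, v in query.items())
    _, label = max(case_library, key=lambda pair: matches(pair[0]))
    return label

library = [({"pain": "continuous", "fever": "high"}, "inflammation"),
           ({"pain": "continuous", "fever": "no-fever"}, "chronic-inflammation")]
print(classify_1nn(library, {"pain": "continuous", "fever": "high"}))
# -> inflammation
```

The integrated approaches differ from this baseline only in that the indexing scheme restricts retrieval to the case subsets relevant to the current neurule or conclusion, rather than searching the whole library.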
7 CONCLUSIONS

In this paper, we present an approach integrating neurule-based and case-based reasoning that improves a previous hybrid approach [10]. Neurules are a type of hybrid rules integrating symbolic rules with neurocomputing. In contrast to other neuro-symbolic approaches, neurules retain the naturalness and modularity of symbolic rules. Integration of neurules and cases is done in order to improve the accuracy of the inference mechanism. Cases are indexed according to the roles they can play during neurule-based inference. More specifically, they are associated as 'true positives' and 'false positives' with neurules and as 'false negatives' with neurule base conclusions.

The presented approach integrates three types of knowledge representation schemes: symbolic rules, neural networks and case-based reasoning. Most hybrid intelligent systems implemented in the past usually integrate two intelligent technologies, e.g. neural networks and expert systems, neural networks and fuzzy logic, genetic algorithms and neural networks, etc. A new development that should receive interest in the future is the integration of more than two intelligent technologies, facilitating the solution of complex problems and exploiting multiple types of data sources.

References

[1] G. Agre, 'KBS Maintenance as Learning Two-Tiered Domain Representation', in M.M. Veloso, A. Aamodt (eds.), Case-Based Reasoning Research and Development, First International Conference, ICCBR-95, Proceedings, Lecture Notes in Computer Science, Vol. 1010, Springer-Verlag, 108-120, 1995.
[2] A. Asuncion, D.J. Newman, 'UCI Repository of Machine Learning Databases' [http://www.ics.uci.edu/~mlearn/MLRepository.html], Irvine, CA, University of California, School of Information and Computer Science, (2007).
[3] N. Cercone, A. An, C. Chan, 'Rule-Induction and Case-Based Reasoning: Hybrid Architectures Appear Advantageous', IEEE Transactions on Knowledge and Data Engineering, 11, 164-174, (1999).
[4] S.I. Gallant, Neural Network Learning and Expert Systems, MIT Press, 1993.
[5] A.Z. Ghalwash, 'A Recency Inference Engine for Connectionist Knowledge Bases', Applied Intelligence, 9, 201-215, (1998).
[6] A.R. Golding, P.S. Rosenbloom, 'Improving Accuracy by Combining Rule-Based and Case-Based Reasoning', Artificial Intelligence, 87, 215-254, (1996).
[7] I. Hatzilygeroudis, J. Prentzas, 'Neurules: Improving the Performance of Symbolic Rules', International Journal on AI Tools, 9, 113-130, (2000).
[8] I. Hatzilygeroudis, J. Prentzas, 'Constructing Modular Hybrid Rule Bases for Expert Systems', International Journal on AI Tools, 10, 87-105, (2001).
[9] I. Hatzilygeroudis, J. Prentzas, 'An Efficient Hybrid Rule-Based Inference Engine with Explanation Capability', Proceedings of the 14th International FLAIRS Conference, AAAI Press, 227-231, (2001).
[10] I. Hatzilygeroudis, J. Prentzas, 'Integrating (Rules, Neural Networks) and Cases for Knowledge Representation and Reasoning in Expert Systems', Expert Systems with Applications, 27, 63-75, (2004).
[11] J. Kolodner, Case-Based Reasoning, Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[12] D.B. Leake (ed.), Case-Based Reasoning: Experiences, Lessons & Future Directions, AAAI Press/MIT Press, 1996.
[13] M.R. Lee, 'An Exception Handling of Rule-Based Reasoning Using Case-Based Reasoning', Journal of Intelligent and Robotic Systems, 35, 327-338, (2002).
[14] C.R. Marling, M. Sqalli, E. Rissland, H. Munoz-Avila, D. Aha, 'Case-Based Reasoning Integrations', AI Magazine, 23, 69-86, (2002).
[15] S. Montani, R. Bellazzi, 'Supporting Decisions in Medical Applications: the Knowledge Management Perspective', International Journal of Medical Informatics, 68, 79-90, (2002).
[16] J. Prentzas, I. Hatzilygeroudis, 'Integrating Hybrid Rule-Based with Case-Based Reasoning', in S. Craw and A. Preece (eds.), Advances in Case-Based Reasoning, Proceedings of the European Conference on Case-Based Reasoning, ECCBR-2002, Lecture Notes in Artificial Intelligence, Vol. 2416, Springer-Verlag, 336-349, 2002.
[17] J. Prentzas, I. Hatzilygeroudis, 'Categorizing Approaches Combining Rule-Based and Case-Based Reasoning', Expert Systems, 24, 97-122, (2007).
[18] E.L. Rissland, D.B. Skalak, 'CABARET: Rule Interpretation in a Hybrid Architecture', International Journal of Man-Machine Studies, 34, 839-887, (1991).
[19] D. Rossille, J.-F. Laurent, A. Burgun, 'Modeling a Decision Support System for Oncology using Rule-Based and Case-Based Reasoning Methodologies', International Journal of Medical Informatics, 74, 299-306, (2005).
[20] H. Vafaie, C. Cecere, 'CORMS AI: Decision Support System for Monitoring US Maritime Environment', Proceedings of the 17th Innovative Applications of Artificial Intelligence Conference (IAAI), AAAI Press, 1499-1507, (2005).