<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Data Mining Methods for Case-Based Reasoning in Health Sciences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name><surname>Bichindaritz</surname><given-names>Isabelle</given-names></name>
          <xref ref-type="aff" rid="aff0"/>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, State University of New York Oswego</institution>
          ,
          <addr-line>NY</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>184</fpage>
      <lpage>198</lpage>
      <abstract>
        <p>Case-based reasoning (CBR) systems often resort to diverse data mining functionalities and algorithms. This article surveys examples, many from health sciences domains, mapping data mining functionalities to CBR tasks and steps, such as case mining, memory organization, case base reduction, generalized case mining, indexing, and weight mining. Data mining in CBR focuses greatly on incremental mining of memory structures and organization, with the goal of improving the performance of the retrieve, reuse, revise, and retain steps. Researchers are aiming at the ideal memory described in the theory of the dynamic memory, which follows a cognitive model, while also improving performance and accuracy in these steps. Several areas of potential cross-fertilization between CBR and data mining are also proposed.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Case-based reasoning (CBR) systems have tight connections with machine
learning and data mining as exemplified by their description in data mining
        <xref ref-type="bibr" rid="ref10">(Han et al.
2012)</xref>
        and machine learning
        <xref ref-type="bibr" rid="ref19">(Mitchell 1997)</xref>
        textbooks. They have been tagged by
machine learning researchers as lazy learners because they defer the decision of
how to generalize beyond the training set until a new target case is encountered
        <xref ref-type="bibr" rid="ref19">(Mitchell 1997)</xref>
        , as opposed to most other learners, which are tagged as eager. Even
though a large part of the inductive inferences are definitely performed at
Retrieval time in CBR
        <xref ref-type="bibr" rid="ref2">(Aha 1997)</xref>
        , mostly through sophisticated similarity evaluation,
most CBR systems also perform inductive inferences at Retain time. There is a
long tradition within this research community of studying what a memory is, and
what its components and organization should be. Indeed, the CBR methodology
focuses more on the memory part of its intelligent systems
        <xref ref-type="bibr" rid="ref27">(Schank 1982)</xref>
        than any
other artificial intelligence (AI) methodology, and this often entails learning
declarative memory structures and organization. This article proposes to review the main
data mining functionalities and how they are used in CBR systems by describing
examples of systems using them and analyzing which roles they play in the CBR
framework
        <xref ref-type="bibr" rid="ref1 ref3">(Aamodt and Plaza 1994)</xref>
        . (Copyright © 2015 for this paper by its authors. Copying permitted for private
and academic purposes. In Proceedings of the ICCBR 2015 Workshops, Frankfurt, Germany.)
The research question addressed is to determine the extent to which data mining
functionalities are being used in CBR
systems, and to illuminate possible future research collaborations between these two fields,
particularly in health sciences applications. This paper is organized as follows.
After the introduction, the second section highlights major concepts and techniques
in data mining. The third section reviews the main CBR cycle and principles. The
fourth section explains relationships between CBR and machine learning. The
following sections dive into several major data mining functionalities and how they
relate to CBR. The ninth section summarizes the findings and proposes future
directions. It is followed by the conclusion.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2 Data Mining Functionalities and Methods</title>
      <p>
        Data mining is the analysis of observational data sets to find unsuspected
relationships and to summarize the data in novel ways that are both understandable and
useful to the data owner
        <xref ref-type="bibr" rid="ref11">(Hand et al. 2001)</xref>
        . The term "data mining" is traditionally described as a misnomer; knowledge
discovery, or knowledge discovery in databases (KDD), is the preferred term. Some
functionalities are clearly well defined and researched, among which
        <xref ref-type="bibr" rid="ref10">(Han et al. 2012)</xref>
        :
• Classification / prediction: classification is a supervised data mining
method applied to datasets containing an expert labeling in the form of a
categorical attribute, called a class; when the attribute is numeric, the
method is called prediction. Examples of classifiers include neural
networks, support vector machines (SVMs), naïve Bayes, and decision trees.
• Association Mining: association mining searches for frequent itemsets in a
dataset, which can be represented as rules, as in market basket
analysis. It is an unsupervised method. The most famous algorithm in this
category is the Apriori algorithm.
• Clustering: clustering finds groups of similar objects in a dataset, which
are also dissimilar from the objects in other clusters. In addition to the
similarity-based methods like k-means, some methods use density-based
algorithms or hierarchical algorithms.
      </p>
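      <p>The Apriori algorithm named above can be sketched in a few lines. The following is a minimal, self-contained illustration; the baskets and the support threshold are invented for the example and are not taken from any system discussed here.</p>
      <preformat><![CDATA[
```python
from itertools import combinations

def apriori(transactions, min_support):
    """Mine frequent itemsets: a (k+1)-candidate is generated only from
    frequent k-itemsets (the Apriori property)."""
    transactions = [frozenset(t) for t in transactions]
    def support(itemset):
        return sum(1 for t in transactions if itemset.issubset(t)) / len(transactions)
    frequent = {}
    level = list({frozenset([item]) for t in transactions for item in t})
    while level:
        level = [c for c in level if support(c) >= min_support]
        for c in level:
            frequent[c] = support(c)
        # join step: merge pairs of frequent k-itemsets into (k+1)-candidates
        level = list({a | b for a, b in combinations(level, 2) if len(a | b) == len(a) + 1})
    return frequent

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}, {"milk", "eggs"}]
freq = apriori(baskets, min_support=0.5)
# here every single item and every pair is frequent; the triple is not
```
]]></preformat>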
      <p>Considerations for evaluating the mining results vary in these different
methods, however a set of quality measurements are traditionally associated with each,
for example accuracy or error rate for classification, and lift or confidence for
association mining.</p>
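      <p>For concreteness, the confidence and lift of a single association rule can be computed as follows; this is a toy sketch with invented baskets, and the function name is ours.</p>
      <preformat><![CDATA[
```python
def rule_metrics(transactions, antecedent, consequent):
    """Confidence P(C|A) and lift P(C|A)/P(C) for the rule A -> C."""
    n = len(transactions)
    count_a = sum(1 for t in transactions if antecedent.issubset(t))
    count_both = sum(1 for t in transactions if (antecedent | consequent).issubset(t))
    count_c = sum(1 for t in transactions if consequent.issubset(t))
    confidence = count_both / count_a
    lift = confidence / (count_c / n)   # lift > 1 indicates a positive association
    return confidence, lift

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}, {"milk", "eggs"}]
conf, lift = rule_metrics(baskets, {"bread"}, {"milk"})
# bread appears 3 times, bread-and-milk 2 times: confidence 2/3, lift (2/3)/(3/4)
```
]]></preformat>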
      <p>These core functionalities can be combined and applied to several data types
beyond the classical nominal and numeric ones, with extensions to the underlying
algorithms or completely new methods. Well researched data types
are graphs, texts, images, time series, networks, streams, etc. We refer to these
extensions as multimedia mining.</p>
      <p>Other types of functionalities, generally combined with the core ones are for
example feature selection, where the goal is to select a subset of features,
sampling, where the goal is to select a subset of input rows, and characterization,
where the goal is to provide a summary representation of a set of rows, for
example those contained in a cluster.</p>
      <p>
        Finally, the CRISP-DM methodology has been described to guide the data
mining process (see Fig. 1)
        <xref ref-type="bibr" rid="ref10">(Han et al. 2012)</xref>
        . This methodology stresses the
importance of the stages preparing for and following the actual model building stage,
notably data preparation, which deals with issues such as data consolidation, data
cleaning, data transformation, and data reduction, and can require up to 85% of all
the time dedicated to a project.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 CBR Cycle and Methods</title>
      <p>
        Case-based reasoning is a problem-solving methodology that aims at reusing
previously solved and memorized problem situations, called cases. Traditionally, its
reasoning cycle proceeds through four steps (see Fig. 2). This article will refer to the
major steps as Retrieve, Reuse, Revise, and Retain
        <xref ref-type="bibr" rid="ref1 ref3">(Aamodt and Plaza 1994)</xref>
        .
      </p>
    </sec>
    <sec id="sec-4">
      <title>4 CBR and Machine Learning</title>
      <p>CBR systems are generally classified as data mining systems because they can
perform classification or prediction tasks. From a set of data – called cases in CBR
– the classification or prediction achieved gives the case base a competency
beyond what the data provide. While CBR systems are on par with data mining systems
in such tasks as classification and prediction, there is, though, an important
difference. CBR systems start their reasoning from knowledge units, called cases, while
data mining systems most often start from raw data. This is why case mining,
which consists in mining raw data for these knowledge units called cases, is a data
mining task often used in CBR. CBR systems also belong to instance based
learning systems in the field of machine learning, defined as systems capable of
automatically improving their performance over time. Although there is much
commonality between data mining and machine learning, their definitions and goals
are different. CBR systems are problem-solving systems following a reasoning
cycle illustrated in Fig. 1. However, as long as they learn new cases in their Retain
step, they qualify as learning systems, thus belonging to machine learning
systems.</p>
      <p>For this article, we will focus on identifying which data mining functionalities
and methods are used in CBR, and what their result is in the CBR memory.</p>
      <p>[Figure: the CBR reasoning cycle, from the input problem through the Retrieve,
Reuse, Revise (repaired case, confirmed solution), and Retain (learned case) steps.]</p>
      <p>
First of all, since data mining emerged in the 1990s from scaling up machine
learning algorithms to large datasets, let us review what machine learning authors
have been saying about CBR. They consider case-based reasoning systems as
either analogical reasoning systems
        <xref ref-type="bibr" rid="ref18">(Michalski 1993)</xref>
        , or instance based learners
        <xref ref-type="bibr" rid="ref19">(Mitchell 1997)</xref>
        .
        <xref ref-type="bibr" rid="ref18">Michalski (1993)</xref>
        presents analogical inference, which is at the basis of
case-based retrieval, as a dynamic induction performed during the matching
process.
        <xref ref-type="bibr" rid="ref19">Mitchell (1997)</xref>
        refers to CBR as a kind of instance based learner. This
author labels these systems as lazy learners because they defer the decision about
how to generalize beyond the training data until each new query instance is
encountered. This allows CBR systems to not commit to a global approximation
once and for all during the training phase of machine learning, but to generalize
specifically for each target case, and therefore to fit its approximation bias, or
inductive bias, to the case at hand. He points here to the drawback of overgeneralization
that is well known for eager learners, from which instance based learners are
exempt
        <xref ref-type="bibr" rid="ref19">(Mitchell 1997)</xref>
        .
      </p>
      <p>
        These authors focus their analysis on the inferential aspects of learning in
case-based reasoning. Historically CBR systems have evolved from the early work
of Schank in the theory of the dynamic memory
        <xref ref-type="bibr" rid="ref27">(Schank 1982)</xref>
        , where this author
proposes to design intelligent systems primarily by modeling their memory. Ever
since Schank’s precursory work on natural language understanding, one of the
main goals of case-based reasoning has been to integrate as much as possible
memory and inferences for the performance of intelligent tasks. Therefore
focusing on studying how case-based reasoning systems learn, or mine, their memory
structures and organization can prove at least as fruitful as studying and
classifying them from an inference standpoint.
      </p>
      <p>
        From a memory standpoint, learning in CBR consists in the creation and
maintenance of the structures and organization in memory. It is often referred to as
case base maintenance
        <xref ref-type="bibr" rid="ref31">(Wilson and Leake 2001)</xref>
        . In the general cycle of CBR,
learning takes place within the reasoning cycle - see
        <xref ref-type="bibr" rid="ref1 ref3">(Aamodt and Plaza 1994)</xref>
        for
this classical cycle. It completely serves the reasoning, and therefore one of its
characteristics is that it is an incremental type of mining. It is possible, though, to freeze
it after a certain point in certain types of applications, but this is not the tradition
in CBR: learning is an emergent behavior of normal functioning
        <xref ref-type="bibr" rid="ref12">(Kolodner
1993)</xref>
        . When an external problem-solving source is available, CBR systems start
reasoning from an empty memory, and their reasoning capabilities stem from their
progressive learning from the cases they process.
        <xref ref-type="bibr" rid="ref1">Aamodt and Plaza (1994)</xref>
        further
state that case-based reasoning favours learning from experience. The decision to
stop learning because the system is judged competent enough is not taken from
definitive criteria. It is the consequence of individual decisions made about each
case, to keep it or not in memory depending upon its potential contribution to the
system. Thus often the decisions about each case, each structure in memory, allow
the system to evolve progressively toward states as different as ongoing learning,
in novice mode, and its termination, in expert mode. Since reasoning, and thus learning,
are directed from the memory, learning responds to a process of predicting the
conditions of case recall (or retrieval). As the theory of the dynamic memory
showed, recall and learning are closely linked
        <xref ref-type="bibr" rid="ref27">(Schank 1982)</xref>
        . Learning in
case-based reasoning reflects a disposition of the system to anticipate future situations:
the memory is directed toward the future both to avoid situations having caused a
problem and to reinforce the performance in success situations.
      </p>
      <p>
        More precisely, learning in case-based reasoning, takes the following forms:
1. Adding a case to the memory: it is at the heart of CBR systems, traditionally
one of the main phases in the reasoning cycle, and the last one: Retain
        <xref ref-type="bibr" rid="ref1 ref3">(Aamodt
and Plaza 1994)</xref>
        . It is the most primitive kind of learning, also called learning by
consolidation, or rote learning.
2. Explaining: the ability of a system to find explanations for its successes and
failures, and by generalization the ability to anticipate.
3. Choosing the indices: it consists in anticipating Retrieval, the first reasoning
step.
4. Learning memory structures: these may be learnt by generalization from cases
or be provided from the start to hold the indices for example. These learnt
memory structures can play additional roles, such as facilitating reuse or
retrieval.
5. Organizing the memory: the memory comprises a network of cases, given
memory structures, and learned memory structures, organized in efficient ways.
Flat and hierarchical memories have traditionally been described.
6. Refining cases: cases may be updated and refined based upon the CBR result.
7. Discovering knowledge or metareasoning: the knowledge at the basis of the
case-based reasoning can be refined, such as modifying the similarity measure
(weight learning), or situation assessment refinement. For example
        <xref ref-type="bibr" rid="ref8">d’Aquin et
al. (2007</xref>
        ) learn new adaptation rules through knowledge discovery.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5 Classification / Prediction and CBR</title>
      <p>
        Since CBR is often used as a classifier, other classifiers are generally used in
ensemble learning to combine the CBR expertise with other classification/prediction
algorithms. Another type of classifier combination is to use the outputs of several CBR
systems as input to another classifier, for example an SVM, applied to the task of
predicting business failure
        <xref ref-type="bibr" rid="ref14">(Li and Sun 2009)</xref>
        .
      </p>
      <p>
        Another notable class of systems is composed of those performing decision
tree induction to organize their memory. The INRECA project
        <xref ref-type="bibr" rid="ref4">(Auriol et al. 1994)</xref>
        studied how to integrate CBR and decision tree induction. Its authors propose to
preprocess the case base with a tree induction algorithm, namely a decision tree. Later
refined into an INRECA tree (see Fig. 2), which is a hybrid between a decision
tree and a k-d tree, this method allows both similarity based retrieval and decision
tree retrieval, is incremental, and speeds up the retrieval. This system was used in
biological domains among others.
      </p>
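      <p>The INRECA tree itself is not reproduced here, but the flavor of tree-based retrieval it builds on can be sketched with a plain k-d tree, which splits the case base on one attribute per level and prunes whole branches at retrieval time. All cases and queries below are invented.</p>
      <preformat><![CDATA[
```python
def build_kdtree(cases, depth=0):
    """Recursively split the (numeric) case base on alternating attributes."""
    if not cases:
        return None
    axis = depth % len(cases[0])
    cases = sorted(cases, key=lambda c: c[axis])
    mid = len(cases) // 2
    return {"case": cases[mid], "axis": axis,
            "left": build_kdtree(cases[:mid], depth + 1),
            "right": build_kdtree(cases[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbour retrieval: whole subtrees are
    skipped when they cannot contain a case closer than the current best."""
    if node is None:
        return best
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    if best is None or dist(node["case"]) < dist(best):
        best = node["case"]
    diff = query[node["axis"]] - node["case"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if diff ** 2 < dist(best):   # the far branch may still hold a closer case
        best = nearest(far, query, best)
    return best

tree = build_kdtree([(2.0, 3.0), (5.0, 4.0), (9.0, 6.0), (4.0, 7.0), (8.0, 1.0), (7.0, 2.0)])
```
]]></preformat>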
    </sec>
    <sec id="sec-6">
      <title>6 Association Mining and CBR</title>
      <p>Association mining, although it may not look closely related to CBR, can be resorted
to in several scenarios. Its main uses are for case mining and case base maintenance.</p>
      <p>
        <xref ref-type="bibr" rid="ref33">Wong et al. (2001)</xref>
        use fuzzy association rule mining to learn cases from
a web log, for future reuse through CBR.
      </p>
      <p>
        <xref ref-type="bibr" rid="ref15">Liu et al. (2008)</xref>
use frequent itemset mining to detect associations
between cases, and thus identify cases that are candidates for removal from the case
base, reducing its size (Retain step).
      </p>
    </sec>
    <sec id="sec-7">
      <title>7 Clustering and CBR</title>
      <p>
        Memory structures in CBR are foremost cases. A case is defined as a
contextualized piece of knowledge representing an experience that teaches a lesson
fundamental to achieving the goals of a reasoner
        <xref ref-type="bibr" rid="ref12">(Kolodner 1993)</xref>
        . For many systems,
cases are represented as faithfully as possible to the application domain.
Additionally, data mining methods have been applied to cases themselves, to features, and
to generalized cases. These techniques can be applied concurrently to the same
problem, or selectively. While the trend is now to use them selectively, CBR systems
will probably use these methods more and more concurrently in the near future.
      </p>
    </sec>
    <sec id="sec-8">
      <title>7.1 Case mining</title>
      <p>
        Case mining refers to the process of mining potentially large data sets for cases
        <xref ref-type="bibr" rid="ref34">(Yang and Cheng 2003)</xref>
        . Researchers have often noticed that cases simply do not
exist in electronic format, that databases do not contain well-defined cases, and
that the cases need to be created before CBR can be applied. Instead of starting
CBR with an empty case base, when large databases are available, preprocessing
these to learn cases for future CBR makes it possible to capitalize on the experience
dormant in these databases.
        <xref ref-type="bibr" rid="ref34">Yang and Cheng (2003)</xref>
        propose to learn cases by
linking several database tables through clustering and Support Vector Machines
(SVM). The approach can be applied to learning cases from electronic medical
records (EMRs).
      </p>
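      <p>Yang and Cheng's actual approach links several database tables and uses SVMs; as a much simpler sketch of the clustering half of such a pipeline, plain k-means can group similar raw records so that each cluster seeds a candidate case. The records, attribute names, and values below are invented.</p>
      <preformat><![CDATA[
```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: group similar raw records so that each resulting
    cluster can seed a candidate case."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[i].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# toy "records" (systolic blood pressure, heart rate); all values invented
records = [(120.0, 70.0), (122.0, 72.0), (118.0, 68.0),
           (160.0, 95.0), (158.0, 92.0), (162.0, 97.0)]
centroids, clusters = kmeans(records, k=2)
```
]]></preformat>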
    </sec>
    <sec id="sec-9">
      <title>7.2 Generalized case mining</title>
      <p>
        Generalized case mining refers to the process of mining databases for generalized
and/or abstract cases. Generalized cases are named in varied ways, such as
prototypical cases, abstract cases, prototypes, stereotypes, templates, classes, ossified
cases, categories, concepts, and scripts – to name the main ones
        <xref ref-type="bibr" rid="ref17">(Maximini et al.
2003)</xref>
        . Although all these terms refer to slightly different concepts, they represent
structures that have been abstracted or generalized from real cases either by the
CBR system, or by an expert. When these prototypical cases are provided by a
domain expert, this is a knowledge acquisition task. More frequently they are
learnt from actual cases. In CBR, prototypical cases are often learnt to structure
the memory. Therefore most of the prototypical cases presented here will also be
listed in the section on structured memories.
      </p>
      <p>
        In medical domains, many authors mine for prototypes, and simply refer to
induction for learning these. CHROMA
        <xref ref-type="bibr" rid="ref1 ref3">(Armengol and Plaza 1994)</xref>
        uses
induction to learn prototypes corresponding to general cases. Bellazzi et al. organize
their memory around prototypes
        <xref ref-type="bibr" rid="ref6">(Bellazzi et al. 1998)</xref>
        . The prototypes can either
have been acquired from an expert, or induced from a large case base.
        <xref ref-type="bibr" rid="ref28">Schmidt
and Gierl (1998)</xref>
        point out that prototypes are an essential knowledge structure to fill
the gap between general knowledge and cases in medical domains. The main
purpose of this prototype learning step is to guide the retrieval process and to
decrease the amount of storage by erasing redundant cases. A generalization step
becomes necessary to learn the knowledge contained in stored cases.
      </p>
      <p>
        Others specifically refer to generalization, so that their prototypes correspond
to generalized cases. For example Malek proposes to use a neural network to learn
the prototypes in memory for a classification task, such as diagnosis
        <xref ref-type="bibr" rid="ref16">(Malek
1995)</xref>
        .
        <xref ref-type="bibr" rid="ref25">Portinale and Torasso (1995)</xref>
        in ADAPTER organize their memory through
E-MOPs
        <xref ref-type="bibr" rid="ref12">(Kolodner 1993)</xref>
        learnt by generalization from cases for diagnostic
problem-solving.
        <xref ref-type="bibr" rid="ref17">Maximini et al. (2003)</xref>
        have studied the different structures induced
from cases and point out that several different terms exist, such as generalized
case, prototype, schema, script, and abstract case. The same terms do not always
correspond to the same type of entity. They define three types of cases. A point
case is what we refer to as a real or ground case. The values of all its attributes are
known. A generalized case is an arbitrary subset of the attribute space.
      </p>
      <p>[Figure: memory points of view: research, follow-up, treatment, and diagnosis.]</p>
      <p>There are two forms: the attribute-independent generalized case, in
which some attributes have been generalized (to an interval of values) or are unknown,
and the attribute-dependent generalized case, which cannot be defined from
independent subsets of its attributes.</p>
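      <p>Maximini et al.'s formal definitions are richer, but the attribute-independent form can be sketched directly: a generalized case stores one interval per attribute, and covering a point case is an interval-membership test. The attributes and values below are invented.</p>
      <preformat><![CDATA[
```python
def generalize(point_cases):
    """Attribute-independent generalized case: the smallest interval per
    attribute that covers all the given point cases."""
    return [(min(vals), max(vals)) for vals in zip(*point_cases)]

def covers(generalized_case, point_case):
    """A point case is covered when every attribute value falls in its interval."""
    return all(lo <= v <= hi for (lo, hi), v in zip(generalized_case, point_case))

# toy point cases with two numeric attributes (temperature, heart rate); invented
gen = generalize([(37.0, 80.0), (38.5, 95.0), (37.8, 88.0)])
```
]]></preformat>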
      <p>
        Finally, many authors learn concepts through conceptual clustering.
MNAOMIA
        <xref ref-type="bibr" rid="ref7">(Bichindaritz 1995)</xref>
        learns concepts and trends from cases through
conceptual clustering (see Fig. 3). Perner learns a hierarchy of classes by
hierarchical conceptual clustering, where the concepts represent clusters of prototypes
        <xref ref-type="bibr" rid="ref24">(Perner 1998)</xref>
        .
      </p>
      <p>
        <xref ref-type="bibr" rid="ref9">Dìaz-Agudo and Gonzàlez-Calero (2003)</xref>
        use formal concept analysis (FCA)
– a mathematical method from data analysis – as another induction method for
extracting knowledge from case bases, in the form of concepts. The authors point to
one notable advantage of this method, during adaptation. The FCA structure
induces dependencies among the attributes that guide the adaptation process
(Dìaz-Agudo et al. 2003).
        <xref ref-type="bibr" rid="ref21">Napoli (2010)</xref>
        stresses the important role FCA can play for
classification purposes in CBR, through learning a case hierarchy, indexing, and
information retrieval.
      </p>
    </sec>
    <sec id="sec-10">
      <title>7.3 Mining for Memory Organization</title>
      <p>Efficiency at case retrieval time is conditioned by a judicious memory
organization. Two main classes of memory are presented here: unstructured – or flat –
memories, and structured memories.</p>
    </sec>
    <sec id="sec-11">
      <title>Flat memories</title>
      <p>
        Flat memories are memories in which all cases are organized at the same level.
Retrieval in such memories processes all the cases in memory. Classical k-nearest
neighbor (kNN) retrieval is a method of choice for retrieval in flat memories. Flat
memories can also contain prototypes, but in this case the prototypical cases do
not serve as indexing structures for the cases. They can simply replace a cluster of
similar cases that has been deleted from the case base during case base
maintenance activity. They can also have been acquired from experts. Flat memories are
the memories of predilection of kNN retrieval methods
        <xref ref-type="bibr" rid="ref2">(Aha 1997)</xref>
        and of
so-called memory-based systems.
      </p>
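      <p>A minimal sketch of kNN retrieval over a flat memory follows; the case representation and weights are invented. Note that every stored case is scanned, which is precisely the cost that structured memories try to avoid.</p>
      <preformat><![CDATA[
```python
def retrieve(case_base, query, k=3, weights=None):
    """kNN retrieval over a flat memory: every stored case is compared to
    the query; no indexing structure is involved."""
    weights = weights or [1.0] * len(query)
    def distance(case):
        problem, _solution = case
        return sum(w * (a - b) ** 2 for w, a, b in zip(weights, problem, query))
    return sorted(case_base, key=distance)[:k]

# toy case base of (problem features, solution label) pairs; values invented
cases = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((5.0, 5.0), "B"), ((5.1, 4.8), "B")]
top = retrieve(cases, (1.1, 1.0), k=2)
```
]]></preformat>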
    </sec>
    <sec id="sec-12">
      <title>Structured memories</title>
      <p>Among the different structured organizations, the accumulation of generalizations
or abstractions facilitates situation assessment and the control of indexing.</p>
      <p>
        Structured memories, being dynamic, present the advantage of being declarative.
The important learning efforts in declarative learning are materialized in the
structures and the dynamic organization of their memories. In medical imaging, Perner
learns a hierarchy of classes by hierarchical conceptual clustering, where the
concepts are clusters of prototypes
        <xref ref-type="bibr" rid="ref24">(Perner 1998)</xref>
        . She notes the advantages of this
method: a case base that is more compact, and more robust (error-tolerant).
      </p>
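      <p>The retrieval gain from such a prototype organization can be sketched with a two-level memory: the query is first matched against the prototypes, and only the cases indexed under the winning prototype are then compared. This is a schematic illustration with invented numbers, not Perner's implementation.</p>
      <preformat><![CDATA[
```python
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve_via_prototypes(memory, query):
    """Two-level retrieval: match the query against the prototypes first,
    then compare only the cases indexed under the winning prototype."""
    prototype = min(memory, key=lambda p: distance(p, query))
    return min(memory[prototype], key=lambda c: distance(c, query))

# memory: prototype -> list of the cases it indexes (toy numbers)
memory = {
    (1.0, 1.0): [(0.9, 1.1), (1.2, 0.8)],
    (5.0, 5.0): [(4.8, 5.2), (5.3, 4.9)],
}
best = retrieve_via_prototypes(memory, (5.1, 5.0))
```
]]></preformat>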
      <p>
        MNAOMIA
        <xref ref-type="bibr" rid="ref7">(Bichindaritz 1995)</xref>
        proposes to use incremental concept
learning, which is a form of hierarchical clustering, to organize the memory. This
system tightly integrates data mining with CBR because it reuses the learnt structures
to answer higher level tasks such as generating hypotheses for clinical research
(see Fig. 3), as a side effect of CBR for clinical diagnosis and treatment decision
support. Therefore this system illustrates that by learning memory structures in the
form of concepts, the classical CBR classification task improves, and at the same
time the system extracts what it has learnt, thus adding a knowledge discovery
dimension to the classification tasks performed.
      </p>
      <p>
        Another important method, presented in CHROMA
        <xref ref-type="bibr" rid="ref1 ref3">(Armengol and Plaza
1994)</xref>
        , is to organize the memory as a hierarchy of objects, by subsumption.
Retrieval is then a classification in a hierarchy of objects, and functions by
substitution of values in slots. CHROMA uses its prototypes, induced from cases, to
organize its memory. The retrieval step of CBR retrieves relevant prototypes by
using subsumption in the object-oriented language NOOS to find the matching
prototypes.
      </p>
      <p>
        Many systems use personalized memory organizations structured around
several layers or networks, for example neural networks
        <xref ref-type="bibr" rid="ref16">(Malek 1995)</xref>
        .
      </p>
      <p>
        Another type of memory organization is the formal concept lattice.
Dìaz-Agudo and Gonzàlez-Calero (2003) organize the case base around Galois lattices
through formal concept analysis (FCA). The retrieval step is then a classification in a
concept hierarchy, as specified in the FCA methodology, which provides such
algorithms
        <xref ref-type="bibr" rid="ref21">(Napoli 2010)</xref>
        . The concepts can be seen as an alternate form of indexing
structure.
      </p>
      <p>
        Yet other authors take advantage of the B-tree structure underlying
databases and retrieve cases using the SQL query language over a large case base
stored in a database
        <xref ref-type="bibr" rid="ref30">(West and McDonald 2003)</xref>
        .
      </p>
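      <p>The database-supported retrieval idea can be sketched with Python's built-in sqlite3 module: a SQL query cheaply narrows the candidate set, and similarity ranking is applied only to the survivors. The schema, attribute names, and values are invented, and this is not West and McDonald's implementation.</p>
      <preformat><![CDATA[
```python
import sqlite3

# a toy case base in a relational table (schema and values invented)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cases (id INTEGER PRIMARY KEY, age REAL, bp REAL, solution TEXT)")
db.executemany("INSERT INTO cases (age, bp, solution) VALUES (?, ?, ?)",
               [(30, 120, "A"), (35, 125, "A"), (60, 160, "B"), (65, 158, "B")])

def retrieve_sql(query_age, query_bp, window=10):
    """SQL pre-selects a coarse candidate set; similarity ranks the survivors."""
    rows = db.execute("SELECT age, bp, solution FROM cases WHERE age BETWEEN ? AND ?",
                      (query_age - window, query_age + window)).fetchall()
    return min(rows, key=lambda r: (r[0] - query_age) ** 2 + (r[1] - query_bp) ** 2)

best = retrieve_sql(32, 122)
```
]]></preformat>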
    </sec>
    <sec id="sec-13">
      <title>8 Feature Selection and CBR</title>
      <p>
        Feature mining refers to the process of mining data sets for features. Many CBR
systems select the features for their cases, and/or generalize them.
        <xref ref-type="bibr" rid="ref32">Wiratunga et al.
(2004)</xref>
        notice that transforming textual documents into cases requires dimension
reduction and/or feature selection, and show that this preprocessing improves the
classification in terms of CBR accuracy and efficiency. These authors induce a
kind of decision tree called boosted decision stumps, comprised of only one level,
in order to select features, and induce rules to generalize the features. In
biomedical domains, in particular when data vary continuously, the need to abstract
features from streams of data is particularly prevalent. Other, and notable, examples
include Montani et al., who reduce their cases time series dimensions through
Discrete Fourier Transform
        <xref ref-type="bibr" rid="ref20">(Montani et al. 2004)</xref>
        , an approach adopted by other authors
for time series
        <xref ref-type="bibr" rid="ref23">(Nilsson and Funk 2004)</xref>
        . Niloofar and Jurisica propose an original
method for generalizing features. Here the generalization is an abstraction that
reduces the number of features stored in a case
        <xref ref-type="bibr" rid="ref22">(Niloofar and Jurisica 2004)</xref>
        .
Applied to the bioinformatics domain of microarrays, the system uses both
clustering techniques to group the cases into clusters containing similar cases, and
feature selection techniques.
      </p>
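      <p>The Discrete Fourier Transform reduction mentioned above can be sketched in a few lines: only the first (lowest-frequency) coefficients of the series are kept as the case's compact feature vector. The series is invented, and this is a generic sketch rather than Montani et al.'s implementation.</p>
      <preformat><![CDATA[
```python
import cmath

def dft_reduce(series, k):
    """Keep only the first k (lowest-frequency) DFT coefficients as the
    case's compact feature vector."""
    n = len(series)
    return [sum(x * cmath.exp(-2j * cmath.pi * f * t / n)
                for t, x in enumerate(series)) / n
            for f in range(k)]

# a toy periodic vital-sign stream (invented): 16 samples reduced to 4 features
series = [float(t % 4) for t in range(16)]
features = dft_reduce(series, 4)
# features[0] is the mean of the series
```
]]></preformat>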
    </sec>
    <sec id="sec-14">
      <title>9 Discussion and Future Directions</title>
      <p>In addition to the main functionalities listed above, multimedia mining extends the
algorithms to the form taken by cases and the type of their features for the same
kinds of applications previously listed.</p>
      <p>In summary, if we map the different data mining functionalities and the
CBR steps and tasks, we notice in Table 1 that the steps benefitting the most from
data mining are Retain, Data preparation and Metareasoning. This is not surprising
because these steps are the most involved in declarative knowledge learning or
updating. However the processing intensive steps such as Retrieve, Reuse and
Revise do not seem to resort to data mining besides the dynamic induction mentioned
in Section 4.</p>
      <p>
        Interesting areas to explore could be feature selection functionality for
case mining, data preparation, or metareasoning. Retrieve, Reuse, and Revise
could also explore the use of data mining. For retrieval, in addition to weight
learning already mentioned, learning similarity measures
        <xref ref-type="bibr" rid="ref29">(Stahl 2005)</xref>
        , or
improving on an existing one, would be valuable. For reuse or revise, learning adaptation
rules or revision rules or models would be highly pertinent – and some work has
started in these areas
        <xref ref-type="bibr" rid="ref5">(Badra et al. 2009)</xref>
        . These synergies could take place during
the Retain step, but also in an opportunistic fashion during the processing steps
(see Table 1).
      </p>
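      <p>The weight learning mentioned above for Retrieve can take many forms. One toy stand-in, which is not Stahl's method, is to mine weights from the case base itself by scoring how well each feature separates the solution classes; the cases and numbers below are invented.</p>
      <preformat><![CDATA[
```python
from collections import defaultdict

def mine_weights(cases):
    """Toy weight mining: score each feature by between-class separation
    divided by within-class spread, then normalize, so that discriminative
    features dominate the similarity measure."""
    by_class = defaultdict(list)
    for problem, solution in cases:
        by_class[solution].append(problem)
    groups = list(by_class.values())
    weights = []
    for f in range(len(cases[0][0])):
        means = [sum(p[f] for p in g) / len(g) for g in groups]
        overall = sum(means) / len(means)
        between = sum((m - overall) ** 2 for m in means)
        within = sum((p[f] - m) ** 2 for g, m in zip(groups, means) for p in g) + 1e-9
        weights.append(between / within)
    total = sum(weights)
    return [w / total for w in weights]

# toy case base: feature 0 separates the classes, feature 1 is noise (invented)
cases = [((1.0, 5.0), "A"), ((1.1, 9.0), "A"), ((3.0, 5.2), "B"), ((2.9, 8.8), "B")]
w = mine_weights(cases)
```
]]></preformat>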
      <p>We can also foresee such synergies with Big Data for the processing of
large datasets in distributed main memory that can make efficient use of data
mining during processing on a larger scale. It is therefore very important for CBR
researchers and professionals to gain expertise in data mining advances and their
applicability to CBR.</p>
      <p>CBR research focuses mostly on the model-building stage of CRISP-DM.
Other aspects of the CRISP-DM methodology would also be interesting for CBR
synergies, for example data understanding, data preparation, testing,
evaluation, and deployment, to make this methodology
more robust for fielded applications.</p>
    </sec>
    <sec id="sec-10">
      <title>10 Conclusion</title>
      <p>
CBR systems make efficient use of most data mining tasks defined for descriptive
modeling. Among the main ones encountered in biomedical domains are
cluster analysis, rule induction, hierarchical cluster analysis, and decision tree
induction. The motivations for performing an incremental type of data mining
during CBR are severalfold, and their efficiency has been measured to validate the
approach. The main motivations are the following:</p>
      <list list-type="bullet">
        <list-item><p>Increase the efficiency of retrieval primarily, but also of the reuse, revise, and retain steps.</p></list-item>
        <list-item><p>Increase robustness and tolerance to noise.</p></list-item>
        <list-item><p>Increase reasoning accuracy and effectiveness.</p></list-item>
        <list-item><p>Reduce storage requirements.</p></list-item>
        <list-item><p>Follow a cognitive model.</p></list-item>
        <list-item><p>Add functionality, such as a synthetic task like generating new research hypotheses as a side effect of normal CBR operation.</p></list-item>
        <list-item><p>Perform metareasoning, such as knowledge discovery to learn new adaptation rules.</p></list-item>
      </list>
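<p>A minimal sketch of the failure-driven, incremental retention policy behind several of these motivations (hypothetical code, in the spirit of instance-based learners rather than any specific system from the text): a new case is stored only when the current case base fails to solve it, which bounds memory growth while preserving competence.</p>

```python
# Hypothetical sketch of failure-driven case retention.
# A case is a dict with "problem" and "solution" entries.

def retrieve(case_base, query, similarity):
    """Return the most similar stored case, or None if the base is empty."""
    best, best_sim = None, -1.0
    for case in case_base:
        s = similarity(case["problem"], query)
        if s > best_sim:
            best, best_sim = case, s
    return best

def retain_if_needed(case_base, query, true_solution, similarity):
    """Retain the new case only if retrieval would have failed on it."""
    nearest = retrieve(case_base, query, similarity)
    if nearest is None or nearest["solution"] != true_solution:
        case_base.append({"problem": query, "solution": true_solution})
    return case_base
```

<p>The design choice is the one argued for above: learning happens when a need arises, not unconditionally at every Retain.</p>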
      <p>The memory organization maps directly onto the retrieval method used. For
example, generalized cases and the like serve both as indexing structures and as
organizational structures. We can see here a direct mapping with the theory of the
dynamic memory, which constantly influences the CBR approach. The general
idea is that the learned memory structures and organizations condition which
inferences will be performed, and how. This is a major difference from database
approaches, which concentrate only on retrieval, and also from data mining
approaches, which concentrate only on the structures learned and not on how they
will be used. Opportunistic use of data mining during the retrieval, reuse, and
revise steps would bring a more robust dimension to CBR by learning when a need
arises, instead of, or in addition to, learning systematically at Retain. The ideal CBR
memory is one that simultaneously speeds up the retrieval step and improves the
effectiveness, efficiency, and robustness of the task performed by the reasoner,
particularly the reuse performed, thereby positively influencing retrieval, reuse,
and the other steps. Researchers do not want to settle for faster retrieval at
the expense of lower accuracy due to overgeneralization, and they have succeeded in avoiding this trade-off.</p>
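<p>As a hypothetical illustration of how a learned memory organization conditions retrieval (the structures and similarity below are toy examples, not a system described above), a two-level memory of prototypes, or generalized cases, routes a query to the nearest prototype first and then searches only that prototype's member cases, so the index and the organization are the same learned structure.</p>

```python
# Hypothetical two-level memory: prototypes act both as index and as
# organizational structure over their member cases.

def similarity(a, b):
    """Toy similarity on equal-length numeric vectors in [0, 1]."""
    d = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - d

def retrieve_via_prototypes(memory, query):
    """memory: list of {"centroid": vector, "cases": [vector, ...]}.
    Route to the nearest prototype, then search only its cases."""
    best_proto = max(memory, key=lambda p: similarity(p["centroid"], query))
    return max(best_proto["cases"], key=lambda c: similarity(c, query))
```

<p>Retrieval cost then depends on the number of prototypes plus one cluster's cases, rather than on the whole case base, at the risk of overgeneralization if the prototypes are too coarse.</p>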
      <p>
        Future work involves revisiting these data mining techniques in the
framework of the knowledge containers identified by
        <xref ref-type="bibr" rid="ref26">Richter (2003)</xref>
        and constantly
tracking novel methods as they appear. The variety of approaches, as well as
the specific and complex purpose, suggests that there is room for future
models and theories of CBR memories, in particular ones embracing metareasoning and
opportunistic approaches more systematically, and in which data mining will play a
larger role.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Aamodt</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plaza</surname>
            <given-names>E</given-names>
          </string-name>
          (
          <year>1994</year>
          )
          <article-title>Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches</article-title>
          .
          <source>AI Communications</source>
          , IOS Press, Vol.
          <volume>7</volume>
          :
          <issue>1</issue>
          :
          <fpage>39</fpage>
          -
          <lpage>59</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Aha</surname>
            <given-names>DW</given-names>
          </string-name>
          (
          <year>1997</year>
          )
          <article-title>Lazy Learning</article-title>
          .
          <source>Artificial Intelligence Review</source>
          <volume>11</volume>
          :
          <fpage>7</fpage>
          -
          <lpage>10</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Armengol</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plaza</surname>
            <given-names>E</given-names>
          </string-name>
          (
          <year>1994</year>
          )
          <article-title>Integrating induction in a case-based reasoner</article-title>
          . In:
          <string-name>
            <surname>Keane</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haton</surname>
            <given-names>JP</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manago</surname>
            <given-names>M</given-names>
          </string-name>
          <source>(eds) Proceedings of EWCBR 94</source>
          . Acknosoft Press, Paris, pp
          <fpage>243</fpage>
          -
          <lpage>251</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Auriol</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manago</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Althoff</surname>
            <given-names>KD</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wess</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dittrich</surname>
            <given-names>S</given-names>
          </string-name>
          (
          <year>1994</year>
          )
          <article-title>Integrating Induction and Case-Based Reasoning: Methodological Approach and First Evaluations</article-title>
          . In:
          <string-name>
            <surname>Keane</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haton</surname>
            <given-names>JP</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manago</surname>
            <given-names>M</given-names>
          </string-name>
          <source>(eds) Proceedings of EWCBR 94</source>
          . Acknosoft Press, Paris, pp
          <fpage>145</fpage>
          -
          <lpage>155</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Badra</surname>
            <given-names>F</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cordier</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lieber</surname>
            <given-names>J</given-names>
          </string-name>
          (
          <year>2009</year>
          )
          <article-title>Opportunistic Adaptation Knowledge Discovery</article-title>
          . In:
          <string-name>
            <surname>McGinty</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            <given-names>DC</given-names>
          </string-name>
          <source>(eds) Proceedings of ICCBR 09. SpringerVerlag, Lecture Notes in Artificial Intelligence</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>60</fpage>
          -
          <lpage>74</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Bellazzi</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montani</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Portinale</surname>
            <given-names>L</given-names>
          </string-name>
          (
          <year>1998</year>
          )
          <article-title>Retrieval in a Prototype-Based Case Library: A Case Study in Diabetes Therapy Revision</article-title>
          . In:
          <string-name>
            <surname>Smyth</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cunningham</surname>
            <given-names>P</given-names>
          </string-name>
          <source>(eds) Proceedings of ECCBR 98. Springer-Verlag, Lecture Notes in Artificial Intelligence 1488</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>64</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Bichindaritz</surname>
            <given-names>I</given-names>
          </string-name>
          (
          <year>1995</year>
          )
          <article-title>A case-based reasoner adaptive to several cognitive tasks</article-title>
          . In:
          <string-name>
            <surname>Veloso</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aamodt</surname>
            <given-names>A</given-names>
          </string-name>
          <source>(eds) Proceedings of ICCBR 95. Springer-Verlag, Lecture Notes in Artificial Intelligence 1010</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>391</fpage>
          -
          <lpage>400</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>d'Aquin</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Badra</surname>
            <given-names>F</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lafrogne</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lieber</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Napoli</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szathmary</surname>
            <given-names>L</given-names>
          </string-name>
          (
          <year>2007</year>
          )
          <article-title>Case Base Mining for Adaptation Knowledge Acquisition</article-title>
          . In:
          <source>Proceedings of IJCAI 07</source>
          , pp.
          <fpage>750</fpage>
          -
          <lpage>755</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Díaz-Agudo</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gervás</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>González-Calero</surname>
            <given-names>P</given-names>
          </string-name>
          (
          <year>2003</year>
          )
          <article-title>Adaptation Guided Retrieval Based on Formal Concept Analysis</article-title>
          . In:
          <string-name>
            <surname>Ashley</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bridge</surname>
            <given-names>DG</given-names>
          </string-name>
          <source>(eds) Proceedings of ICCBR 03. Springer-Verlag, Lecture Notes in Artificial Intelligence 2689</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>131</fpage>
          -
          <lpage>145</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Han</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kamber</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pei</surname>
            <given-names>J</given-names>
          </string-name>
          (
          <year>2012</year>
          )
          <article-title>Data Mining: Concepts and Techniques</article-title>
          . Morgan Kaufmann, Waltham, Massachusetts
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Hand</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mannila</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smyth</surname>
            <given-names>P</given-names>
          </string-name>
          (
          <year>2001</year>
          )
          <article-title>Principles of Data Mining</article-title>
          . The MIT Press, Cambridge, Massachusetts
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Kolodner</surname>
            <given-names>JL</given-names>
          </string-name>
          (
          <year>1993</year>
          )
          <article-title>Case-Based Reasoning</article-title>
          . Morgan Kaufmann Publishers, San Mateo, California
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Leake</surname>
            <given-names>DB</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilson</surname>
            <given-names>DC</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>Categorizing Case-base Maintenance: Dimensions and Directions</article-title>
          .
          <source>In: Advances in Case-Based Reasoning</source>
          . Springer Berlin Heidelberg, pp.
          <fpage>196</fpage>
          -
          <lpage>207</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Li</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            <given-names>J</given-names>
          </string-name>
          (
          <year>2009</year>
          )
          <article-title>Predicting business failure using multiple case-based reasoning combined with support vector machine</article-title>
          ,
          <source>Expert Systems with Applications</source>
          , Volume
          <volume>36</volume>
          , Issue 6, pp
          <fpage>10085</fpage>
          -
          <lpage>10096</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Liu</surname>
            <given-names>C-H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            <given-names>L-S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsu</surname>
            <given-names>C-C</given-names>
          </string-name>
          (
          <year>2008</year>
          )
          <article-title>An association-based case reduction technique for case-based reasoning</article-title>
          ,
          <source>Information Sciences</source>
          , Volume
          <volume>178</volume>
          , Issue 17 pp.
          <fpage>3347</fpage>
          -
          <lpage>3355</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Malek</surname>
            <given-names>M</given-names>
          </string-name>
          (
          <year>1995</year>
          )
          <article-title>A Connectionist Indexing Approach for CBR Systems</article-title>
          . In:
          <string-name>
            <surname>Veloso</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aamodt</surname>
            <given-names>A</given-names>
          </string-name>
          (eds)
          <source>Proceedings of ICCBR 95. Springer-Verlag, Lecture Notes in Artificial Intelligence 1010</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>520</fpage>
          -
          <lpage>527</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Maximini</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maximini</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergmann</surname>
            <given-names>R</given-names>
          </string-name>
          (
          <year>2003</year>
          )
          <article-title>An Investigation of Generalized Cases</article-title>
          . In:
          <string-name>
            <surname>Ashley</surname>
            <given-names>KD</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bridge</surname>
            <given-names>DG</given-names>
          </string-name>
          <source>(eds) Proceedings of ICCBR 03. SpringerVerlag, Lecture Notes in Artificial Intelligence 2689</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>261</fpage>
          -
          <lpage>275</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Michalski</surname>
            <given-names>RS</given-names>
          </string-name>
          (
          <year>1993</year>
          )
          <article-title>Toward a Unified Theory of Learning</article-title>
          . In:
          <string-name>
            <surname>Buchanan</surname>
            <given-names>BG</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wilkins</surname>
            <given-names>DC</given-names>
          </string-name>
          <article-title>(eds) Readings in knowledge acquisition and learning, automating the construction and improvement of expert systems</article-title>
          . Morgan Kaufmann Publishers, San Mateo, California, pp
          <fpage>7</fpage>
          -
          <lpage>38</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Mitchell</surname>
            <given-names>TM</given-names>
          </string-name>
          (
          <year>1997</year>
          )
          <article-title>Machine Learning</article-title>
          .
          <source>McGraw-Hill</source>
          , Boston, Massachusetts
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Montani</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Portinale</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bellazzi</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leonardi</surname>
            <given-names>G</given-names>
          </string-name>
          (
          <year>2004</year>
          )
          <article-title>RHENE: A Case Retrieval System for Hemodialysis Cases with Dynamically Monitored Parameters</article-title>
          . In: Funk P, González Calero P (eds)
          <source>Proceedings of ECCBR 04. Springer-Verlag, Lecture Notes in Artificial Intelligence 3155</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>659</fpage>
          -
          <lpage>672</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Napoli</surname>
            <given-names>A</given-names>
          </string-name>
          (
          <year>2010</year>
          )
          <article-title>Why and How Knowledge Discovery Can Be Useful for Solving Problems with CBR</article-title>
          .
          <source>In: Proceedings of ICCBR 10. Springer-Verlag, Lecture Notes in Artificial Intelligence</source>
          , Berlin, Heidelberg, New York, pp.
          <fpage>12</fpage>
          -
          <lpage>19</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Arshadi</surname>
            <given-names>N</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jurisica</surname>
            <given-names>I</given-names>
          </string-name>
          (
          <year>2004</year>
          )
          <article-title>Maintaining Case-Based Reasoning Systems: A Machine Learning Approach</article-title>
          . In: Funk P, González Calero P (eds)
          <source>Proceedings of ECCBR 04. Springer-Verlag, Lecture Notes in Artificial Intelligence 3155</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>17</fpage>
          -
          <lpage>31</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Nilsson</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Funk</surname>
            <given-names>P</given-names>
          </string-name>
          (
          <year>2004</year>
          )
          <article-title>A Case-Based Classification of Respiratory sinus Arrhythmia</article-title>
          . In: Funk P, González Calero P (eds)
          <source>Proceedings of ECCBR 04. Springer-Verlag, Lecture Notes in Artificial Intelligence 3155</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>673</fpage>
          -
          <lpage>685</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Perner</surname>
            <given-names>P</given-names>
          </string-name>
          (
          <year>1998</year>
          )
          <article-title>Different Learning Strategies in a Case-Based Reasoning System for Image Interpretation</article-title>
          . In:
          <string-name>
            <surname>Smyth</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cunningham</surname>
            <given-names>P</given-names>
          </string-name>
          <source>(eds) Proceedings of ECCBR 98. Springer-Verlag, Lecture Notes in Artificial Intelligence 1488</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>251</fpage>
          -
          <lpage>261</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Portinale</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torasso</surname>
            <given-names>P</given-names>
          </string-name>
          (
          <year>1995</year>
          )
          <article-title>ADAPTER: An Integrated Diagnostic System Combining Case-Based and Abductive Reasoning</article-title>
          . In:
          <string-name>
            <surname>Veloso</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aamodt</surname>
            <given-names>A</given-names>
          </string-name>
          (eds)
          <source>Proceedings of ICCBR 95. Springer-Verlag, Lecture Notes in Artificial Intelligence 1010</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>277</fpage>
          -
          <lpage>288</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Richter</surname>
            <given-names>MM</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Knowledge containers</article-title>
          .
          <source>In: Readings in Case-Based Reasoning</source>
          . Morgan Kaufmann Publishers
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Schank</surname>
            <given-names>RC</given-names>
          </string-name>
          (
          <year>1982</year>
          )
          <article-title>Dynamic memory. A theory of reminding and learning in computers and people</article-title>
          . Cambridge University Press, Cambridge
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Schmidt</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gierl</surname>
            <given-names>L</given-names>
          </string-name>
          (
          <year>1998</year>
          )
          <article-title>Experiences with Prototype Designs and Retrieval Methods in Medical Case-Based Reasoning Systems</article-title>
          . In:
          <string-name>
            <surname>Smyth</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cunningham</surname>
            <given-names>P</given-names>
          </string-name>
          <source>(eds) Proceedings of ECCBR 98. Springer-Verlag, Lecture Notes in Artificial Intelligence 1488</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>370</fpage>
          -
          <lpage>381</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <surname>Stahl</surname>
            <given-names>A</given-names>
          </string-name>
          (
          <year>2005</year>
          )
          <article-title>Learning Similarity Measures: A Formal View Based on a Generalized CBR Model</article-title>
          . In:
          <string-name>
            <surname>Munoz-Avila</surname>
            <given-names>H</given-names>
          </string-name>
          , Ricci F (eds):
          <source>Proceedings of ICCBR 05. Springer-Verlag, Lecture Notes in Artificial Intelligence 3620</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>507</fpage>
          -
          <lpage>521</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <surname>West</surname>
            <given-names>GM</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McDonald</surname>
            <given-names>JR</given-names>
          </string-name>
          (
          <year>2003</year>
          )
          <article-title>An SQL-Based Approach to Similarity Assessment within a Relational Database</article-title>
          . In:
          <string-name>
            <surname>Ashley</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bridge</surname>
            <given-names>DG</given-names>
          </string-name>
          <source>(eds) Proceedings of ICCBR 03. Springer-Verlag, Lecture Notes in Artificial Intelligence 2689</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>610</fpage>
          -
          <lpage>621</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <surname>Wilson</surname>
            <given-names>DC</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leake</surname>
            <given-names>DB</given-names>
          </string-name>
          (
          <year>2001</year>
          )
          <article-title>Maintaining Case-based Reasoners: Dimensions and Directions</article-title>
          .
          <source>Computational Intelligence Journal</source>
          , Vol.
          <volume>17</volume>
          , No.
          <issue>2</issue>
          :
          <fpage>196</fpage>
          -
          <lpage>213</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <string-name>
            <surname>Wiratunga</surname>
            <given-names>N</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koychev</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Massie</surname>
            <given-names>S</given-names>
          </string-name>
          (
          <year>2004</year>
          )
          <article-title>Feature Selection and Generalisation for Retrieval of Textual Cases</article-title>
          . In: Funk P, González Calero P (eds)
          <source>Proceedings of ECCBR 04. Springer-Verlag, Lecture Notes in Artificial Intelligence 3155</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>806</fpage>
          -
          <lpage>820</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <string-name>
            <surname>Wong</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shiu</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pal</surname>
            <given-names>S</given-names>
          </string-name>
          (
          <year>2001</year>
          )
          <article-title>Mining fuzzy association rules for web access case adaptation</article-title>
          .
          <source>In Workshop Proceedings of Soft Computing in Case-Based Reasoning Workshop</source>
          , Vancouver, Canada, pp.
          <fpage>213</fpage>
          -
          <lpage>220</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name>
            <surname>Yang</surname>
            <given-names>Q</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheng</surname>
            <given-names>H</given-names>
          </string-name>
          (
          <year>2003</year>
          )
          <article-title>Case Mining from Large Databases</article-title>
          . In:
          <string-name>
            <surname>Ashley</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bridge</surname>
            <given-names>DG</given-names>
          </string-name>
          <source>(eds) Proceedings of ICCBR 03. Springer-Verlag, Lecture Notes in Artificial Intelligence 2689</source>
          , Berlin, Heidelberg, New York, pp
          <fpage>691</fpage>
          -
          <lpage>702</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>