<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Scalable Matching of Industry Models - A Case Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Brian Byrne</string-name>
          <email>byrneb@us.ibm.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Achille Fokoue</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aditya Kalyanpur</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kavitha Srinivas</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Min Wang</string-name>
          <email>min@us.ibm.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IBM Software Group, Information Management</institution>
          ,
          <addr-line>Austin, Texas</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>IBM T. J. Watson Research Center</institution>
          ,
          <addr-line>Hawthorne, New York</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A recent approach to the problem of ontology matching has been to convert the problem of ontology matching to information retrieval. We explore the utility of this approach in matching model elements of real UML, ER, EMF and XML-Schema models, where the semantics of the models are less precisely defined. We validate this approach with domain experts for industry models drawn from very different domains (healthcare, insurance, and banking). We also observe that in the field, manually constructed mappings for such large industry models are prone to serious errors. We describe a novel tool we developed to detect suspicious mappings to quickly isolate these errors.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The world of business is centered around information. Every business deals with
a myriad of different semantic expressions of key business information, and
expends huge resources working around the inconsistencies, challenges and errors
introduced by a variety of information models. Typically, these information
models organize the data, services, business processes, or vocabulary of an enterprise,
and they may exist in different forms such as ER models, UML models, thesauri,
ontologies or XML schema. A common problem is that these varying models
rarely share a common terminology, because they have emerged from several
independent sources. In some cases, mergers of organizations operating in the same
business result in different information models that express exactly the same
concepts. In other cases, they may have been developed by different organizational
units to express overlapping business concepts, but in slightly different domains.</p>
      <p>Irrespective of how these models came about, today’s business is faced with
many different information models, and an increasing need to integrate across
these models, through data integration, shared processes and rules, or reusable
business services. In all of these cases, the ability to relate, or map, between
different models is critical. However, both human attempts to manually map different
information models and the use of tools to automate mappings are
very error prone in the real world. For humans, errors arise from
multiple sources:
– The size of these models (typically, these models have several thousand
elements each)
– The fact that lexical names of model elements rarely match, or when they
do match, it is for the wrong reasons (e.g., a document may have an
endDate attribute, as does a claim, but the two endDate attributes reflect semantically
different things, although they match at the lexical level).
– Models often express concepts at different levels of granularity, and it may
not always be apparent at what level the concept should be mapped. In
many real world mappings, we have observed a tendency for human analysts
to map everything to generic concepts rather than more specific concepts.
While these mappings are not necessarily invalid, they have limited utility
in data integration scenarios, or in solution building.</p>
      <p>
        The above points make it clear that there is a need for a tool to perform
semi-automated model mapping, where a tool can help suggest appropriate mappings
to a human analyst. Literature on ontology matching and alignment is clearly
helpful in designing such a tool. Our approach to building such a tool is similar
in spirit to the ideas implemented in Falcon-AO [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and PRIOR ([
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]), except
that we adapted their techniques to UML, ER and EMF models. Matching or
alignment across these models is different from matching ontologies, because the
semantics of these models are poorly defined compared to those of ontologies.
Perhaps due to this reason, schema mapping approaches tend to focus mostly on
lexical and structural analysis. However, existing schema mapping approaches
scale very poorly to large models. Most analysts in the field therefore tend to
revert to manual mapping, despite the availability of many schema mapping
tools.
      </p>
      <p>We however make the observation that in most industry models, the
semantics of model elements is buried in documentation (either within the model, or
in separate PDF, Excel or Word files). We therefore use techniques described
by Falcon-AO and PRIOR to build a generic representation that allows us to
exploit the structural and lexical information about model elements along with
semantics in documentation. The basic idea, as described in PRIOR is to convert
the model mapping problem into a problem of information retrieval. Specifically,
each model element is converted into a virtual document with a number of fields
that encode the structural, lexical and semantic information associated with
that model element. This information is in turn expressed as a term vector for
a document. Mapping across model elements is then measured as a function of
document similarity; i.e., the cosine similarity between two term vectors. This
approach scales very well because we use the Apache Lucene text search engine
for indexing and searching these virtual documents.</p>
      <p>The novelty in our approach is that we also developed an engine to identify
suspicious mappings produced either by our tool or by human analysts. We
call this tool a Lint engine for model mappings, after the popular Lint tool
which checks C programs for common software errors. The key observation that
motivated our development of the Lint engine was that human model mappings
were shockingly poor for three of the four model mappings that were produced in real business
scenarios. Common errors made by human analysts included the following:
– Mapping elements to overly general classes (equivalent to Thing).
– Mapping elements to subtypes even when the superclass was the
appropriate match. As an example, Hierarchy was mapped to HierarchyType when
Hierarchy existed in the other model.
– Mapping elements that were simply invalid or wrong.</p>
      <p>We encoded 6 different heuristics to flag suspicious mappings, including
heuristics that can identify common errors made by our own algorithm (e.g.,
the tendency to match across elements with duplicate, copied documentation).
The Lint engine for model mappings is thus incorporated as a key filter in our
semi-automated model mapping tool, to reduce the number of false positives that
the human analyst needs to examine. A second use of our tool is of course to
review the quality of human mappings in cases where the model mappings were
produced manually.</p>
      <p>Our key contributions are as follows:
– We describe a technique to extend existing techniques in ontology mapping to
the problem of model mapping across UML, ER, and EMF models. Unlike
existing approaches in schema mapping, we exploit semantic information
embedded in documentation along with semantic and lexical information to
perform the mapping.
– We describe a novel Lint engine which can be used to review the quality of
model mappings produced either by a human or by our algorithm.
– We perform a detailed evaluation of the semi-automated tool on 7 real world
model mappings. Four of the seven mappings had human mappings that were
performed in a business context. We evaluated the Lint engine on these 4
mappings. The mappings involved large industry specific framework
models with thousands of elements in each model in the domains of healthcare,
insurance, and banking, as well as customer models in the domains of
healthcare and banking. Our approach has therefore been validated on mappings
that were performed for real business scenarios. In all cases, we validated
the output of both tools with domain experts.
</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        Ontology matching or the related problem of schema matching is a well studied
problem, with a number of different approaches that are too numerous to be
outlined here in detail. We refer the reader instead to surveys of ontology or schema
matching [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4–6</xref>
        ]. A sampling of ontology matching approaches include GLUE
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], PROMPT [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], HCONE-merge [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and SAMBO [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Sample approaches to
schema matching include Cupid [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], Artemis [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and Clio [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16">13–16</xref>
        ]. Our work is
most closely related to Falcon-AO [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] and PRIOR [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], two recent approaches
to ontology matching that combine some of the advantages of earlier approaches
such as linguistic and structural matching incorporated within an
information-retrieval approach, and seem well positioned to be extended to address matching
in shallow-structured models such as UML, ER and EMF models. Both
Falcon-AO and PRIOR have been compared with existing systems in OAEI 2007 and
appear to scale well in terms of performance. Because our work addresses
matching across very large UML, ER and EMF data models (about 5000 elements),
we adapted the approaches described in Falcon-AO and PRIOR to these models.
Matching or alignment across these models is different from matching ontologies,
because the semantics of these models are poorly defined compared to those of
ontologies. More importantly, we report the results of applying these techniques
to 7 real model matching problems in the field, and describe scenarios where
the approach is most effective.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Overall Approach</title>
      <p>
Matching algorithm. Casting the matching problem to an IR problem. As in the approaches
outlined in Falcon-AO [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] and PRIOR ([
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]), a fundamental principle in our
approach is to cast the problem of model matching into a classical Information
Retrieval problem. Model elements (e.g. attributes or classes) from various
modeling representations (e.g. XML Schema, UML, EMF, ER) are transformed into
virtual documents. A virtual document consists of one or more fields capturing
the structural, lexical and semantic information associated with the
corresponding model element.
      </p>
      <p>
        A Vector Space Model (VSM) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is then adopted: each field F of a
document is represented as a vector in an NF-dimensional space, with NF denoting
the number of distinct words in field F of all documents. Traditional TF-IDF
(Term Frequency - Inverse Document Frequency) values are used as the value of
coordinates associated with terms. Formally, let DF denote the vector associated
with the field F of a virtual document D, and DF[i] denote the ith coordinate
of that vector:

  DF[i] = tf_i * idf_i                  (1)
  tf_i  = |t_i| / N_F                   (2)
  idf_i = 1 + log(N_D / d_i)            (3)

where
– |t_i| is the number of occurrences, in the field F of document D, of the
term t_i corresponding to the ith coordinate of the vector DF,
– N_D is the total number of documents, and
– d_i is the number of documents in which t_i appears at least once in field F
      </p>
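As an illustration, the TF-IDF weighting of equations (1)-(3) can be sketched as follows. This is a simplified rendering for a single field (the actual implementation relies on Lucene's indexing, and tokenization is omitted here):

```python
import math
from collections import Counter

def tfidf_vectors(field_docs):
    """Compute TF-IDF vectors for one field across all virtual documents.

    field_docs: list of token lists, one per virtual document.
    Returns one sparse vector {term: weight} per document, following
    tf = |t_i| / N_F and idf = 1 + log(N_D / d_i) from equations (1)-(3).
    """
    n_docs = len(field_docs)
    # N_F: number of distinct words in this field over all documents
    n_f = len({t for doc in field_docs for t in doc})
    # d_i: number of documents in which the term appears at least once
    df = Counter(t for doc in field_docs for t in set(doc))
    vectors = []
    for doc in field_docs:
        counts = Counter(doc)
        vec = {}
        for term, occ in counts.items():
            tf = occ / n_f
            idf = 1 + math.log(n_docs / df[term])
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors
```

Terms confined to few documents receive higher idf, and hence higher weight, than terms that appear everywhere.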
      <p>The similarity sim(A, B) between two model elements A and B is computed
as the weighted mean of the cosine of the angle formed by their field vectors.
Formally, let D and D′ be the virtual documents corresponding to A and B,
respectively. Let q be the number of distinct field names in all documents.</p>
      <p>
        sim(A, B) = ( Σ_{k=1..q} α_k * cosine(DFk, D′Fk) ) / ( Σ_{k=1..q} α_k )        (4)

        cosine(DFk, D′Fk) = ( Σ_{i=1..NFk} DFk[i] * D′Fk[i] ) / ( |DFk| * |D′Fk| )     (5)

        |DF| = sqrt( Σ_{i=1..NF} (DF[i])² )                                            (6)

where α_k is the weight associated with the field Fk, which indicates the relative
importance of the information encoded by that field.
      </p>
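The weighted-mean computation of equations (4)-(6) amounts to the following sketch, where each virtual document is represented as a map from field names to sparse term vectors (fields absent from a document are treated as zero vectors):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two sparse vectors {term: weight} (eq. 5-6)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def sim(doc_a, doc_b, weights):
    """Weighted mean of per-field cosine similarities (eq. 4).

    doc_a, doc_b: {field_name: {term: weight}} virtual documents.
    weights: {field_name: alpha_k}, the relative field importance.
    """
    num = sum(alpha * cosine(doc_a.get(f, {}), doc_b.get(f, {}))
              for f, alpha in weights.items())
    return num / sum(weights.values())
```

Raising the weight of a field (e.g. "containerClass" or "path") shifts the overall score toward structural agreement, which is how the field weights described later are applied.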
      <p>In our Lucene-based implementation, before building document vectors,
standard transformations, such as stemming/lemmatization, stop word removal,
and lowercasing, are performed. In addition to these standard transformations,
we also convert camel case words (e.g. “firstName”) into the corresponding group of
space-separated words (e.g. “first name”).</p>
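The camel-case splitting and normalization steps can be sketched as follows; the stop-word list is an illustrative subset, and stemming is omitted:

```python
import re

def split_camel_case(token):
    """Convert a camel case word ("firstName") into space-separated words ("first name")."""
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", token).lower()

STOP_WORDS = {"a", "an", "the", "of", "is"}  # illustrative subset only

def normalize(text):
    """Lowercase, split camel case, and drop stop words (stemming omitted)."""
    words = []
    for token in text.split():
        words.extend(split_camel_case(token).split())
    return [w for w in words if w not in STOP_WORDS]
```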
      <p>Transforming model elements into virtual documents. A key step in
our approach is the transformation of elements of a data model into virtual
documents. For simplicity of the presentation, we assume that the data model
is encoded as a UML Class diagram.</p>
      <p>The input of the transformation is a model element (e.g. attribute,
reference/association, or class). The output is a virtual document with the
following fields:
– name. This field consists of the name of the input element.
– documentation. This field contains the documentation of the input model
element.
– containerClass. For attributes, references and associations, this field contains
the name and documentation of their containing class.
– path. This field contains the path from the model root package to the model
element (e.g. for an attribute ”bar” of the class ”foo” located in the package
”example”, the path is /example/foo/bar).</p>
      <p>– body. This field is made of the union of terms in all fields except path.
While the first two fields encode only lexical information, the next two fields
(containerClass and path) capture some of the structure of the modeling
elements. In our implementation, when the models to be compared appear very
similar, which translates to a very large number of discovered mappings, we
typically empirically adjust upwards the weight of the “containerClass” and “path”
fields to convey more importance to the structural similarity.</p>
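As a concrete illustration, the transformation above can be sketched as follows. The dictionary-based model-element representation and its accessor names are our own illustrative assumptions, not the actual implementation:

```python
def to_virtual_document(element):
    """Build the five fields of a virtual document from a model element.

    `element` is assumed (for illustration) to be a dict with keys
    name, documentation, container (a dict or None), and path
    (e.g. "/example/foo/bar").
    """
    container = ""
    if element.get("container") is not None:
        c = element["container"]
        container = f'{c["name"]} {c["documentation"]}'
    doc = {
        "name": element["name"],
        "documentation": element["documentation"],
        "containerClass": container,
        "path": element["path"],
    }
    # body: the union of terms in all fields except path
    doc["body"] = " ".join(doc[f] for f in ("name", "documentation", "containerClass"))
    return doc
```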
      <sec id="sec-3-1">
        <title>3 http://lucene.apache.org/java/docs/</title>
        <p>4 Our implementation is able to handle more data model representations, including
XML Schemas, ER diagrams, and EMF ECore models.</p>
        <p>For the simple UML model shown in Figure 3.1, five virtual documents will be
created, among which are the following:
1. Virtual document corresponding to the class “Place”:
– name : “Place”
– documentation: “a bounded area defined by nature by an external
authority such as a government or for an internal business purpose used to
identify a location in space that is not a structured address for
example country city continent postal area or risk area a place may also be
used to define a logical place in a computer or telephone network e.g.
laboratory e.g. hospital e.g. home e.g. doctor’s office e.g. clinic”
– containerClass: “”
– path: “/simple test model/place”
– body :”place, a bounded area defined by nature by an external authority
such as a government or for an internal business purpose used to identify
a location in space that is not a structured address for example country
city continent postal area or risk area a place may also be used to define
a logical place in a computer or telephone network e.g. laboratory e.g.
hospital e.g. home e.g. doctor’s office e.g. clinic”
2. Virtual document corresponding to the attribute “Place id”:
– name : “place id”
– documentation: “the unique identifier of a place”
– containerClass: “place, a bounded area defined by nature by an external
authority such as a government or for an internal business purpose used
to identify a location in space that is not a structured address for example
country city continent postal area or risk area a place may also be used to
define a logical place in a computer or telephone network e.g. laboratory
e.g. hospital e.g. home e.g. doctor’s office e.g. clinic”
– path: “/simple test model/place/place id”
– body : “place id, the unique identifier of a place, place, a bounded area
defined by nature by an external authority such as a government or for
an internal business purpose used to identify a location in space that
is not a structured address for example country city continent postal
area or risk area a place may also be used to define a logical place in
a computer or telephone network e.g. laboratory e.g. hospital e.g. home
e.g. doctor’s office e.g. clinic”
</p>
        <p>
          Adding lexical and semantic similarity between terms. The cosine
scoring scheme presented above (4) is intolerant to even minor lexical or semantic
variations in terms. For example, the cosine score computed using equation (4)
for the document vectors (gender: 1, sex: 0) and (gender:0, sex: 1) will be 0
although “gender” mentioned in the first document is clearly semantically
related to “sex” appearing in the second document. To address this limitation, we
modify the initial vector to add, for a given term t, the indirect contributions of
terms related to t as measured by a term similarity metric. Formally, instead of
using DFk (resp. D′Fk) in equation (4), we used the document vector D̂Fk whose
coordinates D̂Fk[i], for 1 ≤ i ≤ NFk, are defined as follows:</p>
        <p>
          D̂Fk[i] = DFk[i] + β_i * Σ_{j=1..NFk, j≠i} termSim(t_i, t_j) * DFk[j]      (7)

          β_i = 0 if DFk[j] = 0 for all j ≠ i; otherwise
          β_i = 1 / |{ j ≤ NFk : j ≠ i and DFk[j] ≠ 0 }|                            (8)

where
– termSim is a term similarity measure such as Jaccard or Levenshtein
similarity measure (for lexical similarity), a semantic similarity measure based on
WordNet [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], or a combination of similarity measures. termSim(ti, tj )∗
DFk [j] in (7) measures the contribution to the term ti of the potentially
related term tj .
– βi is the weight assigned to indirect contributions of related terms.
        </p>
        <p>For efficiency, when comparing two document vectors, we only add to the
modified document vectors the contributions of terms that correspond to at least
one non-zero coordinate in one of the two vectors.</p>
        <p>The equation (7) applied to the previous example transforms (gender:1, sex
:0) to (gender: 1, sex: termSim(“sex”, “gender”)) and (gender: 0, sex: 1) to
(gender: termSim(“gender”, “sex”), sex: 1). Assuming that termSim(“sex”,
“gender”), which is the same as termSim(“gender”, “sex”), is not equal to zero, the
cosine score of the transformed vectors will obviously be different from zero, and
will reflect the similarity between the terms “gender” and “sex”.</p>
        <p>For the results reported in the evaluation section, only the Levenshtein
similarity measure was used. Using a semantic similarity measure based on WordNet
significantly increased the algorithm's running time with only a marginal improvement
in the quality of the resulting mappings. The running time of WordNet-based semantic
similarity measures remained unacceptable even after restricting
related terms to synonyms and hyponyms.</p>
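A normalized Levenshtein similarity of the kind used here can be sketched as follows (the normalization by the longer string's length is one common choice, not necessarily the exact one used):

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

def term_sim(a, b):
    """Similarity in [0, 1]: 1 minus edit distance over the longer length."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```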
        <p>
          Our approach provides a tighter integration of the cosine scoring scheme and a
term similarity measure. In previous work, e.g. Falcon-AO[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], the application of
the term similarity measure (Levenshtein measure in Falcon-AO) is limited to
names of model elements, and the final score is simply a linear combination of
the cosine score and the measure of similarity between model element names.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Evaluation of Model Matching Algorithm</title>
      <p>To evaluate the model matching algorithm, we accumulated industry models and
customer data models from IBM architects who regularly build solutions for
customers. The specific model comparisons we chose were ones that IBM architects
need mapped in the field. In four of the seven model matching comparisons,
the matching had been performed manually by IBM solutions teams. We tried
to use these as a “gold standard” to evaluate the model matching algorithm, but
unfortunately found that in three of the four cases, the quality of the manual model
matching was exceedingly poor. We address this issue with a tool to assess matching
quality in the next section.</p>
      <p>As shown in Table 1, the industry models we used in the comparisons
included BDW (a logical data model for financial services), HPDM (a logical data
model for healthcare), MDM (a model for IBM’s solution for master data
management), RDWM (a model for warehouse solutions for retail organizations),
and IAA (a model for insurance). Model A in the table is a customer ER model
in the healthcare solutions space, model B is a customer logical data model in
financial services, and model C is a customer logical data model in retail. To
evaluate our model matching results, we had two IBM architects assess the precision
of the best possible match produced by our algorithm. Manual evaluation of the
matches was performed on sample sizes of 100 in 5 of 7 cases (all cases except
the IAA-BDW and A-HPDM comparisons). For IAA-BDW, we used a sample
size of 50 because the algorithm produced fewer than 100 matches. For A-HPDM,
we relied on previously created manual mappings to evaluate both precision and
recall (recall was at 25%). The sizes of these models varied from 300 elements
to 5000 elements.</p>
      <p>We make two observations about our results:
– (a) The results show a great deal of variability ranging from cases where we
had 100% precision in the top 100 matches, to 52% precision. This reflected
the degree to which the models shared a common lineage or common
vocabulary in their development. For example, RDWM was actually derived
from BDW, and this is clearly reflected in the model matching results. IAA
and BDW target different industries (and therefore do not have much in
common), and this is a scenario where the algorithm tends to make more
errors. We should point out that although IAA and BDW target different
industries (insurance and banking respectively), there is a real business need
for mapping common or overlapping concepts across these disparate models,
so the matching exercise is not a purely academic one.
– (b) Even in cases where the precision (or recall) was low, the IBM architects
attested to the utility of such a semi-automated approach to model matching,
because their current process is entirely manual, tedious and error prone.
None of the model mapping tools available to them currently provide results
that are usable or verifiable.</p>
      <p>Table 1. Results of the model matching algorithm on the seven comparisons:

Models Compared    Number of matches    Precision
A-HPDM                     43               67%
B-BDW                     197               74%
MDM-BDW                   149               71%
MDM-HPDM                  324               54%
RDWM-BDW                 3632              100%
C-BDW                    3263               96%
IAA-BDW                    69               52%
We turn now to another aspect of our work, which is to somehow measure
the quality of ontology matching in the field. As mentioned earlier, we initially
started our work with the hope of using manual matchings as a gold standard
to measure the output of our matching algorithm, but were surprised to find a
rather large number of errors in the manually generated model mappings. Many
of these errors were presumably due to the ad hoc nature of the manual mapping
process: poor transcription of names (e.g., changes in spaces or
appended package names) when writing mapping results in a separate spreadsheet,
or the specification of new classes/attributes/relationships to make up a mapping when
the elements did not exist in the original models. Also, there were cases in
which mappings were made to an absurdly generic class (such as Thing), which
rendered them meaningless.</p>
      <p>In order to deal with the above issues, and also improve the accuracy of our
mapping tool, we decided to write a Lint Engine to detect suspicious mappings.
The engine runs through a set of suspicious mapping patterns, with each pattern
being assigned a severity rating and a user-friendly explanation, both specified
by the domain expert. We have currently implemented the following six mapping
patterns based on discussions with a domain-expert:
– Element not found: This pattern detects mappings where one or more of the
elements involved do not exist in either of the models. It is assigned
a high severity since it indicates something clearly suspicious or wrong.
– Exact name mismatches: Detects mappings where a model element with an
exact lexical match was not returned. This does not necessarily indicate an
incorrect mapping, but it does alert the user to a potentially interesting
alternative that may have been missed.
– Duplicate documentation: Detects mappings where the exact same
documentation is provided for both elements involved in the mapping. This may arise
when models or portions of models are copied and pasted across models.
– Many-to-1 or 1-to-Many : Detects cases where a single element in one model
is mapped to a suspiciously large number of elements in another model. As
mentioned earlier, these typically denote mappings to an absurdly generic
class/relation.
– Class-Attribute proliferations: Detects cases when a single class’ attributes/relations
are mapped to attributes/relations of several different classes in the other
model. What makes this case suspicious is that model mappings are a means
to an end, typically used to specify instance transformations.
Transformations can become extremely complex when class-attribute proliferations
exist.
– Mapping without documentation: Detects cases where all the elements
involved in the mapping have no associated documentation. This could arise
due to lexical and structural information playing a role in the mapping,
however the lack of documentation points to a potentially weaker match.</p>
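Two of these heuristics can be sketched as simple predicates over a mapping. The mapping and model representations below (dicts and sets of element names, a fan-out threshold of 5) are illustrative assumptions, not the actual Lint engine:

```python
from collections import Counter

def element_not_found(mapping, model_a, model_b):
    """High severity: an endpoint of the mapping exists in neither model.

    mapping: {"source": name, "target": name}; models are sets of element names.
    """
    known = set(model_a) | set(model_b)
    return mapping["source"] not in known or mapping["target"] not in known

def many_to_one(mappings, threshold=5):
    """Flag source elements mapped to a suspiciously large number of targets.

    The threshold is an assumed tuning parameter; such fan-outs typically
    indicate mappings to an absurdly generic class or relation.
    """
    fanout = Counter(m["source"] for m in mappings)
    return {src for src, n in fanout.items() if n >= threshold}
```

In the same spirit, each flagged pattern would carry a severity rating and a user-friendly explanation supplied by the domain expert.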
      <p>We applied our Lint engine to the manual mappings to see if it could reveal
in more detail the defects we had observed. The results are summarized in the
Tables 2 - 5 below.
The results are quite shocking, e.g., in the BDW-MDM case, all 702 mappings
specified an element that did not exist in either of the two models. The only
explanation for this bizarre result is that mapping exercises, typically performed
in tools such as Excel, are hideously inaccurate; in particular, significant approximation
of the source and target elements is pervasive. Another point to note is that
human analysts tend to cut corners and map at a generic level, and this practice seems
to be quite pervasive, as such mappings were discovered in almost all the cases.
Finally, the lack of, or duplication of documentation can be identified in many
ways (e.g. by products such as SoDA from Rational), but surfacing this during
the mapping validation is very helpful. It provides an estimate of the
degree of confidence in the foundation of the mapping: the understanding of
the elements being mapped.</p>
      <p>The results were analyzed in detail by a domain expert who verified that
the accuracy and usefulness of the suspicious mappings were very high (in the
B-BDW case, only 1 mapping flagged by Lint was actually correct).
The fact that the Lint engine found fewer than 1 valid mapping for every
10 flagged as suspicious is an indication of the inefficiency of manual mapping
practices. What the engine managed to do effectively is to filter, from a huge pool of
mappings, the small subset that needs human attention, while hinting to the user
what may be wrong by nicely grouping the suspicious mappings under different
categories.</p>
      <sec id="sec-4-1">
        <title>5 http://www-01.ibm.com/software/awdtools/soda/index.html</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Jian</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
          </string-name>
          , W., Cheng, G.,
          <string-name>
            <surname>Qu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Falcon-ao: Aligning ontologies with falcon</article-title>
          .
          <source>In: Proceedings of K-CAP Workshop on Integrating Ontologies</source>
          . (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Qu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
          </string-name>
          , W., Cheng, G.:
          <article-title>Constructing virtual documents for ontology matching</article-title>
          .
          <source>In: Proceedings of the 15th international conference on World Wide Web, Edinburgh</source>
          , UK (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Mao</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spring</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A profile propagation and information retrieval based ontology mapping approach</article-title>
          .
          <source>In: Proceedings of the 3rd International Conference on Semantics, Knowledge and Grid (research track)</source>
          , Xian, China (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Noy</surname>
            ,
            <given-names>N.F.</given-names>
          </string-name>
          :
          <article-title>Semantic integration: a survey of ontology-based approaches</article-title>
          .
          <source>SIGMOD Rec</source>
          .
          <volume>33</volume>
          (
          <issue>4</issue>
          ) (
          <year>2004</year>
          )
          <fpage>65</fpage>
          -
          <lpage>70</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kolaitis</surname>
            ,
            <given-names>P.G.</given-names>
          </string-name>
          :
          <article-title>Schema mappings, data exchange, and metadata management</article-title>
          .
          <source>In: Proceedings of the 24th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems</source>
          , Baltimore, Maryland (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Melnik</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Model management 2.0: manipulating richer mappings</article-title>
          .
          <source>In: Proceedings of the ACM SIGMOD International Conference on Management of Data</source>
          , Beijing, China (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Doan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Madhavan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dhamankar</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domingos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Halevy</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Learning to match ontologies on the semantic web</article-title>
          .
          <source>The VLDB Journal</source>
          <volume>12</volume>
          (
          <issue>4</issue>
          ) (
          <year>2003</year>
          )
          <fpage>303</fpage>
          -
          <lpage>319</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Noy</surname>
            ,
            <given-names>N.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Musen</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>PROMPT: Algorithm and tool for automated ontology merging and alignment</article-title>
          .
          <source>In: Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence</source>
          , Austin, Texas, USA (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kotis</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vouros</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stergiou</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Capturing semantics towards automatic coordination of domain ontologies</article-title>
          .
          <source>In: AIMSA</source>
          . (
          <year>2004</year>
          )
          <fpage>22</fpage>
          -
          <lpage>32</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Lambrix</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>SAMBO - a system for aligning and merging biomedical ontologies</article-title>
          .
          <source>Web Semant</source>
          .
          <volume>4</volume>
          (
          <issue>3</issue>
          ) (
          <year>2006</year>
          )
          <fpage>196</fpage>
          -
          <lpage>206</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Madhavan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahm</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Generic schema matching with Cupid</article-title>
          .
          <source>In: VLDB '01: Proceedings of the 27th International Conference on Very Large Data Bases</source>
          , San Francisco, CA, USA, Morgan Kaufmann Publishers Inc. (
          <year>2001</year>
          )
          <fpage>49</fpage>
          -
          <lpage>58</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Castano</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Antonellis</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Capitani di Vimercati</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Global viewing of heterogeneous data sources</article-title>
          .
          <source>IEEE Trans. on Knowl. and Data Eng</source>
          .
          <volume>13</volume>
          (
          <issue>2</issue>
          ) (
          <year>2001</year>
          )
          <fpage>277</fpage>
          -
          <lpage>297</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haas</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hernández</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>Schema mapping as query discovery</article-title>
          .
          <source>In: Proceedings of 26th International Conference on Very Large Data Bases</source>
          , Cairo, Egypt (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hernández</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haas</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            ,
            <given-names>L.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ho</surname>
            ,
            <given-names>C.T.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fagin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Popa</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>The Clio project: Managing heterogeneity</article-title>
          .
          <source>SIGMOD Record</source>
          <volume>30</volume>
          (
          <issue>1</issue>
          ) (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ho</surname>
          </string-name>
          , H.:
          <article-title>Model management and schema mappings: Theory and practice</article-title>
          .
          <source>In: Proceedings of the 33rd International Conference on Very Large Data Bases</source>
          , University of Vienna, Austria (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Hernández</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Popa</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ho</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Naumann</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Clio: A schema mapping tool for information integration</article-title>
          .
          <source>In: Proceedings of the 8th International Symposium on Parallel Architectures, Algorithms and Networks</source>
          , Las Vegas, Nevada, USA (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Raghavan</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>S.K.M.</given-names>
          </string-name>
          :
          <article-title>A critical analysis of vector space model for information retrieval</article-title>
          .
          <source>Journal of the American Society for Information Science</source>
          <volume>37</volume>
          (
          <issue>5</issue>
          ) (
          <year>1986</year>
          )
          <fpage>279</fpage>
          -
          <lpage>287</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>J.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conrath</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          :
          <article-title>Semantic similarity based on corpus statistics and lexical taxonomy</article-title>
          .
          <source>CoRR cmp-lg/9709008</source>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>An information-theoretic definition of similarity</article-title>
          .
          <source>In: ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning</source>
          , San Francisco, CA, USA, Morgan Kaufmann Publishers Inc. (
          <year>1998</year>
          )
          <fpage>296</fpage>
          -
          <lpage>304</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>