<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Improving Automatically Created Mappings using Logical Reasoning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christian Meilicke</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Heiner Stuckenschmidt</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrei Tamilin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITC-irst and University of Trento</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Mannheim</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A lot of attention has been devoted to heuristic methods for discovering semantic mappings between ontologies. Despite impressive improvements, the mappings created by these automatic matching tools are still far from being perfect. In particular, they often contain wrong and redundant mapping rules. In this paper we present an approach for improving such mappings using logical reasoning in the context of Distributed Description Logics (DDL). Our method is orthogonal to the matching algorithm used and can therefore be used in combination with any matching tool. We explain the general idea of our approach informally using a small example and present the results of experiments conducted on the OntoFarm Benchmark which is part of the Ontology Alignment Evaluation challenge.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Motivation</title>
      <p>
        So far, work on representing and reasoning with mappings has focused on
mechanisms for answering queries and on using mappings to compute subsumption
relationships between concepts in the mapped ontologies. These methods have assumed
that the mappings used are manually created and of high quality (in particular,
consistent). In this paper we investigate logical reasoning about mappings that are not assumed
to be perfect. In particular, our methods can be used to check (automatically created)
mappings for formal and conceptual consistency and to determine implied mappings that
have not been represented explicitly. We investigate such mappings in the context of
Distributed Description Logics [
        <xref ref-type="bibr" rid="ref1 ref13">1, 13</xref>
        ], an extension of traditional description logics
with mappings between concepts in different T-boxes. The functionality described in
this paper will become more important in the future as more and more ontologies
are created and need to be linked. For larger ontologies, the mapping process will not
be done completely by hand, but will rely on, or at least be supported by, automatic
mapping approaches. We see our work as a contribution to semi-automatic approaches
for creating mappings between ontologies, where possible mappings are computed
automatically and then corrected manually, making use of methods for checking the formal
and conceptual properties of the mappings.
      </p>
      <p>
        In previous work we proposed a number of formal properties of mappings in
Distributed Description Logics that we consider useful for judging the quality of a set
of mappings [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. In this paper, we refine and extend this work in several directions.
Debugging of mappings. We propose a process for (semi-)automatically debugging
automatically created mappings, making use of some of the properties mentioned above.
In particular, we use the notion of mapping consistency to detect problems caused by
the mappings. For each potential problem, we determine the minimal set of mapping
rules responsible for the problem (the minimal conflict set). For each conflict set, we try to
identify which mapping rule is incorrect and remove it from the mapping.
Implementation. On top of the DRAGO reasoning system [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] we built a prototype
mapping debugger that computes minimal conflict sets with respect to an inconsistency
caused by a mapping and implements some heuristics for automatically repairing an
inconsistent mapping. We further added a minimization functionality for computing minimal
mapping sets from redundant ones.
      </p>
      <p>
        Experiments. We tested the approach using the OntoFarm data set, a set of several rich
OWL ontologies describing the domain of conference management systems [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. We
used the CtxMatch matching tool to automatically create mappings between each pair of
ontologies. We then automatically determined problems (in particular, unsatisfiable
concepts) created by the mappings and tried to fix them automatically using the
debugging process proposed in this paper. In the concluding step of the experimental study,
we tried to compute for each mapping a logically equivalent minimal version.
      </p>
      <p>The structure of the paper is as follows. We start with a brief recall of the basic
definitions of Distributed Description Logics and an explanation of its reasoning
mechanisms. Then we describe the intuitions behind our debugging and minimization approaches
using a small example. Finally, we report on a preliminary experimental evaluation of
the techniques proposed in this paper and summarize the results.</p>
    </sec>
    <sec id="sec-2">
      <title>Distributed Description Logic</title>
      <p>
        The Distributed Description Logics (DDL) framework is a formal tool for representing and
reasoning with multiple ontologies pairwise linked by semantic mappings. In this
section, we briefly recall some key definitions and properties of DDL, relying on the
original studies in [
        <xref ref-type="bibr" rid="ref1 ref13">1, 13</xref>
        ].
      </p>
      <p>Syntax and Semantics. Given a set I of indexes, used to enumerate a set of ontologies, a
Distributed Description Logic is a collection {DLi}i∈I of Description Logics. Each ontology i is
formalized as a T-box Ti of DLi, so that the initial set of ontologies in DDL corresponds
to a family of T-boxes T = {Ti}i∈I. To distinguish descriptions from different Ti in
the family, DDL uses a prefix notation that attaches each description to the ontology in which
it is considered, e.g., i : X, i : X ⊑ Y. Semantic relations between pairs of
ontologies are represented in DDL by bridge rules. A bridge rule from i to j is an expression of one of
the following two forms:</p>
      <p>i : X −⊑→ j : Y – an into-bridge rule</p>
      <p>i : X −⊒→ j : Y – an onto-bridge rule</p>
      <p>where X and Y are concepts of the ontologies Ti and Tj respectively. The derived bridge
rule i : X −≡→ j : Y is defined as the conjunction of the corresponding into- and
onto-bridge rules.</p>
      <p>Intuitively, the into-bridge rule i : Bachelor −⊑→ j : Student states that, from
the j-th point of view, the concept Bachelor in i is more specific than the local concept
Student. Similarly, the onto-bridge rule i : ScientificEvent −⊒→ j : Conference
states that the first concept is more general than the second.</p>
      <p>A distributed T-box 𝔗 = ⟨T, B⟩ consists of a collection of T-boxes T = {Ti}i∈I
and a collection of bridge rules B = {Bij}i≠j∈I between them.</p>
      <p>The semantics of DDL is based on the key assumption that each ontology Ti in
the family is interpreted locally by an interpretation Ii on its local interpretation domain
Δ^Ii. Semantic correspondences between heterogeneous local domains, e.g., the
representations of a registration fee in US Dollars and in Euros, are modeled in DDL by
a domain relation.</p>
      <p>A domain relation rij represents a possible way of mapping the elements of Δ^Ii
into the domain Δ^Ij: rij ⊆ Δ^Ii × Δ^Ij. For d ∈ Δ^Ii, rij(d) denotes {d′ ∈ Δ^Ij | ⟨d, d′⟩ ∈
rij}; for any subset D of Δ^Ii, rij(D) denotes ∪d∈D rij(d); and for any R ⊆ Δ^Ii ×
Δ^Ij, rij(R) denotes ∪⟨d,d′⟩∈R rij(d) × rij(d′). For instance, if Δ^I1 and Δ^I2 are the
representations of a registration fee in US Dollars and in Euros, then r12 could be a rate
of exchange function, or some other approximation relation.</p>
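      <p>To make the set-level notation concrete, the following small sketch (our own illustrative encoding; the exchange-rate sample values are invented) implements rij(d) and rij(D) for a finite relation given as a set of pairs:</p>
      <preformat>
```python
# Sketch: a finite domain relation r_ij as a set of pairs, together with
# the pointwise image r_ij(d) and the setwise image r_ij(D) defined above.
# Illustrative only; the sample values are invented.

def image_of_element(r, d):
    """r_ij(d): the set of d2 with (d, d2) in r_ij."""
    return {d2 for (d1, d2) in r if d1 == d}

def image_of_set(r, D):
    """r_ij(D): union of r_ij(d) over d in D."""
    out = set()
    for d in D:
        out |= image_of_element(r, d)
    return out

# A toy "rate of exchange" relation between fee values in USD and EUR.
usd_to_eur = {(100, 92), (200, 184), (200, 185)}

assert image_of_element(usd_to_eur, 200) == {184, 185}
assert image_of_set(usd_to_eur, {100, 200}) == {92, 184, 185}
```
      </preformat>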
      <p>A distributed interpretation I = ⟨{Ii}i∈I, {rij}i≠j∈I⟩ of a distributed T-box 𝔗 =
⟨T, B⟩ consists of a family of local interpretations Ii on local interpretation domains
Δ^Ii, one for each Ti, and a family of domain relations rij between these local domains.
A distributed interpretation I is said to satisfy a distributed T-box 𝔗 = ⟨T, B⟩, written
I ⊨ 𝔗, if all T-boxes in T are satisfied,</p>
      <p>I ⊨ Ti, if Ii ⊨ A ⊑ B for all A ⊑ B ∈ Ti,
and all bridge rules in B are satisfied:</p>
      <p>I ⊨ i : X −⊑→ j : Y, if rij(X^Ii) ⊆ Y^Ij</p>
      <p>I ⊨ i : X −⊒→ j : Y, if rij(X^Ii) ⊇ Y^Ij</p>
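      <p>Assuming finite local domains, the two satisfaction conditions can be checked directly. The following sketch is our own illustrative encoding (not DRAGO's API): concept extensions are Python sets and the domain relation is a set of pairs.</p>
      <preformat>
```python
# Check satisfaction of into-/onto-bridge rules over finite extensions.
# X_ext and Y_ext play the roles of X^Ii and Y^Ij; r is the domain
# relation r_ij. Purely illustrative encoding, not a reasoner API.

def r_image(r, D):
    """r_ij(D): image of the set D under the domain relation."""
    return {d2 for (d1, d2) in r if d1 in D}

def satisfies_into(r, X_ext, Y_ext):
    """I satisfies i:X -into-> j:Y  iff  r_ij(X^Ii) ⊆ Y^Ij."""
    return r_image(r, X_ext).issubset(Y_ext)

def satisfies_onto(r, X_ext, Y_ext):
    """I satisfies i:X -onto-> j:Y  iff  r_ij(X^Ii) ⊇ Y^Ij."""
    return r_image(r, X_ext).issuperset(Y_ext)

# Toy check: Bachelor in i maps into Student in j, but not onto it.
r = {("b1", "s1"), ("b2", "s2")}
bachelor = {"b1", "b2"}
student = {"s1", "s2", "s3"}
assert satisfies_into(r, bachelor, student)
assert not satisfies_onto(r, bachelor, student)
```
      </preformat>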
      <p>Given a distributed T-box 𝔗 = ⟨T, B⟩, one can perform some basic distributed DL
inferences. A concept i : C is satisfiable with respect to 𝔗 if there exists a distributed
interpretation I of 𝔗 such that C^Ii ≠ ∅. A concept i : C is subsumed by a concept
i : D with respect to 𝔗 (𝔗 ⊨ i : C ⊑ D) if for every distributed interpretation I of 𝔗
we have C^Ii ⊆ D^Ii.</p>
      <p>DDL Inference Mechanisms. Although both in DL and in Distributed DL the fundamental
reasoning services lie in verifying concept satisfiability and subsumption within a certain
ontology, in DDL the reasoning depends not only on the ontology itself but also on the other
ontologies that affect it through semantic mappings. This influence consists in the ability of
bridge rules to propagate knowledge across ontologies in the form of subsumption axioms.</p>
      <p>The simplest case illustrating knowledge propagation in DDL is the following:</p>
      <p>i : A ⊑ B,   i : A −⊒→ j : G,   i : B −⊑→ j : H
―――――――――――――――――――――――――――――   (1)
j : G ⊑ H</p>
      <p>In languages that support disjunction, this simplest propagation rule can be
generalized to the propagation of subsumption between a concept and a disjunction of other
concepts in the following way:</p>
      <p>i : A ⊑ B1 ⊔ … ⊔ Bn,   i : A −⊒→ j : G,   i : Bk −⊑→ j : Hk (1 ≤ k ≤ n)
―――――――――――――――――――――――――――――   (2)
j : G ⊑ H1 ⊔ … ⊔ Hn</p>
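      <p>Read operationally, rule (2) says: whenever the premises are present, add to ontology j the subsumption between G and the disjunction of the Hk. The following naive forward application over atomic concept names is our own simplification, not the tableau integration used by DRAGO:</p>
      <preformat>
```python
# Naive forward application of propagation rule (2) for atomic concepts.
# subsumptions_i holds pairs (A, frozenset of Bk) meaning i: A ⊑ B1 ⊔ … ⊔ Bn.
# onto / into are bridge rules from i to j given as (X, Y) pairs.
# We collect all Hk matching each Bk, which yields a sound (possibly
# weaker) disjunction. Illustrative only.

def propagate(subsumptions_i, onto, into):
    derived = set()  # pairs (G, frozenset of Hk) meaning j: G ⊑ H1 ⊔ … ⊔ Hn
    into_map = {}
    for (b, h) in into:
        into_map.setdefault(b, set()).add(h)
    for (a, bs) in subsumptions_i:
        for (a2, g) in onto:
            if a2 != a:
                continue
            # every disjunct Bk needs some into-bridge rule Bk -into-> Hk
            if all(b in into_map for b in bs):
                hs = frozenset(h for b in bs for h in into_map[b])
                derived.add((g, hs))
    return derived

# Rule (1) is the special case n = 1:
subs = {("A", frozenset({"B"}))}
onto = {("A", "G")}
into = {("B", "H")}
assert propagate(subs, onto, into) == {("G", frozenset({"H"}))}
```
      </preformat>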
      <p>
        An important property of the described knowledge propagation is that it is
directional, i.e., bridge rules from i to j support knowledge propagation only from i towards
j. It has been shown in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that adding the inference pattern (2) to existing DL tableaux
reasoning methods leads to a correct and complete method for reasoning in DDL. This
method has been implemented in the DRAGO DDL reasoner.
      </p>
    </sec>
    <sec id="sec-3">
      <title>The Debugging Process</title>
      <p>In this section we explain the general idea of our approach for improving
automatically created mappings, based on reasoning about mappings in Distributed Description
Logics, using a simple example. In particular, we consider two ontologies in the domain
of conference management systems, the same domain our experiments were conducted in. For
each ontology, i and j, we only consider a single axiom, namely:
i : Author ⊑ Person
and</p>
      <p>j : Person ⊑ ¬Authorization</p>
      <p>These simple axioms describing the concept of a person in two different
ontologies – one stating that an author is a special kind of person, and the other stating that
the concepts Person and Authorization (to access submitted papers) are disjoint concepts
– are enough to explain the important features of our approach. The approach consists
of the following steps.</p>
      <p>Mapping Creation. In the first step, we use an existing system for matching ontologies to create an initial
set of mapping hypotheses. In particular, we are interested in mappings between class
names, because these are the kinds of mappings that we can reason about in the DDL
framework. In order to support the automatic repair of inconsistent mappings later on,
the matching algorithm chosen should ideally not only return a set of mappings, but
also a level of confidence in the correctness of each mapping. For the sake of simplicity,
we assume that we use a simple string matching method that compares the overlap
in concept names and computes a similarity value that denotes the relative size of the
common substring (in the real experiments we of course use more sophisticated methods).
Mappings are created based on a threshold for this value that we
assume to be 1/3. Applying this method to the example results in the following two
mappings with corresponding levels of confidence:</p>
      <p>i : Person −≡→ j : Person, 1.00</p>
      <p>i : Author −≡→ j : Authorization, 0.46</p>
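      <p>The similarity measure assumed above, the relative size of the longest common substring, can be sketched as follows; normalizing by the length of the longer name is our assumption, chosen because it reproduces the confidences used in the example:</p>
      <preformat>
```python
# Longest-common-substring similarity, as assumed in the running example.
# Normalizing by the longer name's length is our assumption; it yields
# 1.00 for Person/Person and 0.46 for Author/Authorization.

def lcs_length(a, b):
    """Length of the longest common contiguous substring of a and b."""
    best = 0
    prev = [0] * (len(b) + 1)  # classic dynamic program over substring ends
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            if ca == cb:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def similarity(a, b):
    return lcs_length(a.lower(), b.lower()) / max(len(a), len(b))

assert similarity("Person", "Person") == 1.0
assert round(similarity("Author", "Authorization"), 2) == 0.46
assert similarity("Author", "Authorization") >= 1 / 3  # above the threshold
```
      </preformat>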
      <p>We further assume that the mapping method also applies some structural heuristics
to derive additional mappings and propagates the levels of confidence accordingly. For
instance, the fact that i : Person is a superconcept of i : Author, which is assumed to
be equivalent to j : Authorization, may be used to derive the following mapping:</p>
      <p>i : Person −⊒→ j : Authorization, 0.46</p>
      <p>In the same way, the fact that i : Author is a subconcept of i : Person and the fact
that i : Person is assumed to be equivalent to j : Person may be used to derive the
following additional mapping:</p>
      <p>i : Author −⊑→ j : Person, 1.00</p>
      <p>We can easily see that the process has produced two incorrect mappings, namely
the ones with a confidence of 0.46. It could be argued that it is easy to get rid of these
incorrect mappings by raising the threshold to 0.5, for instance. This, however, is not a
sustainable solution to the problem: there might be mappings with a level of
confidence below 0.5 that are correct, and, on the other hand, there might still be
incorrect mappings with a confidence of more than 0.5. Instead of relying on artificially set
thresholds, we propose to analyze the impact of created mappings on the connected
ontologies and to eliminate mappings that have a malicious influence.</p>
      <p>Diagnosis. The mapping set described in the last step now serves as a basis for analyzing the
effect of mappings and detecting malicious ones. This process is similar to the
well-known concept of model-based diagnosis, which has already been successfully
applied to the task of detecting wrong axioms in single ontologies. As in existing
approaches for diagnosing ontologies, our starting point are unsatisfiable concepts, which
are interpreted as symptoms for which a diagnosis has to be computed. Compared to
the general task of diagnosing ontologies, we are in a lucky position, because we have
to deal with a much smaller set of potential diagnoses. In particular, we assume that the
ontologies connected in the first step do not contain unsatisfiable concepts. If we now
observe unsatisfiable concepts in the target ontology (the formal semantics of DDL
guarantees that the addition of mappings cannot lead to unsatisfiable concepts in the
source ontology), and assuming that the ontologies themselves are correct, we know that
they have to be caused by some mappings in the mapping set.</p>
      <p>To illustrate this situation, we can have a look at our example again. Using existing
techniques for reasoning in DDL, we can derive that the concept Authorization is
globally unsatisfiable, i.e., j : Authorization^Ij = ∅, because we have Authorization ⊑
¬Person and at the same time we can infer Authorization ⊑ Person. There are
two reasons for this, namely:</p>
      <p>j : Authorization^Ij = rij(i : Author^Ii) ⊆ rij(i : Person^Ii) = j : Person^Ij</p>
      <p>and</p>
      <p>j : Authorization^Ij ⊆ rij(i : Person^Ii) = j : Person^Ij</p>
      <p>Interpreting the inconsistency of the concept j : Authorization as a symptom, we
can now try to identify and repair the cause of this inconsistency. For this purpose, we
compute irreducible conflict sets for this symptom. Here an irreducible conflict set is
a set of mappings that makes the concept unsatisfiable and has the additional property
that removing any mapping from the set makes the concept satisfiable again. From the
arguments above it is easy to see that we have the following irreducible conflict sets:</p>
      <p>{i : Person −≡→ j : Person, i : Author −≡→ j : Authorization}</p>
      <p>and</p>
      <p>{i : Person −≡→ j : Person, i : Person −⊒→ j : Authorization}</p>
      <p>In classical diagnosis, all conflict sets (often only minimal conflict sets are considered)
are computed, and the diagnosis is computed from these conflict sets using the hitting set
algorithm. For the case of diagnosing mappings this is neither computationally feasible
nor does it provide the expected result. In our example, the hitting set would consist of
the mapping i : Person −≡→ j : Person, which, as we will see later, is the only mapping
that actually carries some correct information.</p>
      <p>Our solution to the problem is to use an iterative approach that computes an (often
not minimal) hitting set by determining one conflict set at a time and immediately fixing
it in the way described in the next section. In our example, the algorithm will first
detect one of the conflict sets and fix it; afterwards, the method checks whether the concept
j : Authorization is still unsatisfiable. As this is the case, the second conflict set will
be detected and fixed as well, removing the problem.</p>
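      <p>The computation of an irreducible conflict set can be sketched as greedy removal testing against the reasoner. In the sketch below, makes_unsat is a stand-in for the DDL satisfiability check (in our implementation provided by DRAGO) and is stubbed with a toy predicate:</p>
      <preformat>
```python
# Shrink a set of mappings that makes a concept unsatisfiable down to an
# irreducible conflict set: removing any single mapping from the result
# makes the concept satisfiable again. makes_unsat stands in for the
# DDL reasoning service; here it is an arbitrary boolean predicate.

def irreducible_conflict_set(mappings, makes_unsat):
    assert makes_unsat(mappings)
    conflict = list(mappings)
    changed = True
    while changed:
        changed = False
        for m in list(conflict):
            rest = [x for x in conflict if x != m]
            if makes_unsat(rest):  # m was not needed for the conflict
                conflict = rest
                changed = True
                break
    return set(conflict)

# Toy check: the concept is "unsatisfiable" iff both m1 and m2 are present.
m1, m2, m3 = "m1", "m2", "m3"
unsat = lambda ms: m1 in ms and m2 in ms
assert irreducible_conflict_set({m1, m2, m3}, unsat) == {m1, m2}
```
      </preformat>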
      <p>Heuristic Debugging. As mentioned above, the result of the diagnosis step is an irreducible
conflict set, in particular a set of mappings that makes a concept unsatisfiable, with the
additional property that removing one mapping from this set solves the problem in the
sense that the concept becomes satisfiable. The underlying idea of our approach is now that
unsatisfiable concepts are the result of wrong mappings. This means that each irreducible
conflict set contains at least one mapping rule that does not state a correct semantic relation
between concepts and therefore should not be in the set of mappings. The goal of the
debugging step is to identify this malicious mapping and remove it from the
overall mapping set. If we choose the right mapping for removal, the quality of the overall
mapping set improves, because a wrong mapping has been removed. In the
case of our example, the first irreducible conflict set that will be considered consists of
the following two mappings, one of which we have to remove:</p>
      <p>i : Person −≡→ j : Person, 1.00</p>
      <p>i : Author −≡→ j : Authorization, 0.46</p>
      <p>There are different ways in which a decision about the mapping to remove
could be made. The easiest is an interactive approach where the conflict
sets are presented to a human user who decides which mapping should be removed.
In our case, the user will easily be able to decide that the mapping i : Author −≡→
j : Authorization is not correct and should be removed. In the second iteration, the
following two mappings will be in the irreducible conflict set:</p>
      <p>i : Person −≡→ j : Person, 1.00</p>
      <p>i : Person −⊒→ j : Authorization, 0.46</p>
      <p>For this set the user will immediately see that the second mapping should
be removed, because it is not correct. This approach may sound trivial, but in the presence of
large mapping sets, providing the user with feedback about potential problems in terms
of small conflict sets is of great help and often reveals problems that are hard to see
when looking at the complete mapping set.</p>
      <p>We can also try to further automate the debugging process by letting the system
decide which mapping rule to eliminate. In cases where the matching system already
provides a measure of confidence, this is again quite simple, as we can simply remove
the mapping rule with the lowest degree of confidence. In our case this is again the rule
i : Author −≡→ j : Authorization, and removing it will lead to a better mapping set.
It is not always possible, however, to rely on the confidence provided by the matching
system, either because the system simply does not provide any or because the levels
of confidence provided are not informative. In our experiments, we often encountered the
situation where all mappings, even though they were conflicting, had a confidence of 100%
attached. In this case, we need another way of ranking mappings. An approach
that we used in our experiments, and that turned out to work quite well, is to compute the
semantic distance of the concept names involved using WordNet synsets. For the
example above it is clear that this heuristic will also lead to an exclusion of the second
rule, because the class names in the first rule are equivalent and therefore have the least
semantic distance possible. In cases where no distinction can be made using this
heuristic, we have to switch back to the interactive mode and ask the user which mapping to
remove. In any case, the debugging step leaves us with a single mapping that does
not create any inconsistencies. In order to get a complete set of correct mappings, we
can now infer all additional mappings that follow from this one, which leads us to the
corrected final set of mappings. In our case this final set is the following:</p>
      <p>i : Person −≡→ j : Person, 1.00</p>
      <p>i : Author −⊑→ j : Person, 1.00</p>
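      <p>The fully automatic variant of the debugging loop, repeatedly finding a conflict set for an unsatisfiable concept and dropping its lowest-confidence mapping, can be sketched as follows; the reasoner-backed predicates are stubbed with toy stand-ins that mimic the running example:</p>
      <preformat>
```python
# Automated debugging loop: while some concept is unsatisfiable, find one
# conflict set and remove its lowest-confidence mapping. is_unsat and
# conflict_set_for stand in for reasoner-backed checks; mappings maps
# each rule to its confidence. Illustrative only.

def debug(mappings, is_unsat, conflict_set_for):
    mappings = dict(mappings)
    while is_unsat(mappings):
        conflict = conflict_set_for(mappings)
        worst = min(conflict, key=lambda m: mappings[m])
        del mappings[worst]
    return mappings

# The running example: Person == Person (1.00) survives; both 0.46 rules go.
rules = {
    "i:Person == j:Person": 1.00,
    "i:Author == j:Authorization": 0.46,
    "i:Person onto j:Authorization": 0.46,
}
is_unsat = lambda ms: len([m for m in ms if "Authorization" in m]) > 0
conflict_set_for = lambda ms: {
    "i:Person == j:Person",
    next(m for m in ms if "Authorization" in m),
}
assert debug(rules, is_unsat, conflict_set_for) == {"i:Person == j:Person": 1.00}
```
      </preformat>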
      <p>In summary, the process above is a way to improve the quality of automatically
generated mapping sets by means of intelligent post-processing. Using formal properties
of mappings and logical reasoning, we are able to detect wrong mappings by analyzing
their impact and tracing unwanted effects back to the mapping rules that caused them.
In this sense, our method is not yet another ontology matching method; rather, it is
orthogonal to existing developments in the area of ontology matching, as it can be applied
to any set of mappings. The approach can be extended in several directions. First of all,
we can use symptoms other than concept unsatisfiability as a starting point for diagnosis.
Further, we can use the method on joint sets of competing mappings created by different
matching algorithms. This will help us to get a better coverage of the actual semantic
relations, and the trust in the quality of the different matching algorithms provides us
with an additional criterion for selecting mappings to be discarded.</p>
      <p>
        Minimization. A further improvement of the debugged mapping can be achieved by removing
redundant mappings, i.e., mappings that logically follow from other mappings. In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] we
defined the notion of minimality of a mapping that we use in this context to remove
redundant mappings. In the example, for instance, the two mappings derived using
structural heuristics do not really add new information to the system, because they can be
derived from the two equivalence mappings that were created first. In particular,
i : Author −⊑→ j : Person is redundant information, because:
i : Author^Ii ⊆ i : Person^Ii   (3)
⇒ rij(Author^Ii) ⊆ rij(Person^Ii)   (4)
rij(Person^Ii) = j : Person^Ij   (5)
⇒ rij(Author^Ii) ⊆ j : Person^Ij   (6)
      </p>
      <p>This means that for reasoning with automatically created mappings, we only have
to take into account the equivalence mapping between the person concepts in the two
ontologies, because it is the basis for inferring the other one. For this reason, we
remove all mappings that can be shown to be redundant, in the sense that they can be
derived from other mappings in the set, and only continue with
the resulting minimal mapping set, which still carries all the semantics of the complete set.</p>
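      <p>Minimization can likewise be sketched as removal testing against an entailment oracle: drop any mapping that the remaining ones already entail. The predicate entails below is a toy stand-in for the DDL entailment check:</p>
      <preformat>
```python
# Remove redundant mappings: a mapping is redundant if it is entailed by
# the remaining ones. entails(rest, m) stands in for the DDL entailment
# check; here a toy predicate mirroring derivation (3)-(6) is used.

def minimize(mappings, entails):
    kept = list(mappings)
    for m in list(kept):
        rest = [x for x in kept if x != m]
        if entails(rest, m):
            kept = rest
    return set(kept)

# Toy entailment: the into-rule Author -> Person follows from the
# equivalence Person == Person together with the local axiom
# Author ⊑ Person, as in derivation (3)-(6).
def entails(rest, m):
    return (m == ("Author", "Person", "into")
            and ("Person", "Person", "equiv") in rest)

ms = {("Person", "Person", "equiv"), ("Author", "Person", "into")}
assert minimize(ms, entails) == {("Person", "Person", "equiv")}
```
      </preformat>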
    </sec>
    <sec id="sec-4">
      <title>Experiments</title>
      <p>
        In this section we report on a preliminary experimental evaluation of the mapping
debugging and minimization techniques presented in the preceding sections. All
experiments were conducted with the prototype debugger/minimizer implemented
on top of the DRAGO DDL reasoner [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        Experimental Setting. To perform the experiments, we used a set of ontologies developed in the OntoFarm project
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which are used as part of the Benchmark in the Ontology Alignment Evaluation
challenge.4 In particular, we selected several ontologies modeling the domain of conference
organization.
      </p>
      <p>
        Given this ontology test set, we applied the following experimental scenario. Using
the CtxMatch matching tool [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], we automatically computed mappings between pairs of
ontologies in the test set. Among the created mappings, we then identified those
which are capable of producing unsatisfiable classes and therefore need to be debugged
first. In the process of debugging, malicious bridge rules in mappings are automatically
diagnosed and removed in accordance with the heuristic debugging discussed in
Section 3. In the concluding step of the experimental study, we applied the minimization
algorithm to compute for each mapping a logically equivalent minimal set of bridge
rules. Note that for those mappings which require debugging, the minimization
is applied to their repaired versions.
The results of applying the heuristic debugging and minimization techniques to the
automatically generated mappings are summarized in Table 1 and Table 2. More
information about the test data and results can be obtained by visiting the applications section
of the DRAGO reasoner web page.5
      </p>
      <p>During the debugging process we performed the following measurements: the
initial number of bridge rules in the mapping to be debugged, the number of classes which
become unsatisfiable due to the mapping, and finally the sets of bridge rules which are
diagnosed as malicious and automatically removed by the debugging algorithm.
After the removal of malicious bridge rules, a mapping is repaired in the sense that it
cannot produce unsatisfiability anymore. As shown in Table 1, the results of
applying the heuristic debugging approach proposed in Section 3 are quite reassuring:
all of the mappings automatically removed by our method are actually incorrect ones.
4 http://nb.vse.cz/∼svabo/oaei2006/
5 http://sra.itc.it/projects/drago/applications.html</p>
      <p>To estimate the minimization rate, we measured the initial number of bridge rules and
the number of logically entailed bridge rules discovered by applying the minimization
technique. As summarized in Table 2, the proportion of entailed bridge rules in an
automatically generated mapping varies from 50 to 80% of the initial number of bridge
rules in that mapping.</p>
    </sec>
    <sec id="sec-5">
      <title>Discussion</title>
      <p>
        We have presented a method for automatically improving the results of heuristic
matching systems using logical reasoning. The basic idea is similar to existing work on
debugging ontologies and uses some non-standard inference methods for reasoning about
mappings introduced in previous work. The method exploits the fact that most
existing matching algorithms ignore the logical implications of new mappings. This gap is
filled by our method, which detects malicious impacts of generated mappings and traces
them back to their source. As we have shown in the experiments, in almost all cases
(in fact, in all cases observed in the experiment) the unwanted effects were caused by
wrong mappings, and we were able to remove them automatically, thus improving the
correctness of the generated mapping. The idea of using logical reasoning in
the matching process is not new and has been proposed by others (e.g., [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]); the way
it is used in our work, however, is unique, as ours is the only approach that takes the effects
of mappings into account. We believe that this additional step can significantly improve
the quality of matching methods and should be integrated into existing matching
algorithms as far as they are concerned with expressive ontologies that support consistency
checking. In fact, the expressiveness of the language used to encode the ontologies to
be matched seems to be the only limitation of our approach, which can only be applied
if the language supports consistency checking. In our experiments, we have seen that
we can improve the correctness of matching results by removing wrong mappings. So
far, we have not quantified this improvement; this has to be done in future work.
This work was partially supported by the German Science Foundation in the Emmy
Noether Program and by the European Union under grant FP6-507482 (KnowledgeWeb)
as part of the T-Rex exchange program.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>A.</given-names>
            <surname>Borgida</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          .
          <article-title>Distributed description logics: Assimilating information from peer sources</article-title>
          .
          <source>Journal of Data Semantics</source>
          ,
          <volume>1</volume>
          :
          <fpage>153</fpage>
          -
          <lpage>184</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>P.</given-names>
            <surname>Bouquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Franconi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          , G. Stamou, and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tessaris</surname>
          </string-name>
          .
          <article-title>Specification of a common framework for characterizing alignment</article-title>
          .
          <source>Deliver. 2.2</source>
          .4,
          <issue>KnowledgeWeb</issue>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>P.</given-names>
            <surname>Bouquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          ,
          <string-name>
            <surname>F. van Harmelen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          .
          <article-title>C-OWL: Contextualizing ontologies</article-title>
          .
          <source>In Proceedings of the 2nd International Semantic Web Conference (ISWC-03)</source>
          , volume
          <volume>2870</volume>
          of LNCS
          , pages
          <fpage>164</fpage>
          -
          <lpage>179</lpage>
          . Springer,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>P.</given-names>
            <surname>Bouquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Zanobini</surname>
          </string-name>
          .
          <article-title>Semantic coordination: a new approach and an application</article-title>
          .
          <source>In Proceedings of the Second International Semantic Web Conference</source>
          , volume
          <volume>2870</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>130</fpage>
          -
          <lpage>145</lpage>
          . Springer Verlag,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Lenzerini</surname>
          </string-name>
          .
          <article-title>A framework for ontology integration</article-title>
          .
          <source>In Proceedings of the Semantic Web Working Symposium</source>
          , pages
          <fpage>303</fpage>
          -
          <lpage>316</lpage>
          , Stanford, CA,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>M.</given-names>
            <surname>Ehrig</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Staab</surname>
          </string-name>
          .
          <article-title>QOM - Quick Ontology Mapping</article-title>
          .
          <source>In Proceedings of the 3rd International Semantic Web Conference (ISWC-04)</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>F.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shvaiko</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Yatskevich</surname>
          </string-name>
          .
          <article-title>S-match: an algorithm and an implementation of semantic matching</article-title>
          .
          <source>In Proceedings of the European Semantic Web Conference (ESWS04)</source>
          , pages
          <fpage>61</fpage>
          -
          <lpage>75</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>F.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yatskevich</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          .
          <article-title>Efficient semantic matching</article-title>
          .
          <source>In Proceedings of the European Semantic Web Conference (ESWS-05)</source>
          , pages
          <fpage>272</fpage>
          -
          <lpage>289</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>E.</given-names>
            <surname>Hovy</surname>
          </string-name>
          .
          <article-title>Combining and standardizing large-scale, practical ontologies for machine translation and other uses</article-title>
          .
          <source>In Proceedings of the 1st International Conference on Language Resources and Evaluation (LREC)</source>
          , pages
          <fpage>535</fpage>
          -
          <lpage>542</lpage>
          ,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>J.</given-names>
            <surname>Madhavan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Bernstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Domingos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Halevy</surname>
          </string-name>
          .
          <article-title>Representing and reasoning about mappings between domain models</article-title>
          .
          <source>In Proceedings of the 18th National Conference on Artificial Intelligence (AAAI-02)</source>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>S.</given-names>
            <surname>Melnik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Garcia-Molina</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Rahm</surname>
          </string-name>
          .
          <article-title>Similarity flooding: A versatile graph matching algorithm and its application to schema matching</article-title>
          .
          <source>In Proceedings of the 18th International Conference on Data Engineering (ICDE-02)</source>
          . IEEE Computer Society
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>N. F.</given-names>
            <surname>Noy</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Musen</surname>
          </string-name>
          .
          <article-title>The PROMPT suite: Interactive tools for ontology merging and mapping</article-title>
          .
          <source>International Journal of Human-Computer Studies</source>
          ,
          <volume>59</volume>
          (
          <issue>6</issue>
          ):
          <fpage>983</fpage>
          -
          <lpage>1024</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Borgida</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tamilin</surname>
          </string-name>
          .
          <article-title>Aspects of distributed and modular ontology reasoning</article-title>
          .
          <source>In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI-05)</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wache</surname>
          </string-name>
          .
          <article-title>A formal investigation of mapping languages for terminological knowledge</article-title>
          .
          <source>In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI-05)</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tamilin</surname>
          </string-name>
          .
          <article-title>DRAGO: Distributed reasoning architecture for the semantic web</article-title>
          .
          <source>In Proceedings of the 2nd European Semantic Web Conference (ESWC-05)</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wache</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          .
          <article-title>Reasoning about ontology mappings</article-title>
          .
          <source>In Proceedings of the ECAI-06 Workshop on Contextual Representation and Reasoning</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>O.</given-names>
            <surname>Svab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Svatek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Berka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Tomasek</surname>
          </string-name>
          .
          <article-title>OntoFarm: Towards an experimental collection of parallel ontologies</article-title>
          .
          <source>In Poster Proceedings of the International Semantic Web Conference 2005 (ISWC-05)</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>