<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automating OAEI Campaigns (First Report)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Cassia Trojahn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Meilicke</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jérôme Euzenat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Heiner Stuckenschmidt</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INRIA &amp; LIG</institution>
          ,
          <addr-line>Grenoble</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Mannheim</institution>
          ,
          <addr-line>Mannheim</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper reports the first effort in integrating OAEI and SEALS evaluation campaigns. OAEI is an annual evaluation campaign for ontology matching systems. The 2010 campaign includes a new modality in coordination with the SEALS project. This project aims at providing standardized resources (software components and data sets) for automatically executing evaluations of typical semantic web tools, including ontology matching tools. A first version of the software infrastructure is based on a web service interface wrapping the functionality of the matching tool to be evaluated. In this setting, the evaluation results can be visualized and manipulated immediately in a direct feedback cycle. We describe how parts of the OAEI 2010 evaluation campaign have been integrated into the SEALS software infrastructure. In particular, we discuss technical and organizational aspects related to the use of the new technology for both participants and organizers of the OAEI.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The Ontology Alignment Evaluation Initiative3 (OAEI) is a coordinated international initiative that organizes the evaluation of ontology matching systems [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The main goal of OAEI is to compare systems and algorithms on the same basis and to allow anyone to draw conclusions about the best matching strategies. The ambition is that, from such evaluations, tool developers can learn and improve their systems. The annual OAEI campaign provides the evaluation of matching systems on consensus test cases, which are organized by different groups of researchers. OAEI evaluations have been carried out since 2004.
      </p>
      <p>Although OAEI has been the basis for ontology matching evaluation over the last years, additional efforts have to be made in order to catch up with the growth of ontology matching technology, especially in two main directions: large scale evaluation and automation of the evaluation process. The SEALS project4 aims at providing standardized data sets, evaluation campaigns for typical semantic web tools and, in particular, a software infrastructure for automatically executing evaluations. One of the five semantic areas covered by SEALS is ontology matching. The SEALS infrastructure will allow developers to run their tools on an execution environment, both in the context of an evaluation campaign and on their own, for a formative evaluation of their tool versions.</p>
      <p>OAEI and SEALS are closely coordinated and the plan is to progressively integrate the SEALS infrastructure within the OAEI campaigns. The 2010 OAEI campaign is the first effort in this direction. A subset of the OAEI tracks has been included in this new modality. Participants are invited to extend a web service interface5 and deploy their matchers as web services, which are accessed in an evaluation experiment. This setting enables participants to debug their systems, run their own evaluations and manipulate the results immediately in a direct feedback cycle. On the other hand, runtime and memory consumption cannot be correctly measured because a controlled execution environment is missing. Further versions of the SEALS infrastructure will include the deployment of tools in such a controlled environment.</p>
      <p>In this paper, we report the first efforts on integrating OAEI campaigns into the SEALS infrastructure. We describe the preparation of the evaluation campaign and comment on how the evaluation itself is conducted, taking into account its partial automation. Furthermore, we present the SEALS evaluation service that we have developed for this purpose and show how it has been used with a concrete example.</p>
      <p>The rest of the paper is structured as follows. We first present the evaluation design of the 2010 evaluation campaign (§2). We comment on the evaluation workflows, data sets, and the criteria and metrics specified for this evaluation. Secondly, we detail how the designed evaluation is being conducted (§3). Third, we present an overview of the main software components of the SEALS infrastructure (§4), together with an example of running evaluations (§5). Preliminary results of the campaign are then presented (§6). Finally, we comment on the lessons learned (§7) and conclude the paper (§8).</p>
    </sec>
    <sec id="sec-2">
      <title>Evaluation Design</title>
      <p>The design of an evaluation campaign is conducted prior to the execution of the campaign itself. It involves specifying the data sets, criteria and metrics to be considered in the campaign, as well as how the several components (matchers, test providers, evaluators, etc.) interact in an evaluation experiment, i.e., the evaluation workflow.</p>
      <sec id="sec-2-1">
        <title>Evaluation workflow</title>
        <p>
          An alignment can be characterized as a set of pairs of entities (e and e′), coming from the ontologies to be aligned (o and o′), related by a particular relation (r), together with some confidence measure (n) expressing a degree of trust in the fact that the relation holds [
          <xref ref-type="bibr" rid="ref1 ref2 ref3">2, 1, 3</xref>
          ]. From this characterization it is possible to ask any alignment method to output an alignment, given: (i) two ontologies to be
        </p>
        <sec id="sec-2-1-1">
          <title>5 http://alignapi.gforge.inria.fr/tutorial/tutorial5/</title>
          <p>Test
o
o0
matcher
matching
params resources</p>
          <p>R
A
evaluator
m</p>
          <p>Result
aligned; (ii) partial input alignment (possibly empty); and (iii) a characterization
of the wanted alignment (e.g. one-to-one vs. many-to-many alignments).</p>
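This characterization of an alignment can be sketched as a small data structure (a hypothetical Python sketch for illustration; the actual Alignment API defines its own Java classes):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """One cell of an alignment: entities e and e' from ontologies o and o',
    a relation r assumed to hold between them, and a confidence n in [0, 1]."""
    e: str          # URI of the entity in ontology o
    e_prime: str    # URI of the entity in ontology o'
    r: str = "="    # relation, e.g. equivalence "=" or subsumption "<"
    n: float = 1.0  # degree of trust that the relation holds

# An alignment is then simply a set of correspondences.
alignment = {
    Correspondence("o#Paper", "o2#Article", "=", 0.9),
    Correspondence("o#Author", "o2#Writer", "=", 0.8),
}
```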
          <p>The quality of the generated alignment can be assessed regarding different criteria. Figure 1 shows the evaluation workflow representing an OAEI evaluation experiment, where several matchers are evaluated. The first step of the workflow is to retrieve from a database the test cases to be considered in such an evaluation, where each test case consists of the two ontologies to be matched and the corresponding reference alignment. Next, each matching system performs the matching, taking as input parameters the two ontologies o and o′, and generates the alignment A using a certain set of resources and parameters. An evaluation component receives this alignment and computes a (set of) quality measure(s) m (typically precision and recall) by comparing it to the reference alignment R. Finally, each result interpretation is stored into the result database.</p>
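The workflow described above can be sketched as a loop over test cases and matchers (function names are hypothetical; the actual infrastructure realizes these steps via BPEL and web services):

```python
def run_campaign(test_cases, matchers, evaluate, store):
    """For each test case (o, o2, R) and each matcher, compute an
    alignment A and evaluate it against the reference alignment R."""
    for o, o2, reference in test_cases:
        for matcher in matchers:
            alignment = matcher(o, o2)                 # matching step
            measures = evaluate(alignment, reference)  # e.g. precision/recall
            store(matcher, (o, o2), measures)          # result database

# Toy usage: a matcher that pairs identically named entities,
# evaluated with precision against a one-pair reference alignment.
results = {}
run_campaign(
    test_cases=[({"Paper", "Author"}, {"Paper", "Writer"}, {("Paper", "Paper")})],
    matchers=[lambda o, o2: {(e, e2) for e in o for e2 in o2 if e == e2}],
    evaluate=lambda A, R: len(A & R) / len(A) if A else 0.0,
    store=lambda m, tc, v: results.setdefault("toy", v),
)
```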
          <p>This workflow represents a typical OAEI evaluation workflow. However, for some data sets, which have no complete reference alignments, extensions to this typical workflow have been designed. Usually, in such cases, the user takes the role of evaluator and alternative approaches, such as manual labeling, data mining and logical reasoning, are applied for supporting the evaluation task. For instance, according to Figure 1, for each test case, the available matchers are executed and their generated alignments (A) are stored into the database. This content is then used later by data mining techniques, whose results will finally be analysed by the user.</p>
          <p>In previous OAEI campaigns, this workflow has been realized as follows: the required test cases have been made available to the participants for download and subsequently used by the participants to generate the matching results with their tool. The results have then been submitted to the OAEI organizers, who used evaluation scripts to apply measures and store the results.</p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>OAEI data sets</title>
        <p>OAEI data sets have been extended and improved over the years. In the OAEI 2010 campaign, the following tracks and data sets have been selected:</p>
        <p>The benchmark test aims at identifying the areas in which each matching algorithm is strong and weak. The test is based on one particular ontology dedicated to the very narrow domain of bibliography and a number of alternative ontologies of the same domain for which alignments are provided.</p>
        <p>The anatomy test is about matching the Adult Mouse Anatomy (2744 classes) and the NCI Thesaurus (3304 classes) describing the human anatomy. Its reference alignment has been generated by domain experts.</p>
        <p>The conference test consists of a collection of ontologies describing the
domain of organising conferences. Reference alignments are available for a
subset of test cases.</p>
      </sec>
      <sec id="sec-2-3">
        <title>The directories and thesauri test cases propose web directories (matching</title>
        <p>website directories like open directory or Yahoo's), thesauri (three large
SKOS subject heading lists for libraries) and generally less expressive
resources.</p>
        <p>The instance matching test cases aim at evaluating tools able to identify similar instances among different data sets. It features web data sets, as well as a generated benchmark.</p>
        <p>Anatomy, Benchmark and Conference have been included in the SEALS evaluation modality. The reason for this is twofold: on the one hand, these data sets are well known to the organizers and have been used in many evaluations, contrary to, for instance, the test cases of the instance data sets. On the other hand, these data sets come with a high-quality reference alignment, which allows for computing compliance-based measures, such as precision and recall.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Evaluation criteria and metrics</title>
        <p>The diverse nature of OAEI data sets, especially in terms of the complexity of test cases and the presence/absence of (complete) reference alignments, requires the use of different evaluation measures. For the three data sets in the SEALS modality, compliance of matcher alignments with respect to the reference alignments is evaluated. In the case of Conference, where the reference alignment is available only for a subset of test cases, compliance is measured over this subset. The most relevant measures are precision (true positive/retrieved), recall (true positive/expected) and f-measure (an aggregation of precision and recall). These metrics are also partially considered or approximated for the other data sets, which are not included in the SEALS modality (standard modality).</p>
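These compliance measures can be sketched as follows, representing alignments as sets of correspondences (a simplification of what the evaluation component computes):

```python
def compliance(A, R):
    """Precision, recall and f-measure of alignment A against reference R.

    true positives = A & R  (retrieved and expected)
    precision      = |A & R| / |A|  (true positive / retrieved)
    recall         = |A & R| / |R|  (true positive / expected)
    f-measure      = harmonic mean of precision and recall
    """
    tp = len(A & R)
    precision = tp / len(A) if A else 0.0
    recall = tp / len(R) if R else 0.0
    f = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f

A = {("a", "x"), ("b", "y"), ("c", "z")}  # generated alignment
R = {("a", "x"), ("b", "y"), ("d", "w")}  # reference alignment
p, r, f = compliance(A, R)  # two of three correspondences are correct
```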
        <p>
          For Conference, alternative evaluation approaches have been applied. These approaches include manual labeling, alignment coherence [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and correspondence pattern mining. These approaches require a deeper analysis from experts than traditional compliance measures. For the first version of the evaluation service, we concentrate on the most important compliance-based measures, because they do not require a complementary step of analysis/interpretation from experts, which is mostly performed manually and outside an automatic evaluation cycle. However, such approaches will be progressively integrated into the SEALS infrastructure.
        </p>
        <p>Nevertheless, for 2010, the generated alignments are stored in the results database (as detailed in §4) and can easily be retrieved by the organizers. It is thus still possible to exploit alternative evaluation techniques subsequently, as has been done in previous OAEI campaigns.</p>
        <p>All the criteria above are about alignment quality. A useful comparison between systems also includes their efficiency, in terms of runtime and memory consumption. The best way to measure efficiency is to run all systems under the same controlled evaluation environment. In previous OAEI campaigns, participants have been asked to run their systems on their own and to report the elapsed time for performing the matching task. Using the web-based evaluation service, runtime cannot be correctly measured due to the fact that the systems run in different execution environments and, as they are exposed as web services, there are potential network delays.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Evaluation Process</title>
      <p>
        Once the evaluation design has been specified, the evaluation campaign takes place in four main phases:
Preparatory phase: ontologies and alignments are provided to participants, who have the opportunity to send observations, bug corrections, remarks and other test cases;
Preliminary testing phase: participants ensure that their systems can load the ontologies to be aligned and generate the alignment in the correct format (the Alignment API format [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]);
Execution phase: participants use their algorithms to automatically match the ontologies;
Evaluation phase: the alignments provided by the participants are evaluated and compared.
      </p>
      <p>The four phases are the same for both the standard and the SEALS modality. However, different tasks are required to be performed by the participants of each modality. In the preparatory phase, the data sets have been published on web sites and could be downloaded as zip-files. In the future, it will be possible to use the SEALS portal to upload and describe new data sets. In addition, the test data repository supports versioning, which is an important issue regarding bug fixes and improvements that have taken place over the years.</p>
      <p>In the phase of preliminary testing, the SEALS evaluation service pays off in terms of reduced effort. In past years, participants submitted their preliminary results to the organizers, who analyzed them semi-automatically, often detecting problems related to the format or to the naming of the required result files. These problems were then discussed with the participants via a time-consuming communication process. It is now possible to check these and related issues automatically (as detailed in §5).</p>
      <p>In the execution phase, standard OAEI participants run their tools on their own machines and submit the results via mail to the organizers, while SEALS participants run their tools via web service interfaces. They get direct feedback on the results and can also discuss and analyse this feedback in their results paper6. In the past, for many of the data sets, results could not be delivered by the organizers to participants before the hard deadlines.</p>
      <p>Finally, in the evaluation phase, organizers are in charge of evaluating the received alignments. For the SEALS modality, this effort has been minimized due to the fact that the results are automatically computed by the services in the infrastructure, as detailed in the next section.</p>
    </sec>
    <sec id="sec-4">
      <title>Evaluation Service Architecture</title>
      <p>The evaluation service is composed of three main components: a web user interface, a BPEL workflow and a set of web services. The web user interface is the entry point to the application. This interface is deployed as a web application in a Tomcat application server behind an Apache web server. It invokes the BPEL workflow, which is executed on the ODE7 engine. This engine runs as a web application inside the application server.</p>
      <p>The BPEL process accesses several services that provide different functionalities:
- the validation service ensures that (a) the matcher web service specified via its endpoint (URL) is available; (b) this service correctly implements the interface we have specified; and (c) the matcher generates an alignment in the correct format (the validation service uses two simple ontologies in order to test whether the matcher generates alignments in the correct format). If this is not the case, an error message is output to the user. This validation is done prior to any evaluation.
- the redirect service is used to redirect the request for running a matching task to the matcher service endpoint.
- the test iterator service is responsible for iterating over test cases and providing a reference to the required files. These files are the source ontology, the target ontology and the reference alignment. All the operations of this service make use of the SEALS test data repository.
- the evaluation service computes measures such as precision and recall for evaluating the alignments generated by the matching system.
- the result service is used for storing evaluation results in a relational database.
6 Notice that each participant in the OAEI, independently of the modality, has to write a paper that contains a system description and an analysis of results from the point of view of the system developer.
7 ODE BPEL Engine http://ode.apache.org/</p>
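The three checks performed by the validation service can be sketched as follows (a hypothetical simplification: the real service operates on SOAP endpoints and the Alignment API format; here the endpoint is abstracted into two callables, and the format check only tests XML well-formedness):

```python
import xml.etree.ElementTree as ET

def validate_matcher(call_matcher, reachable):
    """Return a list of validation errors for a matcher endpoint.

    call_matcher(o, o2) -> alignment document as a string;
    reachable() -> bool, whether the endpoint answers at all.
    Mirrors the three conditions: (a) availability,
    (b) callable interface, (c) well-formed alignment output.
    """
    if not reachable():                       # (a) endpoint available?
        return ["matcher endpoint is not reachable"]
    try:                                      # (b) interface callable?
        out = call_matcher("http://example.org/onto1", "http://example.org/onto2")
    except Exception as exc:
        return [f"interface call failed: {exc}"]
    try:                                      # (c) output parses as XML?
        ET.fromstring(out)
    except ET.ParseError:
        return ["output is not a well-formed alignment document"]
    return []

# A matcher returning well-formed output passes; garbage output does not.
ok = validate_matcher(lambda o, o2: "<Alignment/>", lambda: True)
bad = validate_matcher(lambda o, o2: "not xml <", lambda: True)
```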
      <p>The user can start an evaluation by specifying the web service endpoint via the web interface. This data is then forwarded to BPEL as an input parameter. The complete evaluation workflow is executed as a series of calls to the services listed above. The specification of the web service endpoint becomes relevant for the invocation of the validation and redirect services. They internally implement web service clients that connect to the URL specified in the web user interface.</p>
      <p>The test and result services require access to additional data resources. For test data, the test web service accesses the SEALS repository, extracts the relevant information and forwards the URLs of the required documents (source and target ontologies and reference alignment) via the redirect service to the matcher that is currently evaluated. The result web service uses a connection to the database to store the results for each execution of an evaluation workflow. For visualizing and manipulating the stored results, an OLAP (Online Analytical Processing) application is available. Results can be re-accessed at any time, e.g. for comparing different tool versions against each other.</p>
    </sec>
    <sec id="sec-5">
      <title>Running an Evaluation</title>
      <p>
        To illustrate a complete evaluation cycle, we have extended the Anchor-Flood system [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] with the web service interface8. This system has participated in the two previous OAEI campaigns and is thus a typical evaluation target. The current version of the web application described in the following is available at http://seals.inrialpes.fr/platform/. In order to start an evaluation, one must specify the URL of the matcher service, the class implementing the required interface and the name of the matching system to be evaluated (Figure 2). Three of the OAEI data sets have been selected, namely Anatomy, Benchmark and Conference. In this example, we have used the conference test case.
      </p>
      <p>On submitting the form data, the BPEL workflow is invoked. It first validates the specified web service as well as its output format. In case of a problem, the concrete validation error is displayed to the user as direct feedback. In case of a successfully completed validation, the system returns a confirmation message and continues with the evaluation process. Every time an evaluation is conducted, results are stored under the endpoint address of the deployed matcher (Figure 3).</p>
      <p>The results are displayed as a table (Figure 4) when clicking on one of the three evaluation IDs in Figure 3. The results table is (partially) available while the evaluation itself is still running. By reloading the page from time to time, users can see the progress of an evaluation that is still running. In the results table, for each test case, precision and recall are listed. Moreover, a detailed view on the alignment results is available (Figure 5) when clicking on the alignment icon in Figure 4. This detailed view lists those correspondences that (a) have been generated and are in the reference alignment (true positives), that (b) have been generated but are not in the reference alignment (false positives), and that (c) have not been generated but are in the reference alignment (false negatives).
8 Available at http://mindblast.informatik.uni-mannheim.de:8080/sealstools/aflood/matcherWS?wsdl</p>
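The three categories of this detailed view can be sketched as set operations over the generated and reference alignments (a simplification; real correspondences also carry relations and confidence values):

```python
def partition(generated, reference):
    """Split correspondences into the three categories of the detailed view."""
    true_positives = generated & reference   # generated and in the reference
    false_positives = generated - reference  # generated, not in the reference
    false_negatives = reference - generated  # in the reference, not generated
    return true_positives, false_positives, false_negatives

A = {("Paper", "Article"), ("Person", "Human")}   # generated alignment
R = {("Paper", "Article"), ("Author", "Writer")}  # reference alignment
tp, fp, fn = partition(A, R)
```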
      <p>The user can visualize the results in an OLAP application by clicking on the plot figure in Figure 3, in a similar way to what is shown in Figure 6, but in a setting where only the results of his/her system are shown. Furthermore, organizers have a similar tool for accessing the results registered for the campaign, as well as all evaluations being carried out in the evaluation service (even the evaluations executed for testing purposes).</p>
    </sec>
    <sec id="sec-6">
      <title>Preliminary Results</title>
      <p>
        The OAEI 2010 campaign counted 15 participants [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] (16 participants in 2009 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]). Regarding the SEALS tracks, 11 participants have registered their results for Benchmark, 9 for Anatomy and 8 for Conference. Some participants in Benchmark have not participated in Anatomy or Conference, and vice-versa. The new technology introduced in the OAEI affected both tool developers and organizers to a large degree. In the following, we highlight some of the outcomes and describe the lessons we learned from the experiences made so far.
      </p>
      <p>As already argued, implementing the web service interface requires some effort on the side of the tool developer. We stayed in contact with some of the tool developers during this process and observed that the time required for implementing the interface varied between several hours and several days, depending</p>
      <sec id="sec-6-1">
        <title>9 http://om2010.ontologymatching.org</title>
        <p>on the technical skills of each developer. We also observed that the rst version of
the provided tutorial contained some unclear information resulting in problems
for some participants. From the feedback of the developers, we have improved
the tutorial. Another typical problem is related to the fact that some tool
developer had only restricted access to a machine that is available from the Internet.
These problems could nally be solved, however, system administrators of the
particular company or research institute should be contacted early.</p>
        <p>Once technical problems had been solved, the evaluation service was used extensively by some of the participants in the phase of preliminary testing. Obviously, the direct feedback of the evaluation service has supported the process of formative evaluation well. Other participants used the service only for submitting their final results.</p>
        <p>Regarding the performance of the evaluation service, during the first weeks the runtime performance was suboptimal. We finally solved the underlying problems. These problems might have been the reason for some participants to abandon its use during the first weeks. Once the problems had been solved, we contacted each participant in order to explain the problems, and they started to use the system again.</p>
        <p>On the side of the organizers, the evaluation service reduced the effort of checking the formal correctness of the results to a large degree. In the past, many of the problems had to be communicated in a time-consuming multilevel process. Typical examples are invalid XML, missing or incorrect namespace information, unsupported types of relations in generated alignments, an incorrect directory structure and an incorrect naming style used for the submissions. All of these problems are now directly forwarded to the tool developer in an error message or in a preliminary result interpretation that does not fit the expectations.</p>
        <p>Moreover, the organizers could analyse the results submitted so far at any time and had an overview of the participants using the system. However, while some analysis methods are already available, a number of specific services and operations are still missing. The graphical support of the OLAP visualisation does, for example, not support the generation of the precision and recall graphs frequently used by OAEI organizers. In particular, evaluation and visualisation methods specific to ontology matching are not supported. However, most of these operations are already implemented in the Alignment API and will be made available in the future.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Final Remarks</title>
      <p>This paper has reported the first efforts in integrating the SEALS evaluation service into OAEI evaluation campaigns. A preliminary version of this service has been exposed via a web service interface. For that reason, participants are asked to make their tools available as web services, which are accessed in the evaluation experiment. The resulting approach offers the minimal requirements needed to execute a complete evaluation cycle. The major benefit of this approach is to allow developers to debug their systems, run their own evaluations, and manipulate the results immediately in a direct feedback cycle. As a limitation, runtime and memory consumption cannot be correctly measured because there is no controlled execution environment. Another important drawback is related to the missing reproducibility of the generated results.</p>
      <p>In a second development iteration, matching tools will be deployed and executed on the runtime environment of the SEALS infrastructure. This will allow organizers to compare systems on the same basis, in particular in terms of runtime. It also solves the problem of reproducibility. This is also a test of the deployability of tools. Successful deployment relies on the Alignment API and requires additional information about how the tool can be executed on the platform and its dependencies in terms of resources, e.g., installed databases or resources like WordNet. For that reason, the challenging goal of the SEALS project can only be reached with the support of the matching community and depends highly on the acceptance of tool developers. We believe that an online evaluation service is a key component to raise this acceptance in the community.</p>
      <sec id="sec-7-1">
        <title>Acknowledgements</title>
        <p>The authors are partially supported by the SEALS project (IST-2009-238975).</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>P.</given-names>
            <surname>Bouquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ehrig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          , E. Franconi,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          , M. Krotzsch, L. Sera ni, G. Stamou,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sure</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Tessaris</surname>
          </string-name>
          .
          <article-title>Specification of a common framework for characterizing alignment</article-title>
          .
          <source>Deliverable D2.2.1</source>
          , Knowledge Web NoE,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          .
          <article-title>Towards composing and benchmarking ontology alignments</article-title>
          .
          <source>In Proc. ISWC Workshop on Semantic Integration</source>
          , pages 165-166, Sanibel Island (FL US),
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          .
          <article-title>An API for ontology alignment</article-title>
          .
          <source>In Proc. 3rd International Semantic Web Conference (ISWC)</source>
          , volume 3298 of Lecture Notes in Computer Science, pages 698-712, Hiroshima (JP),
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hollink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Isaac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Joslyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malaise</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Meilicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sabou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scharffe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shvaiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Spiliopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Svab-Zamazal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Svatek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Trojahn dos Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Vouros</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          .
          <article-title>Results of the ontology alignment evaluation initiative 2009</article-title>
          . In P. Shvaiko,
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Noy</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosenthal</surname>
          </string-name>
          , editors,
          <source>Proc. 4th ISWC workshop on ontology matching (OM)</source>
          ,
          Chantilly (VA US)
          , pages
          <fpage>73</fpage>
          -
          <lpage>126</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Meilicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scharffe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shvaiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Svab-Zamazal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Svatek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Trojahn dos Santos</surname>
          </string-name>
          .
          <article-title>Results of the ontology alignment evaluation initiative 2010</article-title>
          . In P. Shvaiko,
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Noy</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosenthal</surname>
          </string-name>
          , editors,
          <source>Proc. 5th ISWC workshop on ontology matching (OM)</source>
          ,
          Shanghai (CN)
          , pages
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Shvaiko</surname>
          </string-name>
          .
          <source>Ontology Matching</source>
          . Springer, Heidelberg (DE),
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>C.</given-names>
            <surname>Meilicke</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          .
          <article-title>Incoherence as a basis for measuring the quality of ontology mappings</article-title>
          .
          <source>In Proc. of the ISWC 2008 Workshop on Ontology Matching</source>
          , Karlsruhe, Germany,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>H.</given-names>
            <surname>Seddiqui</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Aono</surname>
          </string-name>
          .
          <article-title>Anchor-Flood: results for OAEI 2009</article-title>
          .
          <source>In Proceedings of the ISWC 2009 workshop on ontology matching</source>
          , Washington DC, USA,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>